Data Analytics Chat

What It Really Takes to Adopt Generative AI at Scale

• Ben Parker • Episode 71

Most organisations are experimenting with generative AI. Very few are turning it into real business impact.

In this episode of Data Analytics Chat, we speak with Nayan Paul, Managing Director and Chief Architect for Generative AI at Accenture, about what it really takes to move from pilots to production.

They explore why GenAI adoption is fundamentally a leadership and operating-model challenge, not just a technology challenge, covering business ownership, data readiness, governance, and the balance between speed and responsibility.

This is a practical, honest conversation for leaders who want GenAI to become a real capability inside their organisation, not just another experiment.


00:00 Introduction to Generative AI Challenges
01:08 Guest Introduction: Nayan Paul from Accenture
01:46 Nayan Paul's Role and Responsibilities
05:08 Early Experiments and Lessons Learned
07:02 From Experimentation to Value Creation
10:14 Business Adoption and Technology Enablement
16:15 Framework for Successful AI Implementation
29:24 Balancing Speed with Responsibility
34:09 Advice for Moving from Curiosity to Impact
38:43 Conclusion and Final Thoughts

Thank you for listening!

nayan:

We reached a place where there were 3,000 experiments running in a particular sandbox at any given time. We have built generative AI solutions that are cutting edge, but eventually we have seen no adoption, and if there is no adoption, it becomes yet another experiment. So I've seen people thinking of generative AI as a magic wand that can fix things when they don't even know what is broken. Generative AI is, in my mind, the first technology in human history that has been adopted personally first and then adopted in the corporate world, right?

ben:

Almost every organization today is experimenting with generative AI, but very few are seeing meaningful impact at scale. The hardest part isn't the technology, it's the work behind the scenes: data foundations, operating models, and how decisions get made. In today's episode of Data Analytics Chat, I'm joined by Nayan Paul, Managing Director and Chief Architect for Generative AI at Accenture. Today we'll discuss what it really takes to adopt generative AI inside an organization: what separates experimentation from impact, where companies get stuck, and what leaders should prioritize over the next six to twelve months. Nayan, great to have you here.

nayan:

Thank you so much, Ben. Thank you for hosting me, and I'm really excited to be here.

ben:

So I guess, Nayan, before we dive in, do you want to give listeners a quick introduction to who you are and the work you're leading today?

nayan:

Absolutely. My name is Nayan Paul. I'm based out of the US, and for me, generative AI has been a very organic growth. Five years ago, my organization opened up the Center of Excellence. It's a global entity, and as part of this CoE, as part of the global organization, I'm supposed to help customers understand and build generative AI. So for me, there are three major ways that I contribute. One, as part of the consulting work, I work with my customers. I try to understand their North Star vision, I try to understand their gaps, and I try to understand what it will take for them, like you said, to move from an experimentation mindset to a productionized mindset. Two, I work with my product teams and with my partners to understand what it means to build solutions, jumpstart kits, frameworks, the entire scaffolding, because when we are talking about enterprise adoption and scalability, it is very important to do it right and do it in a repeatable way. We talk about building the right foundation. We talk about how generative AI needs to be designed so that we are not just chasing use cases and demos, we are chasing outcomes, and what it means to build those foundational capabilities. And three, because we work very closely with our partners, it's about aligning our roadmap to theirs: what Microsoft is offering, what AWS is offering, what GCP is offering, now what NVIDIA is offering, and how, as Accenture, we incorporate that newness into our storyline and our adoption, and how we can bring the latest thinking to our customers. So this is my day-to-day. This is how we have approached generative AI, from, like you said, change management to having ownership to ensuring that we are focused on the bigger picture.

ben:

Brilliant. Just so the listeners know, are you specialized in one industry at Accenture, or do you work across the board?

nayan:

So within Accenture, we have multiple groups. However, my group, as I was saying, is part of the Center of Advanced AI. The Center of Advanced AI is a very horizontal entity. We are supposed to be the tip of the spear: we are supposed to understand, incorporate, and imbibe the latest and greatest technology offerings. So my team and I specialize mostly across domains, across verticals. We work with our functional leads, our domain experts, the subject matter experts, but we bring in the technology bits. We understand what the latest offering is, what it means to scale, what it means to secure. We talk about responsible AI, security, and safety brakes. So we bring in mostly the horizontal technology and the adoption piece, and the others we work with, like our business partners, our SMEs, our subject-domain experts, bring in the functional knowledge. Together we solve problems for our clients.

ben:

Brilliant. So you'll have a good blend of use cases, which will be great for the listeners today. And I guess, before we move on to the data topic, has there been an experience you've gone through that has most shaped how you approach leading AI initiatives in organizations today?

nayan:

A hundred percent. It has been a growth cycle, and a lot of lessons learned for us. I remember five years ago, when we started off this journey, our intention was: how do we ensure that we are educating people? So we created our own sandboxes. We advertised, we put in a lot of money, and we asked our account teams, and we have multiple account teams, to onboard into these sandboxes. We provided tenancy, we provided Azure services, we were early in bringing in LLM models, we exposed those LLM models, and we asked all these account teams to go and experiment. So in the initial days it was all about getting familiar, ensuring that people are not looking away, giving them a safe space, giving them all the right tools and technologies, giving them the right support so that they can bring in their thought process, bring in their ideas, and just experiment, right? We focused on: just keep on experimenting, whatever thoughts you have, irrespective of whether the outcomes are going to add value or not. We reached a place where there were 3,000 experiments running in a particular sandbox at any given time. What that did was ensure that different account teams, working with their respective clients, were able to tell a story, were able to showcase something. And again, seeing is believing. For the first one and a half years we were doing a lot of this experimentation. We were not productionizing anything, but that was okay with us. As time went by, we realized we had to shift gears. We had to understand what it means to stop just experimenting and start focusing on value. So for the next one to one and a half years we talked about how we have to change our mindset from just experimenting to identifying and bubbling up the right use cases, focusing on what is important, and using the lessons learned from experimentation to start delivering real value outcomes. And when we started building those real value outcomes, we saw the real challenges. We talked about data readiness. We talked about the operating model. We talked about scalability and adoption. We talked about the real ROI. We stumbled upon those challenges, and as we did, we tried to understand whether it was a technology problem, an adoption problem, or a responsibility problem in terms of ownership, and we started mitigating a lot of those. So over a period of time, if we look back, we took the right approach in my opinion. We didn't just go ahead and say, hey, we are not going to enable anybody to use generative AI unless you really provide value. No, we said, don't worry about the initial phase, just experiment, get yourself comfortable, ensure that you are aligned on the technology. And then over a period of time we learned our lessons, we brought in business, we got the right buy-ins, and we organically grew into what and how we can deliver value. So if you look back over those five years, it has been a very incremental journey for us.
And now, when we talk about business process automation, when we talk about augmentation, when we talk about identifying those key functional capabilities within an organization and looking at them through the lens of automation and augmentation, we understand what it takes, because of how that journey played out for us. Long story short, Ben, to your point: yes, the important bit now is ensuring that we are not stuck in that POC world, not stuck in experimentation, not stuck in that mindset. But the journey had to start somewhere, and for us it started five years ago, and now it's all about value. It's all about looking at ourselves as a company and asking: what are the important things that are making us lose time, lose money, or lose growth opportunities? So now we are looking at generative AI opportunities through the lens of saving time, making money, or growing as an organization. And to reach that particular milestone, it took a lot of hard work, a lot of technology adoption from my team, and a lot of business alignment that we worked on together.

ben:

Yeah, and I love the fact that you say it's now about creating value, because we're going through a transformational period now, aren't we? And if you've got every business department coming to you with a use case, you can't do it all, can you? You need to be choosing the ones that are going to deliver the biggest ROI or the biggest impact for your business right now. You can't be doing everything.

nayan:

A hundred percent. And I think that's a mindset change, to your point. We couldn't have talked about value from day one, but now, when we talk about how we are approaching generative AI, it's all about who owns it. IT cannot own generative AI. If you give IT the responsibility of owning generative AI, they'll just focus on architecture, tools, and what the right technology stack is. That is not the right measure of generative AI. Again, you mentioned it, and I think everybody understands it now: the real value of generative AI is how it is transforming the business. What is the underlying ROI? We are not just talking about automation, because yes, automation is a good measure, but in my opinion it's about augmentation first, which means: can we identify those important low-risk, high-impact use cases? Can we find those business scenarios where augmentation and automation can work hand in hand and allow the business to own a solution that reduces, for example, the effort, the time, or even FTEs, right? When we start looking at generative AI use cases through the lens of what organizations can do, how decision-making can be made faster, how decision-making can be de-risked, those are the right measures for picking generative AI use cases now, and that is how we are approaching it. So again, the experimentation phase was important, maybe three or four years ago, but now, with all the technology changes, the way we should be approaching generative AI is business adoption and ownership. The ownership, at this day and age, is all about business: how are they looking at augmentation and automation, and which business processes do they feel comfortable augmenting with generative AI? I think those are the right questions to ask right now.

ben:

Brilliant. So I guess most leaders feel pressure to do something with generative AI. In your experience, what separates the organizations that stay stuck from those that create value?

nayan:

It's a multidimensional question, to be honest, and you're right to point it out. I have seen organizations where there is a lot of IT dependency. A lot of focus has gone into the CTO office, where the intention is: do we have the right infrastructure? Do we have the right technology stack? What is that right technology stack? Can we set up the right technology foundation, onboard the business, and allow them to come and experiment? A lot of companies have gone down that path, and for the last two to two and a half years they have been stuck in this POC loop, right? Technology is driving gen AI, and there is no adoption. Teams are trying to prove a value, but even if certain values do get proved, there is no adoption, because it's not coming from the business. So what has happened now is an organizational shift. We talked about change management; we talked about how it is an important management decision. We are talking to a lot of our customers and educating them about a strategic way of adopting generative AI, and that strategic way is business first. We have to bring business and technology to the same table. We have to talk to them together about value. Like I said, if you're looking at your organization and understanding why you are losing time, money, and growth opportunities, that has to come from business. Business can then tier the use cases they're looking at, and technology can augment and help those decisions. They can help tier them and say: in my tier one I have these kinds of use cases, which are high impact but high risk. In the middle tier we have use cases with pretty significant growth opportunities that don't carry high volume or high risk. And we can have a tier three where the value of the generative AI use cases might not be significant, but we are also not looking at significant risk. So it's a question of where that fine line is, what we as an organization are comfortable with, when we tier down and decide how we position our use cases into these buckets, what it means to onboard a new project, and what it means to own the project in terms of outcome. And then how does technology enable them, through, like you said, data readiness, platform readiness, technology adoption, ensuring that the right guardrails, the right responsible AI, the safety nets are in place? Technology brings those in as an augmentation, but business drives the value by identifying the right use cases, and technology can help them identify the right use cases in terms of risk profile. Once you do that, once you marry up the business and the technology, it becomes easier and smoother, right? Then it becomes a game of adoption. We have seen this many times, in multiple ways. A lot of the time we have built brilliant, wonderful outcomes, generative AI solutions that are cutting edge, but eventually we have seen no adoption, and if there is no adoption it becomes yet another experiment, yet another effort where the organization is not getting a lot of value out of it. So again, like I said, it's a combination.
But the point I was trying to emphasize is that it is a function of business adoption and business ownership, augmented by technology enablement.

ben:

So I guess, is there an early signal that an experiment is heading towards value rather than remaining a proof of concept?

nayan:

Absolutely. Again, I come from a technology background, so I can talk about how we have done it, right? What we have done is create a framework we call a funnel framework. The analogy is very simple: hundreds of ideas come into our generative AI platform at any point in time, and out of those hundreds of ideas, probably five actually go into production. So we have created a setup where we have project onboarding and a use-case onboarding process, which means we work with our CISO, we work with our legal team, we work with our business to create a self-service form. We document and understand what use cases people want to bring in, and we allow our business to come and submit those forms. As business owners, they can justify and discuss what the value is and what they want to solve for. What we do internally is convert those business request forms into technology workflows. We worked with our CISO, legal, and our engineering department, and we assign weightage to these ideas, and that weightage converts into something meaningful: we can automatically bucket and tier them. So we can automatically know if a particular idea is high risk, low risk, high value, low value, and we create a rubric. If a particular project is worth investing in, it gets onboarded into something we call a fail-fast sandbox. And it is very important, to your point, to have a concept of fail fast. Where we differentiate ourselves from a technology point of view is that we do not want to get stuck in that loop forever. We have the technology enablement; we now think of building generative AI solutions on a common foundation. When we onboard a particular use case onto that platform, we will automatically see whether that use case is going somewhere. If it is providing value, great. If not, we need to fail fast; we need to stop the buck right there. We do that through assessments. We have functional assessments, which can be human-in-the-loop, it can be experts. We have technical assessments where we evaluate scalability and cost. We look into responsible AI and the bias that comes out of the answers, the accuracy measures, the ground-truth evaluations. We look into FinOps, into the observability metrics, into all the logs, and we try to understand whether that particular idea we are trying to prove in the fail-fast sandbox is worth investing in. It has become a standard for us that we are not going to keep an idea in the fail-fast sandbox for a long period of time. We have to prove value out of it, and that value, like I said, is proven through technology assessment and human assessment. If we can prove that value, it goes into scaled ideation. That is where we start putting in the bells and whistles and convert that idea, which was a POC, into something more meaningful like an MVP, and then we hand it over to the business for a short period of time, almost like a shadow run. We want to make sure the business gets a good look and feel of what we have built, because again, they are the owners. In the scaled ideation phase, we are checking whether the outcome is aligned to what the initial thought experiment was.
And if it is, then we go through that entire assessment phase again, and if everything looks good, we start talking about packaging and putting it into production through automation. So for us it's a very stage-gated process. There is a demand funnel, there is a fail-fast sandbox where we try to establish value quickly, we do assessments, we do scaled ideation, we do reassessments, and then we decide whether it goes into production. It becomes a factory. More importantly, we are continuously monitoring that factory, which means whoever puts an idea into this funnel, whoever describes their business value, right through the assessments, the fail-fast experimentation, and the outcomes, everything is documented, tracked, traced, and evaluated, and we call that a control plane. Our control plane becomes a 360 view of what ideas have been onboarded into the generative AI platform and how those ideas have been converted, evaluated, and validated. So tomorrow, if there is a decision we have to take to our ARB saying, hey, we have gone through all the hoops and we have an idea that we want to productionize, we can trace that lineage. We can understand what experimentation has gone in, what kinds of scenarios we have tested, what the value is, what the ROI is, what the cost is. We can track every single thing in the entire journey, and that journey can now be converted into something meaningful and put into production. Again, every organization, I'm sure, has their own process, their own framework, their own methodology for taking an idea into production. This is ours, but this kind of methodology is very important and critical. So instead of asking, how do we go ahead and put an idea into production, think of it through the lens of multiple stage gates. For us, these stage gates have evolved through a lot of lessons learned. We have also been stuck in POC land for a long time, and over time, as we learned our lessons, this kind of factory model has become standard practice for us, and that is what we have been following for onboarding new ideas and productionizing them in a much more repeatable way.
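To make the funnel idea concrete, here is a minimal, illustrative sketch of how an intake rubric with tiering, fail-fast stage gates, and a control-plane audit trail could look. The weights, tier thresholds, field names, and stages are assumptions for illustration, not Accenture's actual framework.

```python
# Illustrative sketch only: a toy intake rubric and stage-gate record.
# The scores, thresholds, and stage names are assumed for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

STAGES = ["intake", "fail_fast_sandbox", "scaled_ideation", "production"]

@dataclass
class UseCase:
    name: str
    business_value: int   # 1-5, estimated by the business owner
    risk: int             # 1-5, from CISO / legal / responsible-AI review
    data_readiness: int   # 1-5, from the data and engineering assessment
    history: list = field(default_factory=list)   # control-plane audit trail
    stage: str = "intake"

    def tier(self) -> str:
        """Bucket the idea: tier 1 is high value and high risk, tier 2 is solid
        value at manageable risk, tier 3 is everything else."""
        if self.business_value >= 4 and self.risk >= 4:
            return "tier 1"
        if self.business_value >= 3 and self.risk <= 3:
            return "tier 2"
        return "tier 3"

    def advance(self, assessment: dict) -> bool:
        """Record an assessment and move to the next stage gate only if it passes."""
        self.history.append({"stage": self.stage,
                             "at": datetime.now(timezone.utc).isoformat(),
                             **assessment})
        if not assessment.get("passed", False):
            self.stage = "stopped"   # fail fast: do not linger in the sandbox
            return False
        self.stage = STAGES[STAGES.index(self.stage) + 1]
        return True

idea = UseCase("invoice triage copilot", business_value=4, risk=2, data_readiness=3)
print(idea.tier())                                        # tier 2
idea.advance({"passed": True, "accuracy": 0.87, "cost_per_call_usd": 0.04})
print(idea.stage, len(idea.history))                      # fail_fast_sandbox 1
```

The point of the `history` list is the "control plane" idea from the conversation: every decision at every gate stays traceable when the idea eventually reaches a review board.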

ben:

Yeah, and again, like you said, it's repeatable. If you're constantly doing it, you're going to keep getting results, because you're following the same process every time, which I love. So you mentioned adopting generative AI. I guess, what's the biggest misconception businesses have when they start adopting generative AI, the one that catches them off guard?

nayan:

So when I have been talking to businesses, internal and external, I've seen a few kinds of ideologies coming through. One is businesses who think generative AI can be a silver bullet. When we talk to them we ask: what kind of use cases do you want to focus on? Do you have a well-established process? Can we look into the documentation? Can we look into your existing process? Because to augment or reinvent your process, we have to understand your process. So where is your process? Who can help us understand and document it? Where is that tribal knowledge? People have been working on this process for 20 years; how does it get done? Is there a day-in-the-life we can actually observe? Where is the data coming from? What kind of data sources are we talking about? How ready is the data? And when we start asking these kinds of questions, a lot of the time businesses come back with a blank face and say: we have not thought about it. We believed generative AI could get everything done. We may or may not have all the processes established, we do not know where all the documentation is, we do not understand where our data comes from, we do not know how clean and how ready the data for our generative AI use case can be, but our thought was that that is what generative AI is for. It's going to automatically fix all the broken things and come up with a magical outcome. So I've seen people thinking of generative AI as a magic wand that can fix things when they do not even know what is broken. The other end of the spectrum is the absolute opposite, where we have talked to our business partners and they say: no, we are not going to adopt generative AI, because every time we talk, you talk about augmentation, you do not talk about a hundred percent automation. For them it becomes a binary game. If people still need to invest some time as a human in the loop and look at the outcome generated by generative AI, then for them it is not good enough. They say: we still have people, people who have been here for 20 years, they know the process in and out, we do not think generative AI can do anything about it. It's going to be too much effort for us; instead of making our lives easier, generative AI is going to add more work. So we have seen those kinds of scenarios in the business, where they are very apprehensive about change. They feel they are the subject matter experts, and for them generative AI becomes yet another tool and burden which may or may not prove the value they imagined it would. So these are the two ends of the spectrum: either people think of it as a magic wand, or people think of it as yet another tool that will add more chaos than value. As technology partners, I think our biggest challenge is to bring these kinds of discussions to a common table and talk about what is real, right? And the reality is: augmentation will come before automation. We will find the right challenges. We might not automate everything. We can start with 60% automation, where a lot of the repeatable, redundant, manual things can be automated, and it can be just 60%, and it's okay to start there, right? That is the incremental value of generative AI; not everything is going to be fixed on day one. It takes some effort to get data readiness done and platform readiness done.
Technology is going to be an enabler, but somebody still has to define those processes. Somebody has to look at those business processes with a fresh lens and say: all right, I see your business process; this process is too cumbersome because we have been doing it this way for 20 years, but now, with technology, we can actually streamline it, we can fast-track the decision-making, right? So I think it's about bringing people into the same mindset and giving them a true sense that it's going to be an iterative journey. It's not going to be a magic wand where on day one we can change everything. Data readiness is a thing. Understanding the business problem is a thing. As long as we are talking in the right sense, giving them the right approach and educating them about what exactly is going to follow, what that 30-60-90 day program is, how that program is going to show them incremental value, and how over time they are going to see a lot more value coming out of these initiatives, that is what matters. Because if on day one people are thinking it's going to magically change the workforce or immediately show some ROI, I think that's a sticker-shock problem. And that has happened not just because of the organizational shift; I think it is general, right? Generative AI is, in my mind, the first technology in human history that has been adopted personally first and then adopted in the corporate world. People were using ChatGPT before they started building their agent solutions, which means that in their mind the idea that I can ask any question and get the right answer made generative AI already something magical. So now that we are incorporating it into the corporate world, understanding that it still requires a lot of work in terms of data, technology adoption, business value, and business process comes as a sticker shock for a lot of our customers.

ben:

So I guess, who needs to own generative AI inside an organization? Is it technology, data, product, or the business?

nayan:

Absolutely, it has to be business. We talk about the fact that the accountability for generative AI use cases has to sit at the use case level, at the business level. It cannot sit just with committees, it cannot sit just with technology enablement teams, right? If you put it in the hands of technology, they will just focus on the latest and greatest technology. They'll be chasing technology; they'll be talking about the right things to do in terms of technology adoption, but not ROI, not outcome, not growth. If you put it in the hands of data, it'll go down a rabbit hole where we talk about: is my data right? How do I get my data right? What are the different ways the data needs to be onboarded? How do you provide the access control, the security, and the whole nine yards of data governance? If you put it in the hands of product, they'll think of generative AI as yet another tool, right? They'll talk about how we can just turn generative AI into a tool. So the right ownership has to be with business. Business has to drive the value, they have to own the value, and, like I said, the data, the technology, and the product teams have to enable business. At the end of the day, it has to be business who owns it.

ben:

Okay, that's interesting, and good to hear. Going back to adoption: you've obviously got security, risk, and ethics, which often slow adoption. So how can you balance moving fast with being responsible, without paralyzing the organization?

nayan:

Yeah, it's a great question. I wish there were a clear answer to this. Every organization struggles with it; even our organization struggles, from time to time, with understanding where to draw the line. But one thing is clear, Ben. One thing that is clear is that we need to separate policy from permission, and that has worked for us. What I mean by that is that AI policies will evolve. We cannot just wait for AI policies to be mature before we productionize anything; in that case, we'll never productionize anything. So in my mind, as long as we start with the right permissions, which means: are we picking up the right use case? Does the use case have the right data governance and the right data auditability that is needed? Are we not opening up too much data, from a permission point of view, when we expose data through a generative AI use case? Do the people have the right permission? Are there role-based access controls, attribute-based access controls, system-based access controls? Who gets to access what kind of data? As long as we are providing outcomes on a need-to-know basis, as long as we are synthesizing the bare minimum data and putting it in the hands of whoever is seeking it, we should be good. So what I was trying to say is that it's important for the technology permissions to be very robust and solid. That is the reason we talked about data readiness and technology enablement for business, which means when there is a generative AI use case and that use case, for example, talks to five different systems, we need to ensure that the right permissions are in place for all five of those systems, that people can access those systems appropriately, and that only the right amount of data is propagated. At this point, has the organization figured out all the AI policies? Probably not. Should we be waiting for all the AI policies to be in place before we productionize? Not at all. So it's a fine line. As long as responsible AI is incorporated into the design patterns, as long as safety is not an afterthought, as long as the right permissions and access controls are in place, and as long as we do have AI policies, still evolving, that we are continuously measuring and monitoring, even though they're not perfect, we should be okay. I'm a firm believer that as long as we are providing the right access and the right permissions, and as long as we are working with the right people within the organization, like security and the CISO, on forming those policies, we should be good. We should not just wait for something to become perfect, because in that case we'd just be chasing our tails.
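For readers who want to picture "permission before policy", here is a minimal sketch of a need-to-know, attribute-based check applied before a generative AI use case pulls documents into a prompt. The attributes, roles, and classification rules are illustrative assumptions, not any specific platform's model.

```python
# Illustrative sketch only: a toy attribute-based "need to know" filter applied
# before retrieved documents ever reach the model context. Rules are assumed.

def can_access(user: dict, resource: dict) -> bool:
    """Permission check: data classification and department must both line up."""
    if resource["classification"] == "restricted" and user["role"] != "analyst_l2":
        return False
    if resource["department"] not in user["departments"]:
        return False
    return True

def retrieve_for_prompt(user: dict, candidates: list[dict]) -> list[dict]:
    """Only documents the caller is entitled to see are passed to the use case."""
    return [doc for doc in candidates if can_access(user, doc)]

user = {"role": "analyst_l1", "departments": ["finance"]}
docs = [
    {"id": "d1", "department": "finance", "classification": "internal"},
    {"id": "d2", "department": "finance", "classification": "restricted"},
    {"id": "d3", "department": "hr",      "classification": "internal"},
]
print([d["id"] for d in retrieve_for_prompt(user, docs)])   # ['d1']
```

The broader AI policy can keep evolving around a gate like this; the hard permission check is what keeps the use case safe to run in the meantime.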

ben:

Have you seen teams that move too fast and pay the price for bypassing governance or compliance guardrails?

nayan:

I've seen people do that, and I've seen organizations that could deliver an MVP very quickly. They might move into an MVP phase way sooner than any other company, but eventually they get stuck in that MVP phase. They will not productionize things, and that is where we run the risk of shadow IT. I've seen many times where the AI policies, the responsible AI policies, the safety brakes, the security and governance, none of it was in place. They still pushed to create those MVPs. Eventually the MVPs were a success from a business point of view, but nobody would give them permission to go to production. So it became yet another MVP stuck in shadow IT. The business was using those tools on a sporadic basis; it was never going into production; there was no adoption. But there would be small fragments, small groups within the business, using it on a day-to-day basis, because now the MVP had become a thing. And if you get stuck in this kind of scenario, where there is shadow IT and nobody is letting your asset or solution go into production, it's not an ideal place to be. When you have multiple use cases stuck at the MVP stage and the shadow IT grows and grows, it becomes a nightmare for an organization. So what we tell our customers is very clear: do not treat security, do not treat AI policies, as an afterthought. Always ensure that you have the right guardrails in place from day one. Your policies will mature; you'll come back and reframe some of them. But start with those permissions, start with the policies that are approved, and ensure that you are not cutting corners in terms of governance, risk, and mitigation at least.

ben:

Okay, brilliant, I like that. So if a company listening today wanted to move from gen AI curiosity to real impact, say in the next six to twelve months, what would be your advice? What should they prioritize first?

nayan:

So I think it should be a dual-velocity approach. We talk about this many times: there are two things that need to happen in parallel. One, from a technology point of view, think about building a common gen AI foundation. A common gen AI foundation is those repeatable, reusable, standardized, governed, and ready-to-monitor blocks that ensure that whether you're building one use case or a hundred use cases, you have a common set of repeatable capabilities. For example: data onboarding, data readiness, semantic generation, your agents framework, your scaffolding, your responsible AI, your control planes, your governance and monitoring, your testing framework, your adoption framework. All of these are repeatable, reusable technology enablers that you can build in a way that is common across all use cases. So the most important thing I tell my customers is that dual velocity, where at the bottom you focus on these repeatable, reusable modules that make generative AI adoption technologically faster and standardized. The other part of the dual velocity is business adoption, which means working with business in parallel. The key word here is dual, which means we are not waiting on technology to finish their work, we are not waiting on technology to finish building their foundation. At the same time, while technology is building that foundation, we need to ensure the right business owners are in place. We work with our business partners to understand all the use cases that are important and critical within the organization, and we tier them, right? Like I said, we find out which use cases go into bucket one, bucket two, and bucket three. We try to establish value, so value realization and value evaluation of those use cases are very critical. And then, while the technology enablement is happening, we identify the top two or top three, right? Do not dilute your generative AI adoption. Focus on the top two or three first. Find those use cases which are maybe low-hanging fruit, which maybe provide better value at low risk, and just start, right? The worst thing that can happen to an organization is to wait on the sidelines and hope for things to standardize. Instead of that, just dive in. Dive in with some low-risk use cases. Focus on those use cases. Do not dilute your portfolio. Do not go after 10, 20, 30 different use cases on day one. Find those one, two, or three use cases, have a timeline, say that in six months' time I need to productionize this, have the buy-in from business, and ensure that technology is going to support it. As long as you're doing this dual velocity, as long as you are finding the right use cases, grooming them, ensuring they are good to go, and technology is building those repeatable, standardized frameworks and the platform, then when you combine them it becomes a solid way of solving multiple use cases. And in that six-month period, when you build that solution, you have created yourself a machine. You have created a process where the fourth and the fifth and the sixth use cases have almost no time to market, because you already have a solid gen AI foundation. There are common, repeatable, standardized building blocks already in place.
You already have the right engineering in place, and you also have the generative AI adoption process in place, right? So you combine them together, like I said, through dual velocity, and that becomes your building block. Are all organizations following this? Probably not. But if I have to offer my two cents to anybody listening, in my opinion this kind of dual-velocity methodology is the one approach that has worked, and everybody can find their own version of it. Think about repeatability, think about reusability, think about standardization from day one. Find those use cases, have business own them, and deliver through technology. And as long as we are putting those approaches in place within, like I said, a six-to-twelve-month timeframe, and ensuring that there is something tangible coming out of it, that's the right approach.
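As a rough illustration of the "build the foundation once, reuse it for every use case" half of dual velocity, here is a minimal sketch in which shared guardrail and audit steps form a common pipeline that each new use case plugs into. The step names, checks, and handlers are assumptions for illustration, not a description of any particular platform.

```python
# Illustrative sketch only: a shared "foundation" of reusable steps (guardrail,
# audit logging) that every use case reuses, so later use cases need almost no
# new platform work. All names and checks here are assumed for illustration.
from typing import Callable

def input_guardrail(payload: dict) -> dict:
    if "ssn" in payload["question"].lower():   # crude PII screen, for illustration
        raise ValueError("blocked by input guardrail")
    return payload

def log_interaction(payload: dict) -> dict:
    print(f"[audit] use_case={payload['use_case']} question={payload['question']!r}")
    return payload

# The common foundation: the same reusable blocks for every use case.
FOUNDATION: list[Callable[[dict], dict]] = [input_guardrail, log_interaction]

def run_use_case(name: str, handler: Callable[[dict], str], question: str) -> str:
    """Run one use case through the shared foundation, then its own handler."""
    payload = {"use_case": name, "question": question}
    for step in FOUNDATION:
        payload = step(payload)
    return handler(payload)

# Two different use cases reuse the identical foundation; only the handler differs.
answer = run_use_case("contract summariser",
                      lambda p: f"(stub answer for: {p['question']})",
                      "Summarise clause 4 of the MSA")
print(answer)
```

The design point is the same one made in the conversation: once the shared blocks exist, the incremental cost of the next use case is mostly the business work, not the platform work.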

ben:

Brilliant. That's some great insight, and I'm sure listeners will find real value in it. I think we'll leave it there today, so thank you for the conversation.

nayan:

Thank you so much, Ben, for having me here. It has been an absolute pleasure. Again, thank you for all these insightful questions, and good luck.

ben:

Hopefully you're all finding these conversations useful. Please make sure you follow or subscribe to Data Analytics Chat so you don't miss future episodes. Thanks for listening, and I'll see you in the next episode.