Data Analytics Chat
🎧 Welcome to Data Analytics Chat – the podcast where data meets real careers.
Data isn’t just numbers; it’s a journey. Each episode, we explore a key topic shaping the world of data analytics while also discussing the career paths of our guests.
This podcast brings together top experts to share:
- Insights on today’s biggest data trends
- The challenges they’ve faced (and how they overcame them)
- Their career journeys, lessons learned, and advice for the next generation of data professionals
This is for anyone passionate about data and the people behind it.
👉 Hit subscribe and join us on the learning journey.
Connect with the host: https://www.linkedin.com/in/ben---parker/
How To Make Successful Decisions In AI
In this episode of Data Analytics Chat, Ben speaks with Durai Rajamanickam, a senior AI leader, about what it really takes to turn AI ambition into something organisations can trust and scale.
They discuss why many AI initiatives fail long before the technology becomes the issue, the risks of hype-driven decision-making, and the importance of clear goals, strong business ownership, and measurable outcomes.
The conversation explores the tension between speed and governance, when to build vs buy, and why trust and governance are not obstacles to progress, but foundations for sustainable AI adoption.
This is a practical, experience-led discussion for leaders who want to build AI capabilities that last.
00:00 Introduction: The AI Hype and Strategy Revisions
01:09 Guest Introduction: Meet Durai Rajamanickam
02:09 AI Initiatives: Technical Capabilities and Investments
03:46 Build vs Buy: Evaluating AI Tools
05:36 Common AI Missteps: Hype and Misdiagnosis
07:41 Successful AI Implementation: Business and Technical Alignment
09:11 Governance and Trust: The Key to Sustainable AI
09:47 Real-World Examples: When AI Falls Short
12:22 Balancing Speed and Governance in AI
15:22 Decision-Making in AI: Trust and Autonomy
21:03 Final Thoughts: Advice for AI Leadership
23:04 Conclusion: Wrapping Up the Discussion
Thank you for listening!
We have seen examples in the market where people bet on AI and went full steam ahead, and now they're looking back to see how they can revise their strategy and bring the human back into the mix. For example: "I will improve your efficiency with a 34 to 40% workforce reduction." It's very lucrative for any leader, any decision maker. Speed gives you that sense of accomplishment. Governance does not. Governance is slow. It may appear to slow you down, but in my experience, when we built the governance and risk council models before we even started any AI project, it paid off. Yes, productivity gain is great, but not at the expense of customer trust and other things.
Ben Parker: Everyone says they want to use AI, but very few leadership teams can answer a much harder question: which decisions should AI be involved in, and which ones should it stay out of? In this episode of Data Analytics Chat, I'm joined by Durai Rajamanickam, a senior AI leader, and we discuss how to make successful decisions in AI.
Durai: Hi, Ben. Yeah, nice to chat with you, and it's great to join your podcast.
Ben Parker: So before we dive in, could you give listeners a quick introduction to who you are and the work you're leading today?
Durai: Sure, Ben. I've been in the data and AI space for the last two decades. I started off with data engineering and supercomputing, and here I am now leading AI initiatives. For me it's not about building models; it's more about making AI useful to the business, enabling the business, and reaching the targeted ROI. Those are the areas I'm currently focusing on, and I do consulting in these areas.
Ben Parker: Brilliant. So obviously there's a lot going on in the field. What experience has most shaped how you approach AI initiatives in organizations today?
Durai: Yeah, we can put that in two major buckets. One: every organization wants to build their technical capabilities around AI, be it agentic AI, be it the AI tools that enable higher productivity in the business teams. That's one area. And the second area of focus I see is: where do I invest, and is it future-proof? If I invest heavily today, am I going to be relevant down the line in six to eight months? That's another focus I see pretty much across different places. This area is crowded with a lot of tools, so there is a potential consolidation that can happen down the line; that is another concern. So putting it all together, it boils down to two areas. One: I'm investing, so what is going to be my ROI, and how do I take my team from where they are today to an AI-enabled team? And again, it comes with the notion of how do I stay efficient with a minimal workforce, right? That's been another focus. So it's a technical and a business lens. It goes into the same mold as IT initiatives did, if you remember, 20 to 30 years ago: when IT came along, it boiled down to only two areas, business and technical capabilities. I think we are not far from that with respect to AI too.
Ben Parker: Yeah, and I think it's interesting how businesses go about this, because obviously, like you said, there are a lot of clouds out there, a lot of tools. But then is it worth building a bigger in-house capability? It obviously takes longer, but it's unique to your business. There are a lot of different avenues businesses need to go down, aren't there?
Durai: Yeah. And in fact, along these lines, I've also built certain tools; it's all on infinite a.com. Any AI professional, consulting person, or business team can evaluate where and how exactly they should focus: build versus buy, as you mentioned. Should I build the capability in-house, or should I buy the tools and improve my capabilities? And the second thing is calculating the ROI, which is very complicated. It isn't just a formula; we have to account for some empirical relations based on in-house productivity, existing technical capabilities, the business functions, and what we are automating. Where do we really go for a hundred percent deterministic automation, and where is human evaluation needed, and where do I place people? This has become the major area of focus. Most of the leaders I've been working with across industries are really concerned: you cannot just take a decision, implement it, and only then look back, right? We have seen examples in the market where people bet on AI and went full steam ahead, and now they're looking back to see how they can revise their strategy and bring the human back into the mix. So that's where I'm seeing the focus, and the current situation with respect to AI investments and AI decisions.
Ben Parker: Yeah. So when leaders say, "We want to use AI," what problems do you most often see them misdiagnosing?
Durai: I wouldn't say organizations today are misdiagnosing, but the hype around AI and too much information can lead to certain hasty decisions, right? "Okay, I'm going to do this, I'm going to improve my efficiency and ROI, and I will showcase my credibility on this," and they jump right in. And there are a lot of vendors and a lot of consulting organizations enabling this hype, right? That's where I'm seeing most of the problems arise, actually.
Ben Parker: Sorry, so you mean there's too much hype, and I guess it's getting people excited, then
Durai: Yes. And the conferences are filled with a lot of promises. Whether they're kept or not, the promises are always made at the leadership level, and they're very lucrative, right? For example: "I will improve your efficiency with a 34 to 40% workforce reduction." It's very lucrative for any leader, any decision maker. But the reality is AI will not bring clarity to the business automatically, right? Just because you have plenty of data, it's not automatically going to be aligned with the decision you want to improve. It's not about the intelligence; it's more about agreement with the business team. Are we agreeing on the right level of goals, success measures, and who owns the outcome? Without this clarity, getting into AI will definitely lead to addressing a problem we don't intend to address. That's how I see it.
Ben Parker: So how can a leadership team tell early that they're solving the wrong problem with AI?
Durai: See, again, they're changing the strategy; they're coming back and revising whatever decision they made, so there's always the option to do a course correction, right? It's not that they went ahead and just continued with it; people have done complete 360-degree turnarounds on that. But here's what I've pretty much seen where the successful implementations are: there is very good participation from the business teams in defining the goals and success metrics, and it happens early on. Needless to say, it is not an IT decision, right? People know that it's a business decision; the organization knows this. The real difference is: are they setting these goals and success metrics closer to reality, rather than buying the hype or going for the over-promises offered by vendors and other consulting organizations? The reality check is important: working with the business, understanding the outcomes and what they involve. For example, if you're automating an entire business workflow without human checks and balances just because you want to improve productivity, it will lead to problems, because there are issues like hallucination and issues like grounding, and not all of them are addressed a hundred percent in the technical parlance, right? So that's where I'm seeing leaders make the right decisions: when they work with the business in the early stage of initiatives, continue to check in with them, and continue to work with them. That's how it gets implemented and run successfully.
Ben Parker: Okay, interesting. So where would misunderstandings like that typically show up first? Would it be in the strategy, in the data, or in the operating model?
Durai: It's strategy; that's where it's very important. If you look across the industry, the right level of strategy will always lead to the right level of stakeholder engagement, implementation planning, and then productionization. AI strategy should not be treated like any other IT strategy or business strategy we're used to, because with AI, everything is involved. And I'm seeing people sometimes dismiss governance as a one-off checklist they can deal with later. Governance comes first. Likewise, you need that 360-degree strategy for any AI initiative to be successful.
Ben Parker: Okay. Then can you walk us through a decision involving AI that looked good on paper but didn't work out as expected?
Durai: Yeah. In some cases I've seen that the model is very well tested, but once it's deployed, there's a lot of friction: it slows the teams down, and the customer experience suffers. And it's not about the model; it's that they hadn't fully considered how people would interact with it and how the output shapes everyday decisions. So that's the difference between what's on paper and the reality. The main thing is the business operations and the human side; those details really matter. I don't think people are doing this now, but in the early stage of the hype, people banked on the models: "Okay, I'm bringing this model, I'm building this with RAG and everything, so it's all going to be very good." And it wasn't, because the core operational details were missed or not considered; it was treated as a technical excellence exercise rather than working together with the business team. I think the industry has shifted from that. At least by late 2025, people are aware that models are not the only answer to the solution; the entire operation and the human details have to be involved, not just the model. I think that awareness is already there.
Ben Parker: Yeah, I guess, again, it's part of the journey, isn't it? It's learning. You have to go through this experience to change how you evaluate the AI.
Durai: Yes. Some companies evaluate and learn; some are learning from others. But not all the experiences are shared in the public domain; some are closely kept. I think it would help a lot if this information were democratized; it would save a significant amount of time and money for others learning. But I think that's too much to ask of individual companies. Still, I'm hopeful that this kind of learning gets democratized across the industry, across experts and consultants and whoever is working on AI.
Ben Parker: Yeah, there are no shortcuts in life; you've just got to go through it, and everything's learning. So where do you see businesses making their biggest trade-off right now? Is it speed versus governance? Innovation versus trust? Or is it more centralized versus embedded teams? Where are you seeing most of this right now?
Durai: It's the trade-off between speed and governance, right? I think many organizations are now navigating the balance between how fast they can go and how they build trust. Building trust comes from the right level of transparency and the governance behind it, right? But that slows you down, so it's about finding the right trade-off. The organizations doing well are the ones actually treating trust and oversight as part of progress. They don't see that as slowing them down; they see it as more critical than moving quickly. Once that framework is established and in place, then as they expand the business and take on more AI initiatives, it's all part of the framework; they'll have more confidence and can move with agility. Jumping straight into AI and wanting to see all the outcomes without doing that may give temporary benefits, but there are problems with it. So that's the trade-off I'm seeing.
Ben Parker: Yeah. And so I guess, for leaders, is that the trade-off that's most underestimated, then?
Durai: Yes. Speed is energetic. Speed gives you that sense of accomplishment. Governance does not. Governance is slow. It may appear to slow you down, but in my experience, when we built the governance and risk council models before we even started any AI project, it paid off, because everybody knew their role, and everyone knew what was expected and what the operational risks were. It's not just the investment risk alone: you have operational risk, customer engagement risk, and customer experience risk. Risk has multiple domains here, right? All of them have to be considered, and that only comes with the right level of governance framework. If this is established, there will be a lot more trust from the business to come on board with any automation. Otherwise, you remember, at the beginning of 2025 and in late 2024, a lot of expensive POCs were done and then eventually thrown out. I think the industry is moving on from that, along with a lot of learnings. I'm seeing a lot of organizations taking initiatives on this core governance and trust-building exercise. I think that will continue through the rest of 2026; that would be the year of building AI governance and AI trust, and then you bring a lot of automation into your enterprise.
Ben Parker: Yeah. And I guess, for organizations, obviously companies want to move quickly, because they want to get their products out and deliver whatever projects they're doing. But if you're doing it too fast, it's going to create such a risk, or a backlash, within your company, isn't it?
Durai: Okay.
Ben Parker: So what's a decision leaders delay for too long when it comes to AI?
Durai: It comes down to the level of automation, or authority, AI should actually have, right? That's the clarifying question, and it's very important. When this is not clear, the team will be uncertain about how heavily they should rely on the tools, and also how effectively the tools are being used. You're investing heavily in these tools, but how much do you want to rely on them? This uncertainty will slow things down and impact the confidence of the team, even when the technology is sound. So this decision, this clarity, is very important. Again, this goes back to the point I made earlier about setting the goals and the level of outcome I'm planning for in a particular initiative. It can improve over the course of projects, but in the initial period, when the team is engaged in setting the right goals, the right outcome expectations, and the level of autonomy you want to build with the AI, it's a very important decision to make. The more delay, the more impact there will be for the team, for the investment they're making, and for the effective use of the tools they've invested in.
Ben Parker: And is the delay normally inaction between business and tech in agreeing on initiatives?
Durai: Correct. Yeah, both have to agree. And again, the level of autonomy, if you look at it, is not an IT decision; it's a business decision. They should say, "Okay, this is the level of autonomy." I haven't seen, or I haven't come across, a case where a complete business process has a hundred percent autonomy; that's too risky, right? With respect to the consumers, and with respect to the company's internal operations. It's better to level-set: I'm going into this particular initiative with 40% autonomy, or 60% autonomy, and the rest will be human in the loop, operators in the loop, and decision makers in the loop. So that's an important clarification required for the team to reach that level of goals. Yeah, that goes back to the same answer.
Ben Parker: Yeah, cool. Okay, then I guess: how do you personally decide when to trust AI outputs and when to override them?
Durai: It's an important question. Personally, the technique depends on the situation, right? I look at it from a reversibility perspective: I can make a mistake, but can I reverse it and course-correct? And how well can I understand the model's reasoning capabilities? I come from a causal thinking perspective; that's my background too. So it's very important to ask: are certain inferences made with mere correlations, or is the right level of causation there? Not everything is built with causal inference in the backend, but this level of thinking is required, is what I believe. First, even if I fail, can I reverse it? And is the model behaving reasonably, and do I understand its reasons, right? That's my personal side of how I decide whether I trust AI outputs or not. And if the stakes are high, I would prefer AI to support the judgment rather than replace my decisions. It builds gradually: little by little, I can build up the autonomy in the AI and then start trusting it, right? But it takes time. That's how I approach it personally. Even in my day-to-day decisions I use AI tools, of course, like ChatGPT, or Claude for my coding, but nothing goes beyond my decision, right? It enables, it enhances my productivity. But do I trust them a hundred percent? No. I make my own decisions on certain outcomes and the deliverables I produce.
Ben Parker: So are there particular or specific signals that would tell you the system is helping versus misleading?
Durai: Yes. You can spot them early on, when the system is trying to do things you are not expecting it to. It will mislead us if we just think, "Oh, it's getting automated, okay, let the system do that." I have to curb my own enthusiasm about all the outcomes. I have specific goals and specific tasks, and that's what I want the AI to do. I don't want anything more than that; if it does more, I won't take it. That's the discipline we have to follow; at least that's what I learned from my experience. So that's the signal: if anything goes beyond what you asked for, you have to take a vigilant lens, look at it again, and take only what you asked for, the goals. That's why setting the goals and sticking to those goals is very important. Not more, not less.
Ben Parker: It's important then, isn't it, to partner with AI, but you've still got to have good knowledge of the data and what it means for your business, so you stay aligned.
Durai: I see a lot of tools in the market that claim a lot of automation, but I look at them through a skeptical lens, because it's not just hallucination alone, right? Sometimes there's a gap between what I need automated versus what is claimed to be understood and automated. That's where the signal comes from. Yes, productivity gain is great, but not at the expense of customer trust and other things. So I would be very careful about which outcomes I take and what I would use for my customers.
Ben Parker: Yeah, brilliant. And then, if you were advising a leadership team today, what's the one AI decision they'll regret not making in the next 12 to 18 months?
Durai: Yeah, it's going to evolve; there's no single specific answer right now. But I think many teams will wish they had spent more time defining how AI decisions will, or won't, be evaluated across the organization. I think that's important to look at. And then, no tool is future-proof; they keep evolving, and every two months there is some new level of automation
Ben Parker: Yeah.
Durai: coming. But the clarity around responsibility, and the learning that requires, takes time to build, and that foundation should be in place to make everything else feel more manageable. That's how I'm seeing leadership teams evolve, and that's the advice I'd give them to look at. Return on investment is a glorified term in the AI space, but in the end, what matters is that everybody's interested in efficiency gains, and if that's what we're driving towards, then that cannot come alone. It has to come with the right level of trust and governance. That would be my advice, Ben.
Ben Parker: Yeah. You mentioned at the start that there's so much hype, and I think it's important for leaders, when speaking with businesses, to manage expectations, because there's so much hype around ROI. Like you said, businesses want this, but as you just touched on, projects aren't done quickly. It takes time; you need to think things through, and you need to go through a learning phase, because bringing AI into a business is like embedding it into a new company, sort of thing. It needs to be embedded appropriately and effectively, basically, to make sure you're getting the right data. Because if you go down the wrong rabbit hole, you're going to be in trouble.
Durai: Yeah, that's true. Yeah, that's it.
Ben Parker: Really appreciate your time. Thank you for the conversation.
Durai: No problem, Ben. I enjoyed the conversation; I'm also getting a lot of insights while discussing this. So thanks for putting this together.
Ben Parker: Okay, cool. And listeners, thanks for listening. Hopefully you're finding the conversations useful. Again, please make sure you follow or subscribe to Data Analytics Chat so you don't miss future episodes. Thanks for listening, and we'll see you next time.
Durai: Thanks, Ben. Bye.