Data Analytics Chat
🎧 Welcome to Data Analytics Chat – the podcast where data meets real careers.
Data isn’t just numbers; it’s a journey. Each episode, we explore a key topic shaping the world of data analytics while also discussing the career paths of our guests.
This podcast brings together top experts to share:
- Insights on today’s biggest data trends
- The challenges they’ve faced (and how they overcame them)
- Their career journeys, lessons learned, and advice for the next generation of data professionals
This is for anyone passionate about data and the people behind it.
👉 Hit subscribe and join us on the learning journey.
Connect with host - https://www.linkedin.com/in/ben---parker/
Challenges, Successes, and Transformations of Generative AI
In this episode of Data Analytics Chat, host Ben Parker welcomes Viji Krishnamurthy, VP and Head of AI and Generative AI at Oracle. With over two decades of experience spanning startups, Samsung, and Oracle, Viji shares her journey from academic research to leading enterprise AI initiatives.
Viji discusses pivotal career moments, the transition from scientist to product leader, and the importance of continuous learning and collaboration.
This episode's data topic looks at the challenges, successes, and transformations of generative AI in the enterprise. The conversation explores the evolving role of data professionals, the challenges of implementing generative AI at scale, and the cultural shifts required for organisations to fully harness AI's potential.
Viji highlights the need for robust data platforms, evaluation frameworks, and feedback-driven learning, while offering practical advice for aspiring leaders and insights into the future of business.
00:00 Becoming Managers of Agents: The New Workplace Paradigm
01:19 Welcome & Introduction to Data Analytics Chat
02:08 Viji Krishnamurthy’s Career Journey
03:47 Influential Mentors and Defining Moments
07:41 Transitioning from Scientist to Product Leader
10:55 High-Value Problem Solving in Data Science
13:23 Overcoming Career Challenges and Breaking Stereotypes
18:19 The Evolving Role of Data Professionals
21:50 Key Skills and Mindsets for Leadership
27:43 Continuous Learning and Staying Ahead in AI
32:28 Quantum Physics and the Future of Intelligence
38:08 Advice for Aspiring Leaders
41:08 Generative AI: Enterprise Challenges
44:08 Success Stories and Business Value
48:52 Addressing Risks: Hallucinations, Bias, and Trust
52:09 Cultural and Organisational Shifts for GenAI Adoption
57:05 The Future: Agents, Automation, and Collaboration
60:00 Closing Thoughts and Predictions
Thank you for listening!
Each one of us is essentially becoming a manager of multiple agents, where each agent brings a specific type of expertise and we use that expertise to make our day-to-day work better. If you put a hundred people in a room and ask for ideas on a particular problem, you'll get a large variety of ideas. Whereas if you ask a generative AI model the same question a hundred times, you're not going to get that large a distribution of ideas. We are all becoming managers of agents. If you're a knowledge worker, you could imagine having a bunch of agents always active on your laptop, assisting you with the work that you're doing.
ben parker:Welcome to Data Analytics Chat, the podcast where we discuss the world of data, AI, and the careers shaping it. My guest today is Viji Krishnamurthy, VP and Head of AI and Generative AI at Oracle. In this episode, we'll explore her career journey, the challenges she's faced, and the insights she has learned along the way. Today's data topic will look at the challenges, successes, and transformations of generative AI. Viji, welcome to the podcast.
viji krishnamurthy:Thank you for having me.
ben parker:I'm glad to have you on and yeah, we've got a hot topic, I should say, in the market right now.
viji krishnamurthy:Yeah, sounds good. Looking forward to it, Ben. Yeah.
ben parker:So do you wanna start off with sharing your career journey for the listeners?
viji krishnamurthy:Yeah, happy to. After completing my PhD, I started my career as a scientist working on a variety of problems, mostly ML problems at the time, and integrating those solutions into a variety of enterprise applications. After working as a scientist building those kinds of applications, I moved into IoT, which was at the beginning of its cycle. Because IoT generates a large amount of data, I wanted to work in that area to see how we could transform the data into actionable insights. So I worked in IoT for a while, both at startups and at Samsung. Then I got an opportunity to lead sciences and product for enterprise IoT applications at Oracle. I joined Oracle to lead the sciences and worked on a number of IoT applications covering asset monitoring, production and manufacturing, and transportation and logistics. After working on IoT, supply chain, and enterprise ERP applications at Oracle, I got an opportunity to lead AI services, which we were just starting to build as part of Oracle Cloud Infrastructure. I led the AI services product, and then we moved into generative AI services, which we started to build. I led the generative AI outbound for a while, and now I lead generative AI for industry applications at Oracle. I've been at Oracle for nine-plus years, and prior to that I worked at Samsung, at startups, and at Philips. That's my career.
ben parker:Brilliant. And what got you into the industry? Out of interest?
viji krishnamurthy:So you mean right after my PhD?
ben parker:Yeah, what was your interest? How did you fall into the industry?
viji krishnamurthy:Right after my PhD, the majority of my classmates went into academia, and I was one of the very few people who went into industry. During my PhD I learned so many math techniques, both the fundamentals and how to apply different machine learning techniques, how to do neural network based model building, and so on. I was more curious about how I could apply these techniques to practical problems at enterprises, and that's why I chose to join industry rather than academia. But nowadays I continue to contribute to academia as a guest lecturer at local universities, and I go back to my alma mater once in a while to connect with students.
ben parker:Okay, amazing. So you've progressed up the career ladder into leadership. Was there a part of your career where someone opened the door for you and gave you an opportunity, or was it natural progression?
viji krishnamurthy:Definitely, all of our careers are influenced by opportunities and by people opening up doors for us. I've had multiple influential individuals across my career, both in academia and in industry. For example, the reason I ended up doing a PhD was that my master's thesis professor was very influential in convincing me that I should, because I was very interested in understanding techniques and combining them to solve various kinds of problems, and he was very encouraging. Similarly, during my industry career, a lot of my managers have been very influential in shaping my career. When I was a scientist leader, one of my managers said, "You are really good at understanding customers' problems and converting them into math problems. Maybe you should try doing product management for a while, because that will really help shape your career and your understanding of industry problems." That ended up influencing me to switch from sciences leader to product leader, which in turn helped me grow into a general manager type of role, where I now lead sciences, product, and engineering team members in identifying, discovering, and solving problems of various kinds. So definitely, a lot of people have contributed to where I am today.
ben parker:Okay, amazing. So would that be a defining moment, or is there something else that's really impacted your career?
viji krishnamurthy:I would say there are two defining moments that really shaped my career and the way that I look at it. One was right after my PhD. I had the opportunity to work in various kinds of industries, but I chose the optoelectronics and microelectronics industry, which ended up being one of the defining moments, because that industry is characterized by large amounts of data. The wafers take so many steps to get processed, and you have so much process data coming from every single piece of equipment. How do you use all of that data to predict the quality of the product, the kind of product you can build, or how to allocate certain products to different industries? All the ML and heuristics based models I was able to build there really shaped my understanding of how math techniques could be used to solve a variety of problems: manufacturing, distribution, pricing, as well as product fit. That continues to shape my understanding even today. Because I was exposed to such complex enterprise processes, it is easier for me to understand other industries and hold an equivalence framework in my mind for how e-commerce, banking, or healthcare operates. So I would really say that's a defining moment.
The second defining moment is that every ML, heuristics, and AI solution I built was always being integrated with different kinds of applications from SAP, Oracle, and others, and the opportunity to work at Oracle and embed those ML and AI solutions inside the product has been a really good defining moment as well. The opportunity to come to the product team and build various kinds of ML, AI, and now gen AI solutions that are part of banking, healthcare, transportation, communications, retail, and back office applications like ERP, SCM, and CX has definitely been one of the defining moments in my career.
ben parker:Brilliant. I think that product role has probably had a massive influence, because there's a lot of talk in the industry that, for data scientists, there are tools out there that do the heavy lifting now, and data science is becoming more business focused: how can we impact the company more? It's not as technical now; it's more product focused. So do you think that gave you a more all-round approach to development and creating solutions?
viji krishnamurthy:Yeah, definitely. Research is fun to do, but the research needs to meet the enterprise problem at hand at some point for it to be really useful. Both as a scientist myself and as a scientist manager, I've always looked to my team and to myself to work on problems of value, problems that generate value. In some cases it could take a longer research phase before you can experiment on real problems to make an impact. But otherwise, as a scientist in an enterprise environment, you're always trying to find which enterprise problems are of high value and how to use existing techniques, and combinations of techniques, to really solve them. And even after you create that solution: what is the gap? Can those gaps be filled easily by the business users? And if so, is the solution still viable to ship as part of the product? This is the kind of process we follow, and I think it continues to apply as the models become bigger and better and able to handle more general tasks, the way we have with generative AI models.
ben parker:Brilliant. And I like how you said to focus on the high-value projects, because that's what makes the biggest impact. There are a lot of projects that need to be done, but at the end of the day you still need to stick to business basics, whether that's improving efficiencies or increasing profit. You need to make the most impactful decisions, because that's going to move the needle the quickest, isn't it?
viji krishnamurthy:Yeah, absolutely. There are a lot of problems to solve, but you're always going to prioritize based on the value they can provide, right?
ben parker:What were the tough challenges you faced along the way?
viji krishnamurthy:One tough challenge across my career is that I'm always in the position of proposing novel solutions to the problems at hand, and that means the responsibility to prove they work is on me or my team. That's a challenge you continue to face. I've been in ML, AI, and now generative AI, whether developing solutions, working with strategic customers on solving their use cases with these technologies, or introducing ML, AI, and gen AI based features in the product, so this challenge has been part of every phase of my career. But I enjoy it in a way, because it helps me learn about new technologies and keep pace with them, and also lets me help my team and the extended team with educational sessions: helping them understand how this can really solve the problem at hand, understand the causes of resistance, and address them with a shared goal and a shared benefit from the targeted outcome. It's a challenge, but an enjoyable one.
The second challenge is that I come from a somewhat unique background compared to the mainstream AI leader. I have a PhD, I read papers, and I understand and experiment with these model techniques, and I've also been a product manager building products grounded in AI, ML, and gen AI technologies. You very rarely see a scientist turn into a product manager and then into the kind of general manager role that I'm in. Generally, whoever I talk to boxes me in as either a sciences leader or a product leader.
And it's a challenge in a way, because they expect you to be one or the other. But I also enjoy getting boxed in, and then jumping out of the box by showing how I sit at the intersection of sciences, product, and enterprise challenges. It's something I do with pretty much everyone I talk to: customers, partners, and internal and external teams. So these two challenges are something I handle on a daily basis, and along the way they've also shaped me, helping me look for the right problems and the right way to present those problems and my ideas.
ben parker:I think it's funny how people get pigeonholed into certain spaces. Everyone's unique; everyone's got different talents. Look at data roles now, it's so diverse: people in leadership positions, product, technical roles. It's so broad now, with so many different roles. I think it's more of a learning thing; you need to understand people better to know what their capabilities are.
viji krishnamurthy:Yeah, and there is room for everybody in this, particularly with the way the models are getting better and better. We need more and more understanding and critiquing capability from us, the users of the models. As a result, I think there's so much space for everybody to participate.
ben parker:Yeah, I agree. It comes across all parts of life, doesn't it? People get pigeonholed based on others' experiences; if someone's seen certain individuals do something, the memory sticks. But people need to be a bit more open-minded, because everyone's different and everyone's had different experiences in life.
viji krishnamurthy:Yeah. We are all pattern recognition machines, right? That's System 1: you talk to somebody, you immediately recognize a pattern, and you box them into it. Then System 2 kicks in and helps you learn more about that individual, and you start to see, okay, they're not in that box; they can contribute here, they can contribute there, they have knowledge here. That's how we act as individuals. I expect that to happen in my interactions, and I actually work around it.
ben parker:So you jump out of the box and surprise them.
viji krishnamurthy:Yeah. If I'm talking to a product leader, they don't expect me to know a lot about the sciences. And if I talk to a sciences leader, they don't expect me to know too much about the sciences either, because they think I'm a product leader. But after a little bit of talking, people get to see, okay, you can do this as well as that, and that helps us make progress on the problem at hand, or the topic we want to make progress on. Overall, I think I'm getting better and better at jumping out of the box quickly enough to make the interactions a lot more interesting and productive for all of us.
ben parker:Amazing, I like that. But I also think, with data being a hot topic now: 15 years ago everyone wanted to be a software developer, and now everyone wants to be in the data field. The roles are going to keep evolving, technology is going to evolve, so people are going to be wearing multiple hats. There are going to be a lot more surprises.
viji krishnamurthy:Yeah, definitely. This is something that comes up quite a bit, right? How do we see the data scientist, applied scientist, and product leader roles evolving? If we expect all of us to be managers of multiple agents, then we are essentially transforming our work: instead of generating content and solving particular problems ourselves, we are distilling the work so that LLMs, or generative AI models, can understand it and produce content for us. We become evaluators and critics of that work, so we can keep critiquing and correcting it and raising its quality. It applies to pretty much everybody. If you're a data scientist who used to code a lot of data transformations, you're not coding those things anymore; you're working with agents that transform the data into the form you can use. Similarly, if you're a software developer, you're using generative AI models to develop the code: you distill a particular expectation, provide it to the model, it produces the code for you, and you become a critic working with the agent to make it better and better. The same applies if you generate marketing content, or if you work in customer success or customer support. Each one of us is essentially becoming a manager of multiple agents, where each agent brings a specific type of expertise and we use that expertise to make our day-to-day work better. That's how everybody's work is getting transformed, and the same applies to data engineers, data scientists, applied scientists, everybody.
ben parker:Yep, it's constant evolution. So what key skills or mindsets have helped you progress into leadership?
viji krishnamurthy:Definitely continuous learning. It's been 20 years since I finished my PhD, and when I was doing it, neural networks were very hard to build because we didn't have enough compute. Then compute evolved with the cloud, and now we can build bigger and bigger models with transformers. So learning continuously about what is feasible, and experimenting with those learnings, has been one of the key skills, and I highly encourage everybody to do it. The second is sharing and educating constantly. I work with mixed-talent teams: people with software development experience who are new to AI; scientists who are experts in ML and MLOps but still learning gen AI; and leaders who are SMEs in a particular market vertical, be it hospitality, retail, or banking, but new to gen AI and figuring out how it can really help them. Because I work with mixed talent internally as well as externally with partners, sharing and educating constantly has been one of the key skills for me to be good at what I do. The third is being passionate about solving problems of value with tech. That means putting ourselves in the shoes of the customer, internal or external, understanding their problem, understanding the value of solving it, and then figuring out how that problem can be solved with the combination of tech at hand, while always being open to helping others with the problems they're working on. I'm a very approachable and collaborative leader: I talk to scientists, engineers, product managers, and sales team members across the company, learn from them, and help them with the day-to-day problems they're working on.
That also helps me learn and shape my ideas on where this technology can help, and on what the gaps in the technology are as well.
ben parker:Brilliant. And with that continuous development, do you set yourself time each day, or is it ad hoc? How do you go about it?
viji krishnamurthy:Oh, great question. I make sure I read about five to seven research papers a week, so I have fixed time slots in my calendar. For example, my Saturday mornings are for reading papers, and I usually read papers on Wednesday nights as well, because that lets me noodle on what I'm reading for a couple of days and see whether there are new ideas I can develop from it. I also use some of my Saturday to experiment on those ideas with models of various kinds. I use pretty much all the models across the board and see whether they can solve the problem, whether they hit a hiccup, and whether that gap can be filled by some creative idea. That's what I explore. I also read orthogonal books. I feel my ideas are shaped by reading books on neuroscience, and I'm a novice learner of quantum physics. These things really help me learn about areas that are usually out of bounds for me; I don't have a foundational understanding of them, but they help me come up with new ideas.
ben parker:And quantum physics, do you think that's going to be the next big thing?
viji krishnamurthy:I think so, but there are some fundamental problems of controllable behavior that are going to be quite challenging to solve. If you think about the arc of building intelligent systems, we are in the portion of the arc where we are building better and better systems that mimic human digital behavior, whatever we have put into the digital space. We are able to train the models to understand patterns, memorize some of the content, and reproduce it in a way that looks like human-produced content. That's where we are. The next phase is enabling systems that mimic human behavior in physical space; that's where robotics, autonomous vehicles, and so on are going. We'll continue building that intelligence, and we also have to think about how to take these systems further. Right now most systems are doing System 1 type work: they're not doing reflection-based work, they're basically reacting to the question. So how do you make these intelligent systems do System 2 level work, which is sitting back, thinking about it, collaborating with others, learning, getting ideas from various places, a high-entropy kind of behavior? If you put a hundred people in a room and ask for ideas on a particular problem, you'll get a large variety of ideas. Whereas if you ask a generative AI model the same question a hundred times, you're not going to get that large a distribution of ideas, because it's not a very high-entropy system the way human creativity is. But to solve very tough problems, we need really highly creative ideas.
How do you build models that are that creative, while still producing high-quality ideas that can be combined and furthered into better and better ideas? We need that to solve some of the tough problems we have, and we will be doing that. If we keep at it, we'll reach a point where you can kick off a million of these models together and ask them to work on one of the really tough problems, like a cancer cure or space-based problems. That's the direction we're trying to head in. Working on that will need compute and exploration above and beyond what we have today; that's a conjecture, but if it holds and we need a completely new architecture and a different kind of compute, do we need quantum-based compute? That's where the quantum computing work comes in. Based on my naive understanding, we are still at a pretty early phase of quantum: we can solve simple problems, but even slightly complex problems result in a high margin of error that is unacceptable. So it's going to be something we keep watching, to see what happens during our lifetime.
ben parker:Yeah, fascinating. And there's only a small number of businesses investing in this space at the minute, so we'll see where the future lies. Going back to your earlier point, I love that you're spending time each week to learn; I think it's so important. Has there been a book that's shaped your path or your success, that's really stood out and impacted you?
viji krishnamurthy:Lots of books, because I read about a book a month on average. But a couple have been really instrumental recently. One is The Alignment Problem by Brian Christian, which is a little older, not a recent book, but it's one of those books I go back to often. Every time I read it, it helps me come up with a new set of questions on AI alignment; it's a good book to read. The other book I found really interesting in the AI space, and go back to quite a bit, is A Brief History of Intelligence by Max Bennett. I read it maybe a couple of years ago, and I keep returning to it because it helps me learn about evolution: how the brain evolved and how language came later, and then how we have gotten our current models to learn languages well before being able to do simulation, planning, and reasoning. That's very interesting, because we evolved to have simulation, planning, reasoning, and mentalizing before language, whereas the systems we are building are taking a different path. It helps me come up with new questions to explore whenever I reread it. On quantum physics, I'm also reading quite a few books, because there are some common questions that both the quantum and AI spaces are asking. So quite a number of books have shaped my thought process, and they continue to. I also have a lot of friends recommending books all the time, which helps me find the books I want to read.
ben parker:Yeah, I think reading's so great. You're looking for that one snippet you can apply to your work, and it can make a massive difference. It's constant development, isn't it, really?
viji krishnamurthy:Agreed.
ben parker:What advice would you give someone aspiring to become a leader?
viji krishnamurthy:I talk to a lot of graduate students periodically, and this is what I tell them. First, have your own sense of success, your own definition of it, because chasing the standard career ladder may not result in the kind of fulfillment we all need. I focus on what would make most of my daily life enjoyable. Having fun a hundred percent of the time is impossible, but is most of my daily life interesting and enjoyable, and does it trigger curious ideas for me? For me, that is my sense of success, so I encourage people to have their own. Second, know your own strengths and strategize around them to find your own career path. I encourage people to do that because there is room to define our own careers in any industry; you don't have to follow a specific path. Bringing your own strengths and building around them is the right way to go. The third I already talked about: learning, imagining, and experimenting constantly. It has helped me immensely, and it's advice I would definitely give everyone. Continue to learn, continue to imagine solutions, combine ideas together, and then experiment to see how they pan out.
ben parker:Brilliant. Some great advice there. Okay, cool. So let's move on to the data topic, and it's obviously going to be an interesting one. Everyone's talking about it and everyone's trying to solve it. So, generative AI then: what have been the big challenges enterprises face when implementing it at scale?
viji krishnamurthy:So I would say there are three broad challenges, right? One is getting the model to understand and work with enterprise content. Enterprise content is quite diverse, and the current models are not built on it; they're built on online data, and online data is not exactly representative of enterprise content. Compared to online data, enterprise data is a lot cleaner, but at the same time it's not used in the pre-training phase of the model. During pre-training, the model understands certain knowledge and also learns the way to put that knowledge together; there is algorithmic knowledge gained during pre-training, as well as information. But when we feed in enterprise data, we feed it as external knowledge, either as a prompt or as a knowledge base for the model to retrieve from and then use. And this is working okay for some use cases, but it is insufficient. How do you really make these models understand enterprise knowledge, use it effectively, and produce high-quality output? That is one of the challenges we face with these models, and it's definitely a challenge that everyone is working on, and we are too. The second challenge is deploying these systems in such a way that they learn continuously. Today, these models don't learn continuously, right? In an enterprise environment, if I'm talking to an agent and I've provided particular guidance on what I expect when I ask it a question, I want that agent to remember it the next time I'm talking to it about the same topic.
You want it to continuously learn not only from an individual, but also from the community of users of the agents, so that it gets better and better and helps with higher efficiency on enterprise problems. Today we don't have that, and it's an active research area; there's quite a bit of fine-tuning and RL-based work, as well as instruction tuning and other methods evolving. But it is still one of the challenges to have a continuously learning system. The third challenge is consistency, reliability, and accuracy of the output. You have to make sure these models produce the variety of output that is needed, but at the same time that the answer isn't changing every time you ask the same question. We are addressing this, as is the industry, with a number of evaluators and verifiers: verifiers based on ML models, rules-based systems, as well as other LLM-based verifiers. But this is also one of the big challenges: how do you really produce consistent, reliable output, and also provide some confidence indication for that output to the user, so that the user could actually act on it or further their exploration with citations, rephrasing the question, et cetera? So I think these three challenges are actively being worked on, but they definitely exist today.
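The mix of rules-based and model-based verifiers Viji describes could be sketched roughly as follows. This is an illustrative sketch only, not Oracle's implementation; the verifier functions, the citation marker, and the acceptance threshold are all hypothetical choices for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    name: str       # which verifier produced this verdict
    passed: bool    # did the output satisfy this check
    score: float    # confidence in [0, 1]

def rule_length_check(output: str) -> Verdict:
    # Rules-based verifier: reject empty or runaway answers.
    ok = 20 <= len(output) <= 4000
    return Verdict("length_rule", ok, 1.0 if ok else 0.0)

def rule_citation_check(output: str) -> Verdict:
    # Rules-based verifier: require at least one citation marker
    # (the "[source:" convention is invented for this sketch).
    ok = "[source:" in output
    return Verdict("citation_rule", ok, 1.0 if ok else 0.0)

def evaluate(output: str,
             verifiers: list[Callable[[str], Verdict]],
             threshold: float = 0.8) -> tuple[bool, list[Verdict]]:
    """Run every verifier and gate the answer on the mean score."""
    verdicts = [v(output) for v in verifiers]
    mean_score = sum(v.score for v in verdicts) / len(verdicts)
    return mean_score >= threshold, verdicts

ok, verdicts = evaluate(
    "Refunds take 5-7 business days. [source: refund-policy.pdf]",
    [rule_length_check, rule_citation_check],
)
```

In a real system an LLM-based verifier would slot into the same list as another `Callable[[str], Verdict]`, which is what makes the combined, non-gameable ensemble possible.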
ben parker:Okay. We'll look at some success stories next, but are there any common themes among the people who are overcoming these challenges? Are they doing certain things? Are they moving quicker?
viji krishnamurthy:Great question. One theme I'm seeing is a new type of data platform emerging. We used to have data platforms that basically helped pool all the data and make it available. Today, data platforms are starting to focus on data enrichment, preparing the data for agent use. The data is no longer prepared only for human understanding and human-level use; how do we create extensive annotations and extensive descriptions of the data so that agents can be exposed to it and utilize it for answering questions? So on the data platform side, I see new ideas emerging. The other area I see is the evaluation platform, and I'm putting this in a very broad bucket. Essentially: how do you evaluate a particular agent's output? It could be from different dimensions; you might distill the output into three or four different categories and then have verifiers for each of those categories. How do you build those verifiers? Can they be user-defined functions, or based on other ML and AI models? How do you build them, how do you maintain them, and how do you use these evaluators on a continuous basis for tracking performance and improvement? And also, how do you simulate? Given that I have created an agent, I put it in simulation mode, create another agent to question it, completely simulate various kinds of questions, and see how well the evaluators rate the output of the agent, and whether the agent is production-ready or there are topics on which it underperforms.
This kind of evaluation platform goes beyond just evaluation: enabling evaluation as well as tracking the actual KPIs coming out of the evaluators, and then rating whether the agents are doing well at time equals zero as well as throughout production. That kind of evaluator platform is also evolving quite a bit. The third area, which I think is at an early phase but where I would love to see more and more effort, is how we cohesively collect user feedback data. It could be just thumbs up and thumbs down; it could be detailed feedback; or it could be a rephrasing of the question, which is an indication that the user's intent wasn't understood. It could also be new knowledge provided by the user as part of the interaction. There is so much rich data coming from the user. How do we collect that, and how do we transform it into a type of dataset that could be used for continuous learning? Whether that's transforming it into SFT data so you can do fine-tuning, or into policies or reward functions for your RL, or into a knowledge base that the agent retrieves from and uses as part of the prompt. This sort of feedback collection, to make the agents truly collaborative, is also evolving, and I see one or two startups starting to work on it. So these are the three common themes I'm seeing as ideas that will potentially help us solve these challenges.
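Routing raw user feedback into SFT-style training records, as described above, might look something like this minimal sketch. The signal names, record schema, and routing rules are assumptions made for illustration, not a description of any actual product's pipeline.

```python
def feedback_to_sft(question: str, answer: str, feedback: dict):
    """Turn one user interaction plus its feedback into a training record.

    feedback: {"signal": "thumbs_up" | "thumbs_down" | "rephrase" | "correction",
               "detail": optional free text (e.g. the user's corrected answer)}
    Returns a prompt/completion record for fine-tuning, or None if the
    interaction should go to human review instead of training.
    """
    signal = feedback.get("signal")
    if signal == "thumbs_up":
        # Positive signal: keep the model's own answer as the target.
        return {"prompt": question, "completion": answer, "label": "accept"}
    if signal == "correction" and feedback.get("detail"):
        # The user supplied new knowledge: prefer their text as the target.
        return {"prompt": question,
                "completion": feedback["detail"],
                "label": "corrected"}
    # Thumbs-down or rephrase: an intent miss; route to review, not training.
    return None

record = feedback_to_sft(
    "When is our fiscal year end?",
    "The fiscal year ends May 31.",
    {"signal": "correction", "detail": "The fiscal year ends June 30."},
)
```

The same router could instead emit reward-model pairs for RL or upserts into a retrieval knowledge base; the point is that each feedback type maps to a different downstream use.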
ben parker:Okay, cool. Some great insight there, thank you. So, can you share a success story where generative AI has delivered clear business value?
viji krishnamurthy:Yeah, so generative AI models today, the way they are, are extremely valuable in multiple efficiency-focused use cases. As a result, you see a lot of assistant- and advisor-type agents released across the board. I highly encourage you to look at our Oracle AI World keynotes, because all of our leaders have presented multiple use cases. We have released about 600-plus agents across our applications, and if you look at them, they're all either assisting with the right information based on natural language interaction, or advising on the best course of action one could take based on the history, given the current business status. So pretty much all the chatbots are getting transformed into knowledge assistants and advisors. We are seeing this across the board in customer support, assistance, ticketing agents, et cetera, and you see quite a bit of efficiency gained. We have even published multiple blogs showing more than 25% ticket deflection from some of these agents, because we are able to auto-deflect and auto-answer the questions coming up based on the history. That is one large area where we see a lot of success stories from our customers. The second area is content generation and analysis. If you look at the kinds of agents we have released, a substantial portion of them assist with auto-summarizing an uploaded document, auto-updating a transaction based on an uploaded document, or auto-generating a draft. It could be a draft document, a draft email, or a draft contract. Our customers have talked about how this has enabled them to save a substantial portion of their time in completing those tasks. So it is content compilation and summarization.
It could also be various kinds of report generation and others. So this is another area where we are seeing quite a lot of success stories. The third area where we are seeing huge success is coding. When you distill your coding requirement into specific tasks, we are seeing generative AI models be really good at generating good code. But you have to distill the requirements into separate, specific tasks, and you also have to provide specific coding policies, if you have them per your enterprise requirements. We have a product called Oracle Code Assist that is used in developing pretty much all of our products internally, and we are seeing our customers using it as well. In addition to code generation, we've also seen a lot of success with document generation, test suite generation, release report generation, and updating work status based on an LLM-based summary of work completions. In all of this we see quite a bit of success. So I would say these are the huge success areas at this point, and this affects pretty much everyone across business functions: we've seen these agents and advisors used by sales and business development, product development is using coding, content generation is being used by marketing and product management teams within the company, and we also see our customers across the board, with business users from different functional areas using the agents and advisors of various kinds that we have released.
ben parker:Okay. It's interesting to see, and obviously there are going to be a lot more successes moving forward as well. A lot has happened, hasn't it, really? Even in the last couple of years, there have been massive, dramatic changes.
viji krishnamurthy:Yeah, definitely. We've made a lot of progress, like you say, in adopting this technology and understanding how it can solve enterprise problems in the last couple of years.
ben parker:So how are organizations addressing the risks around hallucinations, bias, and trust when using generative AI in production?
viji krishnamurthy:I already talked about one of these points: evaluate. I keep saying it because we do a lot of evaluation during the design time of these solutions, and we are also encouraging our customers to evaluate extensively. You have to evaluate from the various aspects where things could go wrong, from the overarching goal's perspective. That's one. The second is designing and deploying verifiers, and not just LLM-based verifiers, because LLM-based verifiers can be gamed in some situations. So we deploy a combination of LLM and non-LLM-based verifiers, and we also distill the output into multiple categories so that you can indeed build a non-LLM-based verifier that evaluates that aspect explicitly. Designing and deploying the correct verifiers is a huge way in which we are able to address this risk. The third is collecting feedback and continuously monitoring the behavior of the agents: for every question asked of the agent, recording what was asked, what was answered, and what the feedback was from the user; using the evaluator to rate that answer; and seeing whether there is a drift in the agent's behavior due to new document additions, a new user base, or new types of questioning behavior. All this continuous monitoring is also helping us de-risk some of the issues you talked about. The other is a periodic update of the system based on the feedback: retuning the pipeline, retuning the prompt, and also updating the guardrail topics based on the feedback, to ensure that some of these questions are not answered, or that some topics are put into the guardrail list. These things really help us quite a bit. So I like the way we are looking at our agents.
We design an agent, do quite a bit of verifier design, evaluate extensively during design time, and deploy. Then during production mode, we collect a lot of feedback, as well as the questions and answers and the corresponding evaluator results for those answers, so that we can continue to monitor whether the agents keep behaving the way we assessed during design time, or whether there is a drift, or specific topics where the agent doesn't work well. This helps us understand whether the current agent design is sufficient, or whether we have to add a new, focused agent for a topic it isn't able to answer very well. These kinds of periodic updates are also something we do, and I see this as how we are able to succeed in de-risking some of these hallucination, bias, and other issues.
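The drift monitoring Viji outlines, comparing production evaluator scores against the design-time baseline, could be sketched as below. This is a simplified illustration; the baseline value, window size, and tolerance are made-up parameters, not figures from the conversation.

```python
from collections import deque

class DriftMonitor:
    """Track evaluator scores for a deployed agent and flag drift when
    the recent average falls well below the design-time baseline."""

    def __init__(self, baseline: float, window: int = 100,
                 tolerance: float = 0.1):
        self.baseline = baseline      # mean evaluator score at design time
        self.tolerance = tolerance    # allowed drop before we flag drift
        self.scores = deque(maxlen=window)  # rolling production scores

    def record(self, score: float) -> None:
        """Log the evaluator's rating of one production answer."""
        self.scores.append(score)

    def drifted(self) -> bool:
        """True if the rolling mean has sunk past the tolerance band,
        e.g. after a new document batch or a new questioning pattern."""
        if not self.scores:
            return False
        recent = sum(self.scores) / len(self.scores)
        return recent < self.baseline - self.tolerance

# Scores dipping after (say) a new document addition:
monitor = DriftMonitor(baseline=0.92)
for s in [0.9, 0.91, 0.75, 0.7, 0.72]:
    monitor.record(s)
```

A flagged drift would then trigger the periodic updates mentioned above: retuning the pipeline or prompt, or extending the guardrail list.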
ben parker:Okay, fascinating. So, I hear a lot of stories about companies going all in with generative AI, and others being a bit cautious. What cultural or organizational shifts are needed for enterprises to fully harness gen AI?
viji krishnamurthy:This is a really important question. Essentially, like we talked about, we are all becoming managers of agents. If you're a knowledge worker, you could imagine yourself having a bunch of agents always active on your laptop, assisting you with the work you're doing. Similarly, if you're a factory operator, or any kind of operator, you'll have agents helping you. As a result, when we are transforming ourselves from doing the work to distilling the work for the LLM, evaluating the LLMs, and nudging, teaching, and correcting them to do a better job, then we have to become good at critiquing, not other people's work, but the agents' work, essentially. And that is a skill development compared to actually generating the answer, because answering is one type of skill, but critiquing others' answers, and then helping the answer get better and better, is a different kind of skill. So there is a cultural shift happening, looking at each one of us and saying: hey, can you actually critique an agent's work, make it better, and teach the agent to get better and better at what is expected from that task? That is the cultural shift we are all making. The other shift is in how you provide information to the agent: not only what happened in the business, but also why it happened. We are all becoming data curators, in a way. Say you're an e-commerce company, and you just provide the fact that this product was shipped to this customer; that is the kind of transaction you provide to the agents. The agents would really only know what was shipped to whom.
They won't know why it was shipped to that particular customer when there was only a limited supply, why three customers were chosen to get the product and the remainder were not. You have to start providing the reasoning behind it; only then can we start to expect the agents to really do the business side of the reasoning, to make and recommend the right decisions. So as a result, we are all becoming data curators, in a way, teaching the agent the reasons behind why we recommend certain behavior. This is a shift that is happening slowly: we are not only recording the transactions, but also the reasoning behind the transactions, so that the agents can really use that reasoning to learn and to reason about the new problem at hand. And the third type of organizational transformation I see happening is that we are going from individual expertise to community expertise. We are going to be less and less worried about training individuals on a particular task, because we can always provide agents as assistants to individual employees and team members. That way we can level-set everyone with the same agents and the same set of knowledge, so that they can do their jobs somewhat similarly. This will completely change organizational training, learning, and growth, because, say I'm part of a call center: I'm no longer trying to be as good as the best on my team, because all of us have access to the same set of agents that help us with information about how to solve a particular customer problem.
Now, that means there is an organizational shift happening in how we are going to set goals and expectations and train each of us. It's going from individual expertise to how you train individuals so that the community expertise increases, and this is also a big shift happening from the organizational perspective.
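The data-curation shift described in this answer, recording the "why" alongside the "what" of a transaction, could be made concrete with a sketch like the one below. The record schema, field names, and the example allocation decision are all invented for illustration.

```python
# A conventional system records only the transaction facts; the curated
# record also captures the reasoning an agent could later learn from.
transaction = {
    "order_id": "SO-1042",
    "product": "Widget-X",
    "customer": "Acme Corp",
    "quantity": 50,
    # The curation shift: record why this allocation decision was made.
    "reasoning": {
        "decision": "allocated limited stock to this customer",
        "factors": ["contractual SLA priority", "largest open backlog"],
        "alternatives_rejected": ["pro-rata split across all open orders"],
    },
}

def explain(txn: dict) -> str:
    """Render the decision rationale as text an agent could consume,
    e.g. as annotation fed into a prompt or a knowledge base."""
    r = txn["reasoning"]
    return f"{r['decision']} because of {', '.join(r['factors'])}"
```

With records like this, an agent facing the next limited-supply situation has precedent for the reasoning, not just a log of who received what.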
ben parker:Yeah, I think transformation is the toughest bit, because every business wants to get from A to B, but implementation is just so hard, I think, for people, because you're dealing with people, and as soon as the goalposts move, it's constant change, isn't it? You've got to keep evolving and changing, which I think is the tough bit.
viji krishnamurthy:Yeah, it is tough. But I think the organizations that are making this transformation are the ones that are going to start seeing that kind of benefit as well. Going from individuals to agent managers, from individuals to SMEs who are curating data so agents act better, and from individual expertise to community expertise, and actually working toward improving that community expertise day to day and month to month, will result in those organizations reaping the benefit of this technology. It's tough, but I do see this kind of transformation happening.
ben parker:So is this going to be down to strong leadership, or firms being more creative?
viji krishnamurthy:I think it's not one or the other, per se. Definitely, some of these transformations require a top-down blessing, because it's an organizational change and it's not easy without complete alignment all the way to the top. Some of these are put into motion by top-down leadership saying: this is where we are going. And some of this is also happening from the creativity of the organization itself. I've observed this in my career: you have a team, and three or four people just murmur about something new, then they start working on it, and it snowballs into something that becomes the focus of that team within a quarter or two. That is also happening; the creativity of individuals is snowballing some of these transformations, along with questioning what could be done creatively with this technology. For example, one of the recent things I've seen is: how could we take the graphs we used to build before generative AI models, and use them to transform the enterprise data so that we could teach the agents better? This came from the creativity of individual team members; it didn't come from the top down, it came from the bottom. Same thing with how we record the reasoning behind transactions in a consistent manner; that is coming from the bottom too. So it's a combination of both, for sure.
ben parker:Okay, brilliant. So looking ahead, how do you see generative AI transforming enterprise applications over the next three to five years?
viji krishnamurthy:Yeah. I think in your question, "three to five years" is the key phrase, so I will stick to that. This is where I think this is going: all the enterprise applications are going to be transformed with agents. You're going to do most of your work, if not all, through agentic interaction, versus actually logging into the application, going through the UX workflows, and clicking through them. Agentic interaction will start to do some of the repetitive work initially, and then more and more, is my expectation. As part of this, we'll also start to see personalized app experiences evolve. Both of us may be using the same application, but each of us is using it in a slightly different way, because we have different goals and different intents for each of our interactions. As a result, when it is an agent interface, I feel there is a huge opportunity for completely personalizing that agent: it learns more about your behavior, which is different from my behavior with the agent, and then the agent starts to adapt to the individual it is talking to, is my expectation. So the personalized app experiences will come to fruition. And third: now that personalized agents are assisting me with what I need when I need it, pretty soon the agents will start to learn that I do certain things in a specific way, and ask whether those can be automated. So you'll move from assistance to automation agents, and this requires quite a bit of work, because automation means I need very high accuracy, and I need consistency in the way the agents behave.
And there should be audit trails with which I can actually check how it arrived at that particular set of tasks to be automated, et cetera. It's not going to be easy; there are a lot of improvements needed between where we are and where we need to go. This is like the march of the nines, right? It takes the same amount of effort each time to keep adding to the reliability of these agents: going from 90% to 99%, to 99.9%, to 99.99%, to get very close to a hundred percent. I think there's going to be quite a bit of work needed, but I do expect some of these agents to actually start doing some of the repetitive work automatically, and also to become more collaborative agents for decision-making interaction. Once the majority of transaction creation, transaction updates, and all of these can be automated, assuming we have achieved the reliability, then the majority of the interactions I'm having with the agents are going to be decision-making interactions. That means the agents have to become collaborative with me: they have to collaborate with me, start generating scenarios, help me explore those scenarios, help me understand the benefits and pitfalls of the different scenarios, and help me find a way to solve the problem at hand. Decision-making, collaborative agents will start to evolve, and I believe we'll get there. Then, once the decision-making agents are there, you'll start to try to get into System 2-like behavior, where I don't have to collaborate with the agent in a live mode and ask it various questions.
If there are certain types of explorations that I do periodically, related to financial account reconciliation, or ways I can minimize my investment in certain areas, or ways I can support our customers better, these kinds of problems, then will the agents have learned enough that they can behave more like System 2? They proactively start looking at signals that could result in certain business problems, and start generating recommendations, proactively coming to the business user and saying: hey, I see this happening, this is the signal, and if I project it out, I see X percent of customers are going to be affected, so I recommend these corrective actions. This kind of agent I expect, and again, it's not going to be easy. None of this is going to be easy, because there is quite a bit of gap between where we are and where we have to go, in terms of solving some of the model gaps related to consistency and reliability, and the ability to learn continuously and to consume diverse enterprise data. But as we make progress on this, I expect that in the next three to five years we'll see more and more of these collaborative agents, automation agents, and eventually System 2-type agents that are constantly observing, studying, generating recommendations, and coming to us proactively. Those kinds of agents will come into practical use, I think.
ben parker:Brilliant. Thank you for your insights. I love your passion and your dedication to the industry, and obviously we'll see if your predictions are right. I hope you keep surprising people.
viji krishnamurthy:Thank you, Ben. It was nice talking with you, and thank you for the opportunity to talk on this podcast.
ben parker:Brilliant. Thank you.