It was a pleasure hosting the third annual IA Summit in person on October 2, 2024. Nearly 300 founders, builders, investors, and thought leaders across the AI community and over 30 speakers dove into everything from foundational AI models to real-world applications in enterprise and productivity. The day featured a series of fireside chats with industry leaders, panels with key AI and tech innovators, and interactive discussion groups. We’re excited to share the recording and transcript of the panel “CIO Enterprise AI Strategy” with Marco Argenti, CIO of Goldman Sachs; Michelle Horn, SVP & Chief Strategy Officer of Delta; Swamy Kocherlakota, EVP & Chief Digital Solutions Officer of S&P Global; Lari Hämäläinen, Senior Partner QuantumBlack & Digital, McKinsey & Company; and moderated by Ajay Patel, General Manager of Apptio, an IBM company.
TLDR: (Generated with AI and edited for clarity)
Explore how top companies are moving beyond AI experimentation to full-scale enterprise deployment with Marco Argenti (Goldman Sachs), Michelle Horn (Delta), Swamy Kocherlakota (S&P Global), Lari Hämäläinen (McKinsey & Co.), and Ajay Patel (Apptio, an IBM company) during the “CIO Enterprise AI Strategy” panel at the 2024 IA Summit Presented by Madrona.
The panelists focused on three main themes: the importance of data quality and governance as foundational to successful AI implementation, the critical need to bridge the AI talent gap through both hiring and internal development, and the strategic integration of AI into core business processes to drive measurable ROI. They also emphasized the importance of creating safe, compliant environments for experimentation and focusing AI efforts on areas like customer experience, operational efficiency, and developer productivity. The overarching message was clear: successful AI strategies are business-driven, focused on real-world impact, and require careful alignment of technology, talent, and data.
- Mastering Data to Drive AI Success: The panelists emphasized that no matter how advanced your AI tools are, bad data will kill your projects. Start by auditing your data — security, privacy, and quality are non-negotiable. Founders who tackle this early will leap ahead when scaling AI.
- Bridging the AI Talent Gap: As companies scramble for skilled AI professionals, your strategy can’t be just about hiring. Delta’s approach includes building internal talent and leveraging external partnerships to speed AI adoption. Startups can do the same — bring in external expertise and invest in growing internal capabilities to stay nimble.
- AI for Business Impact, Not Buzz: To win with AI, it’s not about flashy technology — it’s about embedding AI into core processes. The panelists made it clear: prioritize AI that solves real business problems, like boosting customer experience or automating high-volume tasks. Look for opportunities to tie AI efforts directly to measurable ROI.
Ajay Patel: I have the pleasure of inviting an esteemed panel of CIOs and strategy leaders to the stage. This is a great opportunity to hear from the business side. We’ve talked quite a bit about technology all morning. We’ve talked about models. We’ve talked about the risk. On a positive note, hopefully, we’ll talk about the benefits of AI and how customers are looking to leverage AI into production.
So, team, since 2022, when ChatGPT really popularized the whole LLM/GenAI revolution in many ways, we’ve seen people move, as we’ve heard all day today, from tinkering and experimentation to starting to move things into production. Rama was talking about how NVIDIA is using it across the board in engineering, etc., right? We did a survey — we surveyed the IA winners as well as some of the CIOs — and we asked them what the challenges are in moving from pilot and prototyping to production. We heard basically three themes that I’ll try to categorize, and hopefully we’ll spend a little bit of time using those as a guide, but feel free to bring in any other topics you may have as you talk through this.
The three themes we heard were, first, about data. I personally had tried to build an AI agent for our platform, and we’re realizing how bad our data is. So data quality, data security, data privacy, indemnification rights, IP rights, lawsuits. There’s a swath of things I’ll put in the category of data. That’s one. The second theme we heard was talent. We can’t find enough talent to throw at the problem; we talked about that demand last night over dinner as well. And the third thing is just integrating into the business workflow, or the integration with my enterprise systems. Those are kind of the three key themes.
And to add to that, I would say what I hear from many of my customers, coming from the business value side of Apptio, is how do I justify the cost, right? Is the ROI there, is the impact there? So I would say the fourth one is really: are you able to tie this to business impact? How are you justifying it to the business? And given the audience has a whole bunch of entrepreneurs here, I’ll leave a last question in terms of what problems they should be selling you, and how they should approach a CIO. Because many of the startups I’ve worked with always say, “How do I address the customer? How do I get them to take my call?” So maybe we’ll leave that for the end, for the many entrepreneurs here.
So maybe I’ll start with you, Marco. In July of this year, Goldman Sachs announced your AI platform. I had a chance to listen to some of your podcasts, and you said that a 10% to 20% gain in developer productivity, on average, was one of the areas you invested in early on with a GenAI code assistant. Maybe I’ll have you talk a little bit about the approach that Goldman Sachs is taking from an AI platform perspective. You’re taking a platform approach. We’ve talked quite a bit about that. Maybe start there. How are you starting to think about getting from pilot to production using a platform approach?
Marco Argenti: Yeah. So Ajay, thanks. It’s a very broad question. First of all, we started from the realization, and I think it’s as true today as it was a couple of years ago, that there are many more things we don’t know about AI than things we do know. We’re starting to form some convictions, and I’m using the word conviction because I don’t want to use the word certainty; there is no certainty, it’s almost intrinsic to the model. And in a case where you have a large amount of ambiguity, but at the same time you’re in this quadrant of maximum ambiguity, maximum potential, I think the thing to do is to create a way for our people to experiment in a way that is safe and compliant. The worst thing that can happen, especially to a bank, is that someone starts opening up accounts with various AI providers, trusting what comes back, and then that thing goes into the hands of clients. That just cannot happen.
And so what we decided to do was, let’s almost create a containment layer around these models. It’s almost like we were dealing with something really powerful and potentially problematic, so it’s like a containment, like a reactor containment, where we implement a bunch of safeties around the models: around infiltration and exfiltration of data, protecting against hallucination, grounding with our own sources of data, and routing the right content to the right model, which also lets us become model independent in a way. And then on top of that, we basically standardize some of the techniques, such as chain of thought or retrieval-augmented generation, so that developers don’t really have to think about the how, because there are lots of tricks and lots of things you do to improve the effectiveness of those tools, and you don’t want to repeat the learning curve for every application. And then on top of that, we put a sort of rapid application development layer.
With that, and through an internal governance process to really understand where to focus, we were able to create dozens of POCs. And out of that, we emerged with four main use cases, which is where we’re deciding to invest now, in areas such as developer productivity, document management, kind of a banker copilot, so to speak. I’m not going to go into the details, but the approach has been bottom up and top down, with a containment layer that allows us to, in a way, let the creativity unfold while really trying to minimize the downsides.
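To make the containment-layer idea concrete, here is a minimal sketch, in Python, of a gateway that applies input guardrails, grounds a prompt with retrieved internal documents, and routes it to an approved model. The blocklist, the routing table, and every function name are illustrative assumptions, not Goldman Sachs’ actual implementation.

```python
# Minimal sketch of an AI "containment layer": guardrails, grounding, and model routing.
# All names, rules, and the model call are illustrative placeholders, not a production design.

BLOCKED_TERMS = {"client ssn", "account password"}                   # crude input guardrail
MODEL_ROUTES = {"code": "code-model-a", "docs": "general-model-b"}   # model-independent routing

INTERNAL_DOCS = [
    "Policy 12: model outputs must cite an internal source.",
    "Java upgrade guide: all services target JDK 21 by Q4.",
]

def passes_guardrails(prompt: str) -> bool:
    """Reject prompts that mention terms we never want sent to an external model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def retrieve_context(prompt: str, k: int = 2) -> list[str]:
    """Naive keyword retrieval standing in for a real retrieval-augmented-generation step."""
    words = set(prompt.lower().split())
    scored = sorted(INTERNAL_DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def call_model(model: str, prompt: str, context: list[str]) -> str:
    """Placeholder for the actual provider call that sits behind the containment layer."""
    return f"[{model}] answer grounded in {len(context)} internal document(s)"

def ask(prompt: str, task_type: str = "docs") -> str:
    if not passes_guardrails(prompt):
        raise ValueError("Prompt rejected by guardrails")
    context = retrieve_context(prompt)
    model = MODEL_ROUTES.get(task_type, "general-model-b")
    return call_model(model, prompt, context)

if __name__ == "__main__":
    print(ask("When do our services need to be on JDK 21?"))
```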
Ajay Patel: So you started by allowing a lot more experimentation across the organization in this containment layer, but then I’m assuming at some point, you had to choose which of these you want to put into production.
Marco Argenti: That’s right.
Ajay Patel: And so you have a governance process of sorts.
Marco Argenti: We absolutely do have a governance process. We have created groups of people, committees (unfortunately, a word that gets used rather a lot in our environment), with a good mix of people from business and technology, that will, first of all, form a measurement, then form an ROI hypothesis, and then eventually validate what gets funding. In general, and I’m not going to go very long on this one, we see three waves of ROI in this type of investment. One is kind of the wave zero, where you can adopt something out of the box and you don’t have to invest too much in customization. An example of that is code completion and a lot of the out-of-the-box developer stuff. That one is pretty easy to get to positive ROI, because if I’m spending 30 bucks a month on a developer license, that’s essentially, I don’t know, a few minutes of extra productivity in a way.
Wave two is when you actually start to integrate with the company’s data sources, etc., so you have a little bit of a J-curve. And then wave three is when you’re really creating models for specific tasks where you’re betting that there’s going to be a return: things like trading or portfolio management.
Ajay Patel: You called it the high Alpha in your conversation.
Marco Argenti: The alpha stuff.
Ajay Patel: Exactly.
Marco Argenti: Which has obviously intrinsically a deeper J-curve, and then you need to navigate your strategy around those three waves.
Ajay Patel: Excellent. Maybe I’ll switch over to you, Swamy. You are in a very data-rich environment, S&P Global, and you’ve also made some good acquisitions recently. Maybe broadly speaking, how are you looking at your AI strategy? And again, I’ll bias it towards what I call responsible AI, and also talk a little bit about that, so maybe give us a little bit more color from your side.
Swamy Kocherlakota: Yeah. Our AI strategy has three pillars. The first one is, from an employee productivity perspective, what can we do? You heard a lot about developer productivity and analyst productivity. The ROI is a little bit black and white, if you will, for the most part, and some technologies we invest in and some we try to build, because we don’t want to pay the upfront cost. So that’s the first aspect of our AI strategy.
And the second one, which is the most important one, is where is the new revenue coming from? At the end of the day, we, as a public company, have to figure out where the revenue is coming from. So the second pillar of our AI strategy is: what are the net new revenue growth opportunities? And the third one is AI from a defensive-move perspective: where do you have to be? Because your customers expect you to provide that search service and all that, right? So those are the core pillars of our AI strategy.
Much like Marco mentioned, we have a good decision tree around where to invest, because there isn’t any business process that I know of that AI cannot enhance. But the question really is which process you want to invest in, whether it’s for productivity, and, in the case of new revenue growth, how you work with the customer and what your go-to-market strategy is for that growth. Is the customer willing to pay for it? Because with a lot of what’s going on with these copilots and agentic things, the question is who’s paying for it, right? That’s an important question.
To your second point, you have to understand that generative AI gives a probabilistic result. We are used to calculators and computing that are analytical and give deterministic answers: every time you ask a question, you get the same answer in deterministic computing. Because generative AI is by nature probabilistic and statistical, you will have hallucinations. The way we are trying to solve the problem is by restricting it to the data and the context that we give it, so that we minimize the hallucinations. I think this whole area of ethical AI and hallucinations has a long roadmap. We are just at the tip of the iceberg on that.
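As one way to picture the context restriction Swamy describes, here is a minimal sketch of a prompt pattern that instructs a model to answer only from supplied sources and to say it does not know otherwise. The helper names and the stubbed model call are assumptions for illustration, not S&P Global’s system.

```python
# Sketch: constrain a generative model to supplied context to reduce hallucination.
# The model call is a deterministic placeholder; the prompt pattern is the point.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the numbered sources below.\n"
        "If the sources do not contain the answer, reply exactly: I don't know.\n"
        "Cite the source numbers you used.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def call_model(prompt: str) -> str:
    """Placeholder for whichever model endpoint sits behind the enterprise gateway."""
    return "I don't know"  # stand-in response for demo purposes

if __name__ == "__main__":
    prompt = build_grounded_prompt(
        "What was the 2023 outlook for ACME Corp?",
        ["ACME Corp 2023 outlook revised to stable on improved leverage."],
    )
    print(call_model(prompt))
```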
Ajay Patel: Wonderful. Maybe I’ll move over to you, Michelle. You and I were speaking about Delta Air Lines; clearly every one of us is a consumer, and we talked about the experience. Most of you here are flyers and absolutely get touched one way or another by an airline.
Michelle Horn: Thank you for that.
Ajay Patel: I think you had a really nice framework you were talking about in terms of how you think about what AI means for an airline like Delta. Maybe from a customer, we’ve always talked about chatbots. Customer services are really a low-hanging fruit like that and GNA function, and I think you were talking about how we would start looking from a customer service perspective. Maybe give the team a little bit of a view of how’s Delta looking at? And in your strategy role, you have a great role, you buy planes on one side, and you’re looking at AI on the other side, so it’s a pretty broad spectrum of a charter. So maybe a little bit more color in terms of how do you think about AI from a Delta Air Lines perspective.
Michelle Horn: Sure. Thank you for having me; I’m really happy to be here. We are very much about connecting the world, and we think AI is a big part of that. And actually, I like Marco’s three as well, because the way we think about it parallels that. We’ve talked a lot today already about developer productivity and things like that, but the two other big ones for us are the customer side and the operation, and those are a little different. On the customer side, customers are looking for an elevated, caring experience that is increasingly seamless. So anything we can do to get information into the hands of our customers means they can do what they want to do as quickly as possible and fly through their journey.
Right now, for us, that’s a lot about putting information in the hands of our people to support that, and as models get more capable and as we get more confident, I think we’ll increasingly want to be more direct to the customers themselves and also more proactive. The kinds of things we think about can be as simple as when you need to leave the Sky Club to get to the plane, or whether you can take your pet bird if you’re flying to Mexico. These are all questions that have a very specific answer that AI is really well suited to handle. And we have 25 million SkyMiles members, whom we very much appreciate; some are on this stage, lots are in this room and other places. We want to be able to serve you in a way that reflects that we know you and we know what you care about.
Ajay Patel: Personalization.
Michelle Horn: Personalization … Yeah.
Ajay Patel: Absolutely.
Michelle Horn: On the operation side, it’s more complicated, certainly. But just to think about it for a minute: for an airline to get you where you want to go on time, about 10 things have to happen really well 5,400 times a day. We are an industry that is made for the power of AI in making that happen. You can think about the crew, the incoming plane, the maintenance, the bags, the connecting passengers, the weather, the air traffic control routes, the gates; all these things need to come together pretty seamlessly. It speaks to the power of our team and what they do to make us an on-time airline, since all those things don’t go right every time. But what AI can do to help them is really exciting. That is a longer road, but one that has real potential.
Ajay Patel: I think for many of the folks sitting in the room, as you start to speak to the business, you notice no one talked about models, no one talked about the how. They’re all talking about business problems that need to be solved. And who better to weigh in than McKinsey, who gets to sit across all of this. I was talking to Lari; we use McKinsey pretty heavily at IBM, and we use a stat from McKinsey, and I said, “Lari, are we ready for this?” McKinsey has talked about anywhere from $3 to $5 trillion added to GDP because of AI, and there’s a fairly rich document McKinsey has written on how they see this productivity gain driving economies.
Just for reference, the UK economy is $3.1 trillion. So you’re basically going to add something on the order of a UK economy through AI productivity in the years to come, with these thousands or millions of apps being built to assist us. So maybe from your side, Lari, as you look at this and get to work with a lot of the customers, what’s your perspective on where we are in this excitement? I see a lot of experimentation still, and you see people moving, so maybe start by calibrating from your side: what’s the maturity? Where are we in this journey from hype to reality?
Lari Hämäläinen: I think it’s a great idea, Ajay, so let’s stay on that $5 trillion figure, because it’s a mind-blowing figure. The world economy is $100 trillion, right? So what we’re saying is that in the next five years or so, we can actually add 5% to the world economy through pure AI. Of course, there are other effects as well. Then you ask what that means for you guys, just to stay with that figure. Our estimate currently is that that $5 trillion becomes roughly $500 billion, so one-tenth, in terms of revenues that you can capture, you meaning technology companies. So that’s new software and GenAI TAM coming into the market.
And now we come to the question, Ajay, of how you capture that. You’re all entrepreneurs. Well, I think the past technology waves are a good proxy for how this technology wave will play out. And if I may use Delta as an example, I, as a Delta customer, will not pay Delta more no matter what they do with GenAI. I might be more loyal, and maybe I will buy a meal because you make a better recommendation, but I will not pay more. That story has played out time and time again.
So if we look at the early waves of digitalization, if we look at the cloud transition, 90% of the benefits of that $5 trillion are productivity benefits. Then we come to the question of why we haven’t yet moved to at-scale deployment, and I think you made a great point around the operations. When a company like Delta has to deploy this to thousands of flights per day, tens of thousands of people, millions of customers, simply the maturity has not been there yet. We all heard and we all knew that ChatGPT came out 22 months ago, and yes, we had GPT-3 and others before, but the deployment and the reliability at the scale large enterprises require haven’t been there yet.
I think we’re actually at the onset of getting to a meaningful place, but we’re just entering that era, and that’s, I think, one of the biggest drivers. That’s the challenge any enterprise has: can I really deploy this? Even if I believe in the potential, do we have credible case examples? That’s the first question an enterprise asks. Then they will ask you, “Okay, show me the proof of value. Show me the production-type environment where you’re running your tech.” Those cases, I think, are very critical for any enterprise customer, and we’re just getting there now.
Ajay Patel: So Marco, I’ll turn it back to you, my friend. You heard quite a bit about the potential that’s there, and you talked quite elegantly this morning about what you’re starting to see. I love the fact that most GenAI capability today is more of an assistant-type capability, and we’ve talked quite a bit about this agentic infrastructure and how things are going to move from assistance to action. As you look at the financial industry, usually cutting edge, leading the pack in many of these cases, how do you see this evolution playing out? Are you seeing ROI beyond some of the assistant capabilities, like code generation, that you’re starting to see? How are you starting to scale this beyond the initial domain or function? Are you seeing broad adoption, and are you seeing this moving more and more from assistance to action? What’s a rough timeline, if you were putting an hourglass on it, for how this matures over time?
Marco Argenti: I think the headline is when we’re going to be in a position where we can actually specify an outcome and then let the AI figure out how to get there. Right now, it’s very much hand-holding, because you need to essentially interact step by step and guide the AI there. A lot of the use cases that work today are like that. For example, we have an M&A use case where all the M&A documents we’ve created over the years are connected to an AI to which you can ask very complex questions: for example, which companies have had M&A in a specific sector where the final price was lower than the first price, plus whatever other criteria you insert. And then you have to continually refine, step by step.
Where we want to get is a point where you can actually define an outcome. For example, to take a non-financial-services example that is very relevant for my engineering team: let’s migrate from Java 8 to Java 21, and I have these 100 applications, and there are tons of dependencies, because obviously Java 21 requires all the other components to be upgraded, and the outcome should be the following: we want a certain percentage of apps migrated. So we just tell you what the end result is going to be, and then an agent will start a long-running task that will involve other agents and will involve humans, actually humans on demand. And it will say, “Marco,” or to an engineer, “please do this for me.” It’s kind of reversing the paradigm. We have an incredibly complex-
Ajay Patel: The actor shifts from the human to the automation and the human’s there for checking.
Marco Argenti: Absolutely. And we have an example. We have extremely complex scenarios that we go through. For example, when we calculate a portfolio rebalancing: something happens, or a company exits, for example, the S&P 500, and we need to rebalance, or there are certain world events, etc. That’s a long job that can take days, and the outcome specified could be, “Okay, give me a new portfolio that, for example, contains more ESG companies and less international exposure and perfectly tracks the S&P 500. End. Go.” And then that starts a series of tasks, and the agent works with the company and the data at its fingertips. So the three levers the agent has are access to the data, access to people, and access to other AIs. And that’s where we see the 10X advantage.
Right now, we’re talking about where we are today. Listen, how many conferences have I been to in the last two years? I don’t know, say 100. How many of them were AI-centric? Maybe 101, maybe more. And all of this cannot just be a 10% improvement in developer productivity. Come on. It can’t. It needs to be 10X. The interactive, hand-holding part is the 10%. The outcome-based, automatic part, where the AI goes on a one-week journey to save you three years of work, that’s the 10X part, and that’s really where we’re going to see the real ROI.
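A highly simplified sketch of the outcome-driven pattern Marco describes: the caller specifies only the desired end state, and an agent loop plans tasks and uses its three levers (data, humans on demand, and other AIs) until the outcome is met. The planner, the task list, and the completion check are hypothetical stand-ins, not a real agent framework.

```python
# Sketch: an outcome-driven agent loop with three levers: data, humans, and other AIs.
# Planning, execution, and the completion check are all stubbed for illustration.

from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    needs_human: bool = False
    done: bool = False

@dataclass
class Agent:
    outcome: str
    tasks: list[Task] = field(default_factory=list)

    def plan(self) -> None:
        """Stub planner: in practice an LLM would decompose the outcome into tasks."""
        self.tasks = [
            Task("Inventory applications still on Java 8"),                 # lever: data
            Task("Draft upgrade PRs with a code-migration model"),          # lever: other AIs
            Task("Approve breaking dependency changes", needs_human=True),  # lever: humans
        ]

    def run(self) -> None:
        self.plan()
        for task in self.tasks:
            if task.needs_human:
                print(f"ASK HUMAN: {task.description}")  # human pulled in on demand
            else:
                print(f"AGENT DOES: {task.description}")
            task.done = True

    def outcome_met(self) -> bool:
        return all(t.done for t in self.tasks)

if __name__ == "__main__":
    agent = Agent(outcome="80% of the 100 Java 8 apps migrated to Java 21")
    agent.run()
    print("Outcome met:", agent.outcome_met())
```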
Ajay Patel: So maybe to either one of you, Swamy, etc.: how do you get there? Where are you today? You paint a great vision, Marco. We all want to get to this fully automated reversal, where the automation leads and humans are validating, being supported, and getting pulled in. We want to flip the equation. There’s a big social impact to that, which we won’t even get to for now. But how do you get on the journey, and what do you do from an infrastructure perspective? Swamy, you and I talk quite a bit about infrastructure. What foundation do you need to put in place to get there?
Swamy Kocherlakota: Yeah. Much like when we talked about the copilots, you have to have the data ready, and your employees and the talent have to be there. I would like to think that a lot of companies like us have solved that problem; we invested in this over the last two years, or the last year and a half, and I feel pretty good about it. With this agentic thing, there are two things I would say. First and foremost, the prerequisite for taking advantage of this agentic architecture and the vision Marco just shared is that you have to have the right process under the hood. At S&P Global, for example, a lead comes in through our Adobe CDP system, we qualify it for marketing, it goes into Salesforce as a lead, and it eventually gets booked into the GL, and all of that process is very, very clear. Now, with an agentic approach, I can do some process mining on where the opportunities are, where I would flip to the agent, and where I would still have a human at the front. That’s the biggest opportunity.
And the second thing I would say, which is not getting a lot of attention but where I see a lot of value from an enterprise perspective, is this notion that AI is the new UI. In the enterprise, there is a lot of old technology, and in order to solve a problem, if you had to go into the technical details and integrate those systems, thinking about all the edge cases in the UI, it’s a waste of time. Now, I can put an AI front end in front of multiple applications and create a new workflow, because I don’t have to worry about those edge cases. So those two, to me, are foundational: having the right process engineering in place as a foundation, and this concept that AI is the UI and how you can overlay it.
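To illustrate the “AI is the new UI” idea, here is a minimal sketch in which a single conversational front end maps a free-text request to whichever legacy system owns that function. The intent classifier and the legacy adapters are invented placeholders, not S&P Global’s systems.

```python
# Sketch: a single AI front end over multiple legacy applications.
# Intent detection is a trivial keyword match standing in for an LLM classifier;
# the "legacy" adapters are placeholders for real system APIs.

def legacy_crm_lookup(query: str) -> str:
    return f"CRM record matching '{query}'"

def legacy_billing_status(query: str) -> str:
    return f"Billing status for '{query}'"

ROUTES = {
    "customer": legacy_crm_lookup,
    "invoice": legacy_billing_status,
}

def classify_intent(request: str) -> str:
    """Stand-in for an LLM that maps free text to a known intent."""
    for keyword in ROUTES:
        if keyword in request.lower():
            return keyword
    return "customer"  # default route

def ai_front_end(request: str) -> str:
    intent = classify_intent(request)
    return ROUTES[intent](request)

if __name__ == "__main__":
    print(ai_front_end("Show me the invoice history for ACME"))
    print(ai_front_end("Pull up the customer profile for Jane Doe"))
```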
Ajay Patel: Excellent. Michelle, from your side, you and I talked about talent. We’re not a Facebook, we’re not a Meta, right? We’re not a Google or one of these exciting companies, and you’re recruiting for talent in the more traditional airline industry. You and I talked about this. This is a challenge.
Michelle Horn: Yeah, totally.
Ajay Patel: And you’re trying to recruit from Seattle or the Bay Area.
Michelle Horn: I’m totally open to it.
Ajay Patel: So the question really is how a traditional industry, which now has to digitize and leverage AI, goes about recruiting and building talent. What’s your strategy for the talent challenge we talked about, both in terms of the demand you have and how you go about recruiting and closing the talent gap?
Michelle Horn: It’s a really, really good topic. I think for us, there are three things. One, we have to prioritize pretty ruthlessly, right? There are way more ideas than anything that we could do. So a lot of time is spent on where are the really big opportunities that we want to lean in with the resources that we do have? The second one is partners. We partner a lot. That has been a path to speed for us, and that is definitely something that we will continue where we can. So far, that has been a lot on the customer-facing side, for the most part.
And I think the third one is just exposure of our teams. We are pretty intentional in trying to make sure that every experience that we do have, we are sharing across our teams and developing our own talent, and that is a stream in and of itself that we don’t really want to leave just to chance. So those are the three big things that we do. There’s some really interesting problems and people like to work on these problems, and so that works to our advantage. But those are the three things we go for.
Ajay Patel: No, I think it’s great. One of the things we’re all finding is that we have to incubate the talent in many ways and enrich that talent with the domain. One of the things I’m finding in my business is that the product team has great product managers, but they’re not practitioners. And a lot of this is really a data problem, not a capability problem, because we spend so much time on features and functions rather than on the data at the heart of it, so bringing in the practitioner for the training, the modeling, and the tagging is really a different challenge. You have to somehow marry business context, or business knowledge, with the technologists and the data science team we’re standing up. So great point.
Swamy, I know you and I have talked quite a bit about this notion of security and privacy, being a financial company, and particularly one providing information. How are you looking at this whole idea of security, privacy, or, broadly, governance?
Swamy Kocherlakota: It’s a work in progress. The analogy I would give from a security perspective is that when the internet came in, we were only thinking about firewalls at that time, nothing else. But now, to protect yourself from all the threats that are out there, you need application scanning technology, web application firewalls; there are a lot more layers you need to have. Right now, we have a guardrails technology that we use: effectively, there are prompts that guide your model on what to say and what not to say. But that guardrails technology is very, very early. Hopefully, there’ll be a lot more innovation.
So for every answer that I give to my customers, I have to show the lineage of where I came up with the answer, and I also have to be consistent. For us, we are a little bit protected, because I can restrict it and say, “Hey, I gave you this answer or this insight because of these four sources,” and that has been very helpful. But this whole area of AI ethics and truthfulness needs a lot more innovation.
And I would also say one more thing. If you look at all the solutions that are out there and put them against a benchmark (at benchmarks.kensho.com, one of the properties we have, we created a benchmark for the financial industry on what success looks like), the best model is only 90% successful. So I think the art of the possible is around how you take that 90% to 100%, and in that process you are actually making it less prone to hallucination and more accurate, and it also solves some of the other ethical and privacy-related issues that we have.
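Swamy’s two points, answers that carry their source lineage and models measured against a benchmark, can be sketched together as follows. The data, the toy model, and the scoring rule are invented for illustration and are not the Kensho benchmark.

```python
# Sketch: answers carry source lineage, and a tiny benchmark scores a model's success rate.
# The data and the "model" are invented placeholders for illustration only.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list[str]  # lineage: where the answer came from

def toy_model(question: str) -> Answer:
    """Placeholder model that always cites its (fake) sources."""
    return Answer(text="stable", sources=["filing-2023-10K", "analyst-note-42"])

BENCHMARK = [
    ("What is ACME's outlook?", "stable"),
    ("What is ACME's rating?", "BBB+"),
]

def success_rate(model) -> float:
    correct = sum(model(q).text == expected for q, expected in BENCHMARK)
    return correct / len(BENCHMARK)

if __name__ == "__main__":
    ans = toy_model("What is ACME's outlook?")
    print(f"{ans.text} (sources: {', '.join(ans.sources)})")
    print(f"Benchmark success rate: {success_rate(toy_model):.0%}")
```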
Ajay Patel: Lari, maybe from a broader customer and market perspective, what are you seeing as some of these challenges? How are customers leveraging McKinsey and your capability to help move from this initial experimentation production? Do you have a framework, an approach?
Lari Hämäläinen: Yeah, we do. Let me start with the challenges first. So what is the challenge? Maybe I’ll build on you a bit, Swamy. You talked about workflows and process, right? In order to apply GenAI, what we need to do is reimagine the workflow, a workflow being a set of processes, or a process, across an enterprise. You might be surprised, but most companies actually do not have professionals who can, in a credible way, think: if we work like this today, what could it look like with GenAI three years from now, or five years from now? That capability is very much lacking. That’s challenge number one: we don’t have people who can think across domains about these workflows.
Second is an engineering challenge, not an LLM challenge or a challenge with transformers, but an engineering challenge. When we bring a solution, most of the processes today are extremely customized in an enterprise. Even in IT operations, if you use ServiceNow, you’re going to have a customized instance, or a customized instance of Salesforce. Meaning, whenever we apply technology to those workflows, who’s actually going to do that, and how do we then build the transformer-based solutions there?
The third challenge I see is data and APIs. I think we talked a lot about data today. Data in enterprises is scattered and not up to date. Even if we have the best tech, we might not actually get the data in. And it’s also APIs and the system landscape: if we look at cloud penetration today, I think it’s globally 15%, right? 15%. That means 85% of systems are running on-prem, and they’re legacy systems. So how do you actually have proper APIs and proper system access, even if we have the perfect agentic framework and agents?
But the final challenge, I think, Ajay, is leadership, and the reason I mention that is this: McKinsey has studied how many transformations fail, and historically we used to say consistently that 70% of transformations fail. GenAI is, by the way, a transformation. Five years ago, we did the same benchmark with cloud. That failure percentage had increased from 70% to 86%. So the success rate had more than halved, from 30% to 14%. That’s a challenge with leadership: where do we find leaders who are able to, at the same time, envision how their company might look in the future and, in a very pragmatic way, do the things that lead toward that vision?
And that would be my guidance, my guiding thought, for you, entrepreneurs and VCs. During my career at McKinsey over the last 20 years, I have worked with about 100 companies. 50% of enterprise leaders, sorry to say, you should just forget about, and the reason is that they don’t understand the tech deeply enough for what you are trying to do from your position. 40% of leaders care, but they don’t necessarily have the means to act at that point. 10% of leaders are willing to act and really try, if you can show them the proof of value. So the question for all of you here in the audience is: how do you find your 10%, and how do you show them the proof of value to build trust and get them to use your company’s tech, or your portfolio company’s tech?
Ajay Patel: I’ll do a rapid fire to leave five minutes for questions. If you had to give one set of advice in terms of an ROI or the business impact of AI, as you’re starting to look at your priorities for this year, what is an area that you think has the biggest impact from an AI perspective that any of these companies can look to pitch you? So one area that you would say, this is an area of priority for me, here’s where I can see tremendous ROI. What would you guide the team here on? Is it customer success? Is it more R&D? Is it engineering, right? Is it supply chain? We heard across the board. Again, we vary by industry, but is there a theme here that you would say, “This is here and now. This is what you should focus on in terms of driving business impact?” You want to start, Michelle?
Michelle Horn: Today, customer support.
Ajay Patel: Okay.
Swamy Kocherlakota: Yeah, I would say our business, we measure a lot on NPS, Net Promoter Score. So anything that would drive that NPS up, which includes customer success and go-to-market, is the opportunity for us.
Ajay Patel: Right. So top-line customer-facing. Marco, for you? Engineering still, or would you look beyond engineering now?
Marco Argenti: Well, I think you have two dimensions. I think I’ll give you a frame of mind rather than a direct answer. If this is about making people more efficient.
Ajay Patel: It’s pretty broad, yeah.
Marco Argenti: Focus on where you have a lot of people who are also quite expensive. If you focus on people who are expensive but you only have 10 of them, it’s probably not a great place to start, and neither is where you have 10,000 people who are outsourced and paid very, very little. So who is at the intersection of a lot of people and very expensive? And I guess that’s why developers are probably the best. We have 17,000 developers out of 45,000 people or something like that, including contractors, so 12,000 employees, and developers are paid well; they’re not only paid well but also a scarce resource. So I think you need to start where you get the most bang for the buck; that’s pretty clear.
Ajay Patel: Great point. Lari, anything from your perspective?
Lari Hämäläinen: I’m mostly thinking I’m going to hire Marco into McKinsey; the framework he just laid out is such a great one. On a serious note, I love the points that came out here. I would add one thing: we used to have RPA, right? What we’re seeing at McKinsey is that low-complexity, human-in-the-loop tasks (sorry, I’m generalizing a lot now) will be the next to go. Traditionally, we applied RPA; I think we’ll see the next-gen form of RPA, which is a hybrid between traditional automation and GenAI-driven task breakdowns and automatic implementations. That will come in addition to customer success and what Marco said.
Ajay Patel: Excellent. We have probably time for one or two questions. Anybody have a question? A couple here.
Question (Nikita – Neon): So the question is for the folks who are on large software teams. Obviously, we all see the advantage of Copilot; let’s call it approach one. We see RPA; call it approach two. Low-code is probably in that same category. And we’re starting to see software agents that can do more complex software tasks in the core product, or maybe internal tools and whatnot. Where do you see this heading? Is there a percentage distribution across those approaches, and how is that going to change over time, and why?
Swamy Kocherlakota: I can take a stab at it. Look, I think we also understand that in a lot of enterprises, developers are actually modifying existing code a lot. When you’re talking about developers building something new, that belongs to one use case, right? So to answer your question on the percentage distribution, it all depends on whether you’re writing a lot of new code, and the number may vary. If you’re making a lot of changes to existing code and feature enhancements, I think we have a ways to go before being able to fully automate with agents. So the answer on the distribution is that it depends, but what we are really saying is that developer productivity, and what you can do to repurpose that capacity, is a strong use case, for sure.
Ajay Patel: I think the Java upgrade use case we talked about is a perfect example. We call it KTLO, keeping the lights on. As a product organization, my job is to bring KTLO down and increase the number of engineers working on capability and features, so KTLO is probably the area we look at the most. The second one is the productivity benefit for onboarding younger hires, what we call band fives and band sixes, and getting them productive faster. So a lot of the value we’re focusing on is how we onboard new folks earlier. As we move from high-cost to best-cost locations, how do we bring the training and documentation along? A simple use case is just documenting code; a lot of the historical code is not well documented. So there are some very pedestrian use cases we’re seeing that can drive productivity before even getting to the fancier agent and RPA automation kind of work. Most of the stuff is very much around the KTLO use case, at least in my shop. Anything else from anyone else?
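One way to picture the “document the historical code” use case: a small script that finds undocumented functions and asks a model, stubbed here, to draft a docstring for human review. The sample source and the draft_docstring helper are hypothetical, not a description of Apptio’s or IBM’s tooling.

```python
# Sketch: flag undocumented functions and draft docstrings with a (stubbed) model call.
# A real version would call an enterprise-approved code model instead of draft_docstring().

import ast

SOURCE = '''
def rebalance(portfolio, weights):
    return {k: v * weights.get(k, 1.0) for k, v in portfolio.items()}
'''

def draft_docstring(func_source: str) -> str:
    """Placeholder for a code-model call that proposes documentation."""
    return "Rebalance a portfolio by applying per-asset weights."

def undocumented_functions(source: str):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            yield node.name, ast.get_source_segment(source, node)

if __name__ == "__main__":
    for name, func_src in undocumented_functions(SOURCE):
        print(f"def {name}: missing docstring")
        print(f'  suggested: """{draft_docstring(func_src)}"""')
```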
Marco Argenti: I actually do agree with this, absolutely.
Ajay Patel: Great. Here’s another question, yes, sir, and then we’ll have one in the back.
Question: So Michelle, you had just mentioned customer support and the role AI and AI agents will play in improving that process. Do you believe that AI will ever fully replace the human-to-human problem-solving interaction that occurs? Each of you have some form of customer support in place, and sometimes, it’s easier to talk to a human to resolve your problem. So curious to get your thoughts on that.
Michelle Horn: I think it depends very much on the nature of the problem. Just sticking with airlines for a minute, there are a lot of things I think AI can handle directly: “Can I take my bird to Mexico?” and lots of questions like that. Even many changes and things that happen, AI is going to be able to handle. I think there is a subset of things that a human will be doing for a long time: really complicated international itineraries, those kinds of things. I also think what that enables our people to do is spend more time with customers on the things they really want to be doing. So for us, we’re trying to take the burden off and turn our people loose to do what they really want to be doing.
Ajay Patel: In a product organization, for the bulk of the work we’ve been doing, I’ve been talking about support deflection for years. I see this as just another capability to provide more support deflection, deflection meaning you don’t have to call a customer support person; you can self-serve and identify the problem. So more and more I see that as a metric: how much of the workload is flowing back to support, and from support to engineering. I want fewer people handling all the requests coming in. How can I get my support folks to answer more questions? How do I equip the front line? So deflection is a metric we already have, and we continue to look at how to drive ROI on that one. One more question, the last one.
Question (John Furrier – theCube): Hi. Good panel. Productivity and security have certainly got the board’s attention; that’s where they’re now paying attention to AI’s impact. In security, cyber resilience has been a term about recovery: protect and recover. How has AI changed the resilience equation now that you’re dealing with all layers of the stack?
Ajay Patel: Anybody want this one?
Swamy Kocherlakota: Yeah. I think from a cyber perspective, generative AI, and AI generally, is really increasing our protection levels. The CISO is part of my team, and our ability to sift through the amount of operational data we see to find what is actually a risk has significantly increased with AI. So we’re able to protect against and detect issues a lot faster and better. Even though we had a lot of these Splunk and other technologies, we are now able to drive value based on the actual threats.
Ajay Patel: One of the things we’re seeing generally is that resiliency is becoming a theme across the whole life cycle: how we build products, how we deliver products, how we support products. So as you think about digital products and services, resiliency has become an attribute, and the question is how you apply GenAI techniques to drive better resiliency across the life cycle by providing rich context. More and more, for us, it’s about providing personalized context to the persona, whether that’s incident management, root cause analysis, proactive security management, and then DLP, data leak protection. I was talking to Rama earlier: the model that you built at NVIDIA, that is your company asset now. If someone gets access to that model with all the IP, that’s years of engineering IP that’s lost. So more and more, the model becomes the company asset, and how you protect it becomes a big thing.
So you’re starting to see a lot of areas of data protection and DLP kind of stuff in terms of who has access to it, where the data is going, and whether you train on it. You’re calling some third-party SaaS service, and your data’s shipping out there. Those are the kinds of real concerns we’re seeing as we work with our customers. Great. Well, I want to thank this esteemed panel. I appreciate the time, and hopefully that was valuable. Thank you.