IA Summit 2024 Fireside Chat: From SaaS to Agents With Mustafa Suleyman

A Conversation between Mustafa Suleyman, Microsoft AI CEO, and Soma Somasegar, Madrona Managing Director

It was a pleasure hosting the third annual IA Summit in person on October 2, 2024. Nearly 300 founders, builders, investors, and thought leaders across the AI community and over 30 speakers dove into everything from foundational AI models to real-world applications in enterprise and productivity. The day featured a series of fireside chats with industry leaders, panels with key AI and tech innovators, and interactive discussion groups. We’re excited to share the recording and transcript of the fireside chat on “From SaaS to Agents” with Mustafa Suleyman, Microsoft AI CEO, and Soma Somasegar, Madrona Managing Director.

TLDR: (Generated with AI and edited for clarity)

AI is evolving faster than we can predict, and no one has a clearer view of what’s coming than Mustafa Suleyman, the CEO of Microsoft AI. In his candid fireside chat with Madrona Managing Director Soma Somasegar during the 2024 IA Summit, Mustafa shared a bold vision for how personal AI assistants will soon transform the way we live, work, and interact with technology. He doesn’t just talk about tools or models — he redefines AI as a deeply personal relationship, one that adapts, learns, and even anticipates our needs in ways we haven’t yet imagined.

Curious about what AI will look like by 2030? Wondering how AI will impact industries, business models, and everyday life? Watch the recording (or dive into the full transcript) to hear Mustafa’s take on the most exciting — and challenging — shifts we’re about to experience.

  • Human-AI Relationship: Suleyman described the future of AI not as an app or tool but as an ongoing, dynamic relationship. He envisions AI becoming an ever-present, trusted companion with memory and emotional intelligence, capable of engaging users seamlessly across voice, text, and vision.
  • Focus on IQ, EQ, and AQ: He discussed the evolution of AI’s intelligence (IQ), emotional intelligence (EQ), and action-taking ability (AQ), which will drive more intuitive, natural interactions between AI and humans.
  • Regulation and Safety: Suleyman stressed the importance of proactive regulation and safety in AI development, drawing parallels to historical technologies like trains and cars and emphasizing the need to balance innovation with responsibility.

You can see the session recording here.

This transcript was automatically generated.

Soma: Thank you for being here, okay. For those of you who don’t know Mustafa, he’s what I call a quintessential AI entrepreneur. Well before AI became the craze that it is today, Mustafa started devoting his life to AI. Today he’s the CEO of Microsoft AI, and he’s also the co-founder and former head of applied AI at DeepMind, a company that was acquired way back by Google. After leaving DeepMind, Mustafa co-founded Inflection AI back in 2022. And earlier this year, Microsoft and Inflection got into a pretty interesting partnership, with the net result that Mustafa is now one of the key leaders at Microsoft, helping the company think about, hey, what is the future of AI, particularly as it relates to what consumers should expect and what they can get in terms of AI and all the goodness that comes with it.

Just last month, “Time Magazine” published its list of the 100 most influential people in AI, and obviously, that list wouldn’t have been complete without Mustafa. And that’s just one of the many, many, many recognitions that Mustafa has had in his career. Without further ado, Mustafa, thank you again, and please join me here.

Mustafa: Thank you. Thanks a lot.

Soma: Since you were flying in today from New York, Mustafa, I want to make sure that you heard about a couple of things that happened today.

Mustafa: Tell me.

Soma: And then I’ll go to yesterday, okay? Wherever you want to start.

Mustafa: No problem.

Soma: One is obviously one of your closest partners talked about a pretty big funding round. The other is Poolside AI, which is, again, one of the next-gen modern AI companies, raised about $400M. And we look at the amount of capital flowing in and the amount of investments that are happening in big tech. And it brought me back 24 hours ago when I heard about the set of announcements that you made yesterday in New York, right? And I really loved what you said about, hey, every human being on this planet will have a personal AI assistant. Start from that and tell us all a little bit about what you announced to the world yesterday.

The Future of AI: Predictions and Insights

Mustafa: I think we’re in a weird time because even though everybody is paying attention and we’re all seeing the same technological trend, we’re all focused on investing or building or partnering in some way. I think we’re still not fully able to predict what the world’s going to look like in three to five years, and that’s actually quite normal. The same thing happened in the very early days of the internet. It was super hard to predict that mobile was going to take the form that it did and that business models were going to adapt to this new modality. And I think the same is true now, even though we have a pretty clear line of sight to the capabilities that are going to emerge in the next couple of years, it’s very difficult to say what the combination of those capabilities means for existing ecosystems, for incentive structures and for the kinds of businesses that we’re going to build.

I mean, it is as profound as recognizing that for the first time in human history, machines have learned to speak our languages. And so the programming interface has already fundamentally changed. And so everybody is going to get access to this tool that they can use to program the digital world around them. And obviously that programming then changes us back. And so it no longer becomes a tool. It’s certainly not a tool in the traditional sense where there’s a very handcrafted clear set of instructions that delivers a known output in this deterministic way. There’s this dynamic and interactive nature, which is an inherent fundamental property of this new thing. It’s really better not to try and box it yet because it’s just quite different to anything we’ve known. We haven’t really encountered a design material like this before; I often refer to it as a new clay that we sculpt with.

As a creative or a product maker or a builder, there’s now this new thing that you have to craft, which is quite different in character from learning to program and building websites and apps. You’re now creating a sustained experience. Your model, your LLM that you put into production in any setting, is going to continuously adapt to the response of the environment or to the customer or to the user. And so you’re obviously unable to shape that past the second or third step. And so, really, what you’re crafting is a tone and a style and a personality. And I think that really is quite different, and it’s going to be amazing to see what it looks like in 2028 or 2030.

Mustafa Suleyman’s Vision: Personal AI Assistants

Soma: That brings up one thing that people commonly say, Mustafa, which is, hey, we tend to overestimate what can happen in a year’s time and we absolutely underestimate what can happen in a ten-year timeframe, right? When you talk about 2028 or 2030 or whatever it is, I actually don’t know whether we really are imagining enough about what is potential and what is possible. I do want to go back to earlier this year, March, I think it was March, when you gave a TED Talk. I don’t know how many of you here have seen the TED Talk. Some people have. For those of you who haven’t, I would encourage you to go see it after this conference, not during this conference, but for me it was a very inspirational set of things that I heard and learned from your talk. One of the things that you talked about there is, hey, we really are at an inflection point, and maybe that’s why your last company was called Inflection.ai, but we really are at an inflection point as far as humanity goes. And the other metaphor that you used there, which I really liked, was, hey, there is a new digital species that is coming into existence. Tell us a little more about that vision and then we can tie it to the announcements that you made yesterday.

Capabilities and Challenges of AI

Mustafa: Well, I mean, look, what you first said was also totally right, which is that we’re really quite bad at making predictions and yet all I do all day long is try and make predictions and then bet on them. And I think it is actually quite possible to bet on the capabilities that are likely to emerge in the next two or three years at a capability level. And then the hard task is trying to translate those capabilities into new products, business models and ecosystems. If I were to bet on the capabilities, I would say, well, we are doing pretty well at IQ. The models are increasingly factual, they’re reducing hallucinations really significantly. They can retrieve well and conditionally generate over arbitrary documents, whether the web or your private corpus. And it’s pretty clear looking back retrospectively, in fact empirically so that with each order of magnitude more computation we add, they get easier to control, right?

That’s a big achievement. There were some people speculating two years ago that the models would get bigger and they would get more chaotic and more difficult to steer, difficult to align, and it turned out that the opposite appears to be true so far. They get better at instruction following, they get more adherent to the nuance of your product design, your behavior policy, your business policy, whatever you’re trying to optimize for. That’s on the IQ side. And then the second thing that people said a few years ago was, oh, they’re never going to be able to engage empathetically. They’re never going to have emotional intelligence. That’s always going to be some special magic sauce that’s far away from machines. And yet I think they’re really starting to show very elegant, very graceful, very fluid interactions, obviously in some of the answers they give in text, but also with the voice.

They’re really capturing something very seamless and natural about these fast conversational interfaces, and that was part of what we launched yesterday. And then the third thing, of course, that we’re all talking about is how we can add AQ, this actions quotient: the extent to which it can use tools, communicate with other AIs, reason over things before it produces an output. I think that there isn’t that much difference, at least as far as I can see, in the nature of the actions domain compared to the video, image, audio or text domains. It’s fundamentally the same problem space. And so provided we have enough data, and I think that’s a very open question whether we simulate that or extract it from existing corpora, it’ll probably deliver the same kinds of returns. On those three fronts, I think we’re seeing quite predictable improvements.

The Role of Memory in AI

And then the missing piece that loops all of those together, I think, is memory. Lots of people are working very hard on that and there are different approaches, but I’m pretty sure in the next 18 months we’re going to have AIs with very, very good memory: perhaps not infinite and perfect, but a very good ability to retrieve over arbitrary documents. And it’s interesting, because the definition of intelligence that has shaped the whole field is AGI, generality; that’s the north star that we’ve all emphasized, but that’s just one definition of intelligence: the same core system is able to perform well across a wide range of environments, which emphasizes that one system has to be general purpose. Another definition of intelligence is the ability to focus on the right thing at the right time, and that’s actually much closer to the way that humans operate.

My body and my brain will immediately know that if I spill this water, I’ll pull my hand away, and if I have a more complicated task that I need my prefrontal cortex for, I’ll engage a different part of my brain. And so, really, what my brain is doing is this: it has lots of different subsystems, and it knows how to select the right system given the context and the task at hand. And so it may be that if we can direct processing power to the right subsystem at the right time, that’s a meta-enabler that lets us leapfrog the challenges of just scaling through a traditional context window. I think those are pretty high-confidence capability predictions over the next couple of years. And then you can imagine: okay, if I have really good IQ, EQ, AQ and memory, that’s a very powerful system, a very, very powerful system.

Soma: And just to build on that, Mustafa, yesterday you talked about adding voice interaction to Copilot, okay. That was one of the many things that you talked about. I’ve been a believer that, hey, today you and I interact using one set of techniques, right? Whereas when I’m interacting with technology or with a machine, it’s different for the most part, right? But the day the standard way that humans interact with each other becomes the way that humans interact with technology or with a machine is the day your vision for EQ comes into some level of fulfillment. How do you see the interaction model changing in the coming years?

Mustafa: Yeah, I mean the challenge with these things is that one can demonstrate, you can ship a really cool demo that works 70% of the time, but in order for these to get widespread adoption, they have to be effective 99% of the time. It has to be super low latency, very accurate, very fluid, really good at interruptions, knowing when to pause. And I think people often forget how hard it is to go from 90% to 99%, and then to 99.9%, right? It’s really almost as hard as going from zero to 70% or something. I think there’s still a way to go, but what we demonstrated yesterday was the glimmers of what a more emotionally intuitive experience looks like. Not once yesterday in our launch in New York did I talk about parameters or models or any of the underlying technology ’cause I think to most consumers it just doesn’t really matter.

What matters is how does it feel and does it remember me? Do I trust it because it does the same thing over and over again? Does it interrupt me politely? When it gets it wrong, does it fail gracefully? Those are the feelings that make us trust something and want to use it more and therefore invest in it and share more and so on, or just make us skeptical because actually it was a little bit rude or it was stupid or… That’s the tough nut that we’ve got to crack to really get large scale adoption and I think we showed glimmers of that yesterday, but we’ve got a long way to go.

The Evolution of AI Applications & LLMs

Soma: Got it. The other thing you mentioned a few minutes ago is, hey, you are going to see more and more interesting applications that come out, some Microsoft is building, you are building, and the rest of the ecosystem is going to build. It takes me back to another thing that you said about when the mobile revolution happened, so to speak. In my mind, as much as you can argue that phones existed or mobile phones existed before the first iPhone came into existence, the first iPhone was an inflection point, right? It came out in, I think if I remember right, the 2007 timeframe, and it wasn’t until 2010 or 2011, particularly when Uber showed up and when Airbnb showed up, that people really felt like, hey, this is what you could do with a mobile device, a mobile platform, and these kinds of applications, right? As you know, we are about 22 months in since ChatGPT showed up in the world, right? I still believe that as much as ChatGPT is amazing, I don’t think the world has seen what is possible in terms of a true AI-native application, okay. How do you think about that? Do you think that’s behind us or do you think that’s ahead of us?

Mustafa: It’s definitely ahead of us. I mean, again, I don’t even think of this as an application. I think of this as fundamentally a relationship. My team and I are now in the business of engineering personality. We’re crafting a lasting, meaningful, trusted relationship. That is the new platform as far as I see it because it’s not just about voice and text or language, it’s really going to be about vision. Your companion is going to see everything that you see in your browser and on your desktop in real time, understanding both the text and all of the images and able to talk to you about it as fluently as I’m talking to you about it now. And so trying to craft that into an application and a business model too soon almost misses how fundamental it’s going to be to have this ever-present, fully aware, perpetual memory companion that needs to be really, really aligned to your interests.
It’s going to be a highly intimate experience, much like if you’ve ever searched when you’re sitting on the couch at night with your partner or your best friend and you’re both searching to, I don’t know, plan a holiday or look up what movie you’re going to watch, and you’re aware that they’re seeing a version of what you’re seeing but slightly different, and you’re both talking about it in real time. That presence, I don’t even know how to describe that. That’s why I struggle for metaphors. It’s mostly, I think, a lasting relationship with an idea in your head rather than an application or a platform. And I think that’s what I’m pushing my creative team to really wrestle with: we have to immerse ourselves in the capabilities before allowing the experiences and the business models to emerge after.

Soma: That’s great. Today there is a fair amount of discussion on large language models, right? We’ve seen explosive growth in whichever dimension you think about as far as large language models go in the last four or five years, right? And Microsoft is building a bunch of its own models. Microsoft is partnering with OpenAI for large language models. Particularly as you’re talking to a set of entrepreneurs who are thinking about, hey, how do they bring their ideas to reality: how much do you think people should think about or worry about having control at the model layer, versus not worry about that and say, hey, let me just move up the stack and think about the experience, and we call it an application, but really the service that they’re going to be delivering or the capability that they’re going to be delivering? How much do you think people should think about models versus more of the stack?

Mustafa: Yeah. I mean I think most likely the pre-trained models are going to get commoditized, and I think all the craft is in fine-tuning. And I think a lot of people still haven’t quite figured out the everyday tricks of fine-tuning. A lot of the knowledge is widely available, but I think that there’s still a lot to learn there, because most of it, unfortunately, is not just a technical question; it’s an operational question. The challenge is how you set up your feedback loop so that you have trusted, highly trained, reliable humans who can give directional input to your models and your modeling team. And I think most people have jumped slightly too quickly to using models themselves as a judge for RLAIF, and I think that is maybe a little bit premature. And if you really want to craft high-quality experiences like we did at Pi with Inflection, then you have to imbue all of the knowledge and expertise of product design into your AI teachers and close that loop really fast and get them sitting alongside your machine learning engineers.

And so I think that fundamental paradigm is going to remain for a little while, and that’s where I think all the value is going to be added. The moment someone really excels in a particular vertical or an application area, it’ll be because they’ve really nailed the fine-tuning loop and they’ve got their data flywheel going, and then they’ve built something that is of outsized value to some customer, and then obviously someone’s paying for it and they have first-mover advantage and so on. I don’t think it’s going to happen in the pre-training space. The good news is it’s going to be open-sourced, ’cause I think Meta’s going to keep open-sourcing. I think that’s actually great for the community. And while we’re not open-sourcing the frontier, we are open-sourcing MLMs and SLMs: medium language models and small language models.

Specialized AI Models and Future Predictions

Soma: And do you see a world where there is room for LLMs, MLMs, SLMs kind of thing and different interaction models with all those different models for different use cases or do you think one is going to win over the other?

Mustafa: I don’t think it’s going to be one to rule them all. I mean, as I said, a helpful definition of intelligence to keep in your working memory is this one about attention and using the right tool at the right time. Even in our models today, we deploy lots and lots of different models under the hood, and it’s really your router that is the critical piece, judging: is this a really complicated and rare query that needs to be routed to X model, or is it actually a really quick one that should go to Y model? Is it safety, is it retrieval, is it personalization? There will be some specialized models, obviously, in healthcare, legal, finance, these kinds of things, but ultimately there are going to be AIs representing every brand, every business, every influencer. There will be celebrities that are AI-first celebrities. My friend Yuval Noah Harari says that the next great religion is going to be founded by an AI. It’s quite a cool line. I think he’s right. I think it’s a little bit scary, but I think that’s probably true: these are going to be very capable and persuasive AIs.
And so I think that’s another question to think about: just as your AI will draw on a multitude of models under the hood before giving you an answer, it will also query a bunch of other AIs in real time, at lightning speed, AIs that aren’t even in your ecosystem. They’ll just go and find that information from a third-party AI instantly, within seconds. And that’s very interesting from a business model perspective ’cause it means that it’s going to be very hard to use the old-school walled garden approach to lock up value, because the new API is just language. And language can be transmitted on the phone, on email, in vector space as you wish, almost in real time, and everyone’s going to build an AI by 2030 just as everyone built a website and everyone built an app.

That’s a difficult theoretical question: how do you capture value long enough? It’s going to be hyper-competitive, which is going to reduce the cost and, therefore, the price for everybody, which is great news in a way because personalized knowledge and everything is going to be available to everybody. And so we’re just going to have to think about what the market dynamics of that end up being. I haven’t quite figured that out yet. I don’t think anyone has. And if you disagree with me on 2030, then just roll it out to 2035.

Soma: Sure, yeah.

Mustafa: Then it becomes even clearer, I think.

Soma: I remember, this is now about, I don’t know, maybe eight years ago. You’d go and talk to a company like Ford, the automotive company, and at the time they’d tell you, hey, for fully self-driving cars, we see a roadmap where by 2032 we could get there. And then Elon came around and said, “What the hell are you talking about? In the next three to five years, I’m going to get there.” I hope we don’t have to wait until 2032, but Elon’s timeframe has come and gone and we don’t have fully self-driving cars yet. To me, whether it’s 2030 or ’35, I think we can debate, but the more interesting thing is that the innovation is happening at a very, very, very rapid pace. I think that is the most exciting part, and who knows, time will tell.

Mustafa: Well, the crazy thing is that market dynamics are reducing the cost of production to near-zero marginal cost as well. That is a really incredible thing, given that this isn’t just any raw commodity. This is not just knowledge but intelligence, right? Intelligence is going to become cheap and abundant, right? And intelligence is the thing that has produced all the value in our world today, right? Everything around us, every building, every material, everything that you can see is somehow a product of our ability as a species to synthesize information, make these clever predictions, invent tools, and use those to create more things. And so you don’t have to mess with the AGI superintelligence idea to wrestle with the profound consequences of the idea that the prediction engine is going to be super cheap and widely available to everybody. I think everyone’s got so caught up in the superintelligence thing. It’s almost like a personification: that it’s going to turn into this person that is all-powerful, this omniscient, omnipotent being, and I think that’s jumping from zero to a thousand too quickly. There are steps along the way that are going to be incredible for global productivity and well-being but will also disrupt many of the incentive structures and the value creation and distribution structures that we’ve come to know so well.

AI’s Impact on Software Development

Soma: Maybe it’s because of my background or whatever it is, but I really think about, hey, how can AI help with software development, okay? And I think the world is seeing a lot of great progress, starting with GitHub Copilot and everything else that has happened. How do you see AI helping with software development as we move forward? Particularly when you think about, hey, if an AI system can create another AI system, and it can all be done automatically or near-automatically or whatever it is, do you think that’s going to exponentially increase the rate of innovation?

Mustafa: Yeah. I think we’re already seeing that. GitHub Copilot is really quite incredible. It’s just remarkable how much time it’s saving, and I think that’s obviously going to continue, because we’re going to be finding more and more efficient ways to produce the same software, and some of the new fully generative experiences are incredible. The question of whether it then leads to an intelligence explosion or recursive self-improvement I think is premature. I don’t see that today, but I do think that we’ll have to be careful about the kinds of incentives that you give some of these systems in a few years’ time. And there are some capabilities that you can put your arms around and say, okay, this is a higher-risk set of capabilities. Autonomy is clearly a high-risk set of capabilities. One has to define that and really bound the scope of it and so on if you’re actually going to make a meaningful intervention from a regulatory perspective. That’s tough.

But likewise, recursive self-improvement: that is clearly going to raise the risk level in five to 10 years’ time and probably needs to be a capability that is looked at by regulators in a sensible way. The challenge with these things is going to be defining them without massively slowing down innovation, ’cause I think that’s quite likely: they’ll probably over-regulate, and that’s probably bad.

Soma: You’ve mentioned AGI a couple of times now, Mustafa, and in my own experience, when I talk to people about AGI, you can really divide the world into two parts, right? One part that says, oh my God, I’m so excited. And the other part, oh my God, that’s the end of the world, right? How can I even imagine AGI coming into existence? That means bad things could happen, and you can go back to Terminator and what you’ve seen in movies. But in general, how do you think about safety? You just mentioned, hey, in the next five to 10 years, as autonomy truly comes into existence, you have to think about it, because on the one hand it’s a high-reward situation, but it’s also a high-risk situation, right?
As you think about your vision for, hey, every human being on this planet is going to have an AI assistant: what are the pitfalls? What are the things that both you are thinking about, and by extension everybody here who’s working on AI in one way, shape or form, what should they be thinking about as far as guardrails or safety or other things that go with that?

Safety and Ethical Considerations in AI

Mustafa: Yeah. I mean, every new technology comes with unprecedented risks that at the time feel profoundly different to anything that you’ve seen before. In my book, I tell this story from 1830 of the first passenger railway journey, into Liverpool. The train and the tracks were so unfamiliar to the celebration party welcoming the arrival of this new train, which included the prime minister and the local MP for Liverpool, that people actually stood on the tracks to welcome the train coming in, and it struck the party, killing the MP for Liverpool, because they couldn’t quite grasp the concept that this thing was going to come straight through them. And we are the same species as them, nearly 200 years on.
I think that’s a profound thing to just absorb. And now we have trains all over the world and they’ve done incredible things and they feel like some of the safest things we’ve ever encountered, right? They feel banal and very mundane. The first time something happens, it feels freaky. The second time, you’re like, okay, how do we wrap our arms around it? And by the third time, we really have put in place all kinds of governance mechanisms. Think about a car, for example: it’s regulated by scores of different agencies, layers of protections from seatbelts to emissions to traffic lights to driver training to windscreen tensile strength. I mean, it takes time, but we layer up mechanisms for containment, which is why, when drones came along, we took all of the experience that had been developed over 80 years of aviation regulation and just applied it to that new thing, and now we don’t have drones wandering around autonomously and freely, invading our privacy and delivering packages in funky ways; it’s all pretty regulated and has had a modest impact.

The key thing is that we have to learn the lessons of the previous technological wave, which in my case would be the Web2 and social media age, because that’s the closest, with all the issues and mistakes of that era, and make sure that we’re applying those lessons as quickly as possible to the new thing that we’re facing. That’s a long way of saying I’m optimistic about it only to the extent that we take seriously the scale of the risk and that we really embrace the responsibility that this isn’t just going to be straightforward; it is going to be really hard, and there will certainly be some harm. There is no path to no harm, and it may be that the toughest calls we have to make in the future are actually about what we don’t do with technology, because we are crossing a precipice where a tiny concentrated cluster of breakthroughs can have an effect on hundreds of millions of people very, very quickly. That one-to-many effect is also unprecedented. That’s not something that I think we’ve really grappled with before as a species.

Soma: Do you feel, as an AI technologist, that you and the rest of the world are spending enough time thinking about safety? Or is it like, hey, let’s go build the technology, let’s see what the capabilities and the boundaries are, and then we’ll think about safety? Where do you think we are on that spectrum?

Mustafa: I go both ways. Partly the safety stuff drives me nuts ’cause I feel like I’ve been talking about it since 2010, but partly it’s one of the most important things to worry about. As with all these things, it just feels really important and urgent, and also like we hear the same thing over and over again: are you a doomer or are you a hyper-optimist? And I think that we get tired of each other’s rhetoric, so I think figuring out how we make things practical is the big challenge these days, with the intensity of social media and all the criticism and stuff like that.

Advice for Future Entrepreneurs

Soma: Got it. As much as it’s been a great conversation, I know that we are coming close to the wire in terms of time, so let me ask you one last question. The vast majority of the audience are founders or entrepreneurs or people working in AI in one way, shape or form, right? Starting from, hey, I’ve got two people in my company and a PowerPoint, to something that’s at scale. If there are one or two things you want to tell the next generation of entrepreneurs, what would you say?

Mustafa: One thing I think is underappreciated is: take more risk. It’s maybe a weird thing to say to a room of entrepreneurs, but I do often just see people not being all-in enough. You have to have everything on the line. I’ve certainly had that multiple times and sometimes hit the wall really hard, but I really think that if your entire mindset is I’m prepared to sacrifice absolutely everything and really focus on execution, it creates this amazing radical accountability which just focuses the mind on do or die. And sometimes I feel like, even in entrepreneurial communities, there’s too much comfort, and I really think that risk is a powerful thing that drives our creativity forward. It’s a great thing to be all in. It really is. I like it.

Soma: Thank you. Thank you. Please join me in thanking Mustafa.

Mustafa: Thank you very much. That was really great. Appreciate it.

Related Insights

    Highlights From the 2024 IA Summit
    IA Summit 2024: Fakes, Deepfakes, and the Search for Truth
    Perseverance Over Speed: Creating a New Category with Impinj’s Chris Diorio
