Reinventing Recruiting: SeekOut’s Anoop Gupta on the Rise of Agentic AI

 

This week, Madrona Managing Director Soma Somasegar sat down with Anoop Gupta, the co-founder and CEO of SeekOut — a company at the forefront of agentic AI in recruiting, redefining how organizations discover, hire, and manage talent.

In this conversation, Soma and Anoop explore how SeekOut has evolved its platform to include SeekOut Spot, an agentic AI solution that reduces the time it takes to move from job description to qualified candidates — from 45 days down to just three. Together, these two long-time friends unpack lessons on building an AI-native company, navigating changing market dynamics, and what it takes to deliver real outcomes in a sea of AI hype.

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated and edited for clarity.

Soma: Anoop, you have truly an eclectic background, starting with, initially, academic, then a startup, then a large company, and now back to a startup. Why don’t you introduce yourself a little bit to the audience and also talk about what you do at SeekOut and what SeekOut does?

Anoop: I’m Anoop, I’m a geek and an entrepreneur. I got my PhD from Carnegie Mellon in computer science. I was on the faculty of computer science at Stanford, and then did my first startup in ’95. We were one of the first companies doing streaming media, back when modems were still 56K modems. It was a wonderful time at Stanford. That company was acquired by Microsoft in ’97. SeekOut is a talent business. We focus on helping companies build winning teams and fill their talent gaps. We look at talent very holistically, from external talent to internal talent, and how to retain, grow, and recruit people. We are used by some of the top brands. We have over 700 customers, the who’s who in tech, in defense, in pharma. We feel really privileged that they use us as their recruiting and talent solution.

The Role of Agentic AI in Recruiting

Soma: Now, to remind people, as much as we think AI has been around for the last 100 years, it has really only been five years since the transformer revolution, so to speak, happened and the large language models came into existence. Even before all that happened, you were thinking about SeekOut as an AI-first company. You’re going from, “Hey, how can AI help you,” to, “Hey, how can AI do it for you.” Some people refer to it as agentic AI. Tell us a little bit about what got you started on AI from day one and how you evolved with the changes in technology and the innovation that’s happening at a pretty fast pace.

Anoop: If you look at talent, there’s a lot of data. You’ve got to understand the data, and the data is noisy. Even think about where you went to school: if you went to the University of Pennsylvania, do you say Wharton or Pennsylvania? The same applies if you’re building diversity classifiers. The early AI was in data cleansing, in data combining, in classifiers, in everything around how you build the most amazing search engine in the world. That is where we started. The second thing was, “Oh, LLMs are here, and you can just give them a job description and they can build searches for you.” But it has fundamentally changed with agentic AI.

The thing is that recruiting, and actually any talent job, whether you’re thinking about succession planning or anything else, is a very predictable thing. You look at a job description, you talk to the hiring manager to understand the needs, you search a large landscape, you evaluate hundreds of thousands of people, and you reach out in very hyper-personalized ways. That is work agentic AI is very good at. Vertical AI is very good at it. And that’s where we can bring the time from a job description to the initial candidates you’re interviewing from 30 or 45 days down to three days. That is the magic. It’s a transformational jump.

Soma: I’ve heard you consistently tell me this, Anoop, for a while now: companies that don’t take a step back and reimagine how they go after recruiting and managing talent are going to be left behind. Why do you think now is the time for companies to embrace agentic AI solutions to reimagine how they go about talent acquisition, and what is the urgency here?

Anoop: Yeah. So, Soma, the world is changing. Every industry is changing, every business is changing. People are refining their strategies and saying, “We’ve got to adopt AI, or we’ve got to do this thing differently.” Now, alongside all of this evolving, changing business strategy, you’ve got to have a talent strategy. You’ve got to ask, “Do I have the right people in the seats?” The speed of change, and the speed at which companies need to change, has increased. In this world where things are changing, it becomes urgent for talent organizations to say, “What am I going to do differently to deliver talent quickly, at high quality, so the right people are in the right seats?” One more angle I would quickly add: there is a lot of pressure on all organizations, with AI coming, to become more effective and more efficient. That is another pressure people are feeling, on how to do more with less.

Soma: You guys have recently launched SeekOut Spot. I’m super excited about that. In fact, I’m proud to say in this forum that we were probably one of the earliest customers of SeekOut Spot, and we are happy customers. Tell me a little bit about that. Tell me how you came up with it, because there is a growing trend here where people are talking about not just software as a service, but service as software, with AI playing a key role. Tell me a little bit about the genesis of SeekOut Spot and what it does for companies, organizations, and talent leaders.

Business Model and Flexibility

Anoop: When we start with the business leader, what they care about is the right hires, with speed and quality. The magic of AI agents is that what gets delivered is outcomes, not, “Here is a tool that your people have to use.” The fundamental thing, from a business-model perspective, is the focus on outcomes. Now, there is a lot of flexibility, because it’s a combination of people and AI agents. We have supported a lot of different models. We can deliver you a hire, with pricing associated with that. We can augment the sourcers that you have; maybe you need fewer sourcers, or, when demand is changing, we can come and help you.

There’s a lot of flexibility in the business models, but they’re all outcome-based. To dig in a little bit: what does the recruiting task look like? How do you engage with the talent? That is interesting. Our recruiters tell us that by the time they’ve written the 10th message, their eyeballs dry out, fall off, and roll across the table. It is crazy the amount of hard work you have to do as a recruiter. With SeekOut Spot, the recruiters focus on the tasks they love: talking to the hiring managers, talking to the humans, selling the candidates on the role. Spot takes over everything in the middle, delivering results faster and with higher quality.

Soma: That’s awesome. Sounds truly magical, but help us walk through the shift here. SeekOut Spot, in my mind, is a classic example of service as software. Tell me, what are the business model changes here, and why is it the right thing for your customers?

Anoop: The business model change is that we deliver hires, or we deliver you strong candidates. That is the outcome. Ultimately, what the talent leader and the business leader are looking for is, “Did you get me a hire? Are they great? Are they the right fit?” They don’t want it taking six weeks. The average time to a technical hire, according to Ashby, is 83 days, and for non-technical, it’s around 63. That’s a long time, and that’s just the median. Many things take longer. So this can be so much faster and better.

Soma: As you know, Madrona wanted to hire a data scientist a few months ago, and whenever we think about hiring for a position, my mental model is, “Hey, I’ll be happy if we can hire somebody in the next 90 days.” Maybe 180 days, but with 90 days I’d say, “I’ll be thrilled.” This hire, the data scientist hire, from start to finish, I think, took less than four weeks for us.

I was amazed at the speed and the quality of the candidates that we saw through the process. It all happened amazingly well for us. Thanks to you and to SeekOut Spot, we made that hire, and that person has been on board for the last few months, and we are thrilled with him so far. So tell me, you mentioned earlier that this goes from 30 or 45 days to three days, right? We’ve seen at least one example in our environment where we’ve been able to hire a high-quality data scientist in about three and a half weeks from start to finish.

Anoop: Basically, what we can do with this technology is go from the kickoff meeting with the hiring manager to the initial candidates you’re interviewing by the fourth day. That is the magic. Now, the hiring, making the offer, takes a little bit of time. We have Discord, which is getting amazing results. We have a startup named Shaped.ai, which is getting hires within three weeks with the initial… And they’re amazed at it. If you look at the quotes on our website, we have Discord, 1Password, HP, Shaped, and Madrona. Even though it’s early, we are seeing real proof that there’s magic in here.

Soma: The other thing that I’ve heard you say, Anoop, when I was over at your place for the SKO, is, “Hey, with SeekOut Spot, we can deliver 5 to 10X productivity for recruiters,” or talent acquisition people generally. Talk to me a little bit about that.

Recruiting Process and Efficiency

Anoop: Basically, here is how the recruiting process works. The recruiter talks to the hiring manager, understands the role, then builds a mental model of what they need to do, does some search, comes back, does some more search, sends some messages, and that cycle goes on. In SeekOut, on the first day, after you talk to the hiring manager, you have a success rubric. You have automatically explored thousands of candidates, you’ve evaluated each one of them, we give you a spider graph showing how each is doing across the rubric elements, and you have sent out messages.

That is what I mean by 5 to 10X: that work takes a long time for a recruiter, and that time is being compressed, to the benefit of the recruiter and the customer. I’ll tell you, we have specialized in this service. Of course, there’s technology, but we also have recruiters who operate this technology, because there are some tasks that humans do best. They’re the happiest, most energized recruiters, because, “I’m doing what I love and I’m delivering results quickly.” It is pretty amazing how everyone is happier: the business leader, the talent leader, the recruiter. So it’s exciting to us.

Soma: That is great. We’ve been talking about human-in-the-loop for a while now, and with something like SeekOut Spot, what you’re really telling organizations is, “Hey, recruiters are highly valuable. Let them focus on the things they need to focus on, and I’m going to give you an agentic AI solution, AI agents, that will work in conjunction with your talent people to get things done better and faster.” This notion of a hybrid model, where you have AI agents working with human beings, seems like a great model to drive forward as it relates to recruiting and talent acquisition, right?

Anoop: Yes.

Soma: If I’m a recruiter or a talent acquisition professional today and I see the world of agentic AI, I could argue it’s going to disrupt my world, or I could say it’s going to reimagine what is possible and let me do what I need to do much, much better, 10X better, whatever it is. What should I take away from this as a recruiter or talent acquisition person, and how should I prepare for this wave of innovation?

Anoop: Here’s the way to think about it: do what is human, what only humans can do, and become the best at it. One part is, when you talk to the hiring manager, how can you be an advisor? Ask hard questions. Do you really mean this? Do you really want this? What is this person going to do? Being confident, expert, and good at that is one side of it. The second part is when you’re talking to a candidate. How do you sell? How do you say what is inspiring? How do you explain what they are going to do, and why this is a great company for them or not? Those are the skills you have to become very good at. A lot of the messy middle, which is critical and important, AI agents will do a great job of for you.

Soma: That’s cool. That’s a good way to frame it and think about it. I always tell this, Anoop, to every founder and every startup: in the history of this world, there isn’t a single company that has always had a smooth journey. There are good days, there are amazing days, there are okay days, there are lousy days, and everything in between.

In your journey over the last, say, seven years or so, you’ve gone through some amazing highs, some not-so-great lows, and everything in between. How did you and your co-founder, Aravind, navigate through this, and are there any takeaways, learnings, or ahas that you would like to share that would be valuable for other founders, since every founder goes through this?

Navigating Challenges and Product-Market Fit

Anoop: We had an early phase where we were in hypergrowth, exponential growth, and then came the economic malaise, the market change, and we went through some flat portions, and now we are on a path to hypergrowth again. What are the things that you need to do? I think the most important thing is continuously watching product-market fit. When the market changed, the environment changed, what was needed changed, and it took us a little while to say, “What do we do?” Because the market always wins. You can have a great team, but if the market isn’t there, you’re not going to succeed. You can have a lousy team, but if you’re aligned with the market, you’re going to win. So one key message is to watch for market fit. Just because you had market fit once doesn’t mean you’re maintaining it.

The other is to have a sense of confidence, to always be iterating and experimenting, and to keep calm. Your organization comes along with you. A bad stage shouldn’t ruin you, though you have to be very conscious about managing your expenses and how you are doing. I want to point to one piece of advice, which was maybe a thing for the times. When we were raising a Series B, or Series C, we had not spent any of the money that we had raised. And you said, “Go ahead and raise it anyway. It’s good times.” And that helped us too. We’ve never had to be in a desperate situation, not that we don’t want to be scrappy or conscious about spending. That has given us a comfort and a cushion, and that was very wise advice.

Soma: Thank you. There are two elements to that, Anoop. One is that you want to raise money when you don’t need to raise money; that’s always the best time to do it. The second is, as you said, one of the reasons I was excited about us raising that money was that I’ve seen how you and Aravind have been very scrappy. I wasn’t worried that, if there was a little more money than what you needed today, bad behavior would set in in terms of spending. Right? It doesn’t matter how much money is in the bank or not. I think as entrepreneurs, as founders, as what I call efficient stewards of capital, you have to always be thoughtful about that. I say this all the time: any increase in investment should warrant a return on that investment. If you’re confident about that, go do it. If not, you have to be really thoughtful.

I’m so glad you raised that Series C, and that it has helped tide you through the last year or so and put you back in hypergrowth mode. That’s fantastic. Going hand in hand with raising is also a more thoughtful deployment of capital.

Anoop: I totally agree. We always feel like it’s our own money, we have a responsibility.

Soma: Are there any lessons from your own journey? I truly believe that you guys were AI-first from day one, as I said, well before the Transformers and large language models came into existence. Any guidance or advice that you would give to founders today when people are thinking about like, “Hey, I want to ride this AI wave. I want to truly be an AI-first company,” what should they do or what should they not do?

Founders: Focus on Outcomes, not on Hype

Anoop: My advice to founders, and actually to the customers and prospects that we have, is to focus on outcomes, not on the hype. Because everybody has put AI in their marketing materials, I say look for the outcomes. What are the outcomes you’re delivering? Get to the success stories and shout those from the rooftops; that focus on outcomes is really important. In the recruiting space, we have a lot of companies that talk about, “We are AI. We are agentic AI,” and all they have done is maybe put an LLM in front, so you give a natural-language query and something comes out. That is not the end solution. Part of vertical AI is looking at the whole workflow and process that results in the outcomes. What I would say is, don’t use AI as a buzzword; genuinely create value for your customers. That is the thing to do. There is a lot of power in what’s coming, but focus on the customer’s problems and outcomes.

Soma: When people are thinking about AI, and I agree with you completely, focus on the outcome and not the activity alone, how important is what I call a data moat? Do I necessarily need either what I call proprietary data or a data moat if I’m an AI-first company, or not?

Anoop: I think there are different kinds of data. A lot of data is available; everybody has data. One is the experience moat. As you work with clients, you get proprietary data from the customer, and how you integrate it, and how easily you can do that, becomes your moat, not just some base data that you have. One example would be, in recruiting, it’s not just external data: how do you integrate with the applicant tracking systems and the data that customers have, or their internal employee data? How do you integrate with specialized resources and partners, whether it’s healthcare and nursing data? I think the data moat comes from delivering outcomes and the learnings, and those learnings and that data also become a moat.

Soma: If I take you at face value, you’re telling this from the rooftop: every investor should be talking to their portfolio companies about, “What is your talent acquisition strategy? How are you reimagining it in this world of agentic AI?” What would you want to tell investors?

Don’t build a recruiting org too early

Anoop: I think building a recruiting org too early is not good, because your demand is going to fluctuate. The quality and the people that you need will fluctuate. These are specialized roles. What a startup should probably have is one recruiting manager and a recruiting coordinator, who are the interface to the hiring managers and handle scheduling interviews and calendaring, and then work with somebody like SeekOut. Because in startups, the right hires are really important. The cost of a bad hire is so much more than just, “Here is what I did.” If you can get high-quality hires in two or three weeks, that makes a difference to your business outcomes. I would invite you, and it’ll sound a little bit selfish as I am saying it, to come and check us out, talk to us. I think it can make a big difference.

Soma: I want to underscore one point that you made. For every organization, every company, this is true, but it is even more true for a startup, because you have finite resources. Every right hire can be a true force multiplier.

I truly believe it’s extremely important for startups, particularly in the early stages of the company, to get this right. Everybody goes through it; nobody bats a thousand. There are always hiring mistakes, but you want to minimize them and truly understand that every great hire is going to be a true force multiplier for your company.

Anoop: Yes, it is so important.

Soma: From your vantage point, particularly as a hiring manager, what do you think are the biggest hiring mistakes that companies today are making?

Anoop: One is that company strategies change. Hiring strong, fungible engineers and marketers who can change as your strategy changes is really important. That is the thing you need to do. The second thing we have found is that attitude is really important. People who understand ambiguity, who can take the punches, roll with the punches, adapt and adjust, and get shit done. Those things are also super critical in the hires that you make.

Soma: Now the other side of the audience is usually founders, either existing founders or new founders or people who are thinking that in the next 6 to 12 months, they want to be founders. What is your message to them?

Anoop: My message to founders is, first, before you hire, try to do the job yourself in some cases. I did a lot of sales. I had never done sales before, and I became an expert because I didn’t know how to hire a salesperson. That was one thing on the talent side that I did. Then there were many cases where I had no expertise, let’s say a sales leader or CRO. I leveraged people at Madrona and said, “Would you interview this person for me?” You want to leverage your connections and contacts who are experts in that area so that you can get a good sense of what the role needs to be.

My recommendation to people who are thinking of becoming founders is that the initial team is super critical. Become familiar yourself, at a level of detail, before you jump into hiring all those people, so that you know what the right thing you need is. The salesperson you need for one startup versus another varies a lot. You have to ask, “What is your selling motion? Who is it you need?” and understand that deeply. Second, leverage your friends to do that, and then leverage the right people who can feed you that talent.

Soma: How can I get in touch with you, Anoop, if I’m a founder or investor and want to learn more?

Anoop: Okay, it’s simple. My email is [email protected]. You can find me on LinkedIn; connect with me, and we’d love to talk to you and show you, because seeing is believing. Everybody talks so much. I’m a passionate believer that seeing is believing, so come and see, come and experience, and we would love to partner with you.

Soma: As we come to the end of this episode, Anoop, I want to congratulate you on pushing the boundaries and pushing the envelope on what AI can do for talent acquisition for organizations of all sizes and in all industries. Is there a final word that you would want to say to people, whether they are in a smaller environment like a startup or a bigger environment like an enterprise, as to what they should do about talent management and talent acquisition as they look ahead?

Agentic AI for recruiting is here

Anoop: Agentic AI for recruiting is here and now. I would say experiment with it. This is the time. Be early, before the change is thrust upon you. Be the lean-forward leader, experimenting, adapting, and flowing with the transformation, versus being hit by it when somebody comes and says you’re too late. The world is changing, and it is changing in amazing, wonderful ways, so don’t get stuck in the old world, to the extent you can avoid that, and look broadly at what needs to be done. Especially for large enterprises, the transformation is going to be huge, and even for small companies. So, my final word, and this will sound very selfish: contact us. We’ll show you what we can do for you as you explore all of the different options out there, so that you’re getting the right hires and knocking the ball out of the park.

Soma: First of all, even before I wrap this up, thank you for allowing me to partner with you and Aravind for the last seven years or so. It’s been a fabulous journey. So many learnings and so much success. For all the progress we’ve made, I think we are still in early stages and there is so much more that we can do, and I’m looking forward to that. Thank you so much for joining us here today, and thanks to everybody for listening and we’ll see you again soon.

Anoop: Thank you, Soma. It has been wonderful to have you as a partner in our journey.

 

Unscripted: What Happened After the Mic Went Off with Douwe Kiela

 

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Full Transcript below.


Sometimes, the best insights can come after an interview ends.

That’s exactly what happened when Madrona Partner Jon Turow wrapped the official recording of our recent Founded & Funded episode with Douwe Kiela, co-founder and CEO of Contextual AI. The full conversation dove deep into the evolution of RAG, the rise of RAG agents, and how to evaluate real GenAI systems in production.

But after we hit “cut,” Douwe and Jon kept talking — and this bonus conversation produced some of the most candid moments of the day.

In this 10-minute follow-up, Douwe and Jon cover:

  1. Why vertical storytelling matters more than ever in GenAI
  2. The tension between being platform vs. application
  3. How “doing things that don’t scale” builds conviction early on
  4. The archetypes of great founders — and how imagination is often the rarest (but most valuable) trait
  5. Douwe’s early work on multimodal hate speech detection at Meta and why the subtle problems are often the hardest to solve
  6. Why now is the moment to show what’s possible with your platform — not just sell the vision

It’s a fast exchange full of unfiltered insight on what it really takes to build ambitious AI systems, and companies.

And if you missed the full episode, start there.


This transcript was automatically generated and edited for clarity.

Jon: One thing I’m learning: I talk to a lot of enterprise CTOs, as I’m sure you do, and a lot of founders, as I’m sure you do, and I feel like even when this kind of technology is horizontal, we say you go to market vertically, or by segment, or whatever, but I don’t even think that’s quite right. I think the storytelling is the thing that becomes vertical or segmented. When you speak to the CTO of a bank versus the CTO of a pharma company, or the head partner of a law firm, or whoever it would be, all of these people’s eyes will glaze over when we start to talk about chunking. But if we can talk about SEC filings and the tolerances in there, and a couple of really impactful stories in the language of those segments, that seems to go so far. I’ve seen it myself, and even astute customers will realize it’s the same thing. And so storytelling, at a time like this, when there’s opportunity in every direction you look, feels like a thing that can be a superpower for you.

Douwe Kiela: It’s not easy, because it’s like, how vertical do you want to go? We don’t want to be Hebbia or even Harvey; we want Hebbia and Harvey to be built on Contextual, but the only way to do that is to maybe show that you can build a Hebbia and Harvey on our platform.

Jon: I’ll tell you about when I’ve done it right and when I did it wrong. When I did it right was in the early days of DynamoDB, the managed NoSQL data store, and we said, “Dynamo is really useful for ad tech, for gaming, and for finance, probably.” That’s because there were key use cases in each of these domains that took advantage of the capabilities of NoSQL and were not too bothered by the limitations of NoSQL: you only have certain kinds of lookups and things like that. Astute customers could realize you could use Dynamo for whatever you wanted, but we never said that. All of our marketing was customer references and reference implementations, and that helped us, like you plant your feet really well. When I’ve done it badly also shows the power of this technique. I remember I did a presentation about Edge AI, this was around 2016, at AWS re:Invent. We shipped the first Edge AI product ever at Amazon.

We showed how we were using it with Rio Tinto, which is a giant mining company doing autonomous mining vehicles. We chose that because it’s fun and sparks the imagination, and we thought it would spark the imagination across a lot of domains. This was re:Invent, so it was on a Wednesday or a Thursday, I want to say, that I did that presentation. On Friday morning, before I was going to fly out, I got an urgent phone call from the CTO of the only other major mining company of that scale, saying, “I have exactly that problem. Can you do the same thing for me?” I thought, “Well, gee, I aimed wrong,” because I had picked a market of two, and I already had one. But it shows that people don’t necessarily use imagination; if you put it in terms that are that recognizable, they can see themselves.

Douwe Kiela: Yeah. I heard that maybe it was Swami, or someone senior at AWS, who said, “The big problem in the market right now is not the technology, it’s people’s lack of imagination around AI.”

Jon: That sounds like a Swami.

Douwe: Swami or maybe Andy. Yeah, I don’t know.

Jon: It could be. I would also say that that’s a major role for founders on this spectrum. I’d put you in a group with Sergey and Larry, right? So there are the Douwes, Sergeys, and Larrys; there are the Mark Zuckerbergs, who were only PHP coders; and there are the domain experts who are visionaries. They’re missionaries about solving a real problem, and they understand the problem better than other people do. They are not necessarily nuanced in what is possible, but they can hack it together; they can get it to work enough to get to a point where they can build a team around them.

Douwe: Who’s the archetype there?

Jon: This is not a perfect example, but I would think about pure sales CEOs.

Douwe: Benioff or something?

Jon: Yeah, or the guys who started Flatiron Health and Invite Media. They were not oncology experts, but they understood their customers really well. Jeff Bezos was not a publishing expert, nor did he write code at all at Amazon. I’m not sure he ever checked one line of code into production, but he had deep customer empathy and conviction around that. The story with Jeff is that the first book ordered on Amazon.com by a non-Amazonian was not a book that they had in stock. And the team told Jeff, “Sorry, we’ve got to cancel this order.” And Jeff said, “The hell we do.” And he got in his car and went to every bookstore in the city.

Douwe Kiela: Barnes & Noble, somewhere.

Jon: Yeah, and he found it, and then he drove to the post office and mailed it himself. He was trying to make a point, but he was also saying, “Look, we’re in the books business now, and we promised our whole catalog. It’s the first order; you better believe we’re going to honor it.” So that’s what I think about. And you do things that don’t scale, and the rest.

Douwe: Doing all the crazy stuff. All the VCs are saying, “Just do SaaS, no services. Focus on one thing, do it well.” And all of that is true, but if you want to be the next Amazon, then you also have to not follow that.

Jon: Do things that don’t scale, and, you know and I know, eventually you can figure out how to get things to scale. One of the reasons, and you would know this so much better than I do, one of the reasons Meta invested as early as it did in AI was content moderation. You would like a social media business to scale with compute, but it was starting to get bottlenecked by how many content moderators you had, and that’s a lot slower and more expensive. How quickly and effectively can you leverage that up?

Douwe: That’s why they needed AI content moderation.

Jon: That’s why they needed AI.

Douwe: We were doing all the multimodal content moderation. That was powered by our code base.

Jon: Wow. And what year?

Douwe: It was around 2018. We did hateful memes. I don’t know if you’ve heard of this, the Hateful Memes Project, that was my thing. Where that came from was content moderation was pretty good on images and it was pretty good on text, like if there was some Hitler image, or whatever, or some obvious hate speech.

Jon: That’s kind of an easy one.

Douwe: Exactly. The most interesting ones, and people have figured this out, are multimodal. I have a meme that, on the surface, to the individual classifiers, looks fine, but if you put the parts together, it's super racist, or they're trying to sell a gun, or they're dealing drugs, or things like that. Everybody at the time was trying to circumvent these hate speech classifiers by being multimodal. Then I came in and we solved it.

Jon: How did you solve it?

Douwe: By building better multimodal models. We had a better multimodal classifier that actually looked at both modalities at the same time in the same model. We built a framework, and we built the data set, and we built the models, and then most of the work was done by product teams.
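A minimal sketch of the idea Douwe describes: one model that sees both modalities at once can score the combination, which separate per-modality classifiers miss. Everything here (the dimensions, the random stand-in "encoders", the untrained classifier head) is illustrative, not Facebook's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pretrained encoders; in practice these would be a
# vision model and a text model (all dimensions here are made up).
W_img = rng.standard_normal((64, 16))
W_txt = rng.standard_normal((100, 16))

def encode_image(pixels):
    return np.tanh(pixels @ W_img)

def encode_text(token_ids):
    return W_txt[token_ids].mean(axis=0)  # mean-pooled embeddings

def joint_score(img_vec, txt_vec, w):
    # Early fusion: concatenate both modalities and classify the
    # *pair*, so image/text interactions can influence the score.
    fused = np.concatenate([img_vec, txt_vec])
    return float(1 / (1 + np.exp(-fused @ w)))  # sigmoid probability

img = encode_image(rng.standard_normal(64))
txt = encode_text(np.array([3, 17, 42]))
w = rng.standard_normal(32)  # an untrained classifier head
print(joint_score(img, txt, w))  # a probability in [0, 1]
```

The key design point is that the sigmoid is applied after fusion, not once per modality: a benign-looking image and benign-looking text can still, together, push the joint score high.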

AI, Ambition, and a $3 Trillion Vision: Satya Nadella on Microsoft’s Bold Bet

 

TLDR: Microsoft Chairman & CEO Satya Nadella shared candid insights on leadership, AI, and Microsoft’s transformation into a $3 trillion powerhouse during Madrona’s Annual Meeting on March 18, 2025. He reflected on the cultural shifts that fueled the company’s resurgence, Microsoft’s AI strategy and pivotal AI partnership with OpenAI, and why AI’s success should be measured in global economic growth. His key messages? Mission and culture define strategy. AI is still in its early days. And “The world will need more compute.”

Listen on Spotify, Apple, and Amazon | Watch on YouTube


This transcript was automatically generated.

Soma: Satya, it's fantastic to have you here today. I don't know if you remember this, but we had you at our annual meeting five years ago to celebrate our 25th anniversary. It so happened that two weeks before the event, once we had agreed we were going to do this, we had to go on a massive scramble: the world changed from everything being in person to everything being virtual. You were a good sport, we did it virtually five years ago, and it ended up being a great conversation. Thank you for doing that.
But I’m so, so excited to have you in person here today.

Satya Nadella: Likewise, I’m glad it’s in person.

Soma: This year we are celebrating a couple of different milestones, okay? First and foremost, obviously, Microsoft is celebrating its 50th anniversary. In fact, I think two and a half weeks from now (April 5th) is the 50th anniversary. So that's a fantastic milestone. I spent 27 of these 50 years at Microsoft, some of those years working closely with you, so for me personally, it's with a lot of joy and satisfaction that I see how far Microsoft has come under your leadership these last 11 years. Coincidentally, we're also celebrating Madrona's 30th anniversary this year. Back in 1995, the four co-founders of Madrona started Madrona, and I see Paul there; he was one of the four co-founders back then. The thesis and the bet for Madrona was very simple. It was all about, hey, we are going to take a bet on the technology ecosystem, on the startup ecosystem, in Seattle.

And 30 years later we are so glad that they took the bet and we all joined the journey. But for all the progress we've seen in Seattle, I think we are still scratching the surface. There's so much more ahead of us in the next 20, 30, 50 years, and we are excited to see where the world is going and how we can play a part in helping shape that world, so to speak.

11 years ago, when you became the CEO of Microsoft, I actually don't know how many people in this audience and in the world imagined that there would be a day, not in the too distant future, when we would have two companies in Seattle that collectively have a market cap of over $5 trillion: Microsoft being one and Amazon being the other. But just look at what you've been able to accomplish at Microsoft. When you took over as CEO, Microsoft's market cap was around $300 billion. Today it's around $3 trillion. It's phenomenal progress, and one that I definitely did not imagine, and I continue to think about how this happened and what caused it to happen.

Satya Nadella on Microsoft’s AI Strategy, Leadership Culture, and the Future of Computing

Satya Nadella: But did you hold?

Soma: A lot.

Satya Nadella: That’s great.

Soma: In addition to everything else, I’m a shareholder of Microsoft. I’m excited about that. Okay. But Satya, congratulations on a great, great run at Microsoft so far, and I know there’s still a lot more to go there.
I do know that everybody here in the audience is really interested in hearing from you, so I should stop my ramble and dive into the conversation.

Satya Nadella: Sure.

Soma: I want to take you back 11 years, to when you decided, "Hey, I'm going to take on the mantle of CEO of Microsoft." What were some of the things in your mind in terms of your expectations, what you thought might happen? And then talk about some of the key inflection points over the last decade of your tenure as CEO of Microsoft.

Satya Nadella: Yes. First of all, thank you so much for the opportunity to be here. It's great to be celebrating, I guess, your 30th year. And as you said, of late I've been thinking a lot about our upcoming 50th, which is unbelievable to think about. I was also thinking about it yesterday: I was seven years old, I guess, when Microsoft was formed. And a lot has happened.
In 2014, when I became CEO, Soma, quite honestly, my frame was very simple. I knew I was walking in as the first non-founder. Technically Steve was not a founder, but he had founder status at the company. The company I grew up in was built by Bill and Steve. And so I felt one of the things I had to do as a non-founder was to make first class again what founders do. What founders do is have a real sense of purpose and mission that gives them moral authority and telegraphs what the company was created for. And I felt like we needed to reground ourselves.
In 2014 when I became CEO, Soma, quite honestly at that time, my frame was very simple. I knew I was walking in as the first non-founder. Technically Steve was not the founder, but he had founder status at the company. The company I grew up in was built by Bill and Steve. And so therefore, I felt one of the things as a non-founder was to make first class again what founders do. What founders do is have a real sense of purpose and mission that gives them both the moral authority and telegraphs what the company was created for and what have you. And I felt like we needed to reground ourselves.

In fact, back then, one of the things I felt was, wow, in 1975 when Paul and Bill started Microsoft, they somehow thought of software as a … In fact, the software industry didn’t even exist, but they conceived that we should create software so that others can create more software and a software industry will be born. And that’s what was the original idea of Microsoft. And if that was relevant in ’75, it was more relevant in 2014 and it’s more relevant today in 2025.

And so I went back to that origin story, took inspiration from it, and re-articulated it as the mission we now talk about, which is empowering every person and every organization on the planet to achieve more. So that was one part. The other piece I felt, again as a non-founder, was to make culture a very first-class thing. Because in companies that have founders, culture is implicit; it's a little bit of the cult of the founder. A founder can get away with a lot, whereas a mere mortal CEO like me can't.

And so you needed to build more of that cultural base. I must say I was lucky enough to pick the meme of growth mindset from Carol Dweck's work, and it's done wonders. And quite frankly, it's done wonders because it was not seen as new dogma from a new CEO; it spoke much more intrinsically to us as humans, both in life and at work. So, both these things: making mission a first-class, explicit thing, and culture. And then of course they're necessary but not sufficient, because then you've got to get your strategy right and your execution right, and you've got to adapt, because everything in life is path dependent.

But you don't even get shots on goal if you don't have your mission and culture set right. And so that's what I attribute a lot of our progress to. And we have stayed consistent on that frame, I would say, for the last 11 years.

Soma: If you go back, I think you took over in February, and then in May that year, 2014, your first external announcement was, "Hey, we are going to take Office cross-platform." And that I thought was visceral. People who knew Microsoft until then, or who had been part of the Microsoft ecosystem in one way, shape, or form, knew how big a statement that was. Was it a conscious decision on your part to say, "Hey, I need to signal not just to the external world, but to my own organization, what this means?"

Satya Nadella: Yeah. In the Microsoft that you worked at and that I worked at, you've got to remember, we launched Office on the Mac before there was even Windows. So in some sense, obviously, we achieved a lot of success in the '90s, and therefore we came to see Windows as the only thing that was needed, the air we breathe and what-have-you. But that was really not the company's core gene pool. Our core gene pool was: we create software, and we want to make sure that our software is there everywhere.

And obviously it's not like I came in February and said, "Let's build the software." Obviously Steve had approved that. But it worked well because it helped unlock, to your point, what was Microsoft's true value prop in the cloud era. One of the things when I look back at it: if God had come to me and said, "There's mobile and cloud, pick one," I would've picked cloud. Not that mobile is not the biggest thing, but if you had told me to pick one, I'd pick the thing that may even outlast the client device.

And so therefore, that’s what was the real strategy, which is we knew where our position at that time was on mobile. We were struggling at third. Having seen what happens to number three players in an ecosystem, I felt like wow, that train had left the station. So therefore it was very important for us to make sure we became a strong number two in cloud at that time. And then in fact, more comprehensive than even our friends across the lake because of what we were doing on Office 365 and Azure.

And so we just doubled down. And when you double down on such a strategy, you got to make sure that your software is available and your endpoints are available everywhere. And so that was what that event was all about.

Soma: Great. You just referenced culture, cultural transformation, and growth mindset in that context. By the way, if any of you haven't read that book, I'm a huge believer in it. I think it's one of the best books written on culture; please get a copy and read it. It's a fantastic book and something I try hard to practice every day. And I can tell you, I'm still learning.

But I've also heard you talk a lot about changing the culture from a know-it-all culture to a learn-it-all culture. But like anything else, when you took on the mantle, it was already a 100,000-person organization steeped in a particular set of ways of doing and thinking about things. How easy or hard was it to go through that cultural transformation?

Satya Nadella: Yeah. I think the beauty of the growth mindset framework, if you will, is not about claiming growth mindset, but confronting one’s own fixed mindset. At the end of the day, the day you say you have a growth mindset is the day you don’t have a growth mindset. That’s the nice recursion in it. And it’s hard and it has to start with setting the tone.

Let's face it. In large organizations like ours, or any, I guess, it's easy to talk about risk because you want the other person to take risk. Or it is easy to say, "Let's change," when it's the other person who should change. And so in some sense, the hard part of organizational change is the inward change that has to come. And so this thing pushes you on it. It gives you at least a way to live that. And by living up to that high standard of confronting your own fixed mindset, you can hope to make that large-scale change happen. And like all things, Soma, it's always top down and bottom up. You can never do anything in only one direction. It has to happen across both sides, all the time.

The other thing I must say is you have to have patience. You can't come in the morning and say, "Hey, we need to have a growth mindset by evening." You have to let even leaders bring their own personal passion and personal stories to it, give it some room to breathe. And somehow or other, not because we really thought it all through, it took on, as I said, an organic life of its own. People felt this was a meme that made them better leaders, that made them better human beings.

And so therefore, I think that's what really helped. And we were patient with it. For example, the classic thing at Microsoft would have been to metric it, then say green, red, yellow, and then start doing bad things to all the reds, and it would've been gamed in a second. We didn't do that, and that, I think, helped a lot. And like all things, it also can be taken to the extreme. There are times when I'm in meetings where people will look around the room and say, "Here are all the people who don't have a growth mindset," versus saying, "Look, the entire idea is to be able to talk about your own fixed mindset." And by the way, the best feature of that cultural thing is that it's never done, so you can never claim the job is done. And right now, oh my God, talk about it: we're in the middle of saying, "Wow, we've got to relearn everything, because there's a completely new game in town again."

Soma: Before we talk about AI, I thought we'd talk a little bit about something that is personal to you, and hopefully something on a lighter note. You were a cricket player in high school and college, and it's been fun working with you these last many years trying to bring cricket to the US through Major League Cricket. And you've mentioned many times, Satya, how that sport has shaped your thinking and your leadership style, and, in fact, has had a positive impact on your life. Share with us a little bit about that.

Satya Nadella: Yeah, Caitlin, who works with me, is here. Every time I post on cricket, I get all these likes from India, and she says, "God, why don't they do the same when you post on Microsoft products?" It's like a billion and a half people who are crazy about it can do that for you.

Look, I think all team sports shape us a lot. It's one of those cultural things: when I see leaders, you can easily trace back to the team sports they played and how that impacts how they think. There are three things that I've written a lot about and think about even daily. I remember there was this one time. It's interesting, there's this guy you know, Harsha Bhogle, who actually went to the same high school as me, and recently I was talking to him and he was telling me about our, we called them physical directors. Think of them as a coach; that's the best translation.

But anyway, we were playing some league match and there was this guy from Australia who happened to be in Hyderabad of all places, playing for the opposition. And he was such an unbelievable player. And I was fielding at, whatever, forward short leg, watching him in awe. Then I hear this guy yell, "Compete, don't admire." And when you're in the field, that zeal, the competitive spirit, giving it your all, I think it's such an important thing that sport teaches you. That ability to summon the energy to go play the game is one.

The other one I'll mention, talking about teams, I'll never forget. There was this unbelievably important match of ours, and this unbelievable player who was pissed off at our captain for whatever reason, I think because he had changed him too soon or what have you. And the guy just drops a catch on purpose. And think about the entire eleven. All our faces dropped. We were all so pissed off, I guess, but also let down that your star player somehow felt he wanted to teach us a lesson and thereby cause us to lose.

And then the last thing I would say, which has probably had the most profound impact on me, is the leadership lesson. There was a captain of mine who went on to play a lot of first-class cricket. One day I was bowling some trashy off spin. So this guy takes the ball, changes me, bowls himself, gets a wicket, but gives the ball back to me the next over, and in that match I got some four or five wickets. Afterward I asked him, "Why the heck did you do that?" And he comes to me and says, "You know what? I needed you for a season, not for a match. I wanted to make sure your confidence was not broken." I said, "Man, for a high school captain to have that level of enlightened leadership skills …"

That’s the idea, which is leadership is about having a team and then getting the team to perform for a season. And I think team sport and what it means to all of us culturally and what it means in terms of teaching us the hard lessons in those fields is something that I think a lot about.

Soma: That’s great.

Satya Nadella: And of course, I think a lot about MLC too.

Soma: Season three starts June 12.

Satya Nadella: The sports market is not sufficiently penetrated in the United States. Talk about having to make your money somewhere else.

Soma: Let's talk about AI now. You mentioned that if you look at the history of Microsoft, we are at the beginning or in the middle of the fourth platform wave. The first was client-server, then the internet and mobile, then the cloud, and now AI.
As much as we've talked about AI these past few years, Microsoft has had investments in AI for decades now. Tell me a little bit about how you decided, hey, in addition to everything we are doing ourselves, how do we think about partnering with OpenAI.

Satya Nadella: I love the way you say ourselves. That’s good.

Soma: How does Microsoft think about partnering with somebody like OpenAI? And then more importantly, how has that partnership evolved till today and what do you think the future is going to be of that partnership?

Satya Nadella: Yeah, it's a good point. I think 1995 is when we had our first ML research team, and MSR Speech was the first place we went to. And obviously we had lots of MSR work. Here's the interesting thing: even on the OpenAI side, we had two phases. In fact, the first time we partnered with them was in the context of when they were doing Dota 2 and RL. And then they went off on that, and I was interested, but in RL by itself, at that time at least, we were not that deep. When they said, "We want to go tackle natural language with transformers," that's when we said, "Let's go bet."

Quite frankly, that was the thing OpenAI got right: they were willing to go all in on scaling laws. In fact, the first paper I read on the scaling laws was, interestingly, written by Dario [Amodei], saying, "Hey, we can throw compute at transformers and see scaling laws work on natural language." If you think about Microsoft's history, for those of you who've been tracking us, Bill has been obsessed with natural language. And of course, the way he has been obsessed with it is by schematizing the world. To him, it was all about people, places, things, beautifully organized into a database, and then you do a SQL query, and that's all the world needs.

That was the Microsoft we dreamt of. And then, of course, when we thought of AI, it was, oh, adding some semantics on top of that. That's how we came in. In hindsight, of course, when we were taking that bet, it was unclear to us, quite frankly. But to me, when I first saw code completions in a Codex model, which was a precursor to GPT-3.5, that is when I think we started building enough conviction that, one, you could actually build useful products. And in software engineering, the team that you ran, even the engineers are skeptical people. No one thought that AI would make coding easy. But man, that was the moment when I felt there was something afoot: definitely my belief in scaling laws, and the fact that you could build something useful. And so then the rest is history. We just doubled down on it. And even today, when I look at GitHub Copilot, it's unbelievable to see where it's gone in the, whatever, three years or so since code completions.

And by the way, all of these things are happening in parallel. Code completions are getting better; we, in fact, just launched a new model for code completion. And then chat, of course, is right there. You have multi-file edits. You have agents that work across the full repo, and then we have a SWE-agent where you're going from, I'll say, a pair programmer to a peer programmer. So it's a full system being built off of effectively one regime.

Soma: I remember, this was before GitHub Copilot had launched in beta or whatever to the world. You and I were having dinner, and you literally spent probably 20, 30 minutes talking about this new thing the GitHub guys were doing called Copilot. I remember walking out of that dinner thinking I needed to go talk to my buddies in DevDiv to understand what was happening, because I hadn't seen you that animated and excited about something. And this was well before it finally came out as what I'd call a product.
But those early days, how did you decide to take a bet on that inside the company? Because I would assume that in any organization there’s going to be some level of resistance to something new that is going to be fundamentally a paradigm changing thing.

Satya Nadella: Yeah. There were two phases to that as well. GitHub Copilot was the first product, and then ChatGPT happened. And ChatGPT, quite frankly, you should ask the OpenAI folks, but nobody thought it would be a product. It was supposed to be, at best, maybe a data collection thing. And the rest is history. But I must say that was the thing that really helped. The beauty of Microsoft's position was, one, the partnership with OpenAI. Second, we were already building products like GitHub Copilot. And thankfully ChatGPT happened because then there was no … And we were ready, so once ChatGPT happened, and we had built a product and we had built the stack, it was easy to copy-paste, so to speak, across all of what we were doing.

But a lot of these waves are like that. If I look back, even in the four waves: you could say Windows, we had one, two, and three, but I joined really post-three. And that was what we did; once Windows 3 hit, we knew what to do after. That's where I think the path … And we were ahead. In some of the others we were behind, but that's fine. But in this one we were ahead, and so we executed pretty well, quite frankly, across the length and breadth of the Microsoft opportunity. But as you rightfully point out, it's still very early. I think backstage you and I were talking about it.

I feel it's a little more like the GUI wave pre-Office, or the web wave pre-search. I think we're still trying to figure out where the enterprise value truly accrues. Is it in the model? Is it in the infrastructure? Is it in one app category? And I think all that's still to be litigated.

Soma: We have a point of view on that, but let me turn around and ask you. If you look at the AI stack today, you've got AI infrastructure, you've got models, and you've got applications, what we call intelligent applications. We have historically always believed the application layer is where you're going to have the most value creation over time, whether it's horizontal or vertical or some combination thereof. Do you see that trend also following through here in AI, or do you think differently?

Satya Nadella: It’s a great question. I think that if I look back through all these tech shifts, I think all enterprise value accrues to two things. One is some organizing layer around user experience and some, I’ll call it, change in efficiency at the infrastructure layer. You can say GUI on the client and client server. That was one. Or you could say search as ultimately, although we thought browser for the longest time, but turns out search was the organizing layer of the web. And then SaaS applications and the infrastructure and databases and what have you. And same thing with cloud.

In this case, I think hyperscale. When I look at our business, if you ask the question, five years from now, even in a fully agentic world, what is needed? Lots more compute. In fact, it's interesting. Take Deep Researcher or what-have-you. Remember, Deep Researcher needs a VM or a container. In fact, it's the best workload to drive more compute.

And in fact, look at the ratios; take ChatGPT. It's a big Cosmos DB customer; all its state is in databases. In fact, the way they procure compute is with a ratio between the AI accelerator and storage and compute. And so hyperscale, being one of the hyperscalers, is a good place to be and to build the infrastructure. You've got to be best-in-class in terms of scale and cost and what-have-you.

Then I think after that, it gets a little muddy, because what happens to models, what happens to app categories? I think that’s where I think time will tell, but I go back and say each category will be different. Consumer, there’ll be some winner take all network effect. In the enterprise space it’ll be different. That I think is where we are still in the early stages of figuring out, but I think the stable thing that at least I can say with confidence is the world will need more compute.

Soma: I have a lot more to talk to Satya about, but I know we are running short on time, so I'm going to ask one more question. You have a unique vantage point in terms of who you talk to day in and day out, whether it's Fortune 100 CEOs or heads of government or what-have-you. You recently mentioned that one way to think about AI's success is its ability to boost the GDP of a country or the world. That's a fascinating way to think about what AI's impact could be over time. Can you elaborate a little bit on that?

Satya Nadella: Yeah. I think I said that in response to all these benchmarks on AGI and so on. First of all, all the evals are saturated; it's becoming slightly meaningless. But if you set that aside, just take the simple math. Let's say you spend $100 billion in CapEx. You've got to make a return on it, so let's say, roughly, you have to make $100 billion a year on it. In order for you to make $100 billion a year, what's the value you have to create?

And it's multiples of that. And so that ain't going to happen unless and until there is broad-based economic growth in the world. So that's why I look at it and say my formula for when we can say AGI has arrived is when, say, the developed world is growing at 10%, which may have been the peak of the Industrial Revolution or what have you. That's a good benchmark for me, if you ask me what the benchmark is. That is intelligence abundance, and it's going to drive productivity. I think we should peg ourselves to that. In fact, I believe the social permission for companies to invest what they're investing, from both the markets and broader society, will come from our ability to deliver broad sectoral productivity gains that are evidenced in economic growth.
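Satya's back-of-the-envelope can be made concrete. The figures below are illustrative assumptions only (the 5x value multiplier in particular is hypothetical, standing in for his "multiples of that"), not Microsoft's numbers:

```python
capex = 100e9             # $100B of AI capital expenditure
target_return = 100e9     # roughly $100B a year to justify it

# Buyers only keep paying if they capture more value than they
# spend; assume, hypothetically, ~5x value per dollar of spend.
value_multiplier = 5
value_needed = target_return * value_multiplier

print(f"${value_needed / 1e12:.1f} trillion of new value per year")
# prints: $0.5 trillion of new value per year
```

Half a trillion dollars of new economic value per year, for a single $100B tranche of CapEx, is why the argument ends in GDP growth rather than benchmark scores.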

And by the way, the one other thing I'm excited about this time around: it won't be like the Industrial Revolution, in the sense that it's not going to be about the developed world, or the Global North versus the Global South. It's going to be about the entire globe, because guess what? Diffusion is so good that everybody is going to get it at the same time. So that'll be the other exciting part of it.

Soma: Great. Thank you, Satya. Thank you for being here, and congratulations again.

RAG Inventor Talks Agents, Grounded AI, and Enterprise Impact

 

Listen on Spotify, Apple, and Amazon | Watch on YouTube

From the invention of RAG to how it evolved beyond its early use cases, and why the future lies in RAG 2.0, RAG agents, and system-level AI design, this week’s episode of Founded & Funded is a must-listen for anyone building in AI. Madrona Partner Jon Turow sits down with Douwe Kiela, the co-creator of Retrieval Augmented Generation and co-founder of Contextual AI, to unpack:

  • Why RAG was never meant to be a silver bullet — and why it still gets misunderstood
  • The false dichotomies of RAG vs. fine-tuning and long-context
  • How enterprises should evaluate and scale GenAI in production
  • What makes a problem a “RAG problem” (and what doesn’t)
  • How to build enterprise-ready AI infrastructure that actually works
  • Why hallucinations aren’t always bad (and how to evaluate them)
  • And why he believes now is the moment for RAG agents

Whether you’re a builder, an investor, or an AI practitioner, this is a conversation that will challenge how you think about the future of enterprise AI.


This transcript was automatically generated and edited for clarity.

Jon: So Douwe, take us back to the beginning of RAG. What was the problem that you were trying to solve when you came up with that?

Douwe: The history of the RAG project: we were at Facebook AI Research, FAIR, and I had been doing a lot of work on grounding already for my PhD thesis. Grounding, at the time, really meant understanding language with respect to something else. If you want to know the meaning of the word cat, the word embedding of cat (this was before we had sentence embeddings), then ideally you would also know what cats look like, because then you understand the meaning of cat better. That type of perceptual grounding was something a lot of people were looking at at the time. Then I was talking with one of my PhD students, Ethan Perez, about, "Can we ground it in something else? Maybe we can ground in other text instead of in images." The obvious source at the time to ground in was Wikipedia.

We would say, "This is true, sort of true," and then you can understand language with respect to that ground truth. That was the origin of RAG. Ethan and I were looking at that, and then we found that some folks in London had been working on open-domain question answering, mostly Sebastian Riedel and Patrick Lewis. They had amazing first models in that space, and it was a very interesting problem: how can I make a generative model work on any type of data and then answer questions on top of it? We joined forces there. We happened to get very lucky at the time because the people at Facebook had FAISS, Facebook AI Similarity Search, I think is what it stands for, basically the first vector database, and it was just there. And so we said: we have to take the output from the vector database and give it to a generative model. This was before we called them language models. Then the language model can generate answers grounded in the things you retrieve. And that became RAG.

We always joke with the folks who were on the original paper that we should have come up with a much better name than that, but somehow, it stuck. This was by no means the only project that was doing this, there were people at Google working on very similar things, like REALM is an amazing paper from around the same time. Why RAG, I think, stuck was because the whole field was moving towards gen AI, and the G in RAG stands for generative. We were really the first ones to show that you could make this combination of a vector database and a generative model actually work.

Jon: There’s an insight in here that RAG, from its very inception, was multimodal. You were starting with image grounding, and things like that, and it’s been heavily language-centric in the way people have applied it. But from that very beginning place, were you imagining that you were going to come back and apply it with images?

Douwe: We had some papers from around that time. There’s a paper we did with more applied folks in Facebook where we were looking at, I think it was called Extra, and it was basically RAG but then on top of images. That feels like a long time ago now, but that was always very much the idea, is you can have arbitrary data that is not captured by the parameters of the generative model, and you can do retrieval over that arbitrary data to augment the generative model so that it can do its job. It’s all about the context that you give it.

Jon: Well, this takes me back to another common critique of these early generative models that, for the amazing Q&A that they were capable of, the knowledge cutoff was really striking, you’ve had models in 2020 and 2021 that were not aware of COVID-19, that obviously was so important to society. Was that part of the motivation? Was that part of the solve, that you can make these things fresher?

Douwe: Yeah, it was part of the original motivation. That is what grounding is, the vision behind the original RAG project. We did a lot of work after that on that question as well, can I have a very lightweight language model that basically has no knowledge, it’s very good at reasoning and speaking English or any language, but it knows nothing? It has to rely completely on this other model, the retriever, which does a lot of the heavy lifting to ensure that the language model has the right context, but that they really have separate responsibilities. Getting that to work turned out to be quite difficult.

Jon: Now, we have RAG, and we still have this constellation of other techniques, we have training, and we have tuning, and we have in-context learning, and that was, I’m sure, very hard to navigate for research labs, let alone enterprises. In the conception of RAG, in the early implementations of it, what was in your head about how RAG was going to fit into that constellation? Was it meant to be standalone?

Douwe: It’s interesting because the concept of in-context learning didn’t really exist at the time, that really became a thing with GPT-3, and that’s an amazing paper and proof point that that actually works, and I think that unlocked a lot of possibilities. In the original RAG paper, we have a baseline, what we call the frozen baseline, where we don’t do any training and we give it as context, that’s in table six, and we showed that it doesn’t really work, or at least, that you can do a lot better if you optimize the parameters. In-context learning is great, but you can probably always beat it through machine learning if you are able to do that. If you have access to the parameters, which is, obviously, not the case with a lot of these black box frontier language models, but if you have access to the parameters and you can optimize them for the data you’re working on or the problem you’re solving, then at least, theoretically, you should always be able to do better.

I see a lot of false dichotomies around RAG. The one I often hear is it’s either RAG or fine-tuning. That’s wrong, you can fine-tune a RAG system and then it would be even better. The other dichotomy I often hear is it’s RAG or long-context. Those are the same thing, RAG is a different way to solve the problem where you have more information than you can put in the context. One solution is to try to grow the context, which doesn’t really work yet even though people like to pretend that it does, the other is to use information retrieval, which is pretty well established as a computer science research field, and leverage all of that and make sure that the language model can do its job. I think things get oversimplified where it’s like, “You should be doing all of those things. You should be doing RAG, you should have a long-context window as long as you can get, and you should fine-tune that thing.” That’s how you get the best performance.

Jon: What has happened since then is that, and we’ll talk about how this is all getting combined in more sophisticated ways today, but I think it’s fair to say that in the past 18, 24, 36 months, RAG has caught fire and even become misunderstood as the single silver bullet. Why do you think it’s been so seductive?

Douwe: It’s seductive because it’s easy. Honestly, I think long-context is even more seductive if you’re lazy, because then you don’t even have to worry about the retrieval anymore, the data, you put it all there and you pay a heavy price for having all of that data in the context. Every single time you’re answering a question about Harry Potter, you have to read the whole book in order to answer the question, which is not great. So RAG is seductive, I think, because you need to have a way to get these language models to work on top of your data. In the old paradigm of machine learning, we would probably do that in a much more sophisticated way, but because these frontier models are behind black box APIs and we have no access to what they’re actually doing, the only way to really make them do the job on your data is to use retrieval to augment them. It’s a function of what the ecosystem has looked like over the past two years since ChatGPT.
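Douwe’s Harry Potter point can be made concrete with back-of-envelope arithmetic. All the numbers below are illustrative assumptions (book length, chunk size, and a hypothetical price per million input tokens), not real vendor pricing; what matters is only the ratio between the two approaches.

```python
# Back-of-envelope input-cost comparison of long-context vs. RAG.
# Every number here is an assumption for illustration, not real pricing.
BOOK_TOKENS = 130_000    # assumed: a full novel in the context window per query
CHUNK_TOKENS = 500       # assumed: size of one retrieved passage
TOP_K = 5                # assumed: passages retrieved per query
PRICE_PER_M = 3.00       # assumed: dollars per million input tokens

def input_cost(tokens_per_query, queries):
    return tokens_per_query * queries * PRICE_PER_M / 1_000_000

long_context_cost = input_cost(BOOK_TOKENS, queries=1000)  # whole book every time
rag_cost = input_cost(CHUNK_TOKENS * TOP_K, queries=1000)  # only what you retrieve
```

Under these assumptions, a thousand questions cost $390 with the whole book in context versus $7.50 with retrieval, a 52x difference. The exact ratio depends entirely on the assumed numbers, but the structural point stands: long-context pays for the full corpus on every query, RAG pays only for what it retrieves.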

Jon: We’ll get to the part where we’re talking about how you need to move beyond a cool demo, but I think the power of a cool demo should not be underestimated, and RAG enables that. What are some of the aha moments that you see with enterprise executives?

Douwe: There are lots of aha moments; I think that’s part of the joy of my job, where you get to show what this can do, and it’s amazing what these models can do. The basic aha moment for us is that accuracy is almost table stakes at this point. Okay, you have some data, it’s one document, you can probably answer lots of questions about that document pretty well. It becomes much harder when you have a million documents or tens of millions of documents and they’re all very complicated or they have very specific things in them. We’ve worked with Qualcomm, and there are circuit design diagrams inside those documents; it’s much harder to make sense of that type of information. The initial wow factor, at least for people using our platform, is that you can stand this up in a minute. I can build a state-of-the-art RAG agent in three clicks, basically.

That time to value used to be very difficult to achieve, because your developers had to think about the optimal chunking strategy for the documents, things that you really don’t want your developers thinking about, but they had to because the technology was so immature. The next generation of these systems and platforms for building RAG agents is going to enable developers to think much more about business value and differentiation, essentially, “How can I be better than my competitors because I’ve solved this problem so much better?” Your chunking strategy should not be important for solving that problem.

Jon: Also, if I connect what we were just talking about to what you said now, the seduction of long-context and RAG is that they’re straightforward and easy, and they plug into my existing architecture. As a CTO, if I have finite resources to implement new pieces of technology, let alone dig into concepts like chunking strategies, and how the vector similarity for non-dairy will look similar to the vector similarity for milk, things like this, is it fair to say that CTOs want something coherent that works out of the box?

Douwe: You would think so, and that’s probably true for CTOs, and CIOs, and CAIOs, and CDOs, and the folks who are thinking about it from that level. But then what we often find is that we talk to these people and they talk to their architects and their developers, and those developers love thinking about chunking strategies, because that’s what it means in a modern era, to be an AI engineer is to be very good at prompt engineering and evaluation and optimizing all the different parts of the RAG stack. It’s very important to have the flexibility to play with these different strategies, but you need to have very, very good defaults so that these people don’t have to do that unless they really want to squeeze the final percent, and then they can do that.

That’s what we are trying to offer: you don’t have to worry about all this basic stuff; you should be thinking about how to really use the AI to deliver value. It’s really a journey. The maturity curve is very wide and flat. Some companies are still figuring out, “What use case should I look at?” Others have a full-blown RAG platform that they built themselves based on completely wrong assumptions about where the field was going, and now they’re stuck in that paradigm. It’s all over the place, which means it’s still very early in the market.

Jon: Take me through some of the milestones on that maturity curve, from the cool demo all the way through to the ninja level results.

Douwe: The timeline is: 2023 was the year of the demo. ChatGPT had just happened, everybody was playing with it, there was a lot of experimental budget. Last year was about trying to productionize it, and you could probably get promoted in a large enterprise if you were the first one to ship gen AI into production. There’s been a lot of kneecapping of those solutions in order to be the first one to get them into production.

Jon: First-past-the-post.

Douwe: First-past-the-post, but in a limited way, because it is very hard to get the real thing past the post. This year, people are really under a lot of pressure to deliver return on investment for all of those AI investments and all of the experimentation that has been happening. It turns out that getting that ROI is a very different question; that’s where you need a lot of deep expertise around the problem, but you also need better components than what exists out there in easy open-source frameworks, where you cobble together a Frankenstein RAG solution that’s great for the demo but doesn’t scale.

Jon: When your customers think about the ROI, how do they measure and perceive it?

Douwe: It really depends on the customer. Some are very sophisticated, trying to think through the metrics, like, “How do I measure it? How do I prioritize it?” I think a lot of consulting firms are trying to be helpful there as well, thinking through, “Okay, this use case is interesting, but it touches 10 people. They’re very highly specialized, but we have this other use case that has 10,000 people. They’re maybe slightly less specialized, but there’s much more impact there.” It’s a trade-off. I think my general stance on use case adoption is that I see a lot of people aiming too low, where it’s like, “Oh, we have AI running in production.” It’s like, “Oh, what do you have?” It’s like, “Well, we have something that can tell us who our 401(k) provider is, and how many vacation days I get.”

And that’s nice, but is that where you get the ROI of AI from? Obviously not. You need to move up in terms of complexity, or if you think of the org chart of the company, you want to go for the specialized roles where they have really hard problems, and if you can make those people 10, 20% more effective at that problem, you can save the company tens or hundreds of millions of dollars by making them better at their job.

Jon: There’s an equation you’re getting at, which is the complexity, sophistication of the work being done times the number of employees that it impacts.

Douwe: There’s roughly two categories for gen AI deployment, one is cost savings. So I have lots of people doing one thing, if I make all of them slightly more effective, then I can save myself a lot of money. The other is more around business transformation and generating new revenue. That second one is obviously much harder to measure, and you need to think through the metrics, like, “What am I optimizing for here?” As a result of that, I think you see a lot more production deployments in the former category where it’s about cost-saving.

Jon: What are some big misunderstandings that you see around what the technology is or is not capable of?

Douwe: I see some confusion around the gap between demo and production. A lot of people think, “Oh, yeah, it’s great, I can easily do this myself.” Then it turns out that everything breaks down after a hundred documents, and they have a million. That is the most common one we see. There are other misconceptions around what RAG is good for and what it’s not. What is a RAG problem and what is not a RAG problem? People don’t have the same mental model that AI researchers like myself have. If I give them access to a RAG agent, often the first question they ask is, “What’s in the data?” That is not a RAG problem, or it’s a RAG problem on the metadata, not on the data itself. A RAG question would be, what was, I don’t know, Meta’s R&D expense in Q4 of 2024, and how did it compare to the previous year? Something like that.

It’s a specific question where you can extract the information and then reason over it and synthesize that different information. A lot of questions that people like to ask are not RAG problems. “Summarize the document” is another one. Summarization is not a RAG problem; ideally, you want to put the whole document in the context and then summarize it. There are different strategies that work well for different questions, and the reason ChatGPT is such a great product is that they’ve abstracted away some of those decisions, but that routing is still very much happening under the surface. I think people need to understand better what type of use case they have. If I’m a Qualcomm customer engineer and I need very specific answers to very specific questions, that’s very clearly a RAG problem. If I need to summarize a document, put it in the context of a long-context model.
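The distinction Douwe draws (metadata questions, summarization, and specific extractive questions each need a different strategy) is what a query router implements. The sketch below uses crude keyword matching purely for illustration; the route names are invented here, and production systems, like the ChatGPT routing he alludes to, typically classify intent with a language model rather than keywords.

```python
# Hypothetical query router; category names and keyword rules are
# illustrative only. Real systems usually route with an LLM classifier.
def route(query: str) -> str:
    q = query.lower()
    if "summarize" in q or "summary" in q:
        return "long_context"   # put the whole document in the context window
    if "what's in the data" in q or "which documents" in q:
        return "metadata"       # answer from index metadata, not document content
    return "rag"                # specific questions: retrieve, then synthesize
```

For example, “Summarize the document” would route to the long-context path, while “What was Meta’s R&D expense in Q4 of 2024?” would route to retrieval.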

Jon: Now, we have Contextual, which is an amalgamation of multiple techniques: you have what you call RAG 2.0, you have fine-tuning, and there’s a lot that happens under the covers that customers ideally don’t have to worry about until they choose to. I expect that radically changes the conversation you have with an enterprise executive. How do you describe the kinds of problems that they should go find, apply, and prioritize?

Douwe: We often help people with use case discovery. So, thinking through, okay, what are the RAG problems, what are maybe not RAG problems? Then for the RAG problems, how do you prioritize them? How do you define success? How do you come up with a proper test set so that you can evaluate whether it actually works? What is the process after that for doing what we call UAT, user acceptance testing, putting it in front of real people? That’s really the thing that matters, right? Sometimes we see production deployments, and they’re in production, and then I ask, “How many people use this?” And the answer is zero. During the initial UAT, everything was great and everybody was saying, “Oh, yeah, this is so great.” Then when your boss asks you the question and your job is on the line, you do it yourself; you don’t ask AI in that particular use case. It’s a transformation that a lot of these companies still have to go through.

Jon: Do the companies want support through that journey today, either direct for Contextual or from a solution partner, to get such things implemented?

Douwe: It’s very tempting to pretend that AI products are mature enough to be fully self-serve and standalone. You can get decent results if you do that, but in order to get it to be great, you need to put in the work. We do that for our customers, or we can also work through systems integrators who do that for us.

Jon: I want to talk about two sides of the organization that you’ve had to build in order to bring all this for customers. One is scaling up the research and engineering function to keep pushing the envelope. There are a couple of very special things that Contextual has, something you call RAG 2.0, something you call active versus passive retrieval. Can you talk about some of those innovations that you’ve got inside Contextual and why they’re important?

Douwe: We really want to be a frontier company, but we don’t want to train foundation models. Obviously, that’s a very, very capital intensive business, I think language models are going to get commoditized. The really interesting problems are around how do you build systems around these models that solve the real problem? Most of the business problems that we encounter, they need to be solved by a system. Then there are a ton of super exciting research problems around how do I get that system to work well together? That’s what RAG 2.0 is in our case, how do you jointly optimize these components so that they can work well together? There’s also other things like making sure that your generations are very grounded. It’s not a general language model, it’s a language model that has been trained specifically for RAG and RAG only. It’s not doing creative writing, it can only talk about what’s in the context.

Similarly, when you build these production systems, you need to have a state-of-the-art re-ranker. Ideally, that re-ranker can also follow instructions. It’s a smarter model. There’s a lot of innovative stuff that we’re doing around building the RAG pipeline better and then how you incorporate feedback into that RAG pipeline as well. We’ve done work on KTO, and APO, and things like that, so different ways to incorporate human preferences into entire systems and not just models. That takes a very special team, which we have, I’m very proud of.

Jon: Can you talk about active versus passive retrieval?

Douwe: Passive retrieval is basically old-school RAG. It’s like I get a query, and I always retrieve, and then I take the results of that retrieval, and I give them to the language model, and it generates. That doesn’t really work. Very often, you need the language model to think, first of all, where am I going to retrieve it from and how am I going to retrieve it? Are there maybe better ways to search for the thing I’m looking for than copy and pasting the query? Modern production RAG pipelines are already way more sophisticated than having a vector database and a language model. One of the interesting things that you can do in the new paradigm of agentic things and test-time reasoning is decide for yourself if you want to retrieve something. It’s active retrieval. It’s like if you give me a query like, “Hi, how are you?” I don’t have to retrieve in order to answer that. I can just say, “I’m doing well, how can I help you?”

Then you ask me a question and now I decide that I need to go and retrieve. Maybe I make a mistake with my initial retrieval, so then I need to think, “Oh, actually, maybe I should have gone here instead.” That’s active retrieval, and that’s all getting unlocked now. This is what we call RAG agents, and this really is the future, I think, because agents are great, but we need a way to get them to work on your data, and that’s where RAG comes in.
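The passive-versus-active distinction can be sketched as a small control loop: first decide whether to retrieve at all, then pick a source, and fall back to another source if the first retrieval comes up empty. Everything below is a hypothetical stand-in; in a real RAG agent, `needs_retrieval` and the sufficiency check are reasoning steps performed by the language model itself, and the tiny corpus is invented for illustration.

```python
# Sketch of active retrieval; all names and the tiny corpus are hypothetical.
def needs_retrieval(query: str) -> bool:
    # Chit-chat needs no retrieval; factual questions do.
    smalltalk = ("hi", "hello", "how are you", "thanks")
    return not any(query.lower().startswith(s) for s in smalltalk)

def search(query: str, source: str) -> list:
    # Stand-in retriever over a named source.
    corpus = {
        "wiki": ["RAG stands for retrieval-augmented generation."],
        "email": ["The meeting moved to 3pm."],
    }
    words = [w.strip("?.,!") for w in query.lower().split()]
    return [d for d in corpus.get(source, []) if any(w in d.lower() for w in words)]

def answer(query: str) -> str:
    if not needs_retrieval(query):
        return "I'm doing well, how can I help you?"
    # Active step: try one source, and if the evidence is insufficient,
    # reconsider ("maybe I should have gone here instead") and try another.
    for source in ("wiki", "email"):
        docs = search(query, source)
        if docs:  # crude sufficiency check; a real agent reasons about this
            return f"Based on {source}: {docs[0]}"
    return "I couldn't find supporting evidence for that."
```

A greeting short-circuits without any retrieval, while a factual question walks the sources until one yields evidence, which is the behavior Douwe describes.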

Jon: This implies two relationships between Contextual’s RAG and the agent. There is supplying information to the agent so that it can be performant, but if I probe into what you said, active retrieval implies a certain kind of reasoning, maybe even longer reasoning, about, “Okay, what is the best source of the information that I’ve been asked to provide?”

Douwe: Exactly. I enjoy saying that everything is contextual. That’s very true for an enterprise: the context the data exists in really matters for the reasoning the agent does in finding the right information, and that all comes together in these RAG agents.

Jon: What is a really thorny problem that you’d like your team and the industry to try and attack in the coming years?

Douwe: The most interesting problems that I see everywhere in enterprises are at the intersection of structured and unstructured. We have great companies working on unstructured data, there are great companies working on structured data, but once you have the capability, which we’re starting to have now, where you can reason over both of these very different data modalities using the same model, then that unlocks so many cool use cases. That’s going to happen this year or next year, just thinking through the different data modalities and how you can reason on top of all of them with these agents.

Jon: Will that happen under the covers with one common piece of infrastructure or will it be a coherent single pane of glass across many different Lego bricks?

Douwe: I’d like to think that it would be one solution, and that is our platform, which can do all of that.

Jon: Let’s imagine that, but behind the covers, will you be accomplishing that with many different components each handling the structured versus unstructured?

Douwe: They are different components, despite what some people maybe like to pretend, I can always train up a better text-to-SQL model if I specialize it for text-to-SQL, than taking a generic off-the-shelf language model and telling it, “Generate some SQL query.” Specialization is always going to be better than generalization for specific problems, if you know what the problem is that you’re solving, the real question is much more around is it worth actually investing the money to do that? It costs money to specialize and it sometimes hampers economies of scale that you might want to have.

Jon: If I look at the other side of your organization that you’ve had to build, so you’ve had to build a very sophisticated research function, but Contextual is not a research lab, it’s a company, so what are the other kinds of disciplines and capabilities you’ve had to build up at Contextual that complement all the research that’s happening here?

Douwe: First of all, I think our researchers are really special in that we’re not focused on publishing papers or being too far out on the frontier. As a company, I don’t think you can afford that until you’re much bigger, unless you’re like Zuck and can afford to have FAIR. The stuff I was working on at FAIR at the time, I was doing Wittgensteinian language games and all kinds of crazy stuff that I would never let people do here, honestly. But there’s a place for that, and that’s not a startup. The way we do research is we’re very much looking at the customer problems that we think we can solve better than anybody else, and then thinking from the system’s perspective about all of those problems: how can we make sure that we have the best system, and then make that system jointly optimized and really specialized, or specializable, for different use cases? That’s what we can do.

That means there’s a very fluid boundary between pure research and applied research; basically, all of our research is applied. In AI right now, I think there’s a very fine line between product and research, where the research basically is the product, and that’s not only true for us, I think it’s true for OpenAI, Anthropic, everybody. The field is moving so quickly that you have to productize research almost immediately. As soon as it’s ready, you don’t even have time to write a paper about it anymore; you have to ship it into product very quickly because it is such a fast-moving space.

Jon: How do you allocate your research attention? Is there some element of play, even 5%, 10%?

Douwe: The team would probably say not enough.

Jon: But not zero?

Douwe: As a researcher, you always want to play more but you have limited time. So yeah, it’s a trade-off, I don’t think we’re officially committing. We don’t have a 20% rule or something like Google would have, it’s more like we’re trying to solve cool problems as quickly as we can, and hopefully, have some impact on the world. Not work in isolation, but try to focus on things that matter.

Jon: I think I’m hearing you say that it’s not zero, even in an environment with finite resources and moving fast?

Douwe: Every environment has finite resources. It’s more like, if you want to do special things, then you need to try new stuff. That’s, I think, very different for AI companies, or AI-native companies like us. If you compare this generation of companies with SaaS companies: there, the LAMP stack and everything was already in place, and you basically had to go and implement it. That’s not the case here; we’re very much figuring out what we’re doing, flying the airplane as we’re building it, sort of thing, which is exciting, I think.

Jon: What is it like to now take this research that you’re doing and go out into the world and have that make contact with enterprises? What has that been like for you personally, and what has that been like for the company to transform from research-led to a product company?

Douwe: That’s my personal journey as well. I started off doing a PhD, I was very much a pure research person, and I slowly transitioned to where I am now, where the key observation is that the research is the product. This is a special point in time; it’s not always going to be like that. That’s been a lot of fun, honestly. I was on a podcast a while back and they asked me, “What other job would you think is interesting?” And I said, “Maybe being the head of AI at JP Morgan.” And they were like, “Really?”

And I was like, “Well, I think actually, right now, at this particular point in time, that is a very interesting job.” Because you have to think about how you are going to change this giant company to use this latest piece of technology that is frankly going to change everything, that is going to change our entire society. For me, it gives me a lot of joy talking to people like that and thinking about what the future of the world is going to look like.

Jon: I think there’s going to be people problems, and organizational problems, and regulatory and domain constraints that fall outside the paper.

Douwe: Honestly, I would argue that those are the main problems still to overcome. I don’t care about AGI and all of those discussions; the core technology is already here for huge economic disruption. All the building blocks are here. The questions are more around: how do we get lawyers to understand that? How do we get the MRM (model risk management) people to figure out what is an acceptable risk? One thing that we are very big on is thinking not about the accuracy but about the inaccuracy: if you have 98% accuracy, what do you do with the remaining 2% to make sure that you can mitigate that risk? A lot of this is happening right now. There’s a lot of change management that we’re going to need to do in these organizations. All of that is outside of the research questions. We have all the pieces to completely disrupt the global economy right now; it’s a question of executing on it, which is scary and exciting at the same time.

Jon: Douwe, you and I have had a conversation many times about different archetypes of founders and their capabilities. There’s one lens that stuck with me that has three click stops on it. One: the domain expert, who has expertise in, say, revenue cycle management, but may not be that technical at all. Two: somebody who is technical and able to write code but is not a PhD researcher; Mark Zuckerberg is a really famous example of that. Three: the research founder, who has deep technical capabilities and advanced vision into the frontier. What do you see as the role for each of those types of founders in the next wave of companies that needs to get built?

Douwe: That’s a very interesting question. I would argue how many PhDs does Zuck have working for him? That’s a lot, right?

Jon: That’s a lot.

Douwe: I don’t think it matters how deep your expertise in a specific domain is, as long as you are a good leader and a good visionary, then you can recruit the PhDs to go and work for you. At the same time, obviously, it gives you an advantage if you are very deep in one field and that field happens to take off, which is what happened to me. I got very lucky, with a lot of timing there as well. Overall, one underlying question you’re asking there is around AI wrapper companies, for example. To what extent should companies go horizontal and vertical using this technology?

There’s been a lot of disdain for these wrapper companies: “Oh, that’s just a wrapper for OpenAI.” Well, it turns out you can make an amazing business just from that, right? I think Cursor is Anthropic’s biggest customer right now. It’s fine to be a wrapper company as long as you have an amazing business. People should have a lot more respect for companies building on top of fundamental new technology, discovering whole new business problems that we didn’t really know existed, and then solving them much better than anything else.

Jon: Well, so I’m really thinking also about the comment you made, that we have a lot of technology that is capable of a lot of economic impact, even today, without new breakthroughs that, yes, we’ll also get. Does that change the next types of companies that should be founded in the coming year?

Douwe: I think so. I am also learning a lot of this myself, about how to be a good founder, basically. It’s always good to plan for what’s going to come and not for what is here right now; that’s how you get to ride the wave in the right way. What’s going to come is that a lot of this stuff is going to become much more mature. One of the big problems we had even two years ago was that AI infrastructure was very immature. Everything would break down all the time. There were bugs in the attention mechanism implementations of the frameworks we were using, really basic stuff. All of that has been solved now. With that maturity also comes the ability to scale much better, and to think much more rigorously around cost-quality trade-offs and things like that. There’s a lot of business value right there.

Jon: What do new founders ask you? What kind of advice do they ask you?

Douwe: They ask me a lot about this wrapper company thing, and moats, and differentiation. There’s some fear that incumbents are going to eat everything, since they obviously have amazing distribution. But there are massive opportunities for companies to be AI native and to think from day one as an AI company. If you do that right, you have a massive opportunity to be the next Google, or Facebook, or whatever, if you play your cards right.

Jon: What is some advice that you’ve gotten, and I’ll ask you to break it into two, what is advice that you’ve gotten that you disagree with, and what do you think about that? And then what is advice that you’ve gotten that you take a lot from?

Douwe: Maybe we can start with the advice I really like, which is one observation around why Facebook is so successful: be fluid like water. Whatever the market is telling you or your users are telling you, fit into that; don’t be too rigid about what is right and wrong; be humble, look at what the data tells you, and then try to optimize for that. That is advice that, when I got it, I didn’t fully appreciate, and I’m starting to appreciate it much more right now. Honestly, it took me too long to understand that. In terms of advice that I’ve gotten that I disagree with, it’s very easy for people to say, “You should do one thing and you should do it well.” Sure, maybe, but I’d like to be more ambitious than that. We could have been one small part of a RAG stack, and we probably would’ve been the best in the world at that particular thing, but then we’re slotting into this ecosystem where we’re a small piece, and ideally, I want the whole pie.

Then that’s why we’ve invested so much time in building this platform, making sure that all the individual components are state-of-the-art and that they’ve been made to work together so that you can solve this much bigger problem, but yet, that is also a lot harder to do. Not everyone would give me the advice that I should not go and solve that hard problem, but I think over time, as a company, that is where your moat comes from, doing something that everybody else thinks is kind of crazy. So that would be my advice to founders, is go and do something that everybody else thinks is crazy.

Jon: You’re probably going to tell me that that reflects in the team that comes to join you?

Douwe: Yeah, the company is the team, especially the early team. We’ve been very fortunate with the people who joined us early on, and that is what the company is. It’s the people.

Jon: If I piggyback a little bit and we get back into the technology for a minute, there’s a common question, maybe even misunderstanding that I hear about RAG, that, “Oh, this is the thing that’s going to solve hallucinations.” You and I have spoken about this many times, where is your head at right now on what hallucinations are, what they are not? Does RAG solve it? What’s the outlook there?

Douwe: I think hallucination is not a very technical term. We used to have a pretty good word for it: accuracy. If you were inaccurate, if you were wrong, then to explain that, or to anthropomorphize it, people would say, “Oh, the model hallucinated.” I think it’s a very ill-defined term, honestly. If I had to turn it into a technical definition, I would say it’s when the generation of the language model is not grounded in the context it is given, where it is told that that context is true. Basically, hallucination is about groundedness. If you have a model that adheres to its context, then it will hallucinate less. Hallucination itself is arguably a feature for a general-purpose language model, not a bug. If you have a creative writing or marketing use case, like content generation, hallucination is great for that, as long as you have a way to fix it; you probably have a human somewhere double-checking it and rewriting some stuff.

So hallucination itself is not even a bad thing necessarily. It is a bad thing if you have a RAG problem and you cannot afford to make a mistake. That’s why we have a grounded language model that has been trained specifically not to hallucinate, or to hallucinate less. One other misconception I sometimes see is that people think these probabilistic systems can have 100% accuracy, and that is a pipe dream. It’s the same with people. If you look at a big bank, there are people in those banks, and people make mistakes too.
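As a rough illustration of what groundedness means in practice (this is a toy heuristic sketched for this transcript, not Contextual’s actual approach), one could score how much of a response’s content is supported by the retrieved context:

```python
def groundedness_score(response: str, context: str) -> float:
    """Toy proxy for groundedness: the fraction of the response's
    content words that also appear in the retrieved context."""
    stopwords = {"the", "a", "an", "is", "are", "was", "of", "to", "and", "in"}
    words = [w.strip(".,").lower() for w in response.split()]
    content = [w for w in words if w and w not in stopwords]
    if not content:
        return 1.0
    ctx = context.lower()
    return sum(1 for w in content if w in ctx) / len(content)

context = "Revenue in Q3 was $4.2 billion, up 8% year over year."
high = groundedness_score("Q3 revenue was $4.2 billion.", context)
low = groundedness_score("Q3 revenue was $9 billion, driven by new products.", context)
# The grounded response scores higher than the one that invents numbers.
```

A real grounded language model enforces this at generation time rather than checking after the fact, but the scoring intuition is the same: claims should be attributable to the supplied context.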

Jon: SEC filings have mistakes.

Douwe: Exactly. The whole reason we have the SEC and a regulated market is so that we have mechanisms built into the market so that if a person makes a mistake, at least we made reasonable efforts to mitigate the risk around it. It’s the same with AI deployments. That’s why I talk about how to mitigate the risk of inaccuracies. We’re not going to get it to 100%, so you need to think about the 2, 3, 5, or 10%, depending on how hard the use case is, where you might still not be perfect. How do you deal with that?

Jon: What are some of the things that you might’ve believed a year ago about AI adoption or AI capabilities that you think very differently about today?

Douwe: Many things. The main thing I thought that turned out not to be true was that I thought this would be easy.

Jon: What is this?

Douwe: Building the company and solving real problems with AI. We were very naive, especially in the beginning of the company. We were like, “Oh, yeah, we just get a research cluster, get a bunch of GPUs in there, we train some models, it’s going to be great.” Then it turned out that getting a working GPU cluster was very hard. Then it turned out that training something on that GPU cluster in a way that actually works was hard too; if you’re using other people’s code, maybe that code is not that great yet. You have to build your own framework for a lot of the stuff you’re doing if you want to make sure it’s really, really good. We had to do a lot of plumbing that we did not expect to have to do. Now I’m very happy that we did all that work, but at the time, it was very frustrating.

Jon: What are we, either you and I, or we, the industry, not talking about nearly enough that we should be?

Douwe: Evaluation. I’ve done a lot of work on evaluation in my research career, things like Dynabench, which was about how we might get rid of static benchmarks altogether and have a more dynamic way to measure model performance. Evaluation is seen as boring. People don’t seem to care about it. I care deeply about it, so that always surprises me. We did what I thought was an amazing launch around LMUnit, which is natural language unit testing. You have a response from a language model, and now you want to check very specific things about that response: did it contain this? Did it not make this mistake? Ideally, you can write unit tests, as a person, for what a good response looks like. You can do that with our approach. We have a model that is by far state-of-the-art at verifying whether these unit tests pass or fail.

I think this is awesome. I love talking about this, but people don’t seem to really care. It’s like, “Oh, yeah, evaluation. Yeah, we have a spreadsheet somewhere with 10 examples.” How is that possible? That’s such an important problem. When you deploy AI, you need to know if it works or not, and you need to know where it falls short, and you need to have trust in your deployment, and you need to think about the things that might go wrong, and all of that. It’s been very surprising to me just how immature a lot of companies are when it comes to evaluation, and this includes huge companies.
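To make the idea concrete, here is a minimal sketch of natural-language unit testing for model responses. The shape mirrors what Douwe describes, but the keyword-matching judge and all the example strings are invented for illustration; LMUnit itself uses a trained model as the judge.

```python
def judge(response: str, unit_test: str, evidence_phrase: str) -> bool:
    """Stand-in judge. A real system would use a model to decide whether
    the response satisfies the natural-language unit test; here we simply
    check for an evidence phrase so the sketch is self-contained."""
    return evidence_phrase.lower() in response.lower()

response = "Your refund was processed on May 3 and should arrive in 5-7 business days."

# Natural-language unit tests, each paired with illustrative evidence.
unit_tests = [
    ("Does the response say when the refund was processed?", "May 3"),
    ("Does the response give a delivery window?", "5-7 business days"),
    ("Does the response mention a confirmation email?", "confirmation email"),
]

failures = [q for q, phrase in unit_tests if not judge(response, q, phrase)]
# One test fails: the response never mentions a confirmation email.
```

The point of the pattern is that domain experts write the tests in plain language, and every model response can be checked against them automatically instead of eyeballing a spreadsheet of ten examples.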

Jon: Garry Tan posted on social media not too long ago that evaluation is the secret weapon of the strongest AI application companies.

Douwe: Also AI research companies, by the way. Part of why OpenAI and Anthropic are so great is that they’re amazing at evaluation too. They know exactly what good looks like. That’s also why we do all of that in-house; we’re not outsourcing evaluation to somebody else. If you are an AI company and AI is your product, then you can only assess the quality of your product through evaluation. It’s core to all of these companies.

Jon: For whoever is lucky enough to get that cool JP Morgan head of AI job that you would be doing in another life: is what the evals really need to look like the intellectual property of JP Morgan, or is that something they can ultimately ask Contextual to cover for them?

Douwe: No. I think the tooling for evaluation, they can use us for, but the actual expertise that goes into that evaluation, the unit tests, they should write themselves. We talked about how a company is its people, but in the limit that might not even be true, because a company might be mostly AI with only a few people. What makes a company a company is its data, and the expertise and institutional knowledge around that data. That is what defines a company, and that should be captured in how you evaluate the systems you deploy in your company.

Jon: I think we can leave it there. Douwe Kiela, thank you so much. This was a lot of fun.

Douwe: Thank you.

Dropzone’s Edward Wu on Solving Security’s Biggest Bottleneck

Listen on Spotify, Apple, and Amazon | Watch on YouTube

This week, Partner Vivek Ramaswami hosts Edward Wu, the founder of 2024 IA40 winner Dropzone, which is building a next-generation AI security operations center. Edward decided to take the leap and start his own company after spending eight years at ExtraHop, where he rose to the role of senior principal scientist, leading AI/ML and detection. Now at Dropzone, he’s tackling some of the most pressing challenges at the intersection of AI and cybersecurity.

On this episode, they explore Edward’s decision to leave ExtraHop to build Dropzone, his thoughts on why generative AI is uniquely suited to addressing alerts and investigation in cybersecurity, and how Dropzone is redefining the role of AI in the security operations center. They unpack Edward’s decision to leap into entrepreneurship, how he landed key customers like UiPath, and why transparency is vital in a category often skeptical of AI. He also shares his perspectives on how AI unlocks new opportunities in cybersecurity, along with lessons he learned as a first-time solo founder.


This transcript was automatically generated and edited for clarity.

Edward: My pleasure.

Vivek: Let’s kick off with having you share a little bit about your journey into security. What sparked your interest in the space?

Edward: I would say, quite similar to a lot of security practitioners, I grew up playing with computers, playing games, cracking games, and I think that’s what got me started with security, because a lot of the skills and tools you use to crack games or cheat in games overlap with reverse engineering and malware analysis. Then, after I got into my undergrad program at UC Berkeley, I made the decision to eventually pursue a PhD in cybersecurity, and I spent three years in my undergrad doing cybersecurity-related research: automated malware analysis, binary analysis, reverse engineering Android apps.

Vivek: Yeah, that’s great. So even back then, you were thinking about security and cybersecurity, and obviously there were a lot of attacks even back then. You spent eight years at ExtraHop, which is a Madrona portfolio company, and eventually became the senior principal scientist, leading AI/ML and detection there. Tell us a little bit about that journey, and then about why you decided to leave and launch your own company in Dropzone.

Edward: ExtraHop was definitely a very fun ride for me. I joined when I decided to quit my PhD, for a variety of reasons. Part of it was that cybersecurity academic research, frankly, is not as interesting as the real thing in the industry. When I decided to quit my program, I applied and interviewed at practically every early-stage cybersecurity company I could find. I remember one of them was Iceberg, where I was offered the chance to be employee number four, and Iceberg was a Madrona portfolio company as well. While I was looking around, ExtraHop really struck me, because back then, ExtraHop wasn’t in cybersecurity at all. It was in network performance analytics.

When I saw the demo of ExtraHop’s product, I saw so much potential, because what ExtraHop had in terms of potential is very similar to what police departments and state agencies discovered about traffic cameras. You initially have a lot of traffic cameras for monitoring traffic, but after a while everybody discovered how much more valuable information you can get out of traffic cameras, whether for tracking fugitives or helping to identify other sorts of suspicious activities. I really saw that opportunity and ended up joining ExtraHop, essentially helping it build and pivot from a network performance company to a network security company. Along the way, I built ExtraHop’s AI/ML and detection product from scratch and spent a lot of time working with ExtraHop customers to understand how security teams actually work.

Vivek: How did you think about joining a startup, or a scaling startup, back then? Obviously, with your interest in security, you probably could have looked at Palo Alto Networks, Fortinet, or a much larger platform. What attracted you to a startup at the time?

Edward: While I was in college, I came across a couple of blogs about the founding journeys of different security startups, and those really struck me and got me excited and interested in eventually starting my own company. While I was looking for my first job out of college, the number one criterion was the opportunity to learn how to build a startup someday in the future for myself. When I interviewed with ExtraHop and met ExtraHop co-founder and CEO at the time Jesse Rothstein, I told him, “Hey, the reason I’m looking at startups is I want to start my own company someday,” which was great foreshadowing for when I told him I was going to resign and start my own thing eight years later.

Vivek: So he couldn’t act shocked, because he’d known since eight years before.

Edward: Correct, correct. Back then I was looking for the opportunity to learn how to build a product from scratch, and that’s kind of where, between the choices of ExtraHop and Iceberg, I picked ExtraHop, because it was a little bit more mature. I could learn from the existing lessons and the potholes ExtraHop fell into, and then dug themselves out of.

Vivek: It sounds like you had that kernel of an idea in your head from early on that you wanted to start your own company. Before we get into the aha moment that led you to founding Dropzone: would you suggest to other founders that it’s helpful to go spend a number of years at another startup first to learn? How do you think about the journey founders have to go on before they start their own business?

Edward: At least in my experience, I believe that if you’re going to start a B2B company, it’s vitally important to work somewhere first, because you’ll get exposure to how B2B actually works. There are a number of processes and structures that all B2B companies have to go through, and working at an established organization teaches you what good engineering looks like, what good customer success looks like, what good marketing looks like, and what good sales look like. All of these become tremendously important when you do start your own B2B company.

Vivek: So now you’ve been at ExtraHop for eight years, you’ve learned good marketing and good sales, you’ve seen this journey, and you’ve had this idea in your head for eight years that you want to go found your own company. What was the aha moment? Walk us through the idea you had in your head. Where did you see the opportunity that led you to actually leave ExtraHop and found Dropzone?

Edward: The biggest thing was that while I was at ExtraHop, I had been keeping track of industry movements and trends, because I knew the only way I could found my own company someday was by looking for the next big thing. During my time at ExtraHop, I did a lot of analysis and paid attention to every single RSAC Innovation Sandbox, as well as other movements within cybersecurity, to see, “Okay, what are other people building?” And if I were an investor, would I invest my money or time? Because as a founder, to some extent, you’re also an investor.
You’re investing the most precious resource you have, which is your time. I had been doing that for years. Then, when genAI came around, it got me excited, because for the first time I saw an idea where we could tackle one of the holy-grail unsolvable problems within cybersecurity by leveraging this new technical catalyst. That combination of a very concrete, universal pain point and a new technical catalyst, which essentially means there was no way to solve this problem previously, makes starting a new company a lot easier, because you don’t have tons of incumbents to deal with. All those factors combined were the reasoning behind my departure.

Vivek: You bring up a good point. Many of the founders who listen to this podcast and whom we work with, over the last few years, after ChatGPT came out or after transformers really became a big thing, also said, “Hey, there’s an opportunity in AI. I want to go found a business.” You mentioned that if it weren’t for AI, or the current versions of AI we have, some of these problems likely couldn’t have been solved in security. Maybe just take us through that. What, specifically, were you seeing at this intersection of AI and security that said, “Hey, there’s a technical change. Something is different now that’s going to unlock problems that we couldn’t unlock before”? And then maybe you can tell us a little bit about how that led you to your core focus at Dropzone today.

Edward: For people who are not familiar with security, one of the biggest challenges within cybersecurity today is the ability to process all the security alerts. To some extent, it’s a very similar problem to what modern-day police departments face: they have all sorts of crime reports, but not enough detectives to follow up on every single report. Historically, this has been a very difficult problem to solve, because the act of investigating security reports and alerts requires tons of human intelligence.
You cannot hard-code your way through an investigation process, because when a security analyst is looking at security reports and alerts, what they’re going through in their head is a very detective-like, recursive reasoning process, so that has been one of the biggest bottlenecks within cybersecurity. There are a couple of workforce reports out there saying the world needs around 12 million cyber defenders today, but the actual workforce is only around 7 million. So there’s a shortage of around 5 million cybersecurity analysts or defenders that the world needs to truly protect itself, and unless somebody invents cloning or some sort of mind transfer, software-based automation seems to be the only other solution.

Vivek: As you say, there is a shortage in the number of security practitioners that can do these kinds of things. It’s interesting, because I feel like in this first wave of AI, we saw a lot of companies going after, “Hey, there’s this intersection of AI and security. Let’s just go secure the models, or let’s think about the models themselves.” It seems like what you were thinking about is there’s an existing workflow today that is understaffed, and that’s where we see AI actually helping. Had you worked with these practitioners before, in your time at ExtraHop? Had you seen these problems of alerting and alert fatigue, and how do we actually get AI to solve problems where we don’t have enough people to scale and solve these problems?

Edward: To some extent, what I did at ExtraHop was probably one of the reasons why security practitioners are overwhelmed by alerts, because what I built at ExtraHop was a detection engine: it looks at network telemetry and identifies suspicious activities. User A uploaded five gigabytes of data to Dropbox. User B established a persistent connection with an external website for 48 hours. User C SSH’d into the database. All of these security alerts take time to investigate, and those are exactly the type of alerts that have historically overwhelmed security practitioners.

So, to some extent, my work over the past eight years has contributed to, or maybe partially caused, some of the alert fatigue and overload, so I’m intimately familiar with this particular problem. You mentioned that when genAI came along, a lot of people had the idea, “Oh, let’s just secure the models.” My train of thought was very similar to a post I saw on Twitter, which said one way you can think of genAI is that we as humans have discovered a new island with a hundred billion people with college-level education and intelligence, willing to work for free. We just talked about this huge staff shortage in cybersecurity, so why don’t we take those hundred billion people with college-level intelligence, willing to work for free, and have them look at all the security alerts and help improve the overall cybersecurity posture?
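For readers outside security, the detections Edward describes are, at heart, threshold rules over network telemetry. A toy sketch follows; the record fields and thresholds are invented for illustration and are not ExtraHop’s or Dropzone’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    user: str
    dest: str
    bytes_out: int         # bytes uploaded to the destination
    duration_hours: float  # how long the connection stayed open

def detect(flow: FlowRecord) -> list:
    """Toy versions of the detection rules mentioned above. Every alert
    this emits still needs a human (or AI) analyst to investigate."""
    alerts = []
    if flow.bytes_out > 5 * 1024**3:  # upload larger than 5 GB
        alerts.append(f"{flow.user}: large upload to {flow.dest}")
    if flow.duration_hours >= 48:     # long-lived persistent connection
        alerts.append(f"{flow.user}: long-lived connection to {flow.dest}")
    return alerts

alerts = detect(FlowRecord("userA", "dropbox.com", 6 * 1024**3, 1.0))
# Only the large-upload rule fires for this flow.
```

The rules themselves are cheap to write; the expensive part, which is the bottleneck Edward is describing, is the investigation each fired alert demands afterward.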

Vivek: You have this great way of describing Dropzone that you shared with us: it’s like having a number of interns, a whole new set of staff. How do you describe it?

Edward: If we were to zoom out, we view Dropzone as essentially a software-based staff augmentation agency for cybersecurity teams. What we’re building is essentially AI agents, or AI digital workers, that work alongside human cybersecurity analysts and engineers to allow security teams to do 5X to 10X more than what they’re capable of doing today, but without 5X or 10X the budget or headcount.

Vivek: You’re primarily selling to CISOs, Chief Information Security Officers, but the actual practitioners using Dropzone tend to be folks in the security operations center, right? Who are the people using Dropzone on a day-to-day basis or interacting with it?

Edward: The primary users of our product are security analysts who work in SOCs, or security operations centers, and are responsible for responding to security alerts and confirmed breaches.

Vivek: Going back to one thing you were saying before, the nice thing about building when there’s a new tech change like the one we have with AI is that you don’t have these incumbents, right? Or the incumbents tend to be a little slower to move, or more reactive. In this case, you can build a net new business, and you can help create a category. One thing you and I have talked about is that this is such an obvious problem, in the sense that every large company or mid-market enterprise company has an understaffed security operations center.

A number of startups have sort of popped up and started to build what they call AI SOCs or agents for the SOCs, and so, if we zoom out, how do you view this landscape, how do you view this category where, on one hand, it’s a total validation of the market, saying that something like this needs to occur because people clearly want this product. On the other hand, it’s like, “Okay. Well, how am I supposed to disaggregate and decide between 10 or 12 competitors that all maybe look the same on the surface?”

Edward: If you zoom out, the market Dropzone operates in, the AI SOC analyst market or autonomous SOC platform market, is probably the single most competitive market within cybersecurity today. Like you said, one challenge is that the intersection of cybersecurity and AI is tremendously interesting, and the alert investigation use case, to some extent, is an obvious use case a lot of people can see. The way we think about competition is actually not that different from previous generations of startups: having a lot of competitors is great validation for the market, but the reality is most startups, most players, are not going to be successful, for a variety of different reasons.

So, to some extent, it’s not a competition of who gets the highest grades. It’s a competition of who finishes the marathon. From our perspective, when we think about competition, a lot of it has to do with how we could do better. How can we ensure that we’re delivering real-world, concrete value to our end users? We know we’re solving a very large problem, with a lot of need and a very large market, so we don’t need to worry too much about our competitors right now, because frankly most of them are still pre-product at this moment. Our focus is solely on: can we sign up 1, 5, 10, 20, 50 paying customers who are getting real-world value out of our technology? As long as we can do that, the success will come, regardless of what our competitors do.

Vivek: So, focus. You just have to focus, focus on your customers, and make sure that you’re delivering a product and experience that they really like.

Edward: Yeah.

Vivek: You could say this about other areas of security in the past too, right? Endpoint security 10 years ago was a very hot category, and it created several multi-billion-dollar companies: CrowdStrike, SentinelOne, and others. As you say, the reason there are so many competitors is that people clearly see there’s a lot of value in this market. But think about the ecosystem of existing security tools: you go to RSAC and you’ll see 1,000 booths, and everyone has a booth. So, outside of even the AI SOC space, in security in general, as an early-stage startup that’s not as much on the map as some of these incumbents, what do you find is valuable in getting customers to recognize you and think about you? What are some of the tips you have for other founders on standing out in a crowded market?

Edward: The biggest learning we’ve had so far, on the marketing front, is making sure you are very precise in how you describe yourselves. Cybersecurity is so fragmented that if you say, “Hey, we are using AI to solve all the problems in cybersecurity,” that’s not going to work, because there are too many vendors out there. Instead, you need to be very focused in your messaging and positioning, so prospects or security buyers can immediately tell where you fit in the larger security ecosystem. There are no security teams that use only a single product.

Most security teams have 5, 10, 15, 20 products. It’s very important to be precise so people don’t conflate you with other products and can immediately understand what you’re trying to do. That’s where RSAC comes in. I always love RSAC, and I love walking the expo floor, because I find it a really good opportunity to level up product marketing. When you walk through the expo halls and see 1,000 vendors, you can really quickly tell who has good product marketing, because every time you walk past a booth, you might have about five seconds before you start looking at the next fancy, shiny booth.

Within those five seconds, you can either immediately tell what they’re doing, or you’re confused: “What is this thing?” I think that’s a great exercise. I’ve been doing this myself, and I’ve encouraged a lot of folks in my company to do it as well, to really make sure our positioning and messaging is very clear, so people can immediately tell what we’re trying to do, versus some panacea AI magic.

Vivek: Well, there are a lot of those. Now that we’re a few years into this post-ChatGPT wave, we’ve seen so many of these vendors that say they do AI security. If you went to the last two RSA conferences, all you would hear is AI, AI, AI, but then what are you delivering to customers, right? In that way, I think it’s really helpful to hear from you, Edward, about how you landed UiPath as a customer. Really impressive, and they’re obviously a very discerning and sophisticated business themselves. Take us through that journey. How did you land UiPath? What went into that? Are they finding value from Dropzone today?

Edward: One of UiPath’s security engineers reached out to me personally on LinkedIn, saying, “Hey, I saw Dropzone somewhere. It seems you guys are doing interesting stuff. Can I get a demo?” Then we kicked off the POC, where the end goal was to evaluate how much time savings we could create for their security team, because UiPath is growing very quickly, and unsurprisingly their security budget is not growing linearly with overall headcount. During the POC, we worked with UiPath very closely, not only to make sure our product was automating tasks that give their security engineers higher leverage, but also to align on the future roadmap of the product.

They’re not only buying us for what the product can do today, but also for what the product can be three or six months down the road. That’s very interesting, because most of the time it’s a founder reaching out to 1,000 people, pleading, begging for a demo, not the other way around. A very large chunk of our customers and active prospects come from organic inbound. Part of that is because, echoing my previous point, really good positioning and messaging, and very transparent product marketing, allow security buyers to find you, versus you trying to push a rope and force the product down people’s throats.

This is where we made a very conscious effort and a strategic decision to be very transparent. For example, our entire product documentation is public on the internet. We have over 30 interactive recorded product demos, as well as an un-gated test drive and fully transparent pricing. We allow interested early adopters within the security community to complete essentially 80% of the buyer journey without talking to us, and that really allows us to get these high-quality hand-raisers who have already, to some extent, self-qualified themselves and know they want to try this technology.

Vivek: I love the point you made about being very transparent and open, and that’s not common in security, right? There’s a lot of closed-door selling, and you never really know how deals are done. I’m sure there’s a new generation of buyers that wants that transparency. What led you to stray from the path of what we would call normal in security and be more transparent than the norm?

Edward: A lot of it came from my time at ExtraHop. While I was there, I really advocated for an interactive online demo. Back then, ExtraHop was probably the only security vendor in the entire detection and response space where you could access an un-gated interactive demo, the actual product, not a recorded video. I saw how much additional credibility that marketing tactic brought, so I decided to bring that approach to Dropzone as well.

Vivek: Last point on this: I’m sure, as you’ve noticed, CISOs are sold a lot of bad products. We have a CISO advisory council here at Madrona, and one thing they’ll say is that they’re inundated with products and a lot of inbound. With your transparent marketing, being able to show the demo and show the value, is there another step that needs to happen to bridge that gap and have them come to you and say, “Hey, take a look at your products”? Is that an evolution? How do you think about the push-versus-pull nature of what you’re selling and how CISOs are typically sold to?

Edward: It’s definitely a combination of the two. Over time, generally, what I’ve seen within cybersecurity is that most startups are initially in a push market, because there’s no category awareness. Most security startups solve a problem that’s more or less obscure to the general public, so they need to do a ton of evangelization. I would say it’s a little easier for us, because the problem we solve, again, is one of the most universal, concrete, and well-understood problems within cybersecurity. It’s just that nobody had been able to come up with a technical solution for it. That definitely makes our lives a lot easier, because to some extent we don’t really need to evangelize the problem we solve; it’s been there for 20 years, and every single team experiences it every single day.

Part of getting security teams to raise their hands also has to do with the overall macro environment. For example, people have heard of the Stargate project, $500 billion of investment, as well as DeepSeek and all sorts of interesting reactions from different vendors when they really start to see competition, as well as genAI becoming real. I would say that played a big part as a marketing tailwind for us. I’m sure you’ve been saying the same thing to your portfolio companies, right? Regardless of what kind of business you’re in, I want to know why you are not using genAI in every single business function. That’s a question every single board has been asking its executives. When that trickles down to security teams, alert investigation and software-based automation for the SOC are generally among the first places people look.

Vivek: To your point, we’re seeing with our own companies and the customers of the companies we work with that everyone is saying, “We’re using AI,” but they don’t want to use AI foolishly. They want to be smart about how they use it, and in the security space, it’s hard to just drop in AI and walk away. Security is security. It’s a very important piece of both the application and the infrastructure side of a business. So being able to already have that pull from the SOC team, which is saying, “We’re already drowning in alerts. We need help. However you can help us is going to be important,” and then coming in and executing against that, I think, is really interesting.

Edward: Absolutely. I think ChatGPT is probably the biggest marketing gift OpenAI has given to all these genAI startups, because it enlightened everybody, whether technical or non-technical, on the potential and capabilities of this new technology. I remember getting calls from my parents, asking, “Hey, Edward. You have been doing AI stuff for eight years. This genAI thing looks very cool. Why don’t you go build a stock trading thing using this technology?” Because of that, I think a lot of security practitioners started to play with the technology themselves.

We have seen a good number of open-source projects, and a good subset of the prospects we run into will say, “Hey, Dropzone seems very cool. By the way, we have been internally playing with GPTs and trying to build our own open-source AI agents that automate small stuff within cybersecurity, so we know the technology can get there. But at the same time, as a security team, we’re not a hundred percent developers; this is not our specialization.” They have already built confidence in the technology. All they need to find is a reputable, trustworthy, actual technology solution provider. That, again, makes it a little bit more of a pull-based marketing motion, versus trying to push on a rope.

Vivek: Yes. Well, you can tell your parents that, “Hey, you may not be building a stock trading app, but stock trading apps can use Dropzone,” which is really cool.

Edward: Correct, yeah.

Vivek: I’m going to transition into some rapid-fire questions we have for you. Edward, you’ve been a founder for a couple of years now. You’re both a solo founder and a first-time founder, so what are the hardest-learned lessons that you’ve had so far? What is something that you wish you knew or wish you did better on this early journey of yours?

Edward: Probably the biggest thing, surprisingly for a solo, first-time founder with an engineering background, is that I wish I had learned more about sales before I started. One common misconception technical founders have is that as long as we build the best product on the planet, people will magically come to us. That couldn’t be further from the truth. Sales is actually very important.

To be frank, while I was at ExtraHop, I obviously had a number of engagements with customers, but one thing I always wanted to do there, and wasn’t able to, was work part-time as a sales engineer for six months or so. I never got the chance, even though I always had the idea in the back of my mind. After founding Dropzone, I was forced to learn how to be a sales engineer and how to be an account executive. Those skills are tremendously important, because if a technical founder cannot sell a technology or a product, with all their vision, enthusiasm, and in-depth product understanding, then nobody else can. Sales capability, knowing how to qualify customers, and knowing how to run a good sales demo are the key skills I wish I had before I got started.

Vivek: Great point. Sales is so important, no matter what your product or business is. What is something you believe about the AI market that others may not?

Edward: One thing I believe about the AI market is that distribution is going to be a very important factor, and I think most people underestimate the power of human trust and how much it matters within the overall business ecosystem. I’ve seen a number of startups trying to build technologies that completely substitute for certain roles and responsibilities. From my perspective, there are roles where the technical deliverable is maybe a fraction of the value proposition, and the other fraction is actually human trust, responsibility, and accountability.

AI startups are looking at different industries and verticals, trying to identify insertion points for AI agents. I do believe we should be very respectful of that fundamental human trust; the case for automation by itself is not completely obvious. That’s one of the reasons I suspect software engineers will see more automation than, for example, account executives: nobody is really going to build a relationship with an AI agent posing as an account executive. That human relationship, that trust-building channel, is something I think is a lot more difficult for AI to substitute.

Vivek: Well, we see this when you’re driving down the 101 and you see multiple AI SDRs. Which do I go with? Who do I have a better relationship with? I’m not sure right now. But outside of Dropzone, or even outside of security, what company or trend are you most excited about?

Edward: Probably robotics. Part of it is that I love watching anime, and there are a number of anime about future societies with all sorts of cyborgs and humanoid robots. I think those are all very cool. But part of it is also maybe a little self-serving: as a cybersecurity vendor, I think the more robots there are around us, the more important cybersecurity will become.

Vivek: Last question. This will be an easy one for you. There’s a 90s movie with Wesley Snipes called Dropzone. Is the company named after that movie, or what was the basis for calling the company Dropzone?

Edward: I actually have never heard of that movie, so maybe I should check it out, or ask ChatGPT about it. We named the company Dropzone because we envision a future where we have the resources and the need to sponsor a Super Bowl ad. We want the ad to involve a scene where cyber defenders are surrounded on a hilltop, overwhelmed by attackers, and then the defenders deploy Dropzone, which in my mind is some sort of portal, a Stargate or warp-gate kind of construct. Through that portal they can summon reinforcements to help them push back the attackers. So we named the company Dropzone because we view it as a portal for, you could say, software-based staff augmentation for cybersecurity teams.

Vivek: Love that. Well, thank you so much, Edward. We really appreciate it.

Edward: Great to be here.

AI+Data in the Enterprise: Lessons from Mosaic to Databricks

Listen on Spotify, Apple, and Amazon | Watch on YouTube

How do AI founders actually turn cutting-edge research into real products and scale them? In this week’s episode of Founded & Funded, Madrona Partner Jon Turow sits down with Jonathan Frankle, Chief AI Scientist at Databricks, to talk about AI+Data in the enterprise, the shift from AI hype to real adoption, and what founders need to know.

Jonathan joined Databricks, a 4-time IA40 winner, as part of that company’s $1.3 billion acquisition of MosaicML, a company he co-founded. Jonathan is a central operator at the intersection of data and AI. He leads the AI research team at Databricks, where they deploy their work as commercial products and also publish research, open-source repositories, and open-source models like DBRX and MPT. Jonathan shares his insights on the initial vision behind MosaicML, the transition to Databricks, and how production-ready AI is reshaping the industry. He and Jon explore how enterprises are moving beyond prototypes to large-scale deployments, the shifting skill sets AI founders need to succeed, and Jonathan’s take on exciting developments like test-time compute. Whether you’re a founder, builder, or curious technologist, this episode is packed with actionable advice on thriving in the fast-changing AI ecosystem.


This transcript was automatically generated and edited for clarity.

Jonathan: Thank you so much for having me. I can’t wait to take our private conversations and share them with everybody.

Jon: We always learn so much from those conversations. And so, let’s dive in. You’ve been supporting builders with AI infrastructure for years, first at Mosaic and now as part of Databricks. I’d like to go back to the beginning. Let’s start there. What was the core thesis of MosaicML, and how did you serve customers then?

Jonathan: The core thesis quite simply was making machine learning efficient for everyone. The idea is that this is not a technology that should be defined by a small number of people, that should be built to be one-size-fits-all in general, but that should be customized by everybody for their own needs based on their data. In the same way that we don’t need to rely on a handful of companies if we want to build an app or write code, we can just go and do it. Everybody has a website. Everybody can define how they want to present themselves and what they want to do with that technology. We really firmly believed in the same thing for machine learning and AI, especially as things started to get exciting in deep learning. And then, of course, LLMs became a big thing halfway through our Mosaic journey.

I think that mission matters even more today to be honest. We’re in a world where we bounce back and forth between huge fear over the fact that only a very small number of companies can participate in building these models, and huge excitement whenever a new open-source model comes out that can be customized really easily, and all the incredible things people can do with it. I firmly believe that this technology should be in everyone’s hands to define as they like for the purposes they see fit on their data in their own way.

Jon: It’s a really good point, and you and I have spoken publicly and privately about the democratizing effect of all this infrastructure. I would observe that the aperture of functionality Mosaic offered, which was especially about putting hyper-efficient training of really large models in the hands of a lot more companies, is now wider. Now that you’re at Databricks, you can democratize more pieces of the AI life cycle. Can you talk about how the mission has expanded?

AI+Data in the Enterprise: The Expanding Mission at Databricks

Jonathan: Yeah, it was really interesting. I was looking at the notes Matei, our CTO, had for a meeting with our research team last week, and he had casually written, “Our mission has always been to democratize data and AI for everyone.” I was like, “Well, wait a minute. That sounds very familiar.” I think we may chat at some point about this acquisition and why we chose to work together. It’s the same mission. We’re on the same journey. Databricks obviously was much further along than Mosaic was, and wildly successful, but it’s great to be along for the ride.

The aperture has widened for two reasons. One is simply that you don’t need to pre-train anymore. There are awesome open-source base models that you can build off of and customize. Pre-training was the one piece that wasn’t quite for everyone, but it’s not necessary anymore. You can get straight to the fun part and customize these models through prompting or RAG or fine-tuning or RLHF.
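One of those customization paths, RAG, can be sketched in a few lines of Python. This is a hypothetical toy, not any product’s API: retrieval here is naive word overlap, and `build_prompt` is a made-up helper standing in for a real pipeline.

```python
def retrieve(query, docs, k=1):
    """Naive retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query, docs):
    """Prepend the retrieved context so a base model can ground its answer."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "MPT-7B was trained on 1 trillion tokens.",
    "The Pile is an open LLM training dataset.",
]
prompt = build_prompt("How many tokens was MPT-7B trained on?", docs)
```

A production system would swap in embedding-based retrieval and an actual model call, but the shape, retrieve then augment the prompt, is the same.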

The aperture has also widened because we’re now at the world’s best company for data and data analytics, with the world’s best data platform. What is AI without data, and what is data without AI? We can now think much more broadly about a company’s entire process, from start to finish, for a problem they’re trying to solve. What data do they have? What is unique about that data and about their company? From there, how can AI help them solve problems? This is a concept we call data intelligence.

Data intelligence is meant to stand in contrast to general intelligence. General intelligence is the idea that there’s going to be one model or one system that will be able to solve, or make significant progress on, every problem with minimal customization. At Databricks, we espouse the idea of data intelligence: every company has unique data, unique processes, and a unique view of the world that is captured in their data, how they work, and their people. AI should be shaped around that. The AI should represent the identity of the business, and the identity of that business is captured in its data. Obviously, it’s very polemical to say data intelligence versus general intelligence; the answer will be somewhere in between. To me, honestly, every day at work feels like I’m doing the same thing I’ve been doing since the day Mosaic started, just now at a much bigger place with a much bigger ability to make an impact in the world.

Jon: There’s something very special about the advantage you have: you’re seeing a parade of customers who have been on a journey from prototype to production for years now, and the most sophisticated among them are now in production. So I have two questions for you. Number one, what do you think finally unblocked that and made it possible? And number two, what are the customers at the leading edge learning? What are they finding out that the rest of the customers are about to discover?

AI+Data in the Enterprise: Scaling from Prototype to Production

Jonathan: So I’m going to reveal how much less of a scientist I am these days and how much more of a business person I’ve become. I’m going to use the hype cycle to describe this, and it breaks my heart and makes me sound like an MBA to do it. Among enterprises, there are always the bleeding-edge, early-adopter, tech-first companies; the companies that catch on pretty quickly; and the companies that are more careful and conservative. Those companies are all in different places in the hype cycle right now. The early-adopter, tech-forward companies hit the peak of inflated expectations two years ago, around the time ChatGPT first came out. They hit the trough of disillusionment last year, when it was really hard to get these systems to work reliably, and they are now getting productive and getting things shipped in production. They’ve learned a lot of things along the way.

They’ve learned to set their expectations properly, to be honest, and which problems make sense and which don’t. This technology is not perfect by any stretch, and the more important part is that we’re still learning how to harness it. It’s the same way that punch cards back in the 1950s or ’60s were still Turing complete: a little slower, but just as capable as our computing systems today from a theoretical perspective. Fifty years of software engineering later, it’s much easier to architect a system that will be reliable, because of all the principles we’ve learned. Those companies are furthest along in that journey, but it’s going to be a very long journey to come. We now know how big a system we can build without it keeling over, where the AI is going to be unreliable, where we need to kick up to a human, and which tasks make sense and which don’t.

Many of the companies I’ve seen have whittled it down into very bite-sized tasks. The way I typically frame it for people is that you should use AI either where a task is open-ended and there’s no right answer, or where a task is hard to perform but simple to check, and you can have a human check it. GitHub Copilot is a great example. You could imagine asking AI to write a ton of code; now a human has to check and understand all of it, which honestly may be nearly as difficult as writing the code from scratch. Or you can have the AI suggest very small amounts of code that a human can almost mechanically accept or reject, and you get huge productivity improvements. That is a scenario where the AI is doing something somewhat laborious for the human, but the human can check it very easily.
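The “hard to perform, simple to check” criterion can be sketched as a propose-and-verify loop. Everything here is hypothetical scaffolding: `fake_model` stands in for a real LLM call, and sorting is the toy task (producing the sorted list is the “hard” step, checking sortedness is cheap).

```python
def propose_and_verify(task, suggest, check, max_attempts=3):
    """Ask a model for a candidate, keep it only if a cheap check passes;
    otherwise escalate to a human."""
    for _ in range(max_attempts):
        candidate = suggest(task)
        if check(task, candidate):
            return candidate, "auto-accepted"
    return None, "needs human review"

def check_sorted(task, out):
    # The cheap, mechanical verification step.
    return out == sorted(task)

def fake_model(task):
    # A real system would call an LLM here.
    return sorted(task)

result, status = propose_and_verify([3, 1, 2], fake_model, check_sorted)
```

The design choice is that the expensive, fallible step (the model) is always gated by a cheap, reliable step (the check), which is exactly the Copilot accept/reject pattern described above.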

Finding those sorts of sweet spots is what the companies who have been at this the longest have done. They’ve also been willing to take the risk and invest in the technology. They’ve been willing to try things, and, to be honest, they’ve been willing to fail. They’re willing to take that risk, be okay if the technology doesn’t work the first or second time, and keep whatever team they have doing this going and trying again. A bunch of companies, the ones a little less on the bleeding edge, are in the trough of disillusionment right now. Then a bunch of companies are still at that peak of inflated expectations, where they think AI will solve every problem for them. Those companies are going to be very disappointed in a year and very productive in two years.

Jon: Naturally, a lot of founders who are listening are asking: how do they get into these conversations? How do they identify the customers that are about to exit the trough, and how do they focus on them? What would you say to those founders?

AI+Data in the Enterprise: Landing Customers from Startups to Fortune 500s

Jonathan: I have two contradictory lessons from my time at Mosaic. The first is that VCs love enterprise customers, because enterprise customers are evidence that, at least if you’re doing B2B, you’re going to be able to scale your business. You have traction with companies that are going to be around for a while, that have big budgets, and that invest for the long run when they invest in a technology. On the flip side, the best customers are often other startups, because there’s no year-long procurement process. They’re willing to dive right in, they understand where you’re coming from, and they understand the level of service you’ll be able to provide, because they’re used to it. You can get a lot more feedback much faster, but that is taken as less valuable validation. Even when I’m evaluating companies, enterprise customers are worth more to me, but startup customers are more useful for building the product and moving quickly. So the answer is: strive for enterprise customers, but don’t block on enterprise customers.

Jon: I think that’s fair, and optimizing for learning is really smart. There’s another thread I would pull on, something I think you and I have both seen in the businesses we’ve built, which is storytelling. I won’t even say GTM; the storytelling around a product can be segmented even if the product is horizontal, as so many infrastructure products are. Mosaic was a horizontal product. Databricks is a horizontal family of products, but there are stories that explain why Databricks and Mosaic are useful in financial services and really useful in healthcare, and there’s going to be a mini adoption flywheel in each of these segments, where you want to find, first, the fast customers and then the big customers as you dial that story in. There may be product implications, but there may not be.

Jonathan: That’s a great point, and there are stories along multiple axes. These days, in a social media world where everybody’s paying attention to AI, there are horizontal stories you can tell that will get everyone’s attention. One of the big lessons I took away from Mosaic was to talk frequently about the work you’re doing, and to have some big moments where you buckle down and do something big. Don’t disappear while you’re doing it. Releasing the MPT models was one of those moments for us, and it sounds so quaint only a year and a half later. It really was only a year and a half ago that we trained a 7-billion-parameter model on 1 trillion tokens. It was the first open-source, commercially viable replication of the Llama 1 models, which sounds hilarious now that a 680-billion-parameter mixture-of-experts model has just come out, and the most recent Meta model was a 405-billion-parameter model trained on 15 trillion tokens.

It sounds quaint, but that moment was completely game-changing for Mosaic. It got attention up and down the stack, across all verticals and all sizes of companies, and led to a ton of business. Later moments, like DBRX more recently, were the same experience. Storytelling through these important moments, especially in an area where people are paying close attention, really does resonate universally. At the same time, I totally hear you that for each vertical, and for each size of company, there is a different story to tell. My biggest lesson learned there is that getting the first customer in any industry or at any company size is incredibly hard. Somebody has to really take a risk on you before you have much evidence that you’re going to be successful in their domain.

Having that one story you can tell leads to a ton more stories. Once you work with one bank, a bunch of other banks will be willing to talk to you. Getting that first bank to sign a deal and actually do something was a real battle, even for the phenomenal go-to-market team we had at Mosaic. They had to fight to convince someone to even give us a shot, that it was worth a conversation.

Jon: Can you take me back to an early win at Mosaic where you didn’t have a lot of credentials to fall back on?

Jonathan: It was a collaboration we did with a company called Replit. Before we had even released the MPT models, we were chatting with Replit about the idea that we could train an LLM together and that we’d be able to support their needs. They trained their model before we trained MPT. They were willing to take a risk on our infrastructure, and we delayed MPT because we only had a small number of GPUs and let Replit take the first go at it. I basically didn’t sleep that week because I was monitoring the cluster constantly. We didn’t know whether the run was going to converge. We didn’t know what was going to happen. It was all still internal code at that point, but Replit was willing to take a risk on us, and it paid off in a huge way. It gave us our first real customer that had trained an LLM with us, been successful, and deployed it in production. That led to probably a dozen other folks signing on right then and there. The MPT models came out after that.

Jon: How did you put yourself in a position for that lucky thing to happen?

Jonathan: We wrote a lot of blogs. We shared what we were working on, we worked in open source, we talked about our science, and we built a reputation as the people who really cared about efficiency and cost, the people who might actually be able to do this. We talked very frequently about what we were up to. That was a lesson we had learned early on; I don’t think we talked frequently enough at first, but we wrote lots of blogs. When we were working on a project, we would write part one of the blog as soon as we hit a milestone. We wouldn’t wait for the project to be done, and then we’d do part two and part three. The MPT models were, I think, part four of a nine-month blog series on training LLMs from scratch. That got Replit’s attention much earlier and started the conversation.

One way of looking at it, if you want to be cynical, is selling ahead of what your product is, but I look at it the other way: you show people what you’re doing and convince them that they can believe you’re going to take the next step. They want to be there right at the beginning when you take that step, because they want to be on the bleeding edge. I think that’s what got the conversation started with Replit and put us in that position. We were going to events all the time, talking to people, trying to find anyone in the enterprise who might be interested and had a team thinking about this. There were a bunch of folks we were chatting with, and we had already started contracting deals with some of them, but Replit was able to move right then and there. They were a startup. They could just say, “We’re going to do this,” write the check, and do it.

Jon: So being loud about what it is that you stood for and what it is that you believed.

Jonathan: And being good at it. We worked really hard to be good at one thing, and that was training efficiently. You can’t fake it until you make it on that. We did the work, and it was hard and we struggled a lot, but we kept pushing. Our co-founders, Naveen and Hanlin, kicked my butt to keep pushing even when it was really hard and really scary and we were burning a lot of money, but we got really good at it. I think people recognized that, and it led to customers, and it led to the Databricks acquisition. I’m now seeing the same thing among small startups I’m talking to in the context of collaboration, acquisition, anything like that.

The startups I’m talking to that stand out are the ones that are really good at something. It’s clear they’re good at it, and it’s been clear through their work. I can check their work; they’ve done their homework and they show their work. Those are the folks getting the closest look, because they’re genuinely really good at what they do, you believe in them, and you know the story they’re telling is legitimate.

Jon: There’s one more point on this, which I think complements and extends what you said: you folks believed in something. It’s not just about a story, and it’s not just about results either. You believed training could be, and should be, made more efficient. A lot of the work you were doing anticipated things like Chinchilla, which later quantified how it could be done.

Jonathan: Oh, we didn’t anticipate it. We followed in the footsteps of Chinchilla. Chinchilla was early, visionary work, and I can say this because Eric Olson, who worked on Chinchilla, is now one of my colleagues on the Databricks research team. There are a few pieces of truly visionary work that were quite early, pieces that, when I look back, are tent-pole work for LLMs.

Chinchilla is one of those things. The other is EleutherAI putting together The Pile dataset, which was done in late 2020, two years before most people were really thinking about LLMs. What they put together was still the best LLM training dataset well into 2022. We did genuinely believe, to your point. We believed in science; we believed it was possible to do this through really rigorous research. We were very principled and had scientific frameworks we believed in, a way of working. We had a philosophy on how to do science and how to make progress on these problems. OpenAI believed in scale, and now everybody believes in scale. We believed in rigor: that doing our homework and measuring carefully would allow us to make consistent, methodical progress. That remains true, and it remains the way we work. It’s not always the fastest way of working, but at the end of the day, at least it is consistent progress.
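As a rough worked example of what Chinchilla quantified: the commonly cited rule of thumb from that paper is about 20 training tokens per model parameter for compute-optimal training. A back-of-the-envelope sketch, with the 20x ratio stated as an assumption:

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Rough Chinchilla rule of thumb: ~20 training tokens per parameter."""
    return n_params * tokens_per_param

# A 7B-parameter model like MPT-7B would be compute-optimal at roughly
# 140B tokens by this rule; Mosaic trained on 1T, far past that point,
# trading extra training compute for a stronger model at inference time.
tokens = chinchilla_optimal_tokens(7e9)  # 1.4e11, i.e. ~140 billion
```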

Jon: So here we are in 2025, and amazing innovation is happening. There’s even more opportunity than there has been, it seems to me, even more excitement, even more excited people. How do you think the profile and mix of skills in a new team should be the same, and how should it be different, compared to when you formed Mosaic?

AI+Data in the Enterprise: How Research Shapes Business AI

Jonathan: It depends on what you’re trying to do. We hire phenomenal researchers who are rigorous scientists, who care about this problem and are aligned with our goals, who share our values, who are relentless, and who, honestly, are great to work with. The importance of culture cannot be overstated, and conviction is the most important quality. If you don’t believe it is possible to solve a scientific problem, you will lose all your motivation and creativity to solve it, because you’re going to fail a lot; at the first failure, you’ll give up. Beyond that, I think this is data science in its truest form. I never really understood what it meant to be a data scientist, but this feels like data science. You have to pose hypotheses about which combinations of approaches will allow you to solve a problem, measure carefully, and develop good benchmarks to understand whether you’ve solved it.

I don’t think that’s a skill confined to people with Ph.D.s, far from it. The fact that Databricks was founded by a Ph.D. super team means that more than 10,000 enterprises now don’t need a Ph.D. super team when it comes to their data. I look at our Mosaic story, through to our Databricks story now, in the same way. We built a training platform and a bunch of technologies around it, and now we’re building a wide variety of products to make it possible for anyone to build great AI systems. In the same way, when you get a computer and want to build a company, you don’t have to write an operating system, build a cloud, or invent a virtual machine. Abstraction is the most important concept in computer science. Databricks has had a Ph.D. super team to build the low-level infrastructure, which is what it took to build Spark and Delta and Unity Catalog and everything on top of that.

And now it’s the same thing for AI. The future of AI isn’t in the hands of people like me. It’s in the hands of people who have problems and can imagine solutions to those problems, in the same way that, I’m sure, Tim Berners-Lee, who pioneered the web, did not exactly imagine, I don’t know, TikTok. That was not what he had in mind when he was building the World Wide Web. The startups I’m most thrilled about engaging with today are companies using AI to make it easier to get more out of your health insurance, to solve your everyday problems, to get a doctor’s appointment or for a doctor to help you, or to spot medical challenges earlier. Those are the people who are empowered, because they don’t have to build an LLM from scratch to do all that. That layer has now been created.

The future is in the hands of people who have problems and care about something. For a Ph.D. super team these days, there is still a ton of work to do in making AI reliable and usable, building the tools these folks need, building a way for anyone to put together an evaluation set in an afternoon so they can measure their system quickly and get back to work on their problem. There’s a ton of really hard, complex, fuzzy machine learning work to do, but the interesting part is in the hands of the people with problems.
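The "evaluation set in an afternoon" idea can be as simple as a list of hand-written input/expected pairs plus a scoring loop. This is a minimal sketch; the lambda "system" is a hypothetical stand-in for whatever AI pipeline you are measuring:

```python
def evaluate(system, eval_set):
    """Score a system against hand-written (input, expected) pairs."""
    passed = sum(1 for x, want in eval_set if system(x) == want)
    return passed / len(eval_set)

# A tiny hand-built eval set and a toy "system" standing in for an AI pipeline.
eval_set = [("2+2", "4"), ("3*3", "9"), ("10-7", "3")]
accuracy = evaluate(lambda expr: str(eval(expr)), eval_set)  # 1.0
```

Even a harness this small lets you re-measure after every change, which is the point Jonathan is making: fast, cheap measurement keeps people working on their actual problem.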

Jon: How is your role changing as you adopt these AI technologies inside Databricks? And you try to be, I’m sure, as sophisticated as you can be about it.

Jonathan: I’m still a scientist, but I haven’t necessarily written a ton of code lately. I spend a lot of time these days connecting the dots between research and product, research and customers, and research and the business. I’ll come back to the research team and say, “I think we really need to do this; how can RL help us do that?” And then it goes the other way: the research team has a great idea about a cool new thing we can do with RL, and I go back to product and try to blow their minds with something they didn’t even think about, because they didn’t know it was possible.

Show up with something brand new and convince them we should build a product for that, because we can and because we think people will need it. In many ways, I’m a bit of a PM these days, but I’m also a bit of a salesperson. I’m also a manager, and I’m trying to continue to grow the incredible skills of this research team, both the people who have been with me for four years and the people who have just arrived out of their Ph.D.s, and make them into the next generation of successful Databricks talent that stays here for a while, and maybe goes on to found more companies, like a lot of my former colleagues at Mosaic have.

It’s a little bit of everything, but I have had to make a choice about whether I’m going to go really, really deep as a scientist, write code all day, and get really, really good at getting the GPUs to do my bidding; or get good at leadership, running a team, inspiring people, getting them excited, and growing them; or get good at thinking about product and customers; and what combination of those I wanted to have. That combination has naturally led me away from being the world’s expert on one specific scientific topic, and toward something I think is more important for our customers, which is understanding how to use science to actually solve problems.

Jon: There’s an imaginative leap that you have to make from the technology to the persona of your customer, and an empathy that I imagine involves being in a lot of customer conversations. But it’s an inversion of your thinking. It’s not, “Here’s a hard problem that we’ve solved. What can we do with it?” It’s keeping an index of important problems in your head and spotting possible solutions to them, maybe?

Jonathan: I think it’s the same skill as any good researcher. No good researcher should be saying, “I did a cool thing. Let me find a reason that I should have done it.” Very occasionally, that leads to big scientific breakthroughs, but for the most part, I think a good, productive, everyday researcher should be taking a problem and saying, “How can I make a dent in this?” Or finding what the right questions are to ask, asking them, and coming up with a very basic solution. All of these sound like product scenarios to me, whether it’s figuring out a question that hasn’t been asked before that you think is important, building an MVP, and then trying to figure out whether there’s product-market fit, or the other way around, finding a problem and then trying to build a solution to it.

I don’t think much research should really involve just saying, “I did this thing because I could.” That is very high risk, and it’s hard to make a career out of doing that all the time, because you’re generally not going to come up with anything. I’m going out and trying to figure out what the important questions are to be asking, both asking new questions and then checking with my PM to see if that was the right question to ask, and talking to my customers. It’s just that now, instead of my audience being the research community and a bunch of Ph.D. students who are reviewers, and convincing them to accept my work, my audience is customers, and I’m convincing them to pay us money. I think that is a much more rigorous, much higher standard than getting a paper past the reviewers. I had dinner with a customer earlier this week, and they’re doing some really cool stuff.

They have some interesting problems. I’m going to get on a plane in two weeks and go down to their office for the day, and meet with their team all day to learn more about this problem, because I want to understand it and bring it back to my team as a question worth asking. It’s not 100% of my time, but I think you should be willing to jump on a plane, go chat with an insurance company, and spend a day with their machine learning team, learning from them and what they’ve done, hearing their problems, and seeing if we can do something creative to help them. That’s good research. If you ever sent me back to academia, that’s probably still exactly what I’d do.

Jon: One of my favorite things that you and I spoke about in New York some weeks ago was the existence of a high school track at NeurIPS, the academic AI conference. I wonder if you could share a little bit about that, what you saw, and what it tells you about the next wave of thinking in AI.

Jonathan: The high school track at NeurIPS was really cool, and also controversial for a number of reasons. Is this another way for students who are incredibly well off, and have access to knowledge and resources and a parent who works for a tech company, to get ahead further? Or is this an opportunity for some extraordinary people to show how extraordinary they are, and for people to learn about research much earlier than certainly I did and try out doing science? There are generational changes in the way that people are interacting with computing. This is something that my colleague Hanlin, who was one of the co-founders of Mosaic, has observed, and I’m totally stealing it from him, so thank you, Hanlin. We’re seeing companies that are founded by people who clearly came of age in an era where your interface to a computer was typing in natural language, whether it’s to Siri or, especially now, to ChatGPT.

That is the way they think about a user interface. You want to build a system? Well, just tell the AI what you want. On the back end, we’ll pick it apart, figure out what the actual process is in an AI-driven way, build the system for you, and hand it back to you. That’s a very different way of interacting with computing, but it’s the way a lot of people who have grown up in tech over the past several years think. For a lot of people who are graduating from college now, or who graduated in the past couple of years, or especially who are in high school now, ChatGPT is their iPhone, their personal computer. It’s not buttons and dropdowns and dashboards and checkboxes and apps. It’s tell the computer what you want. It doesn’t work amazingly well right now. Someday it probably will, and that day may not be very far away, but that’s a very different approach and one that is worth bearing in mind.

Jon: I want to switch gears a little bit and get to a technical debate that we’ve had over the years, which is about the mix of techniques enterprises and app developers are going to use to apply AI to their data. Of course, RAG and in-context learning have been exciting developments for years, because it’s so easy and appealing to put data in the prompt and reason about it with the best model you can find. There has been a renewed wave of excitement, I’d say, around complementary approaches like fine-tuning, test-time compute, reinforcement fine-tuning from OpenAI, and lots more. I wonder if now is the moment for that from a customer perspective, or if you think we’re out over our skis. What’s the right time and mix of these techniques that enterprises and app developers are going to want to use?

Jonathan: My thinking has really evolved on this, and you’ve watched that happen. We’ve reached the point where the customer shouldn’t even know or care. I want an AI system that is good at my task, I want to define my task crisply, and I want to get an AI system out the other end. Whether you prompt, whether you do few-shot, whether you do an RL-based approach and fine-tune, whether you do LoRA or full fine-tuning, or whether you use DSPy and do some prompt optimization, that doesn’t even matter to me. Just give me a system, get me something up and running, and then improve that system. Surface some examples that may not match what I told you my intention was, and let me clarify how I want you to handle those examples, as a way of improving my specification for my system and making my intention clearer to you. And now, do it again and improve my system.

Let’s have some users interact with the system and gather a lot of data. Then let’s use that data to make the system better, and make the system a better fit for this particular task. Who cares whether it’s RAG, who cares whether it’s fine-tuning? The only thing that matters is: did you solve my problem, and did you solve it at a cost I can live with? Can you make it cheaper and better at this over time? From a scientific perspective, that is my research agenda right now at Databricks, but you shouldn’t care how the system was built. You care about what it does and how much it costs, and you should be able to specify, “This is what I want the system to do,” in all sorts of ways: natural language, examples, critiques, human feedback, natural feedback, explicit feedback, everything. The system should just improve and become better at your task the more feedback you collect. Your goal should be to get a system out in production, even if it’s a prototype, as quickly as possible, so you start getting that data and the system starts getting better.

The more it gets used, the better it should get. The rest, whether it’s long context or very short context, whether it’s RAG with a custom embedding model and a re-ranker or whether it’s fine-tuning, at that point, you don’t really care. The answer should be a bit of all of the above. Most of the successful systems I’ve seen have had a little bit of everything, or have evolved into having a little bit of everything after a few iterations.
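The deploy, collect feedback, improve loop described above can be sketched in a few lines. This is a toy, not anything Databricks ships: “improvement” here is just storing human corrections and preferring them on the next call, a stand-in for re-prompting, retrieval, or fine-tuning on the collected feedback.

```python
# A toy sketch of the improvement loop: ship something, surface outputs
# that didn't match the user's intent, and fold the corrections back in.
# "Improvement" is a lookup of corrected examples, standing in for
# re-prompting, RAG, or fine-tuning on feedback.

class SelfImprovingSystem:
    def __init__(self):
        # input -> corrected output, gathered from user feedback
        self.corrections = {}

    def predict(self, text: str) -> str:
        # Prefer any human correction we've collected; otherwise a naive guess.
        if text in self.corrections:
            return self.corrections[text]
        return "positive" if "good" in text.lower() else "negative"

    def give_feedback(self, text: str, corrected: str) -> None:
        """A user flags a bad output; store the fix for the next iteration."""
        self.corrections[text] = corrected

system = SelfImprovingSystem()
print(system.predict("not good at all"))   # the naive guess is wrong here
system.give_feedback("not good at all", "negative")
print(system.predict("not good at all"))   # corrected after feedback
```

The more the system gets used, the more corrections accumulate, which is exactly the dynamic being described: usage generates the data that makes the next iteration better.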

Jon: In previous versions of this conversation, you’ve said, “Dude, RAG is it. That’s what people really want. There are other things you can do to extend it, but so much is possible with RAG that we don’t need to look past that horizon yet.” I hear you saying something very different now. I hear you saying customers don’t care, but you care, and it sounds like you’re building a mix of things.

Jonathan: Yeah, what I’m seeing, the more experience I get, is that there is no one-size-fits-all solution. RAG works phenomenally well in some use cases and absolutely keels over in others. It’s hard for me to tell you where it’s going to succeed and where it’s not. My best advice to customers right now is: try it and find out. There should be a product that can do that for you, or help you go through that scientific process in a guided way so you don’t have to make up your own progression. For me, it’s now about how I can meet our customers where they are. Whatever you bring to the table, tell me what you want the system to do, and right now we’ll go and build that for you and figure it out together with your team.

We can automate a lot of this and make it really simple for people to simply bring what they have, declare what they want, and get pretty close to what a good solution or at least the best possible solution will look like. It’s also part of my recognition that this isn’t a one-time deal where you just go and solve your problem. It’s a repeated engagement where you should try to iterate quickly, get something out there and get some interactions with the system. Learn whether it’s behaving the way you want it to, learn from those examples and go back and build it again and again and again and again, and do that repeatedly until you get what you want. A lot of that can be automated too. At least that’s my research thesis that we can automate or at least have a very easy guided way of going through this process to the point where anybody can get the AI system they want if they’re willing to just come to the table and describe what they want it to do.

Jon: What’s the implication for this sphere of opportunity of new model paradigms such as test-time compute, now, even open-source with DeepSeek?

Jonathan: I would consider those to be two separate categories. I was playing this game with someone on my team earlier today where he was telling me, “Yeah, DeepSeek has changed everything.” I was like, “Didn’t you say that about Falcon and Llama 2 and Mistral and Mixtral and DBRX and so on and so on?” We’re living in an age where the starting point we have keeps getting better. We get to be more ambitious because we’re starting further down the journey. This is like when our friends at AWS or Azure come out with a new instance type that’s more efficient or cheaper. I don’t look at that and go, “Everything has changed.” I look at that and go, “Those people are really good at what they do, and they just made life better for me and my customers.”

We get to work on cooler problems, and a lot more problems have ROI, because some new instance type came out that’s faster and cheaper. It’s the same thing with models. As for new approaches, it could be something like DPO or it could be something like test-time compute. Those are probably not comparable with each other, but they are more things to try, more points in the trade-off space. I think about everything in life as a Pareto frontier on the trade-off between cost and quality. Test-time compute gives you a very interesting new trade-off, possibly between the cost of creating a system, the cost of using that system, and the overall quality you can get. Every time another one of these ideas comes out, the design space gets a little bigger, more points on this trade-off curve become available, or the curve moves up and to the left, or up and to the right, depending on how you define it.

Life gets a little better, and we get to have a little more fun. For this product and the system that we’re all building at Databricks, things get a little more interesting, and we can do a little more for our customers. So, I don’t think there’s any one thing that changes everything, but it’s constantly getting easier and constantly getting faster and constantly getting more fun to build products and solve problems. And I love that. A couple of years ago, I had to sit down and build the foundation model if I wanted to work with it. Now, I already start way ahead.
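The cost-quality Pareto framing can be illustrated with a tiny sketch. The candidate numbers and labels below are invented for illustration; the function simply keeps the points that no other candidate beats on both cost and quality at once.

```python
# Treat each candidate approach as a (cost, quality) point and keep only
# the Pareto-optimal ones: points not dominated by any cheaper-and-better
# alternative. All numbers are made up for illustration.

def pareto_frontier(points):
    """Return points not dominated by another point (lower-or-equal cost
    AND higher-or-equal quality), sorted by cost."""
    frontier = []
    for cost, quality in points:
        dominated = any(
            (c <= cost and q >= quality) and (c, q) != (cost, quality)
            for c, q in points
        )
        if not dominated:
            frontier.append((cost, quality))
    return sorted(frontier)

candidates = [
    (1.0, 0.70),  # e.g., prompt-only baseline
    (2.0, 0.85),  # e.g., RAG
    (2.5, 0.80),  # dominated: costs more than RAG, scores lower
    (5.0, 0.92),  # e.g., fine-tuned model
]
print(pareto_frontier(candidates))  # the dominated point drops out
```

Each new technique, in this framing, either adds a point inside the existing frontier (uninteresting) or pushes the frontier outward, which is what makes it worth adopting.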

Jon: I love that. Jonathan, I’ve got some rapid fire questions that I’d like to use to bring us home.

Jonathan: Bring it on.

Jon: Let’s do it. What’s a hard lesson you’ve learned throughout your journey? Maybe something you wish you did better, or maybe the best advice that you received that other founders would like to hear today?

Jonathan: I’ll give you an answer for both. The hardest lesson I’ve learned has honestly been the people aspects: how to interact productively with everyone, and how to be a good manager. I don’t think I was an amazing manager four years ago, fresh out of my Ph.D., and the team members who were with me then will surely tell you that. I like to hope the team members who are still with me think I’m a much better manager now. The managers who have managed me that entire time, who have trained me and coached me, think I’m a much better manager now. Learning how to interact with colleagues in other disciplines or other parts of the company, learning how to handle tension or conflict in a productive way, learning how to disagree in a productive way and focus on what’s good for the company.

Learning how to interact with customers in a productive and healthy way, even when sometimes you’re not having the easiest time working with the customer and they’re not having the easiest time working with you. Those have been incredibly hard-won lessons. That’s been the hardest part of the entire journey: the part where I’ve grown the most, but also the part that has been the most challenging. The best advice I’ve received is probably from my co-founders, Naveen and Hanlin.

One piece of advice from Hanlin that sticks in my mind: he kept telling me, over and over again, that a startup is a series of hypotheses that you’re testing. That kept us very disciplined in the early days of Mosaic: stating what our hypothesis was, trying to test it systematically, and finding out if we were right or wrong. The hypothesis could be scientific, it could be about product, it could be about customers and what they’ll want. Turning it into a systematic, scientific endeavor made it a lot easier for me to understand how to make progress when things were really hard, and they were really hard for a long time. I know that wasn’t a rapid-fire answer to a rapid-fire question, but it’s a question I feel very strongly about.

Jon: Aside from your own, what data and AI infrastructure are you most excited about and why?

Jonathan: There are two things I’m really excited about. Number one, products that help you create evaluations for your LLMs. I think these are fundamental infrastructure at this point. There are a million startups doing this, and I think all of them are actually pretty phenomenal. I could probably give you a laundry list of at least a dozen off the top of my head right here, and I bet you could give me a dozen more that I didn’t name, because we’re all seeing great pitches for this. I have a couple that I really like, a couple that I’ve personally invested in, but I think this is a problem we have to crack. It’s a hard problem, and it’s a great piece of infrastructure that is critical. The other thing that I’m excited about personally is data annotation. I think data annotation continues to be the critical infrastructure of the AI world.

No matter how good our models get and how good we get at synthetic data, there’s always still a need for more data annotation of some kind, and revenue keeps going up for the companies that are doing it. The problem changes, and what you need changes. I think it’s a fascinating space, and in many ways, it’s a product. In many ways, my customers these days, the data scientists at whatever companies I’m working with, are also doing data annotation or trying to get data annotation out of their teams. Building an eval is data annotation. I mentioned two things, but I think they’re the same at the end of the day. One is about going and buying the data you need. The other is about tools that make it easy enough to build the data you need that you don’t have to go and buy it.

I have a feeling both kinds of companies have made a lot of progress on AI augmentation of this process. When I did the math on the original Llama 3 models, which is the last time I sat down and did it, my best guess was $50 million worth of compute and $250 million worth of data annotation. That’s the exciting secret of how we’re building these amazing models today. That’s only going to become more true with these reasoning models. I don’t know that reasoning itself is going to generalize, but it does seem like you don’t need that many examples of reasoning in your domain to get a model to start doing decent reasoning in your domain. And that’s going to put even more weight on figuring out how to get the humans in your organization, or humans somewhere, to help you create some data for your task, so you can start to bootstrap models that reason on your task.

Jon: Beyond your core technical focus area, what are the technical or non-technical trends that you are most excited about?

Jonathan: There are two, one as a layperson and one as a specialist. As a layperson, I’m watching robotics very closely. For all of the interesting data tasks we have in the world, there are a lot of physical tasks that it would be amazing if a robot could perform. Thank goodness for my dishwasher, thank goodness for my washing machine; I can’t imagine what my life would look like if I had to scrub every dish and every piece of clothing to keep it clean. Robotics is in many ways already in our lives; these are just very specific, single-purpose robots. I don’t know if we’ll make a dent in the general problem this decade or in three decades. Like VR, robotics is a problem that we keep feeling like we’re on the cusp of, and then we don’t quite get there, but we get some innovation.

I love my robot vacuum. That is the best investment I’ve ever made. I got my girlfriend a robot litter box for her cats a few weeks ago, and I get texts every day going, “Oh my God, this is the best thing ever.” And this is just scratching the surface of the daily tasks we might not have to do. I would love something that could help people who, for whatever reason, can’t get around very easily on their own to get around more easily, even in environments that weren’t necessarily built for that.

I have a colleague who I heard say this recently, so I’m not going to take credit for it: the idea of doing things that make absolutely no logistical or physical sense in the world today, but that you could do if you had robots. In Bryant Park right now, right below our Databricks office in New York, there’s a wonderful ice skating rink all winter. If you were willing to have a bunch of robots do a bunch of work, you could literally take down the ice skating rink every night and set up a beer garden, and then swap them every day if you wanted to. Things that make no logistical sense today because they’re so labor-intensive and resource-intensive would suddenly make a lot of sense. That gets me really excited.

Jon: From data intelligence to physical intelligence?

Jonathan: Well, somebody’s already coined the term physical intelligence, but yeah, I don’t see why not. Honestly, we’re dealing with a lot of physical intelligence situations at Databricks right now. I think data intelligence is already bringing us to physical intelligence, but there’s so much more one can do, and we’re scratching the surface of that. It cost Google, what, $30 billion to build extraordinary autonomous vehicles. The whole narrative in the past year has completely shifted from “autonomous vehicles are dead, and that was wasted money” to “oh my gosh, Waymo might take over the world.” So, I’m excited about that future. I wish I knew whether it was going to be next year or in 30 years. I spend a lot of time in the policy world, and I think that’s maybe even a good place to wrap up.

Before I was an AI technologist, I was an AI policy practitioner. That’s why I got into this field in the first place, and that’s why I decided to go back and do my Ph.D. I spend a lot of time these days chatting with people in the policy world, chatting with various offices, chatting with journalists, and working with NGOs, trying to make sense of this technology and how we as a society should govern it. It’s something I do in my spare time; I don’t do it officially on behalf of Databricks or anything like that. I think it’s important that we, the people who know the most about the technology, try to be of service. I don’t like to come with an agenda. People who come from a company highly motivated to make sure a particular policy takes place are conflicted like crazy, will always come with motivated reasoning, and can never really be trusted.

I think the right posture is coming as a technologist and asking: how can I be of service, what questions can I answer, and can I help you think this through and figure out whether this makes sense? It’s a very fine line, and you need to be careful about it. If you come in with your heart set on being of service to the people whose job it is to speak and think on behalf of society, you can make a real difference. You have to be careful not to come with your own agenda to push something. A lot of people have highly motivated reasoning about why we shouldn’t allow other people to possess or work with these AI systems, so you’ve got to be careful. You’ve got to build a reputation and build trust over many years. The flip side is you can do a lot of good for the world.

Jon: That is definitely a good place to leave it. Jonathan Frankle, Chief AI Scientist of Databricks, thank you so much for joining. This was a lot of fun.

Jonathan: Thank you for having me.

Scaling, AI, and Leadership after Big Tech: Lessons from Highspot’s Bhrighu Sareen

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Thinking about going from Big Tech to startup?

In this episode of Founded & Funded, Madrona Managing Director Tim Porter sits down with Bhrighu Sareen, who took the leap from Microsoft to Highspot and has been leading AI-driven product innovation ever since. They talk about the realities of transitioning from a massive company to a high-growth startup, scaling AI-driven teams, and how Bhrighu’s helping transform sales enablement through automation and intelligence.

His insights on navigating change, working with founders, and executing at speed are a must-listen for any leader considering the leap from stability to startup chaos.

Tune in now for practical strategies and an inside look at how AI is reshaping go-to-market execution.


This transcript was automatically generated and edited for clarity.

Bhrighu Sareen: Hi, Tim. Good to be here and thanks for having me. It’s always a pleasure to talk to you. I’ll say if you’re thinking about making a switch, Tim’s a really good person to speak to. I remember the first time we met, I had spoken to Robert and the co-founders a few times, and on the fence, not sure, and then Tim met me at this coffee shop in Kirkland, Washington.

Tim Porter: Zoka.

Bhrighu Sareen: Exactly, yeah. He was only, I think, 20 minutes late, maybe more, and gave me some excuse about it. He was trying to raise a lot of money for the next fund or something. Not that I could verify it, but we’ll believe him. So, memorable, and not just for that. Thank you for always being there, providing guidance on how to think about it, and sharing specific examples from your past.

Tim Porter: Thanks, Bhrighu. That is a true story. My only defense, I don’t remember why I was late, but at least I came to you. I drove across the lake and to a coffee shop in your neighborhood.

All right, Bhrighu, why don’t we start with your journey from Microsoft to Highspot? What were you doing at Microsoft? Say a little bit more about building Teams and what inspired you to make the switch from Big Tech to startups. This is definitely a very common thing. People who have had successful careers at Big Tech companies and “Hey, I want to do something earlier stage. How do I do it? Why do I do it?” You’ve done it super successfully. Tell us about that journey.

Bhrighu Sareen: Microsoft Teams was the last project I worked on, but prior to that, I had worked in Bing, MSN, and Windows. Microsoft is a phenomenal place because you can have multiple careers without leaving the company. I got an opportunity to learn a lot, started my career there and just worked up through the levels. Numerous challenges, great managers, great mentors, and a lot of very, very smart people that allowed me to grow and challenge myself.

Teams was a phenomenal experience. It was this small product. I joined before we even went to beta, and no one knew where it would end up, but the concept of redefining how information workers actually work, reducing friction, and cutting down the number of tools needed to get the job done was appealing. It was a phenomenal challenge, and it gave me a chance to learn. When I joined, like I mentioned, we hadn’t even shipped the product; it wasn’t even in beta. From there, we took it from zero monthly active users to 300 million monthly active users by the time I left, so phenomenal growth.

I’d been there six and a half, seven-ish years and done a lot of different roles, taking on different aspects of the product: building with PM and engineering together, ecosystem, partnerships, customer growth. I was looking for my next challenge. I think every product person has this thing in the back of their mind, like you mentioned: “Huh, should I do a startup? Is the grass greener on the other side?” I had been in Big Tech for 17-ish years with Microsoft, so in priority order for me, I wanted a challenge and a place where I could learn. Could I have taken another job at Microsoft and learned? Absolutely. I would’ve taken on a new challenge, a new dimension, but the depth of learning that this change allowed was drastically greater. I think the greater the challenge, the greater the learning.

The other end of the spectrum was going from a big company to a smaller company. I report directly to the CEO, which gives me a peer group of the CFO, the CHRO, and so on, so I’m learning about the issues that impact the entire company. You’re not just focused on your product area and your scope. There’s also learning in terms of speed and agility. As a startup, you don’t have a billion dollars in cash sitting in the bank. You don’t have 25,000 developers you could pivot into whatever area you want to go. It’s just speed and agility.

Tim Porter: Teams, zero to 300 million, it’s insane to think about that kind of growth.

Bhrighu Sareen: It is. It is.

Tim Porter: I remember our first conversation, and I was struck by a few things about you. One was this point about agility: you seemed very much about making decisions and cutting through broader group dynamics.

Bhrighu Sareen: Exactly.

Tim Porter: Sometimes part of the art of being a good executive at a big company is how you embrace big group dynamics, and you were more sort of cut through it and had lived through this hyper-growth, so that seemed like a great fit.

Even across our portfolio, there was more than one company recruiting you. Why did you pick Highspot? I mentioned a little bit about the product and what Highspot did. You wanted the right scenario with founders and a challenge, but you ultimately had different product areas to pick from. Why build in this area?

Bhrighu Sareen: Another area of learning, in addition: I mentioned all the teams I had worked in at Microsoft, and I’d never worked in Dynamics or in anything to do with sales or MarTech (marketing technology). It was another dimension where I could grow: a new technology, a new space, and one that was evolving very rapidly. Then, coming back specifically to Highspot, there are a few dimensions why. One was the people. When you’re making such a drastic change, if it doesn’t feel right on the people front, it can become very, very messy.

You hear these stories about people making the move and, oh man, it didn’t work out, and in three to five months they’re looking for their next opportunity. So the people were one very important part. Highspot is super lucky. The three founders, Robert, Oliver, and David, are good human beings to begin with, and they care.

Second, the space was new. Third, Highspot, when I joined, was known for content and guidance; that was the key thing: how do you equip salespeople with the right content at the right time? They had all the right ingredients. A lot of companies never make the switch from being a single-product company to a two-product company to a multi-product company. With our release in October 2024, Highspot has made that transition from a single-product company to a multi-product company.

When I saw these different ingredients, I remember after the first meeting with Robert … I got a cool story. Should I digress a little?

Tim Porter: Let’s hear it.

Bhrighu Sareen: I remember I wasn’t actively saying, “Hey, I want to leave Microsoft,” because I really enjoyed working there. Somebody connected us, and it was a meeting from 4:00 to 5:00 P.M. on a Thursday. Our offices are in downtown Seattle by Pike Place Market. So, I drive up and go and meet Robert from 4:00 to 5:00 P.M. One thing leads to another, and when I got back into my car, it was 7:15 P.M. I called the person who had connected us, and he’s like, “How’d it go?” I said, “I think it went well, but I think Robert was just being nice, because we were supposed to be done at 5:00 and we finished at 7:15.” And he’s like, “No, no, no, no. Robert’s met other people. Usually, it doesn’t go this long.”

You feel that connection, and so when I went back the next time to meet Robert, I was like, “Hey, you’ve built an LMS. You have a CMS. You’ve got all these different pieces. If we stitch this together, Highspot’s opportunity can be significantly greater than what it is right now based on the product offering you have.” So, when you take all those three or four things combined, it felt like it was a good place to be.

Tim Porter: Fantastic. I want to come back to how you’ve worked with these founders. When you joined Highspot, there were, and there still are, three founders who are super active in the company: Robert, Oliver, and David. We did a podcast, Oliver and I, five years ago about Highspot. That’s one of my favorites ever. You and Oliver work super closely together now on product?

Bhrighu Sareen: Yes.

Tim Porter: All three are really involved, which has been such an amazing part of this company, but also, in theory, not easy coming in and having this big role. You’re running half the company, product engineering, yet there are these three founders who are all very opinionated around product and engineering. Yet, it’s worked out. What has it been like coming in in your role and working effectively with founders?

Bhrighu Sareen: I’m super grateful to Robert, Oliver, and David. One, for giving me the opportunity and two, how they welcomed and included me into Highspot. A bunch of friends of mine gave me advice that this was going to be a really bad idea because they’re actively, actively involved. Like you said, they have opinions. They have more contacts than I can ever have because they’ve been doing this for a decade.

Tim Porter: Absolutely.

Bhrighu Sareen: They have more knowledge. They have more connections. They have more foresight, because I had never worked in this space. David was doing this role before I joined, and I’ve said this to David: I would not have been even half as successful or delivered even one-tenth the impact for Highspot if it weren’t for him. David has been this voice in my ear all the time, selflessly giving me advice and guidance, whether it was people, product, process, or strategy. “Hey, Bhrighu, here are the pitfalls. Watch out for this. Watch out for that.” He does it in a super humble way. He’s not being arrogant. He’s not trying to put me down. He could have said, “Okay, hey, great job. We got someone. Good luck. If you have questions, let me know.” But he was proactive about helping me and helping us move forward, so I’m super grateful to David for that.

Tim Porter: I framed the question around how you navigated it, but there’s a great message to founders here around how you onboard a new exec, empower them, get them ready, and work together with them. I think that’s super critical. I was going to ask how you got comfortable that that would actually be the case. Unfortunately, I’ve seen some cases where, “Yeah, I talked it through with the founders. They said they wanted me there, and then it turned out they didn’t.” On some level, it’s a human thing, and you build trust. Was there anything that gave you confidence they were serious, that they weren’t just saying they were ready to bring somebody else in, and that they’d do all the things you mentioned David has done to help make you successful?

Bhrighu Sareen: Two things. One is — if you remind yourself that we’re all going to win together, it isn’t as if David or Oliver are going to have a different outcome than I’m going to have. So, I tell myself, “Okay, are they coming from the right place? And wherever we’re going, whatever decision we’re making, is it going to take Highspot to the next level?” If you think about Highspot first — is it the right decision? Then it becomes easy. Then it’s not about ego. Is your idea right, or is my idea right? It doesn’t matter. Is it the right idea for Highspot?

Tim Porter: Clearly, these folks wanted to win.

Bhrighu Sareen: Exactly, and so you come in with an attitude to win, with an insane bias for action, and humble enough to say, “Okay, we learned,” because not all decisions will be positive. “We learned from that, and now we’re going to go back and fix it.” I think it’s hard, even the human-nature part, because founders in general have a different mindset, but these three things should align with every founder out there.

Tim Porter: So, having a great partner on the engineering and product side in David and Oliver.

Bhrighu Sareen: Yes.

Tim Porter: Robert is probably the best strategic product vision exec I’ve ever worked with. I remember seeing that way back at Microsoft and now, in the last 12 years at Highspot, but it can also be hard for you. He’s always the head of product in some ways, and that’s made the company so successful. How has that dynamic been?

Bhrighu Sareen: One other suggestion I’d make to the folks listening is that communication is key, and building relationships is key. Yes, all three founders are ex-Microsoft, and so was I, but our paths never crossed. I had never spoken to these folks. A lot of people say, “Oh, yeah, this was easy. You picked Highspot because they were Microsoft. You probably interacted with each other, and then they pulled you over.” I’m like, “I did not know these individuals by first name or last name, or even that they existed.” We had never crossed paths, whether in social circles or at work. When you’re starting new, commit to frequent communication and building trust.

One of the things we put in place was that every Tuesday, Robert and I have lunch. The second thing was that the three founders and I would also meet every Tuesday. Initially, they committed to sharing context and helping me grow. We said, “Hey, we’ll do this for three months-ish. I’ll be ramped up. In four months, you’ll have enough context, and we’ll cancel the meeting. We won’t need it.” Fast forward two-plus years, and we still meet every Tuesday, all three founders and myself.

Tim Porter: It’s often simple, smart mechanisms that are persistently applied.

Bhrighu Sareen: Yeah, there’s no shortcuts.

Tim Porter: There’s no substitute.

Bhrighu Sareen: Exactly. There’s no substitute, absolutely. There’s no shortcut in building trust. You have to put in the time. I think animosity builds up when you are making stories up in your mind. He said this, so he actually means that, or she said that, so this is what’s going to happen. Why don’t you ask them what they mean? But you’re so busy in your day-to-day life that you don’t. So, my lunch with Robert every Tuesday forces us to discuss a whole bunch of things.

Tim Porter: Maybe mention some of the initiatives or structural things that you’ve done at Highspot. As someone on the board, I’ve been struck overall by how the company’s gotten more efficient, but also shipping even more things, the velocity of things. Your shipping is amazing. You’ve done some things around team and offshore but talk about some of the things that you put in place, building on the great foundation that the founders and others had built and how that’s gone. Has it all been easy? Has it been challenging? What’s it been like?

Bhrighu Sareen: A super complicated question. I’m not sure where to go because you’ve thrown in “putting structural things in place, how do you increase product velocity, how did you get costs under control.”

Tim Porter: We can throw you a softball. You can take it in any direction. Specifically, how would you describe the major initiatives that you had to put in place when you got to Highspot from a product and engineering standpoint?

Bhrighu Sareen: I’ve lived it for the last two-plus years now, and when I joined, I have to start by saying everyone’s heart was in the right place. The clarity of where we were going was always there, and that’s super important, because if your founders, the exec team, the VPs, senior directors, directors, and so on aren’t clear on where we’re going, that’s a big problem. At least those two parts were there: everyone’s heart was in the right place, and we had clarity.

So, coming back to the question you asked with that context: we realized that the company had been in growth-at-all-costs mode. Before I joined, the previous 18 months or two years had been phenomenal, and we’d been hiring people all over North America. Now that you have clarity on where you want to go, how do you make sure you maintain velocity by removing roadblocks for the crews? We identified that our smallest unit of impact is actually a crew, and one of the things we put in place was something we called an edge meeting, because your ICs are the edge.

Tim Porter: How big are crews, roughly?

Bhrighu Sareen: Eight to 10, with a product manager, an engineering manager, a bunch of IC engineers, and a designer. So, the crew is your unit of work, and they’re the ones that get stuck. It could be a cross-team dependency, it could be because the designer is working on some other project, they aren’t clear on the architecture, or they’ve taken a decision that three months later you’ve got to rework. The product leadership team decided that every two weeks we’re going to meet every single crew. Every week we do crews one through, let’s say, 12, and then 13 to 24 the next week.

We were spending about 15 hours a week, and initially, the teams were like, “Whoa, whoa, whoa. This is micromanagement. It’s a bad idea, or it’s a waste of our time.” And I said, “One second. Except for the product leadership team, who’s going to be in all the meetings, a crew is only spending one hour every two weeks.” But when you look back at what it allowed the crews to do, everyone’s like, “Oh, this was great.” The overarching concept was to enable every discipline to have a voice to raise any issue in an open, transparent manner that is predictable.

It isn’t that some PM and engineer had a meeting with our VP of engineering and took a decision, and the design team was like, “Hey, you left us out.” It wasn’t as if anyone was doing this out of bad intent. It was just the speed at which we wanted to move: you met someone, got on a Zoom call, took a decision, and moved on. It wasn’t “I’m purposely leaving the engineer out, I’m leaving the PM out.” Whoever got to the decision makers took a decision, got a roadblock removed, and moved forward.

That edge meeting was initially hard for people to see as worth it, but now, when you look back, engineers bring in architecture documents. PMs bring in specs. We look at Figmas, and we’ve been able to remove so many blocks.

Tim Porter: Making sure everyone has a voice, smaller teams where everyone can be heard, and then also the frequency increased. So, instead of every other week, how often do you have this now?

Bhrighu Sareen: No, it is still every two weeks.

Tim Porter: Okay, got it. They get much more out of those times together.

Bhrighu Sareen: Correct, but if they need additional time, they can always ask for it. But it’s predictable, as in everybody in the company knows, “Oh, this particular crew is going to have a review at this particular time.”

Tim Porter: That’s fantastic.

Bhrighu Sareen: Then one more thing that I did, because you asked, “Hey, what are the structural things”: cross-team dependencies. Usually for companies, depending on the size, whether you’re mid-size or larger, even Big Tech, one crew can never ship an entire thing. They’re dependent on some infrastructure pieces, a cross-team, or UX, or whatever it might be. One interesting thing happened six months after we put this thing in place. A particular crew comes up and says, “Hey, we are unable to ship this.” We’re like, “Why is that the case?” “Oh, because we have a dependency on this other crew, and they’re unable to do it.”

So, we evolved the edge meeting to say any crew can summon any other crew, or any crew can join any other crew, because it’s transparent. Everyone knows the schedule and which crew’s presenting when, and they can come in. It did a phenomenal thing: 90% of cross-team dependencies get resolved before they ever show up to the PLT. Say you have a crew, I have a crew, and we’re the PMs. I’m like, “Hey Tim, I really need you to show up to the product leadership team, our edge meeting. We want you in there because there’s a dependency.” Tim’s like, “Oh.” And then you’re like, “Hey, can we just resolve this right here?”

Tim Porter: Absolutely. Create an environment where teams can work it out amongst themselves, at a much faster pace, without having to bubble everything up. That’s fantastic. Maybe talk a little bit about how you’ve organized around offshore and a few centers of excellence. I know a big topic of conversation for lots of companies is back to office and being together, but there’s also a need to have engineering groups in other geographies for various reasons. I think you’ve done a nice job of finding a great way to do that. Maybe describe how you’ve done that.

Bhrighu Sareen: I started in September 2022, and if there are folks listening to this who are considering this move and have always been like, “I was always in product and never in other kinds of roles,” then if you’re thinking of joining, or once you join, I would recommend sitting down with your CFO or VP of finance to understand how the numbers work. Chris Larson, our CFO, was super gracious with his time, and after I started in September ’22, showed me the numbers. ’22 was this interesting year where, in the second half, things had started slowing down, and people weren’t sure what was going to happen to the economy. Are we going to enter a recession? Is it going to be a soft landing? It could go any which way.

I looked at the numbers, and I’m like, “Hmm, if our goal is to hit profitability, it doesn’t feel like we’re going to. This glide path isn’t moving in the right direction.” After getting educated, I realized we had to make a few changes. One of the things we looked at was that we wanted access to a lot more talent: could we do it, where could we do it, and how would it work out? The first thing we did, in November of ’22 with my direct reports, through a connection at the Canadian Consulate in Seattle, was actually go up and meet the government of British Columbia. From Seattle to Vancouver, it’s only a two-and-a-half to three-hour drive, depending on how fast you drive, Tim, or a three-hour train ride, and it’s super convenient. You could do a day trip if you needed to, meet the team, and spend time.

We got a lot of good support from the government of BC, and we decided to open an office in Vancouver, Canada. When we started, we said, “Hey, over a two-year period, we’ll put about 50 people in Vancouver.” Actually, in 18 months, we had already hit 50 people, and that gave us our first distributed center beyond remote work. We were already doing remote before I joined Highspot, but this gave us a center of excellence and access to a lot of talent, and it let our engineers and leaders see whether we could run a remote center, and practice before we did something big.

Then, after we saw that working, we decided to start a development center in India. Now, there was a lot of debate internally. A lot. You can imagine all the different reasons why we should not do it and maybe a couple of reasons why we should, but we moved forward. Again, this is about agility and speed: let’s try things out. And India offered Highspot access to an insane amount of talent. Over the last two and a half decades, talent in India has evolved. You could always get really good college hires, but so many companies have opened offices there that you can now also find middle management, and typically having the right managers helps because they can coach, mentor, and grow the early-in-career talent. That matters. Then, for senior leadership, there’s actually a decent population of leaders who could run your development center in India.

Tim Porter: That’s been key for you. You have a great leader on the ground.

Bhrighu Sareen: Gurpreet.

Tim Porter: He’s been an awesome partner in making this work and growing it.

Bhrighu Sareen: That brings me to the other point I was going to make, which is the Indian population outside of India, who have lived here and worked for these companies: a lot of them are actually moving back. Gurpreet spent, I think, 30-plus years in the United States, and then for family reasons, he wanted to move back. So, he moves back with all of that knowledge of how to work with an American company, how to work cross-geo, how to work cross-time zone, and how to have conversations with customers, because when you’re building product with end-to-end ownership, there will be moments when they’re going to talk to your customers, and you want to feel comfortable that they can handle that situation.

Gurpreet is a great example of a leader who can take an office from zero. Real stats: July of ’23 is when he signed his employment offer. Fast forward to September of ’23, and we had four employees: Gurpreet, two engineers, and a designer. Fast forward to right now, and we have 125 people there and lots of open positions.

Tim Porter: Fantastic. We’ve been talking about things that have gone well, but coming to an earlier-stage company from a big company, I’m sure it’s not always that way. There’ve been challenges, and we’ve managed through them really well. Just for the listeners: what was a surprise? You had a good sense of this team and everything, but there are still surprises. Was there something like, “Hey, this was a big transition to go from the big company with lots of resources to something smaller,” and how did you deal with that?

Bhrighu Sareen: I think if I were to prioritize all the challenges, I think the number one was how drastically the economy changed.

Tim Porter: Yeah, there was a big externality that came about that wasn’t really expected to the extent that it hit.

Bhrighu Sareen: Exactly. I’m changing jobs, changing scale, changing employer, changing manager, changing technology, changing all these different dimensions, and then the last thing you expect is the economy turning. It came out of the middle of nowhere, and you’re like, “Oh, my gosh, what’s going to happen now?” But if you ask me how I think about it, I actually look at it from another perspective. Like I said earlier, my number one goal was learning along a number of different dimensions. Guess what? That learning just got accelerated, and when you come out the other end, you’re going to be like, “Huh, I did all of the things I was trying to learn, but did it in an environment that might not show up again.”

It’s cyclical, so it will show up at some point, but maybe not for another four, five, or 10 years. Who knows what the exact timeline is? But that, I think, was the biggest challenge. You’re on our board, so those board meetings were interesting. Okay, your burn rate is crazy, but hey, can we just raise money like we have every single round before? Somehow the money dries up. The goals the market expects you to hit change, and the company had never done layoffs before. Brutal. I think it’s just super hard.

Tim Porter: It was a hard period. The biggest thing is that a company like Highspot had been growing meteorically, in large part because existing accounts were hiring lots more go-to-market people, seat-based, so the renewal rates just kept going up and up and up. Then, all of a sudden, our customers’ budgets didn’t just get frozen, they got slashed.

Bhrighu Sareen: Correct.

Tim Porter: But we managed to keep innovating through that and find more things that add value for them, while also getting efficient ourselves through some of the things we’ve been talking about.

Bhrighu Sareen: Exactly. So, the second thing I’d say on that topic, the previous question you asked: you touched on a really good point, because we’ve all heard stories of companies that decided during the dot-com bust not to do product innovation and just ride it out, or through the financial crisis, to not do product innovation, ride through it, save money, and conserve cash. But thanks to the folks on the board and the leadership team, we decided to say, “No, we’re going to make this transition,” and now we’re seeing the fruits of that investment decision start paying off.

Tim Porter: That’s perfect. Let’s talk about that. Highspot’s always been an ML-based company. I think of the very early days and the question of how you find all the sales content. Well, if you get the right signals and put those into an ML system, you can find the right things more effectively, but it wasn’t AI in the way it is now. Talk about some of those. Give some examples. What are the things that you’re shipping? How do you use AI today? We can dig into that a little bit. Every company’s becoming an AI company, but Highspot is really there and has it in production with customers.

Bhrighu Sareen: You’re absolutely right. There are patents with our co-founders’ names on them, and other folks’ too, around ML technology and things like that. One interesting thing, given where we are in the hype cycle right now (the trough of disillusionment, I think), is that a number of our customers asked us, “Hey, is any of this thing real, or is it all just hype?” What I had to land with them is that if you’re Big Tech and you’re trying to reason over the significant amount of data you have for an individual, across their emails, meetings, calendar, and everything else, there are three dimensions. One is how much compute you want to put against it. Two is latency, because if you give the large language models or your algorithm enough time, they will actually give you the right answer. Three is cost: how many dollars do you want to throw at this problem every time a query is sent?

I was telling some of our customers that, in relation to that, the amount of data Highspot has to reason over for a particular individual, or across an entire domain or an entire customer, is tiny. It’s super tiny in comparison to what the technology allows us to do today. Two, there are a number of dimensions we can put together offline, processed once a night, so the latency part gets taken care of, and there are real scenarios with real value.

As an example, most companies do a sales kickoff once a year. You bring your salespeople together, you have a conversation, and you share the new products you’re launching. In some cases, they’ll ask them to do a golden pitch deck, a pitch deck that marketing has created. Let’s say we’re colleagues sitting around a table at the sales kickoff, or you’re my manager. What I’ll do is read up, learn about it, and then pitch it to you. You’ve got a rubric. You’ll score me on those things, and then I’ll do the same back for you. That’s expensive because you have to take people offline, out of their day jobs, to go do this, or you have salespeople record it, and then the managers have to take time out and score.

A lot of sales managers are like, “I need to hit quota this month. I don’t have time for this.” So, we built a feature as part of our training and coaching product where the person creating the learning and development training has a rubric: the verbal and nonverbal skills they’re looking for in a successful salesperson, whether they say the right words, and how the pitch should come out. A fintech company used this, sent it around, and got 800 responses back. Then the manager gets to see the video recording and the rubric, and the AI uses the rubric that was provided to grade all 800 of these.

What we saw was that in 55% of cases, the manager left the AI-graded feedback as is. In another 36% of cases (91% minus 55%), they either added or deleted one sentence. In only 9% of cases did managers actually delete what the AI recommended as feedback and rewrite it.

Tim Porter: Wow.

Bhrighu Sareen: This financial services company, not fintech, huge financial service company, super impressed, and they’re actually now rolling it out. There’s a few hundred people, 800 or so that had access to it. Now, they’re talking about thousands of licenses for this one feature because the value and the time they’re getting back is significant.

The focus is on features that save time for salespeople, marketing people, support services, and the learning and development people creating the content, and that show the right ROI. I can go on feature after feature after feature that’s been resonating.

Tim Porter: That’s awesome. This is AI coaching really working on both ends: the feedback is accurate, so the manager doesn’t have to rewrite it, and you see the impact with the end users.

There are a bunch of features. You talk about the Highspot AI Copilot. There are the ways you do scorecarding, content summarization, auto document generation. I think people are interested who maybe don’t know Highspot. Yes, they can go read the website, but maybe just rattle off a couple of these other new features and how you use AI for customers.

Bhrighu Sareen: Yeah, perfect. One other one I’ll talk about: a lot of organizations have a system of record for their customers, the CRM. They have a system of record for finance, the ERP. But today, they don’t have a system of record for their go-to-market initiatives. It’s super interesting because you have so many different disciplines that have to come together to actually execute an initiative.

One of the initiatives as an example, and I’ll use it to outline the capabilities we’re talking about: during 2022 and 2023, the CFO or the VP of finance inserted themselves into the buying process. We saw more and more sales cycles getting elongated. It was no longer just procurement, where once the solution owner says, “I want to buy this particular product,” procurement does the paperwork, negotiates, and gets the deal done. Finance was like, “Whoa, whoa, whoa. We need to make sure this spend should be happening. Is it correct? Is it worth it?”

So there were initiatives that a lot of our customers wanted to roll out, such as: how do we train our salespeople to talk to the finance people? They do it with all our training. Now, if one of your initiatives is to increase expansion on product line ABC, there are so many pieces that have to come together for that to be successful. One is what content will get used. Is it being used? Is it created? Is it effective? How do we provide training? For example, one training could be financials, like how to talk to people in finance, and then product training on whatever product line you’re talking about. What’s the right sales play to use? Here’s a digital room. All of these are capabilities that Highspot has, and you include them in your initiative.

Then, you want the ability to say, “Okay, all this training that I’m providing, the keywords I want used, are they being used?” How do you check that? The really cool thing is that today, a lot of meetings are getting recorded, and Highspot’s conversational intelligence capabilities, again using AI and other techniques, are not only able to draw out who said what, but also to understand the intent behind it. Now you can have a single scorecard, a system of record for all the initiatives you care about, and the cohorts, because it could be a mid-market initiative, or one targeting enterprise customers, mid-market customers, or commercial customers.

Then, in a single view, you can have a conversation with your CRO and say, “All right, this is the content being used. Here’s how it’s being used.” Your CMO can look at how it’s performing and decide if they need to make changes. You can see the impact of all the training in real time and look at your salespeople. Are they saying the right thing? How are customers responding? Again, it shows up in your initiative, and you can look at individuals, like in the previous example, where you can see, hey, this rep, on these 12 metrics, is doing really well on nine of them, but on these three could actually do a little better.

Highspot is that one unified platform where you go from content to guidance to understanding what’s happening with the salespeople, what skills and competencies they need to improve, and then recommending the training. A lot of platforms out there today can tell you, “Hey, this meeting went well. Here’s the agenda. Here are the topics that were discussed. Here are aspects of the intelligence around the meeting.” That aspect is now a commodity. But when you have to take action on it: “Oh, the meeting’s over now. Send a digital room with this aspect.” Highspot can do that. You want to take another action: “Here’s a set of skills you could do better in.” Highspot can do that. “Now, recommend a training based on the skills they need to improve.” Highspot can do that.

You now want to be able to follow a sales play, because we’ve combined data with the CRM, to say, “Okay, this is a pre-sales opportunity in the financial services business. Marketing has created this template. Let’s automatically generate the document, the presentation that will go out for this particular thing, and bring in pricing if you choose to do that.”

Tim Porter: It’s so cool and illustrative that it’s such an integrative story from the customer’s perspective, and the customer doesn’t necessarily care about technology. They want outcomes. I see this across so many of the companies, even the very early startups: all this time and effort on we’re going to do this initiative, we’re going to run this campaign, we’re going to launch this thing, so we have to get everybody ready. We have to train them. Then, at the end, or as it’s going, you see results, like revenue going up or down, and you can see individual reps hitting quota or not. But the why within that is so hard to tease apart.

Bhrighu Sareen: Exactly.

Tim Porter: So, you can go all the way from how we train folks, to what the actions were, to what ultimately is working, and that way drive more revenue and more efficiency, the two golden things. Every company is like, “How do we get more revenue? How do we get it more efficiently?” So, not only is it an integrative experience for the customer, but you’re using AI in so many different ways.

Bhrighu Sareen: Exactly.

Tim Porter: Call recording and insights, automatically generating content, analyzing it, so that’s pretty neat, too.

Bhrighu Sareen: Recommending content.

Tim Porter: Recommending content.

Bhrighu Sareen: A lot of different aspects.

Tim Porter: Putting it all together. What does that mean for us? We invest in new startups. There’s lots of innovation happening in AI, and one of the exciting things about it is you can build things really fast that have an impact. There’s a lot in your space. There are a lot of cool things happening around the AI SDR and all the different pieces across sales. How are you thinking about all of those startups and the opportunity for them to innovate versus what Highspot’s doing?

Bhrighu Sareen: Tim, I’m going to go back to something you said to me in ’22.

Tim Porter: Uh-oh.

Bhrighu Sareen: It was something interesting you said: there was a realization in the startup world that a lot of companies that got funding prior to ’22 were really a feature, even though they got funded as if they could grow up to become a standalone company. I don’t remember your exact words, but my takeaway was that a number of startups could have very, very good outcomes, but as opposed to being standalone, their outcome would be as a capability that’s part of a bigger product, something like that.

Tim Porter: Yep.

Bhrighu Sareen: That stuck with me because when I look at a startup, especially early stage, if you have a great idea, that’s awesome. Keep going, but know when it is time to be part of a bigger thing. You think it’s a really cool idea, it’s growing well, you’ve found product market fit. Either you have to start expanding into verticals or areas or product spaces that are adjacent, or you have to figure out when’s the right time to say, “Okay, I got to just be part of a bigger product.”

Tim Porter: To connect back to something you referenced earlier, it often comes back to data too, doesn’t it?

Bhrighu Sareen: Oh, yeah.

Tim Porter: Access to data.

Bhrighu Sareen: Yeah, that’s another good point. Super good point, yes.

Tim Porter: You mentioned the big Co’s, like your former employer, they just have so much data across everything.

Bhrighu Sareen: Insane.

Tim Porter: Yes, it’s almost always better to have more data than less data, but it also creates challenges: it’s just super expensive to process all of that. Then you have new companies, where it’s like, do you have enough of a data set? In your case, you said you don’t have that big of a data set relative to someone like Microsoft. And yet, thinking through some of our biggest customers, some of the biggest technology companies, logistics companies, financial services companies, and medical device companies, you have their whole corpus of sales and marketing content: docs, decks, white papers, et cetera. Is there something about having the right amount of data for these needs? Say more about that.

Bhrighu Sareen: You’re absolutely right. People always say, “Hey, get all the data. Get all the data. Get all the data,” but you’ve got to figure out the right sweet spot for the scenario you’re trying to deliver. That’s something we’ve been very disciplined about internally: getting the data, cleaning the data, attaching it, and getting the right insights around the data. Then, just as important, and switching tracks to the point you made, is the user experience. As a startup, you have to decide, “I want to be the single pane of glass where everybody shows up.” But guess what? Teams, Slack, Zoom, Outlook, and Gmail are the horizontal applications where, regardless of whether you’re in sales, marketing, finance, procurement, engineering, PM, or design, you’re going to spend your time. Those are the tools you’re spending your time in.

Then you have role-specific applications. If you’re a designer, it could be Figma or Adobe. If you’re a finance person, you have your own application. If you’re a salesperson, it’s CRM. So, there’s a set of horizontal applications that people spend their time in. The UX part, I think, is another aspect where a lot of companies need to make a decision: are you going to say that your data, your insights, and the AI experiences you’ve generated should only live in your owned-and-operated properties, or is it okay to show up inside Zoom, Slack, Teams, Outlook, and Gmail, where users are today?

Then there’s another question you’ve got to ask yourself: with the new user experiences around these agents and copilots, do you surface your data and your insights there or not? Then, once you figure out that stack of data, insights, and user experience, what do you do about the business model? Let’s say I’m a customer, and you’re here to sell to me. And I say, “Hey, I’ve built my own internal copilot; that’s our company-wide thing. I love this insight around content recommendation,” that’s a great example, “and when our salesperson is sending out an email, our copilot sits inside our email platform, whether it’s Outlook or Gmail, and we want you to plug into that. So, we don’t really need to buy this whole other thing, because we’re not using the scenario inside your product. We just want API access, so we’ll just pay a data transfer fee, and that’s good enough.”

But you’re like, “Wait a second. I ran all the analysis, and I’m delivering you an insight. I’m not just delivering you something over an API for access to data.” These are interesting conversations that we’re going to have to have.

Tim Porter: Let’s put our future hats on here. So much continues to happen in this space with AI. Agents are a big topic of conversation. Salesforce is certainly talking about those a lot and lots of new startups. If you think a year or two ahead, maybe pick a year ahead, what are some of the things in AI you’re most excited about that Highspot will maybe be going to in that timeframe?

Bhrighu Sareen: Last year, if you had asked this question, you would not have used the word agent. You would have used the word copilot if we had recorded it then. This year, you’re calling it agent. Two years from now, I have no idea what it’s going to be called, but the one thing I am confident about is that the speed of innovation, month over month, is amazing. It’s such an amazing time to be in tech. I think every decade we say this is an amazing time to be in tech.

Tim Porter: It keeps being right.

Bhrighu Sareen: Yeah. It continues to be an amazing time to be in tech, because every month, it doesn’t matter whether you’re working at a big company, a small company, or a medium-sized company, or thinking of starting something new, you should ask yourself, as we ask ourselves: how do we take advantage of the technology we have access to, to provide value to that user and the tasks in their day? So, regardless of what it’s called, if we focus on that and on the capabilities where we can deliver measurable return to our customers, and delight, I think those are the two things.

A lot of times it’s like, “Oh, the ROI is huge,” but can you also have delight? A lot of companies focus on ROI. Very few focus on the delight, and we have this unique opportunity. Picture a salesperson with a 200-slide deck, which is really a library, that they have to customize before they go present to the customer. They would spend, I don’t know, an hour or two doing that: putting in the right logo, resizing the logo because it’s not the right size, making it smaller, making it bigger, all kinds of things. The delight of being able to complete that by answering four questions and getting it done in six minutes is mind-blowing. The look on the reps’ faces is priceless.
So, Tim, I’m not sure what we’re going to call it, but I would say focusing on return and delighting the customer is what will matter.

Tim Porter: What I hear you saying is there are going to be ongoing radical productivity gains, being able to do tasks so much faster, and maybe more accurately. Just so you’re not the only one putting yourself out there: I think there is a huge thing to this agent notion, and yes, it’s a newer name, but people will be able to interact with Highspot through natural language, through voice, and the system will complete tasks for them. So, to your point about reducing steps, it might be the same outcomes, but getting there faster, with the system doing more of it autonomously. My guess is we’ll be talking about that at your user conference in the year or two to come.

Well, Bhrighu, thank you so much. Thanks for all you’re doing at Highspot. Thanks for this great advice on innovating in AI and thinking about making the jump from a big company to an earlier-stage company and super excited to see what we’re going to go build in years to come.

Bhrighu Sareen: Perfect. Thank you very much.

Tim Porter: Thanks so much for being here.

Bhrighu Sareen: Tim, thank you for having me. Thank you very much.

Serial Entrepreneur Mohit Aron on Founding, Scaling, and Leading Great Companies

Listen on Spotify, Apple, and Amazon | Watch on YouTube

What does it take to go from engineer to founder, from startup CEO to scale CEO? In this special live episode of Founded & Funded, Madrona Managing Director Karan Mehandru sits down with Mohit Aron, founder of Nutanix and Cohesity, to uncover the lessons behind his success. Mohit shares his unique hiring strategy, how to identify product-market fit, and the power of balancing vision with execution. Whether you’re navigating your first company or scaling your third, Mohit’s advice is a masterclass in resilience, grit, and building a legacy.


This transcript was automatically generated and edited for clarity.

Karan: I have a lot of questions for you, so I’m going to pull up my notes here to make sure I don’t miss anything. If you do it once, you’re lucky. If you do it twice, I feel like there’s a lot of skill. So the first question I have for you is, did you always know that you were built to be a founder? And what do you think the best founders have as far as capabilities, characteristics, traits? What does it take to be one that scales these companies to multiple tens of billions of dollars versus ones that fizzle out? Where do you begin?

Mohit Aron: All right. I’ll say for the first part, did I always know that I am meant to be a founder? Heck, no. All I knew was that I liked to put myself into uncomfortable situations. And that’s what it takes, number one, to be a founder because if you’re just sitting in a comfortable position, you’re not going to be a founder. So, trying to be a founder was just one more attempt to put myself in an uncomfortable position. That’s how I began.

Karan: Maybe they have cryogenic chambers for that. You can run a marathon, but you ended up starting a company.

Mohit Aron: It’s the one thing that I thought: “Let me put myself into yet more ways to make myself uncomfortable.” What it means to me to be a founder is, are you passionate enough to solve a certain problem? I’ve seen people build companies for the wrong reasons. A lot of people build companies for the sake of building a company, or just to be a founder. That’s the wrong reason to build a company. Another reason people build companies is, “Oh, I want to make a lot of money.” That’s also the wrong reason. If you build a company just to make a lot of money, you’ll probably make some, but not a lot. I promise you that. Because building a great company involves a lot of ups and downs. The minute you have a down, you’re probably going to get cold feet and go, “Okay, let me do something else.” Maybe start another company, right? And now, you’ve left a lot of potential on the table.

So if you are really passionate about some problem, then, and only then, do you have an opportunity to really build a great company. As part of building a company, there are going to be lots of ups and downs. Your passion for solving that problem means that you’re going to persist. Persistence and grit are among the key things you need, because you’re going to fall down a lot of times, no matter how many times you’ve done it before. For me, that’s what it means to be a founder.

Karan: That’s great. One of the things that I’ve repeated multiple times is that the probability you build a $10 billion company is inversely proportional to the number of times you state that as your goal. And it’s very true in your case, having known you for almost a decade and a half now.

You made three transitions in your career that are really hard to make. You went from a non-founding engineer to a founder, then you went from a founding CTO to a startup CEO, and then you went from a startup CEO to a scale CEO. Each one of these has its own set of challenges. Walk us through what the challenges were as you made each one of these transitions. What were things you had to leave behind? What were the new traits that you had to pick up?

Mohit Aron: Sometimes employees ask this question: how come the founder has such outsized equity? I used to ask that too. The answer is that, eventually, the buck stops with the founder. Everything that goes wrong is, eventually, the founder’s problem. The employees are each just doing their one thing.

Karan: It may not be your fault, but it is your problem.

Mohit Aron: That is right. You get blamed for everything, and that’s the first big jump I had to make. That as a founder, whether or not it was because of me, I owned it.

Karan: It’s the owner mentality.

Mohit Aron: It’s the owner mentality. That also means that you’re doing things that you’re not an expert at. That was the first big thing I had to become comfortable with. The second thing was hiring. As a non-founder, maybe you hire people in your areas of expertise. As a founder, you’re now hiring people who are not in your area of expertise. You have to learn, and you have to keep them happy, because guess what? If you hire great people, they also have lots of other opportunities. Even if they decide to join you right now, if you can’t keep them happy, there are plenty of companies, plenty of great companies that VCs fund, right? It doesn’t take long for them to jump ship.

So, becoming a people person while still doing what you do best is very important. That’s another transition you have to make. Companies are, after all, all about people, the best people. How do you learn to hire people, especially outside of your expertise? If you are a technical founder like I was, you have to learn how to hire in, let’s say, sales and marketing.

These are some of the things that I sometimes had to learn the hard way. If you’re a non-founder, somebody else set the vision for the company. Somebody else decided what the company is going to do. Now, you’re the one to blame. If it doesn’t go well, it’s on you. Learning how to set that long-term roadmap of what the company is going to do is another thing I had to learn. I had to build a strategy for doing that. That’s the strategy behind my companies.

Karan: Let’s talk about hiring, because that’s a big topic for all of us here. A lot of folks here had to let go of people in the last two years, and now we’re starting to hire again. You mentioned something that has always stuck with me: you are a technical founder, a product architect, yet you were able to hire some amazing go-to-market talent on the Nutanix and Cohesity journeys. I remember pulling aside some of your lieutenants and saying, “Okay, use some words to describe Mohit.” They’d say, “Tough but fair. He’s a hard boss.” The way I interpret that is that you have never tolerated, much less celebrated, mediocrity in your organizations.

What are the things that you look for? When you’re hiring somebody, for example a CRO, for a role you’ve never held yourself, what are you looking for?

Mohit Aron: I’ll lay it out as a generalization. First of all, great companies are built by great people. I’ll say that again. If you have an underperformer in a role, you end up having to do the work for that person, because the buck stops with you, and then you cannot scale. By the way, I will talk about repeatability again and again. It’s all about repeatability. We’ll get to that.

To hire, I literally came up with a hiring strategy. For every role, I come up with what I call a list of competencies you need in the role. For instance, if you are hiring, let’s say, a sales leader, maybe you need the person to have done it before. Maybe the person should have been a VP of sales at a prior company, maybe for a number of years, and maybe the person should come from an enterprise background. Whatever those competencies are, I split them into what I call scorecards. Three scorecards. The first is what I call a pre-interview scorecard. These are competencies you can figure out by looking at the person’s resume or by doing a phone screen, because I don’t want to waste time. A lot of time goes into interviewing people, and I don’t want to waste it if the person doesn’t even meet the basic competencies. So, that’s the pre-interview scorecard.

Then the next one is what I call an interview scorecard. These are the competencies I’m going to test for when the person is interviewed. I don’t test all of them. Maybe I’ll test three or four, and then I’ll have other interviewers test the other ones. Collectively, we form a very data-driven, full picture of the person.
The last one is what I call a reference-check scorecard. These are the things you ask when you do reference checks. In the absence of this, here are some of the mistakes people make. The first mistake is that they’ll interview someone and really like the person, the way the person speaks, the way the person moves. Basically, it’s a chemistry match. But please understand that chemistry match is only one of the competencies you need in a role. You may need some 10 other competencies, 10 other big rocks, if you will. You need to look for those. Unless you have them written down and you explicitly test for them, you’re going to make mistakes. There’s yet another mistake people make when they do reference checks. Everyone knows reference checks are important, but here’s the big mistake: they’ll call up the reference and ask, “Hey, is this a great person?”

The person says, “Yeah, this is a great person. Hire the person.” And that’s the absolute worst reference check you can do. Again, you need to lay out what you need to ask across a bunch of competencies. Maybe the first question is, “On a scale of 1 to 10, would you hire this person again?” Because a 6 is a failing mark. People hesitate to give a negative answer, so if you push them to put a number on it, they’ll say 6 or 7. That’s really a fail. Unless it’s an 8 or 9, I would not hire the person. Similarly: did the good performers in your company value this person? Then, if you want to validate some strengths, “Does the person have a good methodology in day-to-day execution?” Something like that, on a scale of 1 to 10. Everything is on a scale of 1 to 10. You start getting the real answers. For instance, if they say 8, you ask, “So, why not a 9?”
Then they’ll say, “Well, there was this one occasion when they didn’t do well.” Then you can poke into that. But if you just ask, “Is this a good guy?”

“Yeah.” That’s the absolute wrong reference check to do. This is the hiring strategy I use. It has significantly increased my probability of hiring good people, but I would say it’s not 100%. No hiring algorithm is 100%. And even if you hire a good person, they might be good at the time you hired them; a couple of years down the line, especially if your company is growing at 100% a year, so in three years it’s 8X or something like that, the person may no longer be good. People feel really uncomfortable with this, but you have to performance-manage them.

Karan: That’s great.

Mohit Aron: When you hear people say that I’m tough, it’s when a person is not working out. I don’t want a hire-and-fire culture, so there’s a period of time when I’m trying to uplevel the person. And guess what? The person is in pain. My goal at that time is to uplevel the person; it’s up or out for me. Either the person elevates himself or herself, or they’re going to be out. As simple as that. And there’s a time threshold I have for that. That’s it.

Otherwise, I will end up doing the work for the person. Or worse, nobody does the work and the company is not doing well.

Karan: One of the things I’ve always appreciated about the way you manage and lead is the combination of autonomy and accountability that you held very tightly, and I think that’s worked really well. By the way, there’s a survey on all of your phones right now: on a scale of 1 to 10, would you take money from Madrona again? So, we’re going to follow up on that part later.

Let’s switch gears a little bit. Some of the companies here are well past product market fit and, at this point, are thinking about product-market-pricing fit after Madhavan’s talk. And some of them haven’t reached product market fit. I’ve asked this question of many entrepreneurs, and I’ve thought about it myself. What does it actually take to say that we have product market fit? Is it a feeling? Is it a scientifically calculable metric? How do you think about product market fit? When did you know you had it at Cohesity?

Mohit Aron: I’m a B2B person, building enterprise companies, and I have a very crisp definition of product market fit. Again, it goes back to repeatability. Here’s my definition: if an average salesperson, and average is the key, not an elite salesperson, can sell to an average customer, again, not an elite customer, without involving people at headquarters, without involving me or my C-level staff, then you have product market fit.

If you think about it: you cannot hire all A-players as salespeople, so an average person needs to be able to sell your product. You cannot have only elite customers who understand your product; the rank-and-file customers are average. The average salesperson has to be able to sell to an average customer without involving headquarters, because if I’m getting involved in every deal, it’s not repeatable and it’s not scalable. Once you can do that, you suddenly have repeatability. It means that now I can hire tons of salespeople, and they can sell to tons of customers without putting a load on headquarters, and now you have product market fit.
That’s my definition of product market fit. When big deals start coming in without any involvement, without me even stepping into the customer’s headquarters, when suddenly the gong goes off and some big deal happens, I know there’s product market fit.

Karan: That’s great. By the way, if anybody has questions, just raise your hand. This doesn’t have to wait until the very end. I’m sure folks have questions for Mohit, and I’ll pause. Just make sure somebody tells me if hands are raised.

All right, let’s move on. One of the things I observed is that you navigated really well the constant battle that comes up when you pitch for money, or pitch your idea to employees and prospective hires: you have to be focused enough to get the right wedge in the market, but you also have to have a big enough vision to excite people, so they really understand this is a $10 billion company. It’s hard for founders to navigate, because you’re constantly getting feedback that it’s too niche-y, or feedback that it’s too unfocused.

How did you navigate that? Cohesity, in some ways, has replaced multiple companies as a platform, but you didn’t pitch all of that when you raised money; well, you did, but you unraveled those layers of the onion as time went on. Walk us through how you navigated that challenge.

Mohit Aron: Absolutely. First of all, if you want to build a great company, the vision has to be big. If you have a small vision, even if you build the best product, that vision can be copied, and very soon your product will be a commodity. But at the same time, when you have a big vision, you can’t wait tons of years to build out that vision. Customers are not going to pay for the vision; they’re going to pay for the actual product. So it’s very important, in my mind, to have two components.

One is the vision. The second is what I hesitate to call an MVP. I don’t like “minimum viable product.” I think Madhavan was also using minimum viable product. I prefer a minimum lovable product. You need a very clearly defined minimum lovable product that you can actually sell to customers.

The bigger vision is useful, A, for hiring great employees, because after all, they’re joining you on a mission. They don’t want to do some small thing and then have nothing more to do. Second, the bigger vision protects you against competitors. By the time they try to copy your minimum lovable product, you have executed on the vision and moved ahead. And third, it’s also important for the customers. They want to solve a problem right now, but they want to solve it in a way that lets them keep your product for years while you add further value.

Having these two components is key to building a sustainable business. There are plenty of companies that were a flash in the pan: they did well for a few years and then became a commodity. We can go on and on. They eventually had to shut down or got bought for a small price. That’s basically the problem of not having the bigger vision. Or, conversely, some companies only have a big vision but don’t have a well-defined minimum lovable product. Madhavan spoke about feature creep when he was here; some of that is lots of vision but no lovable product.

Karan: By the way, one of my favorite stories about Mohit, and he’s too humble to admit it, is that he’s one of the best engineers in the Valley. Period. When I invested in Mohit in the Series B, it was pre-product, and I remember him pitching that he was going to launch the GA product. This GA product was pretty big, but it was still that minimum lovable product of an even bigger vision. He said the product was going to be GA’d four months later, in October. We were somewhere in June or July. He was like, “It’s going to be October 14th.”

I was like, “All right. Well, I’ll just add another three months to it,” because no product goes GA when the founder tells you it’s going to go GA. And I remember calling him a month into it, in the afternoon, and telling his EA, “I want to talk to Mohit.” He’s like, “Oh, no. He doesn’t take calls between 1:30 and 4:30.” And I’m like, “What the hell? I just invested $20 million in this company.” It turns out that from 1:30 to 4:30 p.m., Mohit would put on headphones and sit there and code. He wasn’t taking calls from investors or customers or anyone. October 14th was the launch day; you launched the GA product on October 15th. And we sold, what, a million bucks?

Mohit Aron: Yeah, in the first quarter itself. The day we GA’d the product, one of the customers we were doing alpha testing with surprised me. He stood on the stage and said, “I’m going to buy it for 300K.” That blew past our target right there. More orders came in the very first quarter. So in the very first quarter, we actually surpassed a million dollars.

Karan: Now, you’ve found product market fit. Your average salesperson is selling an average product to an average customer. What do you do after that? What is the next thing you do as a CEO? You feel it, you see it happening. How do you change the operational cadence of the company right after you’ve got product market fit?

Mohit Aron: Repeatability. I mean, repeatability is a necessary condition for product market fit. I think the company needs to operate very differently pre-repeatability, or pre-product market fit, and very differently post-product market fit.

Karan: You stopped coding for three hours with your headphones right after that.

Mohit Aron: So pre-product market fit, your job is to get to repeatability. Your job as a founder, and the job of the rest of your C-level suite, is to get involved in anything that’s important and get it finished. My job was to finish that code pre-product market fit and get it solid. If there’s no repeatability, there’s no big business. Post-product market fit, the equation changes. By the way, pre-product market fit, your job is also to conserve capital. You’re trying to do more with fewer resources because you want to extend the runway.

Post-product market fit, it’s actually a mistake to be conservative, because now that you have product market fit, a lot of competitors are watching you. They want to copy you. You want to press on the gas. Nobody should be able to surpass you or catch up to you. That’s also where delegation comes in. Use as much leverage as possible. You hire for the right roles, and you delegate responsibilities.

Pre-product market fit, my job is to finish anything that needs my attention. Post-product market fit, it’s to delegate as much as possible. Look, you’re always going to have people who may not be able to do the job they’re assigned. Your job is to be like a symphony conductor, not an orchestra player: you oversee a lot of things. You take on a breadth role rather than a depth role, and your job is to parachute down into areas that may not be doing well, with the goal of coming out quickly. That means you either replace or train the person who’s not able to do the job, or you bring the area to a point where it’s operating at 80% efficiency and the team can take it onward. Not to finish the thing yourself, because you’ve got to watch out for a bunch of other things.

It’s a very different mindset, and you have to make that change. If you don’t change your mindset once you attain product market fit, you’re going to kill the company.

Karan: There’s a really good quote from Aaron Levie, maybe 8 or 9 years ago now. He said, “The job of a startup CEO is to do as many jobs as possible so the company can survive, and the job of a scale CEO is to do as few jobs as possible so the company can survive.”

Mohit Aron: Yep, that’s right.

Karan: I think that articulates what you were saying really well.

Audience Question: I love this theme, so I’m curious. A corollary to that is that a lot of times you’re the hub, and all the spokes come to you to solve problems. What did you learn about finding ways for the spokes to solve problems themselves, so that everything didn’t have to go through you, the hub? That’s just not scalable, especially as you get bigger. Am I even thinking about it the right way, as you upleveled yourself and tried to work with a very capable executive team rather than solve every problem?

Mohit Aron: That’s a great point. Rule number one: when you bring a problem to me, I also want you to bring a solution, even if the solution is wrong. I will not give you the solution otherwise. It’s too easy for them to just bank on you. If you present the solution and keep doing that, they become more and more dependent on you.
Even if the solution you bring is wrong, that’s okay, but you need to bring a solution. If you do that, very soon you’ll find that they start solving problems themselves.

Karan: That’s excellent. I have to use that. We have to use that internally, too. Let’s switch a little bit to generative AI. You can’t go through any conference, any session, any meeting without talking about it. We’re seeing a lot of investments in the infrastructure layer and the model layer. What’s your view of where we are in AI today? What do you think the world believes about AI that you don’t think is true?

Mohit Aron: I have a balanced view on AI. On the positive side, I think it’s a mistake for any company to ignore AI. If you’re building a company and you don’t have AI as a component of it, that’s probably a mistake. Every company needs to take AI seriously. AI is here not just to stay, but to change most businesses in a fundamental way. I’m not saying anything outlandish there. But on the other side, I also believe there’s a lot of AI-washing going on. Companies attach .ai to their domains and project themselves as AI companies when they really don’t have any deep AI in them. A lot of people I know are basically doing nothing more than, after the popularity of ChatGPT, attaching some sort of chatbot to their product and calling themselves an AI company. Or they have some twist on RAG, retrieval-augmented generation, extracting information in some way, and they call it an AI product. So, there’s a lot of that going on.

Look, the fundamentals of building companies have not changed. You need depth in your product. You need a significant competitive differentiator to build a sustainable business. Even with AI, you need all of that. If you don’t have these things and you’re just slapping on AI in one of the ways I described, it’s not a real AI company, in my mind. There’s a lot of that going on. In the last two years, a lot of companies raised funding at big valuations calling themselves AI companies, but then they couldn’t live up to the promise, and their valuations came crashing down. I think that’s where a lot of the negative returns you see come from.
So, it’s a balanced view. I think AI is a huge tool we now have at hand to really push technology forward. But it’s irresponsible to just throw the name AI out there and pretend you’re an AI company when you’re not. The fundamentals of doing business have not changed.

Karan: Any more questions? Otherwise, I’ll keep going. Somebody there.

Audience Question: Thank you. When you talk about vision and minimum lovable product, that’s something I sometimes struggle with, because some visions, like Disney’s to make the world a happier place or whatever it is, are ambiguous enough to play in pretty much any field. Then I hear other visions that are almost like the product on steroids. You know what I mean? It’s like, “Our vision is to make the best banking software in the world,” which doesn’t sound that awesome.

I’m curious about how you set the vision at the right altitude to not be, maybe, so broad and nebulous that no one really believes it but also not so narrow that it’s really just a supersized version of what your product already is.

Mohit Aron: Yeah, great question. I think the answer comes from asking what additional problems you can solve. Otherwise, it’s a big abstract thing, the best thing since sliced bread or whatever, and it means nothing. Your minimum lovable product is solving some problem for the customer. As you build toward your vision, what additional problems are you solving for the customer? What additional things are you enabling? If that keeps growing, then you have a bigger vision; otherwise, your vision statement probably doesn’t make any sense. You keep adding incremental value for customers, and then you have a bigger and bigger vision.

Said another way — you better have a roadmap beyond your minimum lovable product. What’s the next thing you want to deliver? When you talk about that stuff, by the way, with customers and they see the additional value you’re going to bring, that’s when they buy your vision. Otherwise, all these buzzy words mean very little for the customers.

Karan: Mohit, you’ve obviously managed to build great companies, but you’ve also raised money with Tier 1 investors along the way. You’ve got Sequoia, Accel, ourselves, Battery, a whole bunch of venture investors who know you and want to invest in you. As you think about picking investors and building a board, what would you advise the founders here as they go on to raise their follow-on rounds, and sometimes their first round? What should they be optimizing for?

Mohit Aron: The first thing I would say is be very clear on what you need from your investors and your board members. I’ll tell you what I look for. Number one, I look for investors and board members who will stick with the company through not just the highs. Everyone sticks through the highs. But what about the lows? Even in the highs, sometimes when things are riding high, they’re like, “Oh, let’s make an exit. This is the time to get the money back.” When it’s a low, it’s, “Oh my God, let’s sell the company before it gets too low.” So, do they have the stomach for the lows? That’s number one for me.
Number two, you’re hopefully building a company that will become much bigger later on, and this is not the only round of funding you’ll raise. So, will this person stand up to his or her partners in future rounds, thump their fist, and say, “I believe in this company”? There are always going to be naysayers who push back: “Hey, let’s not fund this company at this bigger valuation.”

Karan: Nonbelievers, yeah.

Mohit Aron: Yeah. So, can this person actually stand up and say, “No, I want to fund it again”? That’s a huge benefit. If one of your existing investors stands up in future rounds and says, “I believe in this company. I will put money in again,” that’s a huge incentive for people who are not investors yet to come write checks for you, because your existing investors believe in you so much.
If a person doesn’t have that conviction, I’m left with existing investors who are not standing up to say they want to invest. Why would a new investor come in? The key to raising big rounds, and I’ve raised some pretty big rounds, is that your existing investors have the cojones to stand behind you.

Karan: You had the quality problem of saying no to people more than most people.

Mohit Aron: Trust me. When you raise big rounds, everyone is jittery, right? That’s when the rubber meets the road. I thank Karan for that. Karan’s always been very good at it. He always fought his partners: “I’m going to put money behind this.”

Karan: I took a lot of flak for that in the early days, but I think it worked out for everybody. There was a question there.

Audience Question: This is at the intersection of vision and product-market fit, like how you set the vision. You said something I wanted to pick up on, which is that you solve additional problems, especially as fundamental technology shifts are always happening, and nowadays with AI. How do you think about tackling existing workflows versus new workflows as you’ve built multiple companies?

Mohit Aron: Yeah, great question. The first thing you need to do, and this is one of the things I always do when I start companies, is build a hypothesis of what it would take for the company to succeed. Haven’t you run into people running companies or products that have already failed, and said, “What were they thinking? I could have told them this was not going to succeed”?

It’s actually amazingly true. So I thought, why not go back to the time when you’re conceiving of the company and do that exercise then? Literally, I have a framework where I write a hypothesis. It consists of four parts. The first section is the elevator pitch. If I literally run into my prospective customer in an elevator, how am I going to convince that person in five minutes? Number one, it needs to be a real problem that the customer cares about. Number two, I need a solution that solves the problem in a good way. Number three, the product needs to be differentiated. Those are the key components of an elevator pitch. That’s section one.

Number two is the minimum lovable product. What does it look like? The Fire Phone turned out the way it did because they didn’t have a crisp definition of a minimum lovable product. They kept throwing in features. You need a very crisp definition of a minimum lovable product. Number three is why the company would succeed. What are the trends that are helping you? Number four is why it would not succeed. What would the naysayers say? What technology shifts might happen in the future that would disrupt this? You write a rebuttal against each of them. And then you write this hypothesis down. I literally do it for every one of my companies. Because I might be drinking my own Kool-Aid, I share it with people who are objective and knowledgeable. If they say, “Okay, this is a solid hypothesis,” then I have a company.
What I’m trying to tell you is that upfront, I’ve thought about these shifts. I remember Doug Leone from Sequoia, he asked me when we did Nutanix. After Nutanix achieved the product market fit, he asked me, “What has changed since your initial vision?”

I’m like, “Zero. Nothing.” At Cohesity, I shared the initial deck I used to raise my Series A funding with some of my employees. They were surprised to see that 10 years later, we are basically still building on the vision I laid out 10 years earlier.

Karan: That’s great, yep.

Mohit Aron: These things are thought through upfront. Now, some shifts are going to happen that you can’t foresee, and that’s where you put your best people together. You collaborate and align: “Okay, this is how the vision needs to shift to accommodate these technological shifts.” If you do that, you not only have a hypothesis at the beginning, you also keep building and improving that hypothesis as the company goes along. Then you have a fair chance of foreseeing upfront the technological shifts that might come and disrupt what you’re doing, and of navigating around them.

Karan: We’re almost out of time. I have one last question for you. I’m a fan of Patrick O’Shaughnessy. I don’t know if you’ve listened to his podcast Invest Like the Best. I know Matt does, and I do. He ends all his podcasts with a question that I love, so I’m going to ask you that, which is what is the kindest thing that anyone’s ever done for you?

Mohit Aron: Look, I’m blessed. You don’t get here without kind things being done for you. If I may pick a few: the kindest things people have done for me came when I was in a tough spot and nobody believed, and the few kind words that were said meant a lot at that time. And when the going gets tough and I’m having a hard time raising funding, backers like Karan who backed me when things were tough, that’s very kind.

So look, you make mistakes, you fall down, you get up again, you run again. When you fall down, who’s there to actually show you kindness is what matters.

Karan: Thank you for that. Thank you for the kind words. Thank you for your partnership. Thank you for your leadership. Most importantly, thank you for your friendship over 15 years. Thanks for being here today.

Mohit Aron: Thank you for having me here.

Terray’s Jacob Berlin on The AI-Powered Future of Medicine

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Terray Therapeutics CEO Jacob Berlin returns to Founded & Funded after three years to share how the company has scaled from an ambitious startup to an industry leader in AI-driven biotech. Learn how Terray’s proprietary hardware, combined with the world’s largest chemistry data set, is powering new discoveries in small molecule drug development.

Jacob also discusses how the company’s $120M Series B fundraise will prepare their internal programs for clinical trials and further enhance their AI platform. He shares insights on where he sees the future of AI and drug design and dives into how founders can balance internal innovation and high-profile partnerships.

Don’t miss this deep dive into the intersection of AI, biotech, and innovation!


This transcript was automatically generated and edited for clarity.

Jacob: Thanks, Chris. Super fun to be back here. It’s pretty wild that it’s been three years and everything that’s gone on, and I’m super excited to be back and talk about Terray some more. One of my favorite topics.

Chris: So, since it’s been a while, can you give me the brief overview and maybe even the elevator pitch of what Terray does and what’s happened since then?

Jacob: Absolutely. Terray is a biotech company focused on autoimmune disorders and immunology down in Los Angeles. We bring our unique proprietary hardware and experimentation to enable AI-driven small molecule drug discovery in a way that’s impossible without it. We’re deploying it for that internal pipeline on autoimmune disorders and also for our partners across a range of indications.

Chris: That was a good elevator pitch. It’s succinct. Before we get into everything, you mentioned small molecules there a couple of times. Could you explain why small molecules and why that’s an important thing to be working on?

Jacob: Small molecules are the medicines you’re probably all most familiar with: the pills in your bottle that you can take by mouth and carry with you when you travel, probably some of the oldest types of remedies available to humans. At this point there have been incredible advances, so there are other classes. We now call those small molecules because there are large molecules, which are typically antibody-type or protein-type therapies, made by biology or analogous to biology. Then there are, of course, also cellular therapies, genetic therapies, and others on the scene. For us, we’re exclusively focused on small molecule therapies, which remain the world’s most abundant, most impactful medicines. There are still a lot of opportunities to develop new and better medicines in that area.

Chris: I think about it as: in many cases, not all, if you could develop a small molecule therapy that worked equally well, it’s better for patients and very impactful to deliver your medicine in that form factor.

Jacob: 100%. All of the different forms of therapy that bring relief are incredible, but they have really different levels of complexity in terms of manufacturing, distribution, and investment. You can see it today, for example, in genetic medicines, which are really amazing, lifelong cures to previously uncurable diseases, but they take many months per patient and millions of dollars in cost. Although I don’t know if you can make a small molecule analog to that one in particular, as you move down to a pill that you can carry in your pocket and take for your disease while you travel the world and go out with friends, that’s obviously an advantaged modality, provided it’s safe and effective. I do think small molecules remain the medicine of choice when you can make them.

Chris: There’s a funny story that I wasn’t planning on sharing, but I’m going to. It’s something you shared when you last came and spoke to Madrona to give the update, which I think was close to a year ago now, when we were working through the pitch deck and talking about how the Series B was going to go. I remember this special appendix that you brought: here’s where we were, and here’s where we are. The “where we were” page had one chart with a couple of dots, and that was it. The “where we are now” page had, I don’t know, as many dots as you could fit while still keeping it legible, and that was just a subsample. That says a lot about the scale you’ve built into this company since then.

Jacob: Yes, it’s really incredible. When we chatted last time three years ago, by the way, we had zero measurements the way we count them today, because the first three and a half years of the company were about taking that experimental innovation, the proprietary hardware, from an academic invention where my co-founder, Kathleen Elison, was pushing buttons on the syringe pump. We were doing one microarray a week, it was all artisanal, and we’d make maybe 32 million measurements that week, which was a lot. It was huge, way more than I’d ever done in my career, but nothing like what we do today. When I last came in, we were at the moment where we had industrialized it, and we had this incredible array of automated systems making and using those arrays, measuring those data sets, making follow-up molecules to put into downstream assays, and following them through the drug development pipeline.

We were, at that moment, though, at the beginning of our pipeline journey and the beginning of our AI journey, because we didn’t yet have the data set to drive those two. I say zero because we had, of course, made many, many measurements before that day, but that was the day we locked to a consistent format (there are a bunch of nerdy technical details about what we decided to do with certain elements of the science on the chip that nobody needs to know, and I won’t go into them). Once we locked it, we’ve since measured over 150 billion raw measurements on the interaction side, which map to 5 billion unique measurements, because every data point we use in our modeling is measured about 30 times and replicated to make sure it’s a high-quality, precise data point.
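The replication arithmetic Jacob quotes can be sanity-checked in a couple of lines. This is a minimal sketch using the approximate figures from the conversation; the ~30x replication factor is the rough number he gives, not an exact specification.

```python
# Sanity check of the replication arithmetic described above.
# Figures are the approximate numbers quoted in the conversation.
raw_measurements = 150_000_000_000   # ~150 billion raw interaction measurements
replicates_per_point = 30            # each data point measured ~30 times

# Replicates of the same point collapse into one unique, high-precision value.
unique_points = raw_measurements // replicates_per_point
print(unique_points)  # matches the 5 billion unique measurements quoted
```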

In the intervening three years, we’ve made 5 billion unique measurements, which has let us build unique generative AI tools to design small molecules. Most importantly, that let us really move our pipeline and our partnership work. We’ve hit a whole number of significant milestones between there and here. Looking back is always, I don’t know, shocking, exciting, terrifying for a founder. You look at the old deck and say, “I’m so glad somebody funded that. We’re doing a lot better today.” That’s true here, too. What you all backed in the beginning was very much the vision and the core of what we do, but the realization has come in the last few years. It’s been exciting.

Chris: It’s a fun one for me to think back on, because the first time we met in person was a couple of weeks before the COVID shutdown. Since then, even dealing with that, it’s just a different company. It’s fun to have this conversation now that you’re much more of a scaled-up founder and leader of a company. You’ve learned a lot of these lessons, and I want to jump into some of those. One of the major milestones you recently hit over the summer and announced in the early fall was the large $120 million Series B fundraise. We’re all super excited about that. I think it’s a pivotal moment for the company, but I’d love for you to share a quick overview of what it’s going to get you. That’s a lot of money. People think biotech companies raise a lot of money, and you’ve got big plans for it. What exactly will it enable?

Jacob: It’s really incredible. I think all the founders out there know the saying, “The first dollar you raise is the hardest.” I think that’s still true, but the markets have certainly been, as everyone involved with biotech knows, a little bumpy. Maybe all times are interesting times to start a company, but we started right before COVID, ran into all the operational challenges you mentioned, then one of the greater boom markets for biotech funding and progress, and then one of the more devastating bear markets on its heels. We’ve stayed really focused on execution from day one through all of that and into today. We’re really excited, because my life’s mission for decades now has been to cure somebody, and we’re finally coming up on that. Owing to the nature of the market, we’re probably not going to cure just one person. Hopefully we’ll cure many, many people.

This money is so important to us because we’ll be bringing our first programs into the clinic from the first wave of targets we worked on. One of the unique aspects of the scale and integration of our experimentation and computation is that we can work on far, far more targets than the average biotech company of our size, then pick out the best opportunities and move those forward. We have the first couple of those headed into the clinic out of what we call our first wave of targets. We have a second wave behind that that we’re also super excited about, and we’re continuing to invest in the platform, both for our own pipeline progression (I should note everything internal is in autoimmune disorders and immunology) and for delivering for our partners.

When you think about the milestones from three years ago to today: not only did we industrialize the technology, generate the data, and build these AI tools, but we moved those first programs of our own toward the clinic, and now we’ll move them into the clinic. We started to build the second wave and the programs behind it, and we scaled partnerships with Calico and BMS. Now we have a co-development deal with Odyssey, another exciting opportunity to use our technology advantage to deliver medicines that matter. And, as you know, we just signed a deal with Gilead to bring the same approach forward and solve really challenging problems for them. We want to become the AI-driven small molecule provider of choice for large pharma, and it’s been incredibly gratifying to see that happening. We’re going to continue to invest in the platform and be best in class at the intersection of large-scale, precise, iterative experimentation and AI, but the primary piece is moving that pipeline into the clinic.

Chris: The partnership velocity since you started signing these partners has been pretty incredible. We’ll come back to that, because it’s a hot topic for companies across the board. Something you mentioned is also a hot topic: the platform-versus-product debate that seems to rage on a cyclical basis in biotech investing, whether it’s coming from the investors or the companies. Terray is very much building a platform. You certainly have lots of products, but there’s a big platform vision there. It would be great for you to talk for a second about what being a platform means in biotech, why you have conviction in that approach, and why you think it’s the right path for Terray.

Jacob: It’s probably one of the existential and recurring questions in our industry: are you a platform or an asset company? The market certainly moves back and forth with its own opinions about which one is in or out, but we’ve always been drawn to trying to solve the problems that are unsolvable and really transforming the cost, the speed, and most importantly by far, the success rate of small molecules in development. Almost everybody knows drug discovery is really hard. Even with all the incredible expertise and tools available today, the vast majority of molecules fail even after reaching the clinic, and that doesn’t count getting from the idea to the clinic. Overall, it’s clearly a very, very hard problem, and there is always an urgent need for better approaches that give you a transformative opportunity to bend the whole curve and transform what can really be done out there.

For us, we came at it from that macro, good-for-the-world, good-for-value, good-for-our-science approach, and we’ve been a platform company since day one. The core innovation was transforming how you measure chemistry, which then let us transform what you can do on the AI compute side. We face the same tensions, though, because our product is not our microarray chips, and it’s not our AI model; it is the molecules. The back end is, of course, assets, the molecules themselves, and as many listeners know, the market has moved toward the asset world and toward the clinic. That’s why we do the partnership work, and why we have a diversified internal pipeline. We feel very strongly that the right way to monetize, realize value, and deliver maximum impact from a platform is to translate it into as many assets as possible, leveraging both private capital and partner capital and resources to move multiple programs across different opportunities.

There’s room for both. There’s a lot of patient need you can address by working off a singular asset and finding a clever way to do it. But there’s also a lot of room for transformative new approaches. You see that right now: AI-driven small molecules and large molecules have been a huge topic of interest because they offer the opportunity to really transform success rates, which would be worth millions of lives and billions, even trillions, of dollars if you really can change the whole thing.

Chris: One thing you’ve mentioned to me really since day one, and it involves this platform strategy, is creating a long-term company, versus, “Oh, you can build an asset for maybe three to five years, and that’s going to look really great for a pharma to acquire.” Either I made this up or you said it to me, but I remember asking you, “Hey, what are you going to be doing 10 or 15 years from now?” Your answer was running Terray. I think that’s a great answer, but it says a lot about how you’ve thought about the vision, what you’re building, and where this can go from a true long-term perspective.

Jacob: I guess this comes from the quintessential too-naive-to-know-you’re-wrong entrepreneur’s plan. I’m eyes wide open that the number of new biotech companies that transition to full-fledged commercial-scale pharma companies is, I don’t know, one a decade, one every couple of decades, but I really think Terray can be that one. We’ve always been focused on, and now I sound like a broken record, maximal impact to patients and, of course, the maximum value that comes with that. To me, that has always meant realizing the inherent advantage of the platform at scale and bringing those medicines all the way through, which means building the whole thing.

Obviously, in our industry, sometimes people show up along the way and make offers that everyone says yes to, but I think you’ve got to plan for the stuff you can control, for the strategy you can execute yourself, and for the strategy you think is overall most valuable and most likely to succeed. For us, literally since day zero, that’s been: we’re going to make and sell our own medicines one day, a whole bunch of them, and we’re going to change the way the world does this. We’re part of the way along that journey now, which is really exciting, but we still have a long way to go. As always in our industry, the timelines, capital costs, and scientific risk in discovery and development remain large, but we’ve put ourselves in a position to execute.

Chris: I’ll say, for me, it’s super fun to work with a team like Terray, and with you and Eli, your co-founder, because of that true long-term view. It’s really differentiating. We’ll come back to a couple of your thoughts on the business-building side, but I think it’s a good time to take a deep dive into the AI and the science going on here. Given that it’s maybe the hottest topic in biotech right now, besides the GLP-1 obesity drugs, we’ve got to talk about the AI you’ve built. You said this before, but I think it’s really interesting: the AI came a little after the data generation, but since then you’ve built a ton of it. I’m curious how you think the small molecule AI world differs from the protein design world or the antibody world, and what you’ve done internally to build out this AI infrastructure.

Jacob: Now you’ve wandered into my favorite topics, although I love talking about everything. I can’t resist my origin story or anything science; you risk sending me down the rabbit hole for the rest of the podcast. Come for the AI discussion, stay for the enantiomer discussion that follows in the organic chemistry section. In all seriousness, it really follows the data. I say this a lot, but I think about the world this way: AI is transformative when it rests on top of the right type of data, which means those three pillars: large, precise, iterative. In every case where that data comes about and is transformative, it rests on top of hardware innovation that compresses the cycle time, transforms the cost exponentially, and allows you to realize it. The example out in the world that’s easy to pattern-match to is digital photography.

You go from old photography, where you’d probably never have enough images to build DALL-E or Sora or facial recognition, to digital photography, where there are millions and billions of images and you can train the models, retrain them, and refine them. As people probably discuss on other podcasts, you teach it what a cat is and what a dog is, and you need all those images to train it on which one’s a cat and which one’s a dog, before you can ask it for, “I’d love a picture of my kids cuddled up with a bunch of cats.” Now it knows, and it makes you a picture. The same problem exists in our space. That’s why I did my postdoctoral work, why I ran the lab, and why I started Terray: chemistry data is hard to get.

And traditionally it was me and people like me making molecules and putting them in a flask or on a 96-well plate or a 1536-well plate (yes, I know, those are different well formats), then measuring them, and it’s just slow. It takes a lot of time to make those molecules. There are some interesting automated chemistry approaches, but mostly that problem has remained very stubborn. Making molecules at scale and putting them into assays is still pretty slow and still pretty expensive. Where AI has come into our world is where there have been curated, high-quality data sets: AlphaFold, of course, where the government fortunately curated a large crystallography database, and also the enormous sequencing databases that came about thanks to next-generation sequencing and the plunging price and turnaround time of sequencing.

That has done transformative things for AI around protein folding and large molecule design, and I think that’s why you’ve seen AI be most successful in biotech first in large molecules. The question we’re tackling is, “Great, now I want to put a small molecule in there.” That data set has been smaller. The entirety of public data is maybe a hundred million measurements spread across a variety of different assays. We’re really convinced that the unlock for AI there is the data: measurements of small molecules interacting with proteins at a large enough scale, across enough targets and enough molecules, to build generalized models that can quickly solve problems that humans couldn’t before.

That’s what we’ve been after, and that’s the sequence. We always knew our data would fit with these large computational approaches (back then we called it ML, now we call it AI), because we generate too much data. We generated 5 billion data points in the last three years. What human is going to flip through that and do anything? But what to build? We needed to get the data in first, and now we’ve been able to build really transformative tools, the first of which was COATI, which actually doesn’t depend on the data. It’s the large language model of chemistry that we built so that we can work with our data computationally and smoothly traverse chemical space to optimize molecules.

Chris: Can you explain exactly what COATI unlocks, maybe what it is, and then what it unlocks for doing AI in this world?

Jacob: Oh yeah, that’s an easy one, because COATI is a South American raccoon, so I think that pretty much wraps it up. But in all seriousness, in addition to being a South American raccoon, it is our large language model of chemistry. For any of these AI applications, you need a mathematical space within which the optimization takes place; you need to take the real thing you want at the end and convert it into math, if you will. That’s what COATI does for chemical structures. Chemical structures can be represented in a variety of ways. One is as a three-dimensional object, which is probably the closest to what’s really going on in the body: a series of atoms and bonds that make a three-dimensional shape. But they can also be written down in an abbreviated notation, like a word.

You can write them down as both, and people use them interchangeably in different applications in our industry, but neither of those is a math representation. What COATI does is a contrastive optimization: you train on those two representations to build a common math language that can translate back and forth between either of them. I think of it as a chemistry map. It’s basically mapping how similar or different molecules are in a math space, so that if you optimize within that space and move close by, the molecule looks similar, and if you go far away, it looks dissimilar.
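The contrastive idea Jacob describes can be sketched in a few lines. This is a generic illustration of contrastive (InfoNCE-style) training, not Terray’s actual COATI architecture or loss; the embeddings here are random toy data standing in for the outputs of a 3D-structure encoder and a text-notation encoder.

```python
import numpy as np

def info_nce_loss(emb_a: np.ndarray, emb_b: np.ndarray, temperature: float = 0.1) -> float:
    """Contrastive loss: row i of emb_a should match row i of emb_b and
    mismatch every other row. Training minimizes this, pulling the two
    representations of the same molecule to the same point on the map."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                  # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # matched pairs sit on the diagonal

rng = np.random.default_rng(0)
view_3d = rng.normal(size=(8, 16))                     # toy "3D encoder" outputs
view_text = view_3d + 0.01 * rng.normal(size=(8, 16))  # nearly aligned "notation encoder"
loss_aligned = info_nce_loss(view_3d, view_text)
loss_random = info_nce_loss(view_3d, rng.normal(size=(8, 16)))
```

When the two views agree, the loss is low; when they are unrelated, it is high — exactly the pressure that builds a shared “chemistry map” out of two unrelated notations.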

Getting that right took a lot of work, and the team did an incredible job. It was published recently, it was on the cover of JCIM, and we open-sourced the first version for people to work with. It’s done tremendous things for how you can translate structures back and forth into math and then move around to optimize. That’s just the first building block. If the data’s the foundation, the COATI large language model is the next piece that allows you to traverse, but then you need the AI module, if you will, that combines those two and moves around to solve problem after problem.

Chris: What’s interesting to me is you haven’t been able to take off-the-shelf machine learning or AI tools, drop them into the workflows, and say, “Hey, go to work on our data. You’re going to get great molecules out of this.” You’ve built this whole AI infrastructure, including the data infrastructure, from scratch, alongside some partners like Snowflake and NVIDIA who have been part of this conversation. I’m curious how you think about the reasoning for doing that, why we’ve had to build all of the models internally, and what that does for our scientists.

Jacob: It’s been an incredible journey, and one where, when we started, I don’t know if we knew how much of each of them we would do. This part’s always stressful, because so many people have gone into making it possible: Narbe, who’s our CTO, has been a driver, along with John and the entire ML team, and Kevin and the whole data team. Because as you mentioned, you have to first be able to get at the data. Our workflow is very custom. We’ve obviously invented proprietary hardware. The way we read it is with imaging, and so we generate over 50 terabytes of images a day that we need to convert into the numerical values that drive the models. That was a whole process we built from scratch, because nobody else made exactly what we made, and nobody processed it like we needed to process it.

Obviously, we stand on the shoulders of giants like all scientists, and there was stuff we borrowed from, but we built our own because we needed to be able to do that really quickly and efficiently. We work with AWS and Snowflake because we generated a data set the world hadn’t seen before in the early years of Terray. We want to not invent stuff; we want to use stuff off the shelf that’s cheap and works and does what we need, and move on to other hard problems. But when we showed up to vendors and said, “Hey, we have 5 billion measurements coming up soon, can we put them into your stuff?” they said, “Chemistry measurements?” And we’re like, “Yeah.” They’re like, “Ooh, no, that’s a lot.”

So instead, on the flip side, we worked with Snowflake, which is obviously a service built for data sets that large. We saw the same thing with the foundation model of chemistry. We tried every model that was available out there, and when we applied them and used the power of our unique data set to ask whether these models were really connecting molecules the way we want to connect them for optimization, we got some suggestions we didn’t think were that reasonable. I think we’ll come to this, but it’s one of the real keys of having expert humans in the loop when you build and use these models, because these were answers that our medicinal chemistry teams were immediately like, “No way. This is off its rocker, it’s way off.”

We had to go build something that constrained it and gave answers that made sense and really allowed us to optimize molecules. The same has happened with the generative side of the AI problem. The team’s done incredible work building all the way from the ground up: from the data processing, through the foundation model of chemistry, to the generative and predictive models that go into designing molecules to solve the problems. We built it all because we couldn’t find what we wanted out there.

Chris: It’s interesting. I joke, and we’ve joked before, that biotech companies are obviously not software companies in many senses, but on the other hand, you’ve pretty much built an entire software company, from the bare-metal infrastructure up, within a biotech company, and it’s about equal in size to the science that’s going on. It’s a fascinating change in how companies are built.

Jacob: We use as our slogan “everything small molecule discovery should be,” and we picked it intentionally because we feel really strongly that you can’t be all one thing anymore, that you’re at a huge disadvantage if you’re only compute or only traditional discovery. Our intersection includes compute, so AI, ML, and software are a huge piece of the business, but also the experimental side. We have a huge investment in building robotics and automation, large-scale data with precision, and the iteration between them. Like I said, I repeat myself a bunch, but it all goes in the service of the pipeline and the preclinical development.

We still have the teams that you would identify anywhere else: med chem, your biological assays, cell assays, and everything else that goes into that. I think you need all of the teams working really closely together. The last piece is that we also essentially have a little mini manufacturing business, in that we make our proprietary microarray technology by assembling a variety of different things and building our custom libraries in-house. We have four businesses under the hood at Terray, but they all go together to drive the one singular value driver, which is the outcomes. I don’t think you can do just one of them and be successful in the way that we are.

Chris: I tell people at Madrona all the time, and other people, that if they find themselves in East LA, they should visit Terray, because it’s just visually so striking: the amount of automation, hardware innovation, and robotics that’s just there and required. Every time I go and take a peek, it blows me away. We’ve seen that when the New York Times visited, for example, and with other investors: you have to see it to believe it.

Jacob: It is really different. As my brother Eli, my co-founder, advertises it, it’s not just a lab tour, although it is just a lab tour, but it’s an awesome lab. It’s one of the other milestones: over the last three years, we’ve really lived the startup physical-footprint journey. It’s been incredible in that same look back. We started the company in a local incubator, at a shared bench, with what was essentially a shared closet that we did our imaging in. Last time we did the podcast, we had moved from there and matured into a step-up space, working in a couple of suites in a shared building. But we’ve been really fortunate since then to have moved into a 50,000-square-foot headquarters in Northeast LA (Monrovia, for those in the know; great spot). And we’ve really been able to build our workflows the way we wanted into the physical footprint of the building.

If you come to Terray, as our partners or the New York Times or others have, you see this whole first floor where the automated imaging and liquid-handling systems are running, using these little microarray chips to make millions and billions of measurements, a whole field of them. It is strikingly different. The interesting thing is that upstairs then looks in many ways like a canonical biotech drug discovery company, although with a lot more robots in the hoods than average. You can see and almost feel how the pieces fit and work together, except perhaps, as we talked about, for the AI piece, where you just see really smart people working at computers. But you see the impact as you move upstairs and downstairs and see the molecules that are being made and tested. It’s pretty incredible watching it all come together. I encourage anyone who’s interested to reach out and let me know. We’d love to show people what we’re doing at Terray.

Chris: It’s a pretty great tour. I’m lucky I get to go all the time, but it’s pretty fun. I want to get into a couple of your business theses and lessons you have to share. But before that, circling back to the AI side, I think one of the things that Terray is really good at is predicting completely de novo structures. When I say that, I contrast it with a bunch of other AI platforms, which are very good at predicting things, especially binding molecules, but the predictions look 99% similar to known binding molecules. That’s impressive in itself, but it’s very different from how you’ve approached the problem and how you think about this pure de novo, or unrelated, structural prediction. Talk a little bit about why that’s hard and why you think that’s also the way forward.

Jacob: It’s interesting. In my mind, this one connects back to the platform-versus-asset question. In the same way that there’s value to a company being wholly invested in one medicine and bringing it through successfully, there’s value to patients and to the ecosystem in taking previously known molecules that either work or almost work for something and making them better. There are innumerable examples of that, including the statins everyone knows and many of us are taking: it wasn’t the very first one that became the most ubiquitous; the most ubiquitous one was an optimized version of an earlier molecule. Those are exciting problems and problems that we can tackle. But I think the most exciting, and the biggest benefit for both human health and value, is solving the problems that just can’t be solved out there. As you mentioned, that would be what we call de novo, where nobody knows where the molecule is or what it looks like.

It’s out there in COATI’s chemical-space map somewhere, but goodness knows where. The key then is to be able to do your own measurement to get a starting toehold where there wasn’t data before. As I talked about, AI always needs data. I don’t think it’s any surprise that AI’s first impact on small-molecule design has been predominantly in areas where there was already data: working around known molecules, patents, things that were out there, and making better versions of those, which again are very valuable, have impact, and are often, honestly, much quicker to bring to the clinic because of the path that’s been trod before you. We work on a different approach, which is: bring us your hardest thing. I think this is why you see the partnerships with large pharma, because they’re bringing us, of course, the things that they can’t do themselves; otherwise they’d do them.

We’re out there working on very hard things where often there is no known starting point. We do this for our internal programs as well. That’s why we use our sequential, iterative process, where we use our platform to measure very, very broadly across chemical space, 75 million plus molecules. But chemical space is obviously effectively infinite, so we’re very sparsely sampling, looking for a starting point: where can we possibly get going on this? That gets you going on the de novo problem, but it can’t possibly give you enough data for the model to be impactful. We follow that with a design-and-test cycle, where we build a new library of millions of molecules around that area of interest, such that we massively enrich the models with a lot of local knowledge around the area where we now know there’s an answer.

That sequential build lets the model both broadly understand chemical space (mostly what doesn’t work) and then enrich into what does work, and become essentially an AI co-pilot for the med chem team, where they’re able to ask it questions as they go about their work and think, “Hey, I need this molecule with these improved properties, where should we go?” I’m super excited about it. As you can tell, there’s nothing more exciting than finding a totally new answer to an intractable science and health problem. I think our approach really gets it done.
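The broad-screen-then-enrich loop Jacob describes can be caricatured in a few lines. Everything here is invented for illustration: the “affinity” function is a stand-in for a real binding assay, and a one-dimensional interval stands in for chemical space.

```python
import random

random.seed(0)

def measure_affinity(x: float) -> float:
    """Stand-in for a real assay: a narrow region of activity hidden
    somewhere in a huge space (here, near x = 7.3)."""
    return max(0.0, 1.0 - abs(x - 7.3) * 0.5)

# Stage 1: sparse broad screen across the whole "chemical space" [0, 100),
# looking for a toehold where there was no data before.
broad_screen = [random.uniform(0.0, 100.0) for _ in range(500)]
toehold = max(broad_screen, key=measure_affinity)

# Stage 2: dense focused library built around the toehold, enriching the
# model with local knowledge of the region where an answer exists.
focused_library = [toehold + random.gauss(0.0, 0.5) for _ in range(500)]
best = max(broad_screen + focused_library, key=measure_affinity)
```

The sparse pass almost never lands exactly on the optimum; the dense second-round library around the hit is what turns a toehold into an optimized candidate.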

Chris: I know that you’ve now found many of those molecules because I get to see the outputs of that not in real time, but on a regular basis. It’s impressive how that’s been able to occur with all the work that you’ve done. I have two more questions for you, both are more on the business side and the business philosophy. Now that it’s three years in, I would say you’re an experienced founder.

Jacob: Three years since the last podcast.

Chris: That’s true.

Jacob: Six plus total. I’m a deep expert now.

Chris: That’s right. You’re a very experienced founder. I mean, you’ve scaled the company a bunch since the last time we talked. I want to hit two things. On the business side, Terray’s always been about the partnerships as well as the pipeline. I am curious how you think about this strategy because there are a lot of companies that will only focus on their internal pipeline and that’s not how you approach the business strategy.

Jacob: This also goes to the platform-build question. I was influenced by an article I read a long time ago looking at expected returns across numbers of assets. Back then, I think the conclusion of that particular analysis was that if you have 20 programs in the clinic that are appropriately sized to their market and whatnot, you’ll return positively across them. Of course, you see the real-world version of this: large pharma is a successful, profitable industry that makes many bets, in part through acquisition, letting the bets play out outside of their ecosystem. If you can resource enough thoughtful bets, you’re likely to be overall successful. The inverse, as we certainly know, is that one singular bet is actually odds-on to lose.
As you know, I’m a baseball fan, and something I talk about a lot is sequence luck. One team, all their hits come together, they score a bunch of runs. The other team only gets one every inning, and they lose. And, I just like to point out, one team sometimes also drops the ball for an entire inning and blows the World Series, but that happens. Coming back to what we’re talking about, we work with partners because, I mean, it would be great if you guys would give us a few billion dollars and then we would resource all of our own programs, but that just hasn’t worked out yet.

Chris: Someday.

Jacob: Partners give us both. They give us the opportunity to resource more programs than we otherwise could, through their capital commitment: not only what they pay us in the partnership, but the fact that they’ll then carry the backend development of those molecules through the clinic and out to patients. We have an opportunity to realize value where we otherwise wouldn’t reach patients. The other piece is that it also brings in expertise. Internally, we’re fully focused on immunology and autoimmune disorders, but with our partners, we touch a variety of other therapeutic areas that would’ve been a whole other build for the company to move into.

It’s a way to realize the promise and value of the platform while you’re still a smaller company, and to be capital-efficient as you build and grow. It tilts the odds of overall success in your favor, from a singular coin flip, if you will (although the coins are very negatively weighted in biotech), to an ensemble approach that starts to give you a leg up on sequence luck. If you do seven programs and the first two fail and the last five succeed, that’d be incredible; that’d be the biggest home run ever. But you might not get to do the last five if you only have the first two bets. This is a way to do them all at the same time, with really expert, wonderful partners who are well-resourced and well-experienced enough to be successful at the programs we do with them. So yeah, it’s always been both for us.
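The arithmetic behind the ensemble argument is simple to sketch. The 10% per-program success rate below is an assumed illustrative number for a negatively weighted coin, not Terray’s actual odds:

```python
def p_at_least_one_success(p_success: float, n_programs: int) -> float:
    """Probability that at least one of n independent programs succeeds."""
    return 1.0 - (1.0 - p_success) ** n_programs

single = p_at_least_one_success(0.10, 1)    # a lone bet: 10%, odds-on to lose
seven = p_at_least_one_success(0.10, 7)     # seven programs: about 52%
twenty = p_at_least_one_success(0.10, 20)   # twenty programs: about 88%
```

Even at a 10% hit rate, a single bet loses nine times out of ten, while a portfolio of twenty independent bets is very likely to produce at least one winner, which is the expected-returns logic behind running partnerships alongside the internal pipeline.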

Chris: That’s really well put. Finally, I want to ask you about one of my favorite parts of Terray, which is the very unique and extremely high-performance culture you’ve built. You’ve set an incredibly high bar just to get a job at Terray. I can think of maybe one time in the four years I’ve been deeply involved in the company (actually, I guess, closer to five now) when we’ve lost anybody, and even then we were like, “Oh man, that was really terrible that we lost that person.” How have you done that?

Jacob: It’s really remarkable to me, because as opposed to some of my friends and colleagues in other markets, it’s not one of the things I worry much about when I go to work, that we’re unexpectedly going to have large churn in the company. We’ve been really fortunate to work with wonderful people, including yourselves and the rest of the investing ecosystem. It’s really remarkable to me how mission-aligned everybody involved with Terray is. I’ve always been, as you can probably feel through this, super mission-driven. I’m here to make the world a better place, and this is how I want to do it. I think that shines through as we hire people and build the team. It’s been one of the most incredible parts of this last stretch of the journey, because the other piece of the milestones is that last time we talked, the company must have been four times smaller, and we’ve been through that growth and maintained, as you noted, the quality and intensity of the people we want.

It’ll sound cliche, but it’s because we hire for the person and the culture and the way we work together, not just for the skill set, which does make our searches take forever. We talk about this all the time: the trade is always time, because you can find the person who not only does what you want but also does it how you want to do it; it’s just going to take time if you hold to both bars. There are times when that’s really tough, when we really need somebody, but overall we’re always happier and more successful when we get both. We’ve built our interview process for that since the beginning. As we’ve talked about, I’m not a huge fan of canonical words for values like, “Oh, we’re about excellence.” Of course we are. So is everybody else, I hope. Otherwise, I don’t know what you’re doing. We’re really focused on how we work with each other and the operating principles: how we communicate with each other, how we make decisions, how we treat each other.

It’s been just a real joy to watch that cascade down through the teams. I have a little rotating lunch I do across the company, three or four people every week, just to say hi. It’s explicitly non-work; they just get to hear my awesome baseball jokes and thoughts about movies and TV and whatnot. One of the new employees was there, and I asked, “Oh, how’d you find Terray?” And they said, “My friend who used to work here (she left for a school opportunity, which was awesome for her) told me, ‘You’ve got to work at Terray. It’s awesome.’” Nothing makes me happier. The science part obviously motivates me. I love science still; I’ll go back and tell you more about organic chemistry if you’d like. But the building and the people side is every bit as gratifying, and maybe even more so, watching such a wonderful team work together. I don’t know what the secret is, except not making compromises on that aspect. There’s never anybody who’s good enough that you’re willing to compromise how you want to do it.

Chris: Well, I can’t think of a better place to end the discussion on that note about amazing people. You are one of them. It’s been really fun to work together and I really appreciate you joining me three years later for this discussion.

Jacob: Well, I appreciate it, Chris, not only the awesome conversation today but, as you know, you guys have supported our work with conviction from the beginning, and it’s not that easy to find people who want to take the big, big bet and go for the whole journey. It means a lot to me. As in the conversation we just had, you guys have been mission-aligned, and aligned on how we want to work together, from the beginning. So I appreciate it. Excited to be back, and thank you so much.

Chris: Thank you.

Curt Medeiros on Revolutionizing Precision Medicine and Scaling Ovation

Listen on Spotify, Apple, and Amazon | Watch on YouTube

Imagine a future where healthcare isn’t just reactive but deeply personalized — where every patient gets the right treatment at the right time based on a precise understanding of their biology. This is the future Ovation is building, and on this week’s Founded & Funded, Madrona Partner Chris Picardo dives into it all with Ovation CEO Curt Medeiros.

From transforming underutilized clinical data into rich multiomics datasets to forging industry-leading partnerships with companies like Illumina, Curt shares how Ovation is shaping the future of precision medicine. He also opens up about the challenges of building a scalable, privacy-first platform, the lessons he’s brought from leading large healthcare businesses, and why collaboration and diversity are key to solving complex problems in healthcare.

This transcript was automatically generated and edited for clarity.

Curt: Thanks for having me on, Chris.

Chris: Yeah, thanks for being on. This is really fun. I think it would be great for everybody. If you could just do a little bit of a reintroduction to Ovation. Why was the company created? What’s the role in precision medicine and data innovation? And how do you think about Ovation in this emerging data world?

Curt: Absolutely.

So, the founders put together Ovation with a simple concept. There’s a ton of clinical data and samples flowing through labs across the entire United States, in fact, the world, that are being used for clinical care, but the data and samples are not really being used for research. And so, how can we tap into that?

And that was the founding vision for Ovation. As we’ve evolved, we set up a software-enabled platform that works across clinical laboratories to help them with their workflows, but also to understand which patients flowing through those systems are high-value for research — identifying them, de-identifying their data and samples, and then bringing them to the market as multiomics data. What we see as a really important role in precision medicine is enabling large-scale multiomic data sets. Right now, they don’t really exist in diseases outside of oncology and rare diseases. Folks have been building data sets in oncology for almost 20 years, and they’ve seen the benefit of that.

As the landscape gets more competitive with precision medicines in oncology, and there’s still a lot of room to grow there, we see pharma, biotech, and other researchers in academia really starting to turn to how they can leverage data and these tools to create precision medicines outside of oncology and rare disease. That’s where we see our critical role.

Chris: What is the relation of data to precision medicine, and how important is it? Broadly speaking, people have heard the term precision medicine, but I think it’s not always precisely defined. Certainly, in Ovation’s view of the world, the data side of this and the precision medicine side of this are pretty linked. So, I’m curious if you could expand on that a little bit and how you see those two fitting together.

Curt: I mean, if you go all the way back to the beginnings of the industry, people were extracting different chemicals from plants. That evolved into understanding animal models and how they can represent different human systems. But animals are obviously not a really close link; they provide valuable data and a roadmap to how people react in the clinic, but by no means are they perfect. We’re focused on human genomics and multiomics. It’s the closest you can get to understanding what’s going on at the biological level across an entire human, across their different organs and systems. Having that data from the genomic side, from the whole genome, through RNA expression of how that is actually expressed in different tissues, to the proteins that are produced, and ultimately to glycomics and metabolomics, really helps paint a broad picture of the cascade that’s going on at the biological level. And that’s what’s necessary, because what people are looking for is the target protein or biomarker that’s going to help them understand which patients will respond to a particular treatment and which won’t.

And that way, you can get them on the right treatment at the right time.

Chris: Yeah, that makes a ton of sense. It’s basically like you need this giant cross-sectional database of individual patients and their associated data to really figure out what are the best treatments that we can build going forward.

Curt: Absolutely. And it’s really there to power models for future discovery. We’re getting close to the point where there will be enough data in the next five or ten years that you can start to model all the way from your germline genomics through what’s happening with the proteins at the cellular level. And that’s what’s going to enable a much more precise approach to these targets and biomarkers, much higher success rates in the clinic, and ultimately much higher success for patients, which is the ultimate goal.

Chris: As we continue to accelerate in this world of AI models for everything, a popular one has been applying AI to human health on the protein-folding side. It has been really interesting and very compelling, but it’s less built out in the rest of the healthcare world. Would you say that’s largely due to the data challenge or people not having resources like Ovation before in order to train and iterate on these models?

Curt: Well, it’s both an availability challenge and an economic challenge, right? So, let’s start with the economic challenge. To sequence a whole genome of an individual not so long ago cost tens of thousands of dollars. We’re now entering the era where it’s hundreds of dollars, and soon it’ll be only a couple hundred.

And that’s really exciting because a big part of the challenge was that it was so expensive to create this type of data in the first place. Now that’s being solved. The second part — as people start to allocate budgets to doing that, as the prices come down — is really the availability of high-quality samples that can be de-identified and linked to clinical records.

Because the genomics and the multiomics are really important, but they have to be correlated with highly, highly curated clinical data on the individuals so that way you understand not just what diseases they have and what medications they might be taking but also what’s their journey. Are they getting worse? Are they getting better? Are they having side effects? It’s important to understand all this clinical context to really understand what are the right targets and biomarkers to go after.

Chris: Yeah, so basically the single-point-type data is fascinating, but without the deeply annotated clinical record, the data on all of the related conditions around that individual patient and their biomarkers, it’s hard to piece together the insight that you need.

Curt: Absolutely. And scale is another challenge. There is some really good data out there. The UK Biobank has done a great job putting together a tremendous asset, but it’s really just a good start. Most of the clients we’re talking to are talking about millions and millions of patients’ worth of data.

And so, we’re looking to build upon the success of places like the UK Biobank. One other thing that’s unique about our model is that it’s not something we’ve collected over 20 years, like others who have assembled data. We’ve put together our biobank of over 1.6 million samples from over 600,000 patients, about a third of which is tissue, which is really hard to get.

But also importantly, it’s very representative right now of the United States and, hopefully, over time, the rest of the world, in that we have a very diverse patient population. One of the challenges with the existing data sets is that they’re greater than 80 percent, sometimes greater than 90 percent, Caucasian of European descent. It’s hard to find diverse answers when you don’t have a diverse population. And so that’s part of what we’re building.

Chris: I’m curious, just to give people the context on why everything you’ve done at Ovation is so unique: why has this been hard? I mean, there is data out there, right? As you said, the UK Biobank is out there, and I’m sure people have heard about 23andMe and some of those approaches, but clearly it’s harder than that. And so, I’d love to give you a little bit of a window to say, this is a really hard problem that we’ve solved.

Curt: It is a hard problem. And it goes back to what we were talking about earlier, the economic equation versus the scale equation. When you’re talking about consumer or clinical tests, they have to be affordable, whether someone’s paying out of pocket or it’s going through their health insurance. And what ends up happening is that the amount of information that actually gets sequenced is a small fraction of someone’s actual genome. A very small fraction. It wasn’t until we started to see the whole genome come down to hundreds of dollars that this started to become scalable.

What we’ve been able to do on our side is build a platform that not only creates the scale that we talked about but continues to add to it; the potential in the next year is hundreds of thousands of samples per month.

And so, what that allows us to do is find, through our software and our data, the most interesting patients to study.

And that way we can focus on sequencing those first. When you go into some of the other data sets, part of the challenge is that they’re big at the top level. But then you get into individual diseases, and you want to segment that population into mild, moderate, and severe, to take the most basic segmentation, and then add another segmentation on top of that around what drugs they are on.

You start to get to really small numbers. And so having not only an existing biobank that has been scaled but the ability to continue to add new patients, add scale, add diversity, and then capture patients as new drugs are launched in the future is really important to power this type of research.

Chris: That’s a really important point — the Ovation approach there is to continue to build out both the depth and the detail in real time, so you have cohorts that are representative of what’s happening right now with patients and a diverse set of patients.

I think one question that people ask when they hear all this talk about patients is the privacy aspect of it and how you think about that. And I think the corollary to that is, when you talk about larger pharma companies using this data for modeling or other approaches, what are they trying to do with it? Is it building amazing new therapies? Is it building great diagnostics? How do those pieces fit together?

Curt: Let me first address the privacy question, because that’s obviously of critical importance. We work with common technology in the industry to tokenize, which means we remove all of the patient-specific information, and the software basically translates it into almost a complicated serial number. And the way the software does it, there’s no way to go backward; once that patient information is removed, it’s completely de-identified. Then, as we add in the clinical information, it’s matched similarly with the token, so we’re not exchanging any information on the patient at all. With that matching token, we then have to construct what the data set will look like and get it certified, so that it cannot be re-identified in the future. There are complicated statistics around what data is included and excluded, and at what level, that go into that. But we are 100 percent following the same type of processes that I did in my prior life to make sure we ensure the privacy of the folks who are contributing their data. Absolutely.
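The one-way tokenization Curt describes can be sketched minimally as a keyed hash over normalized identifiers. This is illustrative only: real de-identification pipelines use dedicated tokenization vendors and certified statistical review, and the key name and field choices below are invented for the example.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the tokenization service, not by
# either data partner.
TOKENIZER_KEY = b"tokenizer-held secret"

def tokenize(first_name: str, last_name: str, dob: str) -> str:
    """One-way token: identifiers in, an opaque 'serial number' out.
    HMAC-SHA256 cannot be reversed to recover the inputs."""
    normalized = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"
    return hmac.new(TOKENIZER_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# The lab and the clinical-data source each tokenize independently, so their
# de-identified records can be matched without exchanging any identifiers.
lab_token = tokenize("Ada", "Lovelace", "1815-12-10")
clinic_token = tokenize(" ada ", "LOVELACE", "1815-12-10")
```

Because both sides derive the same token from the same patient, records link on the token alone; no name or birthdate ever crosses between the parties.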

In terms of what happens on the pharmaceutical company side, at the end of the day, they’re looking to make better medications for as broad a population as possible. But in the precision medicine world, the way that they’re doing that is by identifying biomarkers that help them understand what treatment is going to be right for them at the specific moment in their care journey. And then understanding how they build a portfolio of treatments that help those patients both across diseases as well as along that care journey. The pharmaceutical companies and biotech companies take the privacy piece just as seriously as we do in the rest of the healthcare industry. They understand that having the trust of the patient and having the trust of all of the different stakeholders is of paramount importance.

And so, we work tirelessly with them to ensure that we’re all using and transmitting the data in the right way, because that end objective is getting to those precision medicines. You know, I was thinking about “Anchorman” and Ron Burgundy — I forget what he was talking about, but he said something like, this works 100 percent of the time, 50 percent of the time. And when you think about broad-based medications, they work 100 percent of the time, 30 percent of the time, right? When you’re talking about precision medicines and some of what we’ve seen in oncology, we’ve seen complete patient populations get very close to 100 percent endpoints in recent trials. And on average, you’re talking about 70-plus percent response rates, as long as they have the correct biomarker identified. That’s tremendous. It’s not just tremendous for patient health, which is obviously the first priority. The second benefit is affordability, because you’re not spending money on treatments that aren’t going to work.

You don’t have to go through two, three, or four treatments that aren’t going to work before you find the right one. That’s the ultimate objective: first patient health, and then how we can bend the cost curve in the healthcare system at the same time.

Chris: It’s amazing to look at some of the trials of drugs that have failed simply because they gave them to the wrong set of patients — and yet they would have worked incredibly well if they gave them to the right set of patients. And I think this sort of data approach that Ovation is taking says, “Hey, go find that right set of patients. There are better therapies already out there for you — if we can just figure out what subgroup you’re in and who’s the right set of people to give this to.” And there are going to be new medicines created using that data that are going to be even better.

Curt: 100%. When you look at the first wave of precision medicines that came on, it often was through the route you said: either the trial failed, and then they did a reanalysis of subpopulations, found the biomarkers, and redid the trial, or they had enough data to submit from the original trial with that subpopulation. Or sometimes it was even after they were launched on the market, right? The early medicines were launched without biomarkers, and then biomarkers came later as they learned from the market what their success rates were. We’re enabling people to do that from the beginning, on purpose. That’s the key thing — why go through that trial and error? And the whole system has learned, right? They’ve learned these lessons tremendously with oncology and rare diseases. That’s why this is a great time for Ovation: as we’re getting this data out to the market, people have already figured out how to do this, and as they’re changing priorities and moving toward other disease areas, they’re ripe for this type of data, following the same playbook they used in those other diseases.

Chris: Yeah. It’s incredibly valuable, and I think that it’s endlessly needed by the pharma companies. And that actually brings me to the next great point, which is that on the Ovation side, you’ve really been on a roll for the last year, really landing some of these partnerships with larger companies. I’d love to have you talk about that a little bit — and why now is the time and why it’s been such an exciting time for Ovation.

Curt: I will answer that, but before we go on, I wanted to go back to the research point. We’ve talked a lot about pharma and biotech, and they’re obviously the most active and spend the most money on R&D. But our objective is not simply to serve those customers. It’s really to enable research broadly — we’re exploring academic partnerships with health systems that do research.

We want to enable research with this type of data broadly and eventually get it in the hands of the payers. Being able to have a source of truth in the data on what’s happening, from the research all the way through the market, is going to help not only get better medicines and have a better effect on patients’ health — ultimately, we want to enable folks to make better decisions.

And not just clinically, but on coverage. So, this also starts to bend the cost curve. So our vision isn’t just pharma and biotech. That’s where we need to start, but we want to enable research and decision-making in healthcare broadly with this type of data because that’s the type of change that’s needed.

Chris: I think that’s a great clarification that this data is exceptionally valuable and useful across the entire healthcare and care paradigm. And that whether it’s from the payers or the doctors or the academic researchers who are working on the science, this is the type of thing they need to accelerate their work.

Curt: Yeah, so on the traction front, there has been so much going on. We created our first pilot data set at the very beginning of 2024 and launched it. One of the things that we learned, as you can imagine, is that whether you’re selling just genomics and multiomics data or making available just the clinical data, either one on its own is a deeply scientific sale. You have to be in every single detail — not only what can you do with it, but what is it? Where did it come from? How’s the data model set up? And so on.

A big part of our learning was we needed to show people that we could do this. For a long time, we were a startup with a presentation and a big biobank. And although people got very excited, you’re asking people to invest in creating these data sets — and even though the cost of sequencing is coming down, it’s still not an inexpensive endeavor.

Chris: Sure.

Curt: So, we created the pilot data set and then asked: how can we get this in the hands of folks so they can see the quality of what we’ve been able to produce? We had to go where the researchers were. And we did that in a couple of different ways. We partnered with DNAnexus, really the leader in managing and analyzing this type of data across the globe, and worked with them to get to the conferences where researchers in the space — we did inflammatory bowel disease first, so IBD — go to learn about the latest and greatest in the space.

We put together posters and abstracts, got accepted at a couple of the top conferences, and were able to present. We also held small luncheons for people to come ask questions and give us feedback. Being able to connect with those technical researchers, those technical buyers, where they normally show up to learn about new things was really important. And DNAnexus was a huge part of that.

And then we did pilots. We worked with DNAnexus to get the data into the hands of actual customers and get their feedback. That was an absolutely tremendous set of learnings for us, and that’s what’s really created the momentum: showing that we can do it, letting them touch the data, and then being able to say, yeah, we’ve done this in IBD — now we can start to do this in other areas.

We just signed our first contract right after Thanksgiving with GLP-1-treated patients — both in the diabetes space as well as obesity, as a lot of these patients have multiple comorbidities. So, this is a really interesting group to study. They’re also having challenges getting reimbursement for these drugs, so there’s a big push to find a response marker — who’s actually going to do well and who’s not. That’s not only going to help the patients, but it’s going to help people understand how to get this to the right people and not spend money when it’s not going to work. Really, really important.

We also got our first contract in the IBD space at the same time. And we’ve now built a pipeline, primarily in IBD, but also in metabolic and cardiovascular. I don’t think anyone would accuse those of being precision medicine areas, so the opportunity now is we can do a lot better — especially as we’re learning how different racial and ethnic groups, and males and females, respond to medications differently. The opportunity to study those from the genetic level all the way forward is right in front of us, and that’s what’s helping us build our pipeline. It’s really exciting.

Chris: Yeah. That is super exciting. And it’s allowing, you know, both obviously Ovation and then your partners to deeply understand what’s going on in all of these areas of health. Obviously, there are the extremely big current ones like GLP-1, but also, to your point, the pervasive ones like IBD and cardiometabolic and places where people have been chipping away at that for a long time and now have an amazing resource that can help them accelerate and move a lot faster.

Curt: Yeah, the other big opportunity we have in front of us is our collaboration with Illumina, which we’re really excited about. We signed that collaboration back in October and have been working tirelessly with them to bring this out to the market. What excited them about Ovation was, first, that we have a large biobank of samples that are already banked and collected — so we have a lot of inventory where we can work with them and pharma partners to sequence.

And we’re talking about potentially hundreds of thousands of patients’ worth of data through this collaboration, which is really exciting. But the other part is, when we started to show the numbers — because of the mechanism and the platform we have, and because we can match the data and understand what diseases folks have before we ever biobank the sample — we have a lot of very high-value patients.

So, if you go out in the normal population — and customers have told us about other data sets, which I won’t name — sometimes only 20 to 30 percent of the patients are actually interesting, because a lot of the people who got sequenced don’t have serious diseases and might be healthy.

Chris: We’re both one of those people.

Curt: Yes, well, right now, yes — at least me. But I’m sure there’s something in store for me in the future. Being able to get folks who have serious diseases and multiple comorbidities — and not have to spend the time or the money on the healthy 25-year-old who hasn’t had a chance to get any diseases yet — is really important to how you put this data together. So that was another thing that was really exciting to them: a really high-quality, high-disease-burden population.

Third is really the diversity of the population. We have over 190,000 patients from underrepresented minorities. That’s huge in terms of the diversity of the population and the diversity of the results. The last part is the ability to continue. A lot of the work that has been done is with biobanks that were collected over 15 or 20 years — you go through and sequence them, you create the data, and then you’re done, either because they don’t have any more samples or the number of samples flowing in is small on a monthly basis.

I mentioned we had over 600,000 patients and 1.6 million samples. That means we have multiple samples per patient on average, and that will continue to grow as we collect more. So being able to look longitudinally at patients — what’s changing, especially when you get into the proteomic side of things — is really interesting, because you would expect to see different data and different results over time as their disease progresses or gets better with treatment. Those are the things we’re excited about with Illumina. We’re honored to be in a partnership with them and look forward to getting a couple of pharma partners on board to get going.

Chris: Huge congratulations. It’s such a big achievement and such a big partnership — the result of years of work and a reflection of how unique the platform and the data asset are.

Curt: Absolutely, Chris. And I think the good news is we’re just getting warmed up. The team at Ovation has done a tremendous job building the network, building the platform, and bringing in the partners who are contributing data and samples. I’m really excited about some of the academic and health system partnerships, because not only will they enable a return of data for those partners’ own research and clinical practice, but they’ll also drastically expand our access to tissue, which is absolutely critical in understanding what’s going on in the organs at the disease level.

Chris: Yeah. The vision, the value already, the acceleration into 2025 — it’s a good time to be at Ovation, and it’s pretty impressive how this all continues to come together and accelerate. One thing we want to spend a little bit of time on is your interesting journey to Ovation. You used to run a large business unit at Optum and are a deep expert in the space. It would be great to share how you thought about that transition — from running a big business unit at a massive company to Ovation — and also how you think about your leadership philosophy and company building, certainly as you’re building all this momentum.

Curt: Yeah, absolutely. So, for all my former Optum colleagues: it was a big business to everyone outside of Optum, but within Optum it was by no means the biggest business. It was, though, a really exciting and innovative team and business that I had the pleasure of leading for many years. I really enjoyed my time at Merck, another large company, for a big portion of my early career, and then at Optum for the decade before I joined Ovation. One of the benefits is you get to see so many different things.

A lot of what I’m applying here came from my experience at Merck — working with, watching, and learning from some of the top researchers in the world and understanding how they think about identifying targets, what a good clinical candidate looks like, and how to put together the infrastructure to go after biomarkers and bring them into the clinic with the drug candidates. An absolutely tremendous learning experience. Similarly, at Optum, I got to see every single aspect of the healthcare system. My team’s role, and mine, was bringing analytics to solve those problems — and that’s data, software, and people.

Moving to Ovation — a startup — I was really looking forward to it. Optum is very much a large company that is, in certain ways, an affiliation of small companies, so it’s not quite the same as a Merck. But still, your flexibility, your ability to pivot, your ability to gain investment and go try new things is always limited when you’re talking about a large company with, you know, quarterly earnings and lots of sign-offs and decision-makers to get things done. I love the hustle, the ability to move quickly, to try things — not all of them are going to work. You try things, you learn from them, and you pivot or augment what you tried and try something new. That part of it’s really exciting — the pace and the flexibility.

Growing up in a couple of those large companies, you are surrounded by people who have had a set of broad experiences, broad relationships, and a lot of the same development paths in their careers. In a startup, you’re a much smaller team, so by definition you have people with very different experiences. That is both a positive and a challenge, because sometimes you take for granted that person X or person Y should understand this. On the other end, sometimes you don’t ask the question you should ask because you don’t know that they have that experience. A big part of what I try to do with the team is make sure we bring the best minds together on any particular problem, but also create a culture of: we’re all going to make mistakes, and we’re all going to fail at certain things. It is absolutely the right thing to do to ask for help. I ask for help from the team at all times. There are a lot of topics we talked about today where there are much better experts on the team than I, and that’s who I go to when I need to understand something or there is a critical decision. We make sure we get the right folks in the room to make the critical decisions — this isn’t an army of one in any particular area. This is a team. We succeed or fail together. Asking for help and asking for people to rally is absolutely the culture I think we have and that we’re continuing to try to foster.

Chris: Yeah. And to your point, it’s a much smaller team, but it has those diverse perspectives — sometimes unpredictably different — going after problems that haven’t been solved. And I think that brings me to my next question, which is: what is most exciting for you about Ovation and the potential in the coming years?

Curt: I see us as, first and foremost, the world’s leading multiomics data provider, with very dense data for each and every patient. The way people are doing these things today, in general, they might have very small data sets with each of the critical components together, but if they have anything at scale, usually they have one component with one population, a second component with a second population, a third component with a third population, and then they’re trying to use analytics and AI to sort it all out. And it’s not to say that’s not a good approach — given the history of how this field has evolved, it was really the only thing you could do.

We have an opportunity to put together all of those different pieces of the multiomics puzzle for the same patient, with rich clinical data, in a way that can really speed everything up — speed up model development, speed up candidates into the clinic, and ultimately to the market. That’s what gets me super jazzed. I also think that we’ll have an opportunity, as we continue to grow the data set and add more data, to become true experts in this and start to transition to building some of those models or providing some of the analytics as well. So instead of focusing on finding the targets and biomarkers, our clients can focus more on model building and application after that.

How do they actually speed things into the clinic? How do they speed things to the market, which is where their true expertise lies? Not to say that they’re not experts in finding the targets and biomarkers, because they are — but if we can be an essential resource in providing those answers, they can apply their expertise downstream, which we will never have. So that’s super exciting to me.

Chris: What you can do, to your point, with both the data and the modeling and analytics on top of it is pretty incredible. And that brings me to the last question: you really are an expert in precision medicine. So broadly — beyond Ovation — what most excites you, or what’s most going to surprise everybody to the positive, in the field of precision medicine over the next five years?

Curt: I think there are multiple aspects to that answer. The first is the benefit to the patient. When I think about 10 or 20 years from now, what does that look like? It’s actually having multiple biomarkers, so you can discern even more granularly what the best fit is, because as competition increases in individual diseases or individual disease states — say, mild-to-moderate IBD — people are going to come in and copy those individual biomarkers. So it’s going to expand to where you look at a host of different biomarkers. I don’t know if it’s three or seven or 10 ultimately, wherever the science goes, but you’re going to be able to discern the patient population more and more finely.

What that’s also going to enable is a much more structured way of selecting treatment for the individual — and then of selecting treatment across the care journey. So when the first medication, which is a great fit and working fantastically, starts to work less effectively in year two, you already have the data and information on that patient: what are the signs to look for, and what’s their next treatment? Being able to have that not just at the population level but across the care journey is going to be really important.

The second part is what we mentioned earlier. As the population gets continually refined and more personalized, and competition increases because there’s more data available, you can go through the development and commercialization process faster. The cost of developing these drugs is going to come down, but competition in the market is going to go up. Ultimately, it’s about getting better care for patients, but also being able to do it at a much more affordable price. Piling on more expensive drugs is not a long-term outcome. Enabling this type of innovation in development and commercialization — and then enabling coverage and clinical selection with this type of data across the entire industry — is really going to let people do this at a more affordable price. That’s part of the ultimate goal. It’s not just better patient care, which is number one; it’s also how we can build this in a way that’s economically sustainable and competitive, so we can help drive the cost of healthcare down for the individual at the same time.

Chris: I think that’s such a compelling vision — if you can really leverage all this data, the ultimate outcome is better care for more patients, much more affordably. Such a good vision to have and to build toward. Curt, I really appreciate you diving into all of this with us on Founded & Funded. It’s been super fun to talk about all things Ovation and precision medicine and all of the incredible acceleration at Ovation. We really appreciate you having this conversation with us.

Curt: Thanks for having me. If you ever have an empty spot in your podcast schedule, I can talk for three more hours. So let me know. Thanks again, Chris.

Breaking Conventional Wisdom: Jama Software’s Blueprint for Profitable Growth

Listen on Spotify, Apple, and Amazon | Watch on YouTube

In this episode of Founded & Funded, Madrona Managing Director Tim Porter and Jama Software CEO Marc Osofsky discuss the intentional strategies that fueled Jama’s transformation from $20M ARR to a $1.2 billion acquisition by Francisco Partners. Marc shares his blueprint for scaling with purpose and embedding efficiency into the DNA of a company. Tim and Marc unpack Marc’s approach to turning conventional wisdom on its head, exploring how focusing on intentional strategies can lead to sustainable success.

Whether you’re looking to refine your go-to-market strategy, enhance operational efficiency, or prepare your company for an exit, this conversation is packed with actionable insights.

This transcript was automatically generated and edited for clarity.

Tim: Let’s start with a quick introduction. I was fortunate to be on the Jama board for almost 10 years after Madrona invested, and you came on several years into that. Can you share a bit more about Jama Software — what it does, its journey, and a few of the milestones that led up to this exit to Francisco Partners? And then we’ll dive into some of the lessons learned along the way.

Marc: Sounds great. Jama Software basically helps innovators succeed. We’re focused on companies that build complex products — products that have software, hardware, and electrical components. So our customers are in automotive, semiconductors, aerospace, medical devices, and industrial — things with hardware and software. We’re really the first company to bring those different engineering disciplines together to improve the performance of the product development process, so they can go faster and deliver higher-quality products.

Tim: Marc, part of what inspired this podcast was that you graciously came and spoke to all the CEOs in our portfolio at our recent CEO summit, and that was super well received. As we were talking, I realized, wow, there are a lot of learnings here that you’ve been thinking about — we should share this even more broadly. And this framework you had for thinking about those lessons I also thought was really interesting: contrasting conventional wisdom with this idea of intentional wisdom. Take accelerating growth — so many companies across our portfolio and in the world right now have gone through this period of slowdown, and it’s like, how do we reaccelerate growth? How do we keep growing? But how do we do it efficiently?

And the conventional wisdom around accelerating growth is: you have to do it fast. You have to scale your sales team. You’ve got to hire more people, and you’ve got to focus on quota and headcount and capacity. But you really focused on something a little different, starting right when you started at Jama Software — tell the group what your intentional wisdom was in this area.

Marc: I’ll share things here that work for us. They might not work for everybody. But hopefully, everybody steps back and thinks a little more critically about conventional wisdom and doesn’t just apply it.

Tim: That is the core point of this notion of intentional wisdom, isn’t it? That you have to find the right strategy for your company and not just think cookie-cutter — what everyone is saying you have to do.

Marc: Right, exactly. That pattern application takes the place of critical thinking. I think the conventional wisdom is you have to hire more salespeople. You’ve got to have more quota capacity, right? It’s very capacity-centric and expectation-centric: we can’t hit the number unless we have enough capacity.

The interesting thing that happens if you follow the conventional-wisdom approach is you hire new salespeople. They’re less experienced, so they generally have a lower win rate — they don’t know the market as well, they don’t know the product as well. So you end up lowering your win rates. You’re increasing headcount, but you’re lowering win rates, so you get a negative capacity effect.

Obviously, that’s one way of looking at it. The way we ended up looking at it was a bit different. We started thinking about how much demand is in the market — how much TAM is being activated. This gets into another concept around TAM. There’s a lot of focus on TAM: how big is this market? How big is the potential market? How much is addressable or serviceable? But it’s all about the total potential size of the market. There’s no real focus on how much of that TAM is being activated in a given year. That’s your actual demand — the TAM activation. And once you realize you might have a huge market potential but only a small portion of it is being activated, well, you’ve got to get as much of that as possible.

So what determines how much of that activated TAM you get? It’s your win rate. And so we focused first on win rates. Our win rates were in the 30 percent range when I started — probably pretty typical in a space where you’ve got multiple competitors. We focused on that and ended up taking them up to north of 70 percent, almost 80 percent. That alone more than doubles your capacity. If you think about all the time your salespeople spend on an opportunity that you then lose, all of that time is wasted — it’s wasted capacity. So we essentially doubled and even 2.5Xed our capacity just by improving win rates.

So it’s not as simple as just hiring more reps and getting more capacity.
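Marc’s capacity arithmetic can be sketched in a few lines. The headcounts and opportunity counts below are made-up illustrative numbers, not Jama’s actuals; only the win rates (roughly 30 percent rising to 70–80 percent) come from the conversation.

```python
def deals_won(reps: int, opps_per_rep: int, win_rate: float) -> float:
    """Effective capacity: opportunities pursued times the fraction won."""
    return reps * opps_per_rep * win_rate

# Illustrative: 10 reps each working 20 opportunities per period.
before = deals_won(reps=10, opps_per_rep=20, win_rate=0.30)  # 60 wins
after = deals_won(reps=10, opps_per_rep=20, win_rate=0.75)   # 150 wins

# Raising the win rate from 30% to 75% is a 2.5x capacity gain
# with the same headcount — the effect Marc describes.
print(after / before)  # 2.5
```

The point of the sketch is that headcount is only one factor in the product: improving the win rate multiplies the output of every rep already on the team, which is why it can beat hiring.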

Tim: And just to give folks context — over the period you were at Jama Software, and I know you’re a private company, but just to ballpark it — you went from about $20 million of ARR and 6Xed it in that timeframe. I think that’s also important, because sometimes these approaches are much easier at a big scale or much easier at a small scale, but you took it through the classic growth range while implementing this approach to growth.

Marc: Yep. No, that’s right.

Tim: I teased the efficiency side of this equation in teeing up growth, and clearly doubling or 2.5Xing your capacity creates a lot of efficiency. But that is only part of it, and you got to a very high Rule of 40 at Jama — growth rate plus free cash flow rate. If not the conventional wisdom around efficiency, it has certainly been the pattern to grow rapidly — grow at all costs, maybe — and grow into efficiency. The thought is: “If we get to enough scale, we’ll get efficient. But if we don’t, then we have to cut.” So you end up in this grow-then-cut trap, which is a phrase you actually used. Talk a little bit about what Jama Software did to avoid this grow-then-cut trap — in addition to the focus on win rates.
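The Rule of 40 Tim refers to is simple arithmetic: revenue growth rate plus free-cash-flow margin, both in percentage points, with 40 as the conventional bar. The figures below are illustrative, not Jama’s actual numbers.

```python
def rule_of_40(growth_rate_pct: float, fcf_margin_pct: float) -> float:
    """Rule of 40: growth rate + free-cash-flow margin; >= 40 is healthy."""
    return growth_rate_pct + fcf_margin_pct

# Illustrative: 30% growth with a 15% FCF margin clears the bar,
# while 50% growth burning 20% of revenue does not.
print(rule_of_40(30.0, 15.0))   # 45.0
print(rule_of_40(50.0, -20.0))  # 30.0
```

The metric is deliberately indifferent to the mix — efficient slow growers and unprofitable fast growers can score the same — which is why it pairs naturally with the grow-versus-cut discussion here.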

Marc: I think it’s an interesting topic. Maybe some companies have pulled it off — God bless them if they did — but I think it’s a really hard thing to do. So I was always nervous about falling into that trap, and the main reason is I think it’s a completely different culture. Think of an efficiency culture — if you go back to the Walmart days and the whole management discipline and culture around Walmart — just how cheap they were about everything they did as a corporation. That got embedded into the culture and into how they did business. They didn’t want to waste a single dollar and have that increase the prices of the products.

So that culture, I think, is really hard to flip. Take a company that has a spend-like-crazy culture and then flip it to all of a sudden being efficient — nobody knows how to operate, everybody thinks it can’t be done, they need more resources. I think you have to start that culture early and really demonstrate that, yeah, we can grow and do these things in a very efficient way, and build on that. We were very intentional about that. It was one of the reasons we didn’t want to go into that spend-like-crazy environment.

Tim: I remember some of the things you did in this area. A big part of it was hiring and team building — there was very little overhead in the organization. Everyone was either building product or selling product, for the most part. Right now, a lot of people are coming back to the office, and frankly, we’re seeing a lot of benefits for innovation back in the office, but you went very much remote and found subject matter experts wherever they were. Maybe talk a little bit about some of those actual decisions and approaches you used to bring this culture of efficiency to life.

Marc: We probably are a bit unique in terms of our virtual-first approach. So we shut down the office right after COVID, like most folks. But at that same time, we decided to shift our whole go-to-market team — so sales, pre-sales, consulting, and customer success — to be aligned vertically.

We wanted to bring vertical expertise into the company. And when you think about where the expertise for automotive versus medical device versus aerospace is, it tends to be in very specific geographies in the U.S. and in Europe and in Asia. There was nowhere we could find all of that expertise within commuting distance of Portland, Oregon, which was where headquarters was.

For that reason, mainly, we decided to stay virtual first. We left the office open and available for local folks, and they could use it if they wanted, but we didn’t force people into it. So that was really the main driver: we focused on talent first, not geography or an office. I mean, there are great things about getting people together, and we still do that. Obviously, there’s huge energy and creativity when that happens. But our primary driver was how to attract the best talent we could, no matter where they live. And that’s really proven to be effective for us.

Tim: There’s a broader point here that I think was one of the key strategic decisions Jama Software made, which was to go vertical, to focus on these specific industry verticals that you’ve alluded to a few times. And I think that had an impact on growth through this concept of activated TAM, which maybe you can say a bit more about, because I think that’s really interesting, and a lot of people, including a lot of investors, probably get it wrong. To keep with our approach here, the contrasting conventional wisdom is that you have to be broad to grow and be big, and that was something we were trying to do for a number of years at Jama: be the really broad, horizontal requirements and product management platform.

And that just didn’t quite land. The company started to take off once you put in this vertical orientation.

Marc: I think this horizontal view of the market tends to lead to the conventional wisdom that the go-to-market team should be geographically aligned. The origin of that, I believe, was the assumption of face-to-face visits for salespeople: you want to minimize flight time and costs and maximize how many calls a sales rep could make in a day, so you focused on geography and costs. That also leads to treating Europe like a single geography and having a head of sales in Europe, and then that leads to constant tension between the U.S. and Europe around accounts, because so many accounts span geographies. That tension is never productive, and it’s actually challenging to manage. So that’s the conventional wisdom — do it by geography. The reason we didn’t goes back to the win rates. As we were digging into the win rates and saying, okay, why are we only winning 30 percent of the time, we were doing win-loss analysis and really digging into why we lost and why we won.

We realized that, fundamentally, we had this horizontal product that was highly configurable. And when we had a knowledgeable person in the sales cycle who knew that industry and knew our product and could bring those two things together in terms of here’s the best way to deploy and configure the product and meet industry standards, we’d win almost all the time.

When we had someone there who just happened to be in that geography and didn’t know the industry, didn’t really know what they were talking about, and couldn’t get to the depth of the details, we’d lose. And so that led us to say, okay, we should really focus on industries and make sure that in every one of these meetings, people leave a conversation with Jama Software saying, wow, these guys really get us, they understand our industry, they understand what we’re trying to achieve.

They speak our language, right? It’s all those things you want to hear, and none of the other companies we talked to do that. So that’s why we focused on industries, and that’s the main reason the win rates came up.

Tim: I think companies sometimes resist this notion of going vertical because they overestimate the overhaul it takes to be able to do it. I mean, you did a really authentic job of tailoring product and messaging and customer empathy and understanding to these specific verticals: medical device, automotive, and semiconductor. But I was also impressed that the process didn’t take a decade. You were able to make it happen pretty quickly. And I guess it’s no one thing, right? You hired people who had the right experience. You put the right playbooks in place on the go-to-market side. You just have to really commit across each of those areas, and then you can make this happen.

Marc: Yeah, we tried to be a bit systematic about it, because it is a lot when you take it all the way — and now we’ve moved to business units, which is the full extension of a vertical focus. But at the starting point, we focused on the customer, that interaction, and the expertise that mattered. Start there, and start with the demos: how do we tailor those, and how do we get the right people there from a pre-sales perspective, and then post-sales and consulting? So we took it one step at a time, and we created templates and approaches that we could apply across all the industries, right? Here’s how we do those things. And we created industry solutions that we productized — those configurations of the product for each industry become products in themselves. But we didn’t start there. We started right with the expertise in the moment of truth with the customer. How do we demonstrate in the product, in the demo, in the person’s expertise that we can solve their problem better than anyone else?

Tim: I’d be remiss not to mention that I think part of what helped you execute here so successfully was that there was a great product and a great insight into market need. The Jamanians who had been there before you, from the founders on, had done a great job, so there was a great foundation. But you were able to tune it in ways that really helped both growth and efficiency take off. This concept of culture has come up a number of times. Again, conventional wisdom, right? What is culture? Culture is defined by mission, vision, and values. And those things are super important — you need to have those, and Jama Software has them. But I think part of your intentional wisdom is that culture only really becomes impactful if you turn it into daily habits and behaviors. Maybe talk about how you were able to do that and your philosophy on how to drive and build a high-performance culture.

Marc: We started with the definition, which I think is always hard. What is culture? Most people can’t define it, right? It’s one of those challenging terms. The definition I like best is a very simple one: it’s the sum of what we do and say. That’s a very large definition, and a simple one, but I think it helped us focus on very tangible things. We went from that definition, the sum of what we do and say, to asking, well, what drives what we do and say? And it’s largely habits.

And we tend not to think about habits in a work environment. We think about habits in our personal lives: we want to eat healthy, we want to exercise. Those are habits. But the reality is almost everything we do is habits, and we all have work habits. I don’t think a lot of us really reflect on those. And so we said, okay, what are the habits at work that we want to focus on, the ones we think are going to drive our success? One that’s interesting to talk about is our habit of following the scientific method.

And most people, again, don’t talk about the scientific method when it comes to business. It’s not taught in business schools. But we were really trying to reflect on how decisions are made at companies. What do we do with ideas? How do we prevent the negative things that happen with ideas? People become attached to their ideas. I have my idea, Tim, you have your idea, we’re going to argue about our ideas, and we’re going to get very emotionally attached to them. Which ideas win? It depends more on position or politics — not very rational things. So we said, okay, how could we enable more rational, more transparent decision-making in the company, and how could we get better outcomes from decisions? The scientific method is the best method that’s been proven to do that in a structured manner. So we just modified it for more of a business context.

Then, we used our own software, because I wanted to get everyone in the company to use our software. So we’ve used our software to enable the scientific method. People go in there and enter hypotheses and tests and impacts and collaborate with others. Everybody can see the potential hypotheses that we might implement, and they can see the ones that are being tested. So we get the huge benefit of being completely transparent about the thinking that’s going on in the company, and people don’t have to wonder what’s happening.

Tim: I love this. There are three big things, at least, that I took away from what you were just saying.

One — there’s a mindset shift from ideas to experimentation. People emotionally hold on to polarizing ideas, but experimentation is a different mindset.

Two — you actually created a mechanism to do that. It’s not just, Hey, don’t think A, think B. You put in this mechanism.

Three — That mechanism is your own product.

Well, let’s talk about the process — Jama Software has a great land-and-expand motion, right? Sometimes you get adopted for a project and end up being the platform that entire large product organizations at huge enterprises run on. Then there are post-sales questions: who owns the renewal? Is customer success just there to drive CSAT, or do they own the renewal?

Almost every company that I’ve worked with juggles how to organize the customer success group, and we juggled it a bit at Jama Software over the years. It’s also, I think, not uncommon for it to evolve as companies scale and grow. But, nuts and bolts, just explain a little bit how you organized this customer success side of the business.

Marc: The way we went about it: we start with the customer, and we start with the expertise we want to bring to bear. I think our situation is on the more complex extreme, right? We’re dealing with engineering departments building very complicated products in very advanced industries. So the expertise someone on our side needs in order to go toe-to-toe and bring value to a customer is a pretty high bar. That’s really a consultant, and I think that’s too high a bar to expect from a CSM profile. On the other extreme, if you’re more of a horizontal product that’s more workflow-oriented or more structured and doesn’t have this level of complexity, then a CSM can play more of that consulting role.

So that led us to put the expertise for best practices and post-sale adoption in the consulting group. The CSM team leads all the commercials, leads the relationship with the client, leads expansion conversations, and quarterbacks the whole experience, but we’re not putting the extra burden on them of also having to be an expert in a very complicated engineering field.

Tim: And did that function report to you? You also had a great head of sales on that side — I was trying to remember who actually owned this responsibility. Was it sales, or was this a direct report to you as CEO?

Marc: It evolved over time. Now it all rolls up to our CRO, Tom Tseki, who’s wonderful. Previously, while we were in a nurturing stage, all of us had so much to do, so I was taking that on. I’m a former consultant, so it’s an area I enjoy.

So, a bit of an evolution.

Tim: Yep. Makes sense. Let’s talk about the process of selling to Francisco. It was an important process and milestone for the company. Back to our conventional versus intentional wisdom: there’s conventional wisdom, at least in my world in VC, that great exits require a sexy market and really rapid growth rates. You had solid growth rates. I mean, your growth rates were 25, 30%, which are not easy to come by in this market. So your intentional wisdom was a balanced approach to growth and profitability, as well as some strategic positioning. Maybe talk about how you ran that process, including what precipitated it and why it seemed like the right time, and then going all the way through what was a pretty lengthy set of discussions and strategic decisions that culminated in this majority sale to Francisco Partners.

Marc: It was a bit of a blur then.

Tim: First, it went slow, then it went fast.

Marc: Exactly. I mean, the most important thing for founders and CEOs is that if you can generate the numbers, right — profitable growth that gets you to rule of 40, or even rule of 50, as you scale up — that lets you really control your destiny. Because then financial buyers come to the table, and they really buy on the numbers. We were able to attract that whole swath of potential investors because of the numbers, and then, because of the numbers, they dig in. Some already knew the space; those that didn’t learned about it.

But that’s the best thing about profitable growth — you bring all these financial buyers to the table. There are so many of them now, and so many great ones, and there’s so much capital and dry powder out there. For high-quality assets that are delivering those high rule-of-40 numbers, the valuations are quite fair and exciting.

The strategics are always more challenging in terms of how they make decisions, how long it takes them to make decisions, what’s really driving their interest in certain areas, and how do you fit into their existing portfolio. So that’s a much different process in terms of your ability to control the outcome and timing.

We had both strategics and financial sponsors involved, and so we went through all of those different experiences. But I think that’s the fundamental difference, right? If you can do profitable growth, deliver these kinds of numbers, and have these kinds of win rates, you can control your own destiny. And then, hopefully, you’re in a fortunate position where you might have a chance to choose between a financial sponsor and a strategic. But betting only on strategics, burning cash, and hoping one of them is going to buy you for some really strategic reason? That’s a lower-probability strategy.

Tim: Yeah. The whole process really was catalyzed by interest from some strategics. I think you and the company did a good job of not thinking about an exit, but thinking, from a business development and partnership standpoint, about who we need to work with, who we need to go to market with, and who our product works best with. That catalyzed it. But then, as you said, it turned into more of an auction — we had a banker. Ultimately, I was also struck by the fact that successful M&A comes down to relationships and building trust with whoever you’re going to go work with afterward.

And even though there was a bunch of interest, and it was private equity, it struck me that that was really a big part of how you did this successfully with Francisco. Any thoughts or lessons around how, even in these competitive, tense, transactional situations, it’s still really critical — maybe this sounds like, “Oh, duh,” but it’s hard in the moment — to form personal, trust-based relationships with whoever you’re going to work with going forward?

Marc: For those who haven’t been through this: I did more lunches than I think I’ve ever done. I had the good fortune to meet all sorts of top private equity folks, and I’d say the most experienced and most senior ones do focus very much on that personal relationship and spend a significant amount of time with me one-on-one. It’s all about trust when you get down to it. It’s a very big part of the process. And when I look at the other bidders that came close at the end, it was, again, the strong personal trust and relationships that had been built through the process.

Tim: So, of course, this was a $1.2 billion exit, which is one of the biggest in the SaaS space over the last couple of years. And now, of course, it’s, well, from here, what’s the next 3X, 5X, 10X? How has the transition gone, and any thoughts on what this next phase at Jama Software involves? I mean, there’s always an evolution, but anything to share on how this transition has gone thus far?

Marc: I think the reason I’m still here, and so many of the Jamanians are still here, is that we’re just at the beginning of this journey. It’s not that we felt we had finished achieving our vision or our mission.

And so we have very good clarity on where we’re trying to get to, and we still have many years ahead to fully achieve that. So for all of us, it has just felt like a continuation of what we’ve been doing. Obviously, we got to scale, and we have to deal with scale. I think the biggest thing we’ve done is move to this general manager structure, which is a big shift.

And for a company of our size, by conventional wisdom, it’s probably a bit early. I just saw a public company starting to do this at a much bigger scale, so we’re probably early in making this move. But it’s another doubling-down on our belief in the power of being focused on segments and of pushing authority, responsibility, and decision-making down in the organization as much as possible to really be successful, and it also creates all these great roles for people.

And that’s been a big part of our success. We’ve promoted from within, we delegate authority and responsibility, and we hold people accountable. So many folks in the company have meaningful roles and are having huge impact, and I love seeing that. Frankly, it terrifies me to imagine everything having to flow through me and me having to be an expert in all these industries. I mean, we’d fail.

Tim: Yeah. And, obviously, we wouldn’t be on this topic if things weren’t going well. That being said, this is super authentic, right? You and I have talked about it a bunch offline, and things are off to a great start. It’s just not always the case — you hear a lot of horror stories about acquisitions, whether strategic or private equity.

And I think it’s just a real testament to how thoughtfully you aligned on a plan and a partnership going forward, and, of course, to the foundation that you’d built. So, in wrapping up, there are lots of great thoughts here for companies at different stages of their journey. Any last thoughts? Most of our investments, of course, are at the early end of this spectrum. Thinking back to when you first took the reins at Jama Software, or even before, are there things you’d reinforce for founders and startups out there trying to get on the path that you’ve been on?

Marc: Yeah, I think at the early stages it’s really about focusing on how much TAM is being activated. Where’s the demand? Really understand where the demand is. Can you measure it? You can do a rough measurement: identify all of your competitors or alternatives, how much they’re growing each year, and how much of that is new business versus just growth in their existing accounts, right?

You can get to some estimates. So really try to understand how much demand there is out there, and then really understand why you win and why you lose. Figure out, okay, in these situations, if it’s this industry and they have this problem, we have a really high win rate, and in these other ones we don’t, right?

Just having some discipline there, I think, is the starting point for everything.

Tim: Well said, Marc. Thanks so much. It’s been fantastic to work together on Jama Software. We would love to find more ways to work together going forward, and all of this advice and sharing these lessons is gold for our listeners and for me as well. So thanks for everything. And thanks for being on Founded & Funded.

Marc: This was great. Hopefully it helps. See you, Tim.

Archon’s Jamie Lazarovits on Unlocking the Full Power of Antibodies With AI

Listen on Spotify, Apple, and Amazon | Watch on YouTube

In this episode of Founded & Funded, Madrona Partner Chris Picardo sits down with Jamie Lazarovits, the co-founder and CEO of Archon Bio. Madrona partnered with Archon truly on day one to spin the company out of David Baker’s lab in the Institute for Protein Design at the University of Washington. The company leverages cutting-edge protein design from the IPD to develop a brand-new therapeutic modality with the potential to massively transform how we think about treating many categories of disease.

Jamie and Chris dive into how Jamie pivoted from bootstrapping deep tech investments to biotech, how he landed at the Institute for Protein Design working with David Baker — who just won a Nobel Prize — how to navigate translating an academic project into a commercial product with a clear business model, picking a co-founder and building a team, and, of course, how he’s building novel types of drugs that will get more treatments to more patients.

This is a great conversation for anyone interested in launching a startup, whether biotech or not.

This transcript was automatically generated and edited for clarity.

Jamie: It’s great to be here. It’s exciting to finally have the opportunity to talk about who we are. We’ve maybe not been that well-kept a secret, but it does feel good to finally share who we are.

Chris: You have told me many times that you started in the startup and tech world by bootstrapping investments into deep tech companies. How the heck did that happen?

Jamie: My best friend when I was in my late teens started a software company, and he happened to do pretty well. I had some money from my tuition as well as from scholarships and decided to start day trading. This was exciting because we had never played around with something like this before, and we thought, why not do something more mathematical and see if we can figure out trends in the market? At the time, we were playing with penny stocks and the rare earth market, and that was a lot of fun. We did relatively well, but we realized that, at the end of the day, it was gambling, and we weren’t able to see under the hood to know whether these companies had substance. We also realized we weren’t excited about just making money. We wanted to understand how these people thought about what they were doing.

My friend had this software company based out of Waterloo in Canada, which is a very big engineering school, and we started speaking to a lot of early-stage founders who had amazing ideas but didn’t quite have the right capacity to communicate what they were doing. My friend and I were interested in understanding what was going on under the hood, so we decided to take our tuition money, the money we made from day trading, and the money he made from his company to start investing in these companies. We ended up doing this in about 10 different companies. Some of them have grown to over 100 employees with multiple tens of millions of dollars a year in revenue.

Chris: It’s fun that you got a little taste of our side of the table.

Jamie: Totally.

Chris: Some of those companies are still going. I think that’s super interesting. Not many people have this sort of accidental venture capitalist interlude between graduation and going to get a PhD, but I think it’d be interesting to talk about how you made that transition. You were working with these deep tech companies, and then you decided you were going to go to grad school.

Jamie: I was working with these deep tech companies, and a lot of them were focused on AI. I came from Toronto, so obviously machine learning is very big over there — they are big in AI and in hardware. It was interesting thinking about how you translate deep tech products to solve real-world problems. I was working at Harvard, and my roommate’s mom invented this device that profiles maternal blood. Instead of a woman having to go for an amniocentesis, you were able to profile her blood and determine whether there was potentially something wrong with her child.

I found this amazing because it opened my eyes to the opportunity of using engineering to solve important problems in healthcare. It inspired me to think about how I could take this deep tech stuff that I had learned and my passion for biotechnology, medicine, and healthcare and integrate them into something bigger. I ended up finding my PhD supervisor, who’s a world expert in the delivery of materials and nanomaterials, and I did my PhD with him because it felt right.

Chris: I also know that you have a more personal reason for being super excited about the biotech and healthcare side of the world, and I think it would be great to share how that put you on the path more towards biotech and eventually to the IPD.

Jamie: My mom would say she’d be incredibly upset if I told her that I’d failed my family by doing a PhD. If I had become a medical doctor, I would have been the seventh generation in my family. My father was a physician-scientist, and when I was about seven, he was diagnosed with brain cancer. He was pretty fortunate in that he was able to survive two years with it, but if you have that type of brain cancer, glioblastoma multiforme, it’s pretty much a death sentence, something like a 99% mortality rate.

But even as he was dying from brain cancer, he was doing absolutely incredibly impactful science. He was an immunologist by trade, worked in organ transplantation, and was one of the early developers of therapeutic antibodies. He developed an antibody that is now one of the most successful biologics on the market, called Entyvio. Obviously, he was not here to see it reach the market, but one of the big things it taught me was that having an impact is possible. I think sometimes in life you’re taught that the things you want can’t happen, but when you see something that is tremendously impactful and can help people and improve lives, it does, in a certain perverse way, show you that it’s possible, even if you might not have believed you could do it beforehand.

Since he wasn’t there, I took on a lot of the responsibility of navigating the different agencies, because it’s obviously a pretty big deal when a medicine gets FDA-approved. It was exciting for me because, again, this theme of integration brought me into experiencing what it’s like to see a drug get approved and to see what happens when change is real, and to integrate that with my learnings as a student and with bootstrapping those deep tech companies. One of the big things it came back to is that I wanted to have impact in a way where people wouldn’t have to go through what I went through. One way you can do that is to work with individuals one at a time, but if you’re working on engineering or technology solutions, you don’t have to; you can have impact beyond that.

Chris: It’s an incredibly powerful story and a constant source of motivation. For me, I’m just lucky that I get to work with founders like you who are so driven to solve these problems. It’s vicariously motivating for me, certainly.

Before we get to the Archon story, which obviously we’re both super excited to talk about, you made it to the Institute for Protein Design, where you met your co-founder. Maybe just give us a little insight into what the IPD is, for those of our listeners who don’t spend as much time in this world as you and I do, how you got there, and how you and George met.

Joining the Institute for Protein Design – David Baker

Jamie: The Institute for Protein Design, when I first moved there, felt like the physical, mental, psychological manifestation of the Yukon Gold Rush. Six years ago, people there felt they were part of something big, and there was this substance you could feel in the air: a sense that they were going to have impact, that the things we were working on had the ability to change how we live.

The Institute for Protein Design was spun out of David Baker’s group, and he has this unbelievable superpower of attracting good people, which, it turns out, is not as easy as it sounds. Because of the magnetism of the technology, and because people are enabled to pursue science creatively, he’s built this institute where people use computational protein design to solve problems, without any predisposition about what those problems are. People can come into the institute with complete and open creativity, and that’s encouraged. What’s cool and interesting about it is that it’s very rare to be able to go somewhere where you are completely supported and financed to be creative and imaginative, and given everything you need to get things done.

Chris: And maybe do some awesome protein design along the way. We’re talking about David Baker, who founded the Institute for Protein Design and invented the modern version of the field. You and your co-founder, George, were part of that along the way. Do you want to talk about how you and George met, which is a serendipitous story, and why you started working on the Archon technology?

Meeting Co-Founder George Ueda and the Birth of Archon

Jamie: The interesting thing about life is that there’s only so much you can plan; the big part is being open to opportunities when they present themselves. In this case, this was anything but planned. There was a going-away party for someone who had been working at the IPD, about five and a bit years ago. I happened to sit next to George Ueda at the dinner table, and we were both drinking a beer, and he made some comment about being half Japanese, and I’m like, “Oh, I just came back from Japan.” We started talking about our passions and the things we were interested in, and it had nothing to do with any professional overlap, but we got along well and decided to chat the next day at the institute. What we realized is that he was working on some pretty cool technology, which he had architected with members of the IPD, and the problems he was describing were very familiar from the types of problems I saw in my PhD. He was such an interesting person and an easy guy to chat with that I just started to pursue it: is there a cool opportunity to maybe do something together here?

Chris: I think it’s interesting that there wasn’t pre-planning involved. You didn’t go to the IPD with the idea, “Hey, I’m going to start a company.” You didn’t seek out a co-founder at the IPD with the baked-in idea, “Yeah, we have to find a company to start, just to spin this thing out.” You guys met at dinner and started working on really cool technology.

Jamie: Part of that is true. The other part is not: when I left my PhD, it was a difficult process for me to figure out what I wanted to do with my life. My friend Carl, who’s a co-founder of the local company Neoleukin, told me I needed to come here, that protein design was the most amazing thing and I needed to be aware of it, and this was seven years ago.

When I spoke with David Baker and Lance Stewart, who is the chief strategy and operations officer at the IPD, I told them that I had a strong desire to build a technology that had impact. The amazing part of using computation was that you could explore space in ways you could not with other types of technologies. I did come to the IPD with a feeling and a notion that I wanted to have impact and to translate, but not at the expense of doing something that had quality. So I guess it was 50/50.

Chris: I remember visiting you guys in that tiny office where you sat six inches apart from each other, and it was boiling hot, and being like, “How do they get any work done?”

Jamie: When you do that for four years, you get pretty used to it.

Chris: I know we’ve talked about your background a lot, and I think that’s important, but now we get to the exciting part, which is that for the first time, you were going to talk about Archon publicly. We’ve announced the company, and the fundraising. I think there’s a huge amount of tailwinds behind what you’re doing. What’s the revolutionary technology? I’m going to give you the space to describe out loud what you’re up to and why we’re so excited about it. But we have to talk about the fact that David Baker just won the Nobel Prize, which I think is a phenomenal achievement and honor for David and the IPD, Seattle, and everybody that has worked with him along the journey to get there, which is not over. Start with the work that George and David, George being your co-founder, were doing at the IPD for the 11 years that George was there before Archon.

Jamie: The most interesting thing about protein design is that it wasn’t designed with the intention of application. It was more from this mathematical, almost artistic pursuit of, can we do this? Can we make structures that have never existed in nature before? Can we represent beautiful shapes out of amino acids that give rise to proteins? The interesting thing is, back in the day, even being able to put little pieces, little Lego blocks, of proteins together was incredibly difficult to do. Where the field was about seven, eight, nine years ago was that. Then as complexity began to increase, it was like, okay, well, can we add on to the complexity of the types of structures that we can make?

Then it took a lot of insight from people like my co-founder, George Ueda, and other really spectacular members of the IPD to start asking the question: okay, if we can make these shapes, is there a chance these shapes can do something? There was this fascinating transition where George invented vaccines that are in the clinic right now. He was one of the first people to show that you can use a designed protein to tune the way a cell behaves and communicates. Now we’re at this whole other side of the coin where we have public affirmation and confirmation that protein design is big and impactful from a scientific and technical standpoint, but the next step is: okay, what can we as a community do with it? What are the real problems we can solve? That’s where companies like Archon fit in.

Chris: I think that’s a perfect transition. On the theme of designed proteins that can do something, maybe give a description of what Archon is. What are AbCs, antibody cages? What’s the underlying innovative technology that you guys have built?

Revolutionary Protein Design at Archon: Antibody Cages (AbCs)

Jamie: Archon is a biotechnology company that has created a new class of protein-based biologic to solve significant problems in the medical space that other technologies cannot. Antibody cages, or AbCs, sit at the intersection of generative protein design and molecular engineering. They resolve a long-standing problem in science with a unique engineering solution. That long-standing problem is: how do you make a drug that gets to where it needs to go, doesn’t go where it shouldn’t, stays where it needs to be, and does its activity properly? It turns out that’s an incredibly difficult problem to solve.

Chris: You’ve reduced drug development into the simplified form, but that’s the game, right?

Jamie: Exactly. If you study decades and decades of the pharmaceutical industry, what we’ve learned is that the shape, size, diameter, and flexibility of a drug influence the way it travels inside the body.

Chris: It’s super interesting. By changing the shape of the AbC, you can change the efficacy profile and the delivery profile, and you can get these antibodies to where we want them to go and engage with the target totally differently. And then what? What effect does the ability to do that have on the underlying disease or pathway that you’re trying to modulate?

Jamie: Fundamentally, it comes down to what we call the therapeutic window. It’s a balance of on-target versus off-target effects.

Chris: What does that mean?

Jamie: If you look at people who have taken chemo and lose their hair, that’s a consequence of how the drug works. The reality is that the drug is distributed to many, many different places. You have this balance of how much drug you can add to get the profile necessary and how little you can use in order to not have all of these super negative effects. By changing the geometry and these unique properties of our structures, we have a finer ability to tune either toward our desired site or away from the sensitive sites that cause some of the worse side effects. It’s this biodistribution-enabled modulation of the therapeutic window, as we call it, which is a huge opportunity in a variety of therapeutic areas that get either too much or too little efficacy, because if you have too much of something, you can get a lot of toxicity, and with too little of something, you’re the safest drug in phase one.

Chris: To clarify for everyone, because I’ve been able to see all the visuals and it helps. What you’re really talking about here is the underlying cage that you’ve built out of proteins, which can take any number of shapes. It could be a pyramid. It could be a cube. It could be an icosahedron, a classic geometric structure that you’re able to link antibodies into naturally, so therefore, they’re fixed in a geometric position.

Jamie: Exactly. Yeah, thanks for bringing that up. With the antibody cage system, you have your antibody of interest, you have your designed protein of interest, and you mix them together and they spontaneously form this rather beautiful, geometrically defined structure. It’s the geometry of this structure that dictates how it travels in the body, how it interacts with its tissue, and how it elicits its behavior. It turns out that making that is very difficult, if not impossible, without the advent of these computational protein design methods.

Chris: And now you can make a lot of them.

Jamie: We can make a lot of them. The core innovation we’ve done is not only on the backend AI but also on the manufacturing. Sometimes, when you make a technological solution, you overthink how it can be integrated. One of the core things we tried to do when making this technology was figure out how it could be as easily insertable into downstream antibody manufacturing as possible. Antibodies are the largest class of therapeutic in the biologics market, and often, when you change an antibody, you lose all the features that made it great in the first place. So we tried to ask a pretty simple question: how do we not do that? That’s where our unique manufacturing has come in, because now we can do that. We can make any number of these, but fundamentally, it’s using AI to solve a very defined problem that allows us to tune biology without compromising manufacturing and production of the therapeutic in the first place.

Chris: We’ll get into this tuning topic because I think it’s particularly interesting, and it’s a theme that has started to come up with a lot of AI-plus-biotech companies. But one of the ways that I think about Archon is essentially unleashing the full power of antibodies. “Antibodies, transformed” is the tagline. Can you give a couple of non-technical examples of why this cage structure with antibodies elicits a totally different type of behavior, or what you can achieve that you can’t with an antibody alone?

Jamie: I can give an ultra-colloquial example.

Chris: Sure.

Jamie: If I yell at you and tell you to do something, you’re probably not going to be that enticed to do it, but if I ask you nicely, you’ll maybe be more interested. So it’s the structure and the composition of how I ask you to do something that really matters, even if it’s the same words. What we found on the cellular level is that it’s structural. It’s explicitly how you ask the question and how you interact and engage with the cell that determines its proclivity to actually do something for you.

Chris: So you’re saying, hey, if you have one antibody that’s the drug and it goes to a target of interest on its own, it may be safe, but it might not have any effect.

Jamie: Exactly.

Chris: If you stick it in your cage and you bring several antibodies in a very geometrically specific conformation to that target of interest, you might be able to elicit a massively different effect even from the same underlying antibody.

Jamie: Exactly. The most interesting thing is because we can control the structure, we have the ability to tune across the entire plane of behavior. If you want something that’s super strong, if you want something that’s super weak, if you want something that’s in the middle, you have the ability to do that. That’s because of the power of this geometric control and tunability that I’m talking about.

Chris: This is one of the things I get so excited about, and you and I have talked about forever, but if you think about the old paradigm of, if you want to try to figure out the range of what antibodies are going to do, you have to maybe mutate them one by one, do this in parallel, and it takes a long time and you might not get great data. You might not even get any of the results that you want. What Archon has said is, “Hey, just take those same antibodies and let’s try the entire spectrum of geometry, and we can show that just based on the shape, we can get very different types of behavior out of it.” I think that’s an incredible fundamental approach to how you’re thinking about building products.

Jamie: Totally. We like to say that we’re trying to turn drug discovery into an engineering problem, in the sense that if you decrease the probability of failure of even generating one of these structures, you can ask the pretty basic science or clinical questions: how do these pathways work? What did failure and success look like in the clinic? Is there a way you can generate a molecule to hit the specific metrics you want? It totally redefines this whole notion of a TPP, a target product profile. So often, what happens is you have a molecule, it has its behavior, and the question is, how much can you modify it in order to get the desirable profile that you want?

Chris: Every time you modify it, something else probably goes wrong or at least changes in a way you didn’t expect.

Jamie: Exactly. From our case, we take it from a slightly different perspective. We’re not going to be talking about our programs today, but for our lead programs, we’re starting with very well-characterized and understood biological pathways where we’ve seen how these types of molecules behave in the clinic. By starting from the first principles of how this biology transforms or translates into a clinical setting, we can recursively go backward and ask, how do we change the structure in order to move towards or away from that specific behavior?

Chris: Ultimately, what you are creating are drugs that are fully owned and developed by Archon to treat diseases or to go after targets that other modalities weren’t able to hit.

Jamie: Exactly. The nice thing here is that antibodies are increasingly commoditized. From our standpoint, what is unique is that we own this design and manufacturing process to generate the AbCs. It gives us the opportunity to choose which program areas we want to own, which ones we want to go into, and which ones we are going to be open to potentially co-develop if that’s of interest.

Chris: Ultimately, we will have a pipeline of internal drug candidates that are Archon’s, and maybe a couple of things where we decide there are really natural partnerships out there with larger pharma or people with great disease or biology insight that we can work with.

Jamie: Exactly. We have an internal pipeline right now; we’re just not talking about it today.

Chris: I think this is a great way to talk about where AI plays into the story. I mean, at the IPD, you guys have built pretty much all of the fundamental generative protein design tools that are out there. Certainly, the IPD and DeepMind are the leaders in the space. So you had your hands on everything and built these tools yourselves while you were there. How does AI fit into the story?

The Role of AI in Protein Design

Jamie: AI fits into the story because it is the crux of how we solve the problems that we do, but AI also isn’t the be-all and end-all. I think there’s this misconception in, let’s say, the information sciences space that we can throw these models at biology problems and they’re going to be inherently useful. The thing is, it’s a highly multivariate, multifactorial space where you don’t even know what implications what you’re doing has on an even more complex and chaotic system like the human body.

We’ve been incredibly fortunate to be surrounded by the best and brightest minds in the protein design space, but not even protein design. It’s also all these people from all these different unique backgrounds that aggregate at the IPD. The big thing that George and I were focused on was how do you change and control structure? How do you change and control distribution? How do you do that by not compromising these desirable manufacturing features that we like in the first place? AI has massive power. It’s hugely enabling, but it has to be constrained at first in order to know how to use it and for what problems.

Chris: I think one of the concepts that you and I talk about a lot is the product-led platform. AI is deeply embedded into everything you do, and it’s not the product of Archon. For the non-biotech part of our audience, we’ve talked about this a lot. There’s a lot of AI and software and compute sophistication going on under the hood here.

Jamie: We’ve been very fortunate, again, to come from the IPD. In particular, I think what really differentiates us is this extreme focus on these particular problems, especially in these target classes that have this well-characterized biology, but again, we view that we’re leveraging these models in a wholly unique and differentiated way.

Chris: I know that everybody can now check out the fancy new website that’s up and live if you want to see some good examples and animations of what this looks like in practice. Switching gears a little bit, for the rest of our conversation, we can talk about the company-building side of this. As we hit on before, these companies don’t come from nowhere, right? There is a lot of building and process that goes into it. You and I knew each other for two years before we started the company. Effectively, you started the company, and we provided the capital to help you do it, but we spent a lot of time talking about this beforehand. To start, how do you think about translating the academic work you and George were doing at the IPD to the commercial world, and what have been the major learnings and insights you’ve had on this journey, two years post-IPD?

Jamie: The first thing I learned in my deep tech investing and bootstrapping days was the importance of framing a problem, understanding the problem you’re solving, and knowing when to ask for help. I think the third part is really big, and it’s a challenge in academia. David, at the IPD, will talk about this communal brain and how everybody benefits from it, but fundamentally, the academic setting is very individualistic. You are almost devalued for working with many people because your currency is your first authorship. And so it’s very difficult to know how and when to ask for help if the system itself is pressured to disincentivize that.

So one of the biggest parts is that there is this ego associated with being exceptional, being able to build things yourself. When you’re trying to translate a really sophisticated technology, it’s just not possible. You cannot develop it yourself. One of the first things that you have to switch is you need enough of an ego to think you can do something that nobody else has done, but not so much of an ego that you don’t know how to ask for help. You have to find people that can really benefit you and transform your perspective in a way to solve problems that you may not have had the predisposition of solving before that.

Chris: What’s a good example of asking for help? For the people who are still in academia or thinking about starting a company or making this transition, do you have an example from your own process here where asking for help might’ve been hard but was totally the right thing to do?

Jamie: Probably the biggest thing is that I spoke to George and said, “George, I’m not the best protein designer, but you’re really freaking good. I’m able to conceptualize and focus the problems in completely different ways based on my technical background.” Instead of trying to confine him, what I actually did was try to find a way to enable him and enable myself so we could solve different types of problems together, which actually allowed us to do better faster. A big thing for me was that I had been working in design for a while, but finding someone who complemented me in a way that really enabled me to be better allowed me, or forced me, to look at what I was good at, what he was good at, and whether there was a way we could benefit each other.

Chris: That’s a very powerful message about putting together complementary founding teams and playing off your co-founder really well. The other thing you guys did that was unique was you weren’t ready to start the company for a while. Like I said, we met each other two years before, I think, you technically started the company. I swung by that small office a lot, and we’d have a chat and an update, and I’d be like, “Hey, are you guys ready yet?” You’d be like, “No, not yet.” And then finally you were. So what gave you the confidence to say, “Hey, we are ready. We made that go/no-go decision, and now it’s time”?

Jamie: We viewed that decision through a lens that maybe some engineers will take offense to, but it’s fine. I’m an engineer. There are science problems, and there are engineering problems. A science problem is something we view as having a nonlinear relationship between input and output. You can put tons of effort in, but you don’t know if you’re going to be able to solve it, or if you are, you don’t know how long it’s going to take. We view engineering problems as those where you’ve confined and constrained the space and how much effort it requires. It’s a matter of the input.

In our case, we didn’t want to be entrepreneurs and build out this technology for the sake of it. We understood what we needed to overcome, we understood the problems we needed to solve, and for ourselves, there was this integrity piece where we said, “If we can show ourselves that we can resolve and implement these unique features of the technology to solve these types of problems, then we will have the conviction to stand behind it.” We were given a really hard time during the COVID bull markets. People were like, “You’re crazy. You’re working in protein design. How could it ever get hotter than it is now?” Turns out it got a lot hotter.

Chris: It’s gotten a little hotter.

Jamie: It got a lot hotter. What we did is we really followed the data. By doing that, we felt that we really presented ourselves as people that could be trusted. Then, obviously, the story had to follow suit with that. Once we really hit those key data points, that was when we were like, “All right, we’re ready to go.” What we didn’t realize is how quickly it was all going to come together.

Chris: We did move from our regular check-ins to “Hey, there’s a company now” pretty dang quickly.

Jamie: It was really quick.

Chris: That was a fun time over the holidays and going into the new year of a lot of work, but I’m glad we did it.

Jamie: It’s an exciting process when you get investors that want to put money behind your idea because sometimes it feels like you’re playing the sandbox until people are willing to put real money forward. There’s this massive aspect of flattery where it’s like, wow, okay, finally someone or somebody is interested in what I’m doing, but there’s this other side to it where, again, going back to this whole relationship piece, is that the quality of your investors, the quality of the people that you have around you are going to be the critical determinant in you being able to execute on this. If you think anything is hard in academia, multiply that by however many fold you want once money is on the table, once the stakes are a lot higher, once you can’t just happen to write another grant to keep people working in your company.

The big thing when you’re looking for an investor, and why we’re happy with you at Madrona, is that when somebody tells you who they are, you should trust it. You were incredibly honest about who you were as people and what you were going to do for us. In fact, you’ve done more for us than I would’ve expected. I think that’s what you want to look for in an investor: not whether they’re high profile, but whether you legitimately trust that they’re going to support you and be honest enough with you to overcome problems when they’re inevitably in your face.

Chris: I really appreciate that, and so does everybody at Madrona. One of the things we focus on the most is just helping our portfolio companies and founders as much as we can. And I think it’s an important lesson for lots of other founders: the fit between you, your vision, and what you want to do, and the alignment with the investor base and capital, really matters, and not everybody fits with everybody else.

As you’ve been in the CEO seat now for a while, you’ve developed and probably already had, but you certainly have stronger views on team building, how you’ve chosen to recruit, who you’re hiring, and really what you look for. It’s a little bit different in how you frame it than maybe lots of other companies. I’d love you to talk for a second about what’s your thesis on talent and how do you think about adding people to the team and specifically to your culture.

Jamie: If you acknowledge the importance of yourself individually, that’s very important because we need accountability, but the other part is knowing that in order to be your best self, you need great people around you. Matt McIlwain said a pithy thing, which he’s very good at: you want to find people who are cross-functionally useful and cross-functionally curious. What I found really interesting is that when you build teams, everybody talks about what their corporate values are and what they look for in individuals. Sometimes one can say something and believe it, and other times you can say something and not believe it.

For example, you could tell me that you want to wear multiple hats, you can tell me that you’re curious, but are you actually? That’s something we have heavily prioritized: people who value relationships, who can take a longer-term perspective, and who want to have impact. That sounds insanely cliche, but the reality is that if you’re an intuitive and emotionally intelligent person and you start actually having conversations with people, the biggest thing you can hear is what they get excited by. For us, we are constantly looking for people who are good people, who are great at what they do, and who are curious and excited about learning and doing multiple things.

Chris: The bar is quite high to get a job at Archon, which is great. Every company talks a lot about raising the bar on who they recruit, but it’s an extra high bar on cultural fit and how you guys spend a lot of time thinking about the culture of this team and what that means when we add new people to it. Are there certain types of people that you’ve found you really are out there looking for, or hey, these are really great when they join the Archon team?

Jamie: Honestly, it’s been just completely organic. It goes back to this point of treating people how you want to be treated yourself. What we’ve found is that enabling, supporting, and advocating for people on our team is, unfortunately, a little bit more of a rarity than we would like to believe. The other side of it is that if you’re incredibly supported and you’re working on amazing technology, then all of a sudden it becomes a highly desirable place to work.

What we found is that all of our intake has been completely organic. You have people that hear what it’s like to work with us and that we’re people that back up what we say with action. It’s been nice because everybody that’s come forward has been through an organic connection with someone on our team. The really interesting thing is that we have teammates that are telling people how much they like to work with us, and then you have other people coming up and asking, “Is there even an opening?” These are for job positions that we don’t even have. They don’t even exist yet.

Chris: This is why you’re the worst best-kept secret in protein design.

Jamie: Yeah.

Chris: We’re getting to the end of our conversation. I know you and I could talk about this for hours and could’ve probably spent a lot of time just in the technical depths of why the technology’s so cool. I’m also curious about what you are most excited about going forward, both for Archon and just in general.

Jamie: There is something really gratifying about being able to let people know who we are, what we’re going to do, and we’ve been living this. Obviously, now we are public, or not public, but we’re not in stealth anymore. And it’s really nice to be able to share what we’re doing and also speak to people that are just ultra technical in the know and they look at what we’re doing and they’re like, “Wow, I’ve actually always wanted to solve this type of problem, but I’ve actually never known how to do it. If this works the way you guys are trying to build this, I mean, this could be the new gold standard.” That’s a huge one I’m excited for because, thematically, we want to be able to back up what we say with action, and being able to leverage the financial support that Madrona and our other investors have been able to offer us allows us to make this a reality.

Chris: I know that everyone at Madrona has been just thrilled to be a part of the journey and to help you guys get to where you are and for the rest of the journey to come. I’m just also honored that we were the ones to be able to talk about this together in person for the first time. I really appreciate you having this conversation.

Jamie: Thanks a lot, Chris.

Beyond the Model: Taking AI From Prototype to Production with Unstructured’s Brian Raymond

 

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

In this episode of Founded & Funded, Madrona Managing Director Karan Mehandru hosts Brian Raymond, founder and CEO of 2024 IA40 winner Unstructured. Unstructured turns raw, messy, unstructured data into something AI tools can actually process — a typically expensive and time-consuming process for companies hoping to save money with the tools themselves.

Founded in 2022, Unstructured hit the ground running at the height of the AI craze with Brian at the helm as a first-time founder. Brian’s background set him up to be uniquely qualified to tackle the data issues so many struggle with. He started off his career as an intelligence officer at the CIA before joining the National Security Council as a director for Iraq and then spent three years helping build advanced AI solutions for the US government and Fortune 100s at Primer.

Brian and Karan explore some of the biggest trends in AI, including moving from prototype to production, the rise and fall of vertical applications, the shift toward powerful horizontal solutions, whether the ROI will be there on all this capital investment into AI, and the trough of disillusionment and why expectations in AI often don’t match reality. They also discuss how an unlikely source, the public sector, is driving AI innovation today. It’s a conversation you won’t want to miss.

This transcript was automatically generated and edited for clarity.

Brian: Happy to be here. Thanks for having me.

Karan: Well, let’s dive right in. We’ve seen a ton of conversations and articles talking about LLMs. We saw the launch of o1, codenamed Strawberry, and obviously, OpenAI just raised this massive round, but from where you sit, I don’t hear a lot of talk about the preprocessing and, as you call it, the first mile of AI. So help us understand what’s actually going on in the trenches of AI from where you sit.

Brian: I think there’s a tremendous amount of energy going into trying to move generative AI workflows and applications from prototype to production. It feels like we’ve been talking about the same thing for the last 18 to 24 months, about when we’re going to reach production on these things, and it’s starting to happen.

However, it’s still very hard. And in the instances in which these workflows or applications are making it from prototype to production, there’s a lot of pattern matching we can do now to know what models are good at and what’s still really hard. On our end, we are focused every single day on the data that we make available to these foundation models and on trying to help our users be successful in marshaling them for their business use cases.

Karan: Every company that we’re involved in and every board that we’re on, if it has to do with AI, there’s at least a conversation about how we move things from prototype to production. So that’s definitely happening. As you just mentioned, what’s actually underneath this migration? A lot of these vertical tools came out as vertical applications targeting interesting things, interesting workflows, but constrained to a certain domain, a certain verticalized application. And now we’re seeing a ton of horizontal applications coming out as well, doing broad processing and broad RAG applications. So maybe walk us through: what are you seeing as the enablers underneath these applications that are allowing some of the more interesting ones to reach escape velocity?

Brian: Let’s just take a step back first. With GPT-1, GPT-2, BERT, and BART, these smaller models, you had token windows that were only maybe two dozen tokens, right? Then we had a huge leap forward to around a thousand-token windows, then 4,000 to 5,000-token windows, and even larger. And as parameter sizes have increased, the amount of data and knowledge encoded into these models has made them ever more powerful.

The leap from GPT-3 and GPT-Neo to GPT-3.5, and then the big jump to GPT-4o, was huge. But these same kinds of problems have persisted around latency, cost, and performance, which is why a lot of the conversation early on was around hallucinations. Now it’s more around precision at scale.

And so almost everyone wants the same thing: an omniscient foundation-model-driven capability that sits on all their data, knows everything about their organization, is never out of date, and is cheap to operate. Getting there has been quite an odyssey.

As an industry, I think we’re making progress on that. But one of the big things that happened in the winter of 2022 and then 2023 was the temporary displacement of knowledge graphs by RAG, and RAG has been the dominant architectural paradigm since. Interestingly enough, knowledge graphs have crept back in, but under the auspices of RAG. That means you have ingestion and preprocessing as a necessary capability. You have external memory for these models in the form of vector and graph databases. You have orchestration and observability frameworks — Arize, LangGraph, LangSmith, etc. And then you’ve got the model providers. It’s all sitting on compute, right? That’s how we think about the world: those four components.
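To make those four components concrete, here is a minimal sketch of a RAG-style pipeline: ingestion and chunking, an in-memory stand-in for a vector store, and retrieval that assembles a prompt for a model provider. The bag-of-words “embedding” and every name here are illustrative stand-ins, not Unstructured’s implementation or any particular vendor’s API:

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words count vector standing in for a real
# embedding model. Everything here is an illustrative stand-in.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Ingestion/preprocessing: split raw documents into fixed-size chunks.
def chunk(doc, size=12):
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# 2. External memory: an in-memory stand-in for a vector store.
store = []

def ingest(doc):
    for c in chunk(doc):
        store.append((c, embed(c)))

# 3. Orchestration: retrieve the top-k chunks for a query and assemble
#    the prompt that would go to (4) the model provider.
def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(store, key=lambda cv: -cosine(q, cv[1]))
    return [c for c, _ in ranked[:k]]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

ingest("The quarterly report shows revenue grew twelve percent year over year")
ingest("The security policy requires rotating credentials every ninety days")
print(build_prompt("How often must credentials be rotated?"))
```

Each piece here maps onto one of the four components Brian names; in production, each is a substantial system in its own right (document parsers, a real vector database, an orchestration framework, a hosted model).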

And the customers we’re engaging with are producing hundreds of thousands or millions of new files a day. They want to take that data and instantly use it in conjunction with foundation models. We’re really at the top of the funnel, on the ingestion and preprocessing side, trying to take this heterogeneous data — heterogeneous in file types and in document layouts — and get it into a format that’s usable with large language models.

Karan: You know, the way I interpret that and the way we talk about it internally, you and I, has always been that companies like Unstructured have to exist and win for AI to reach its full potential because that is truly the first mile. It’s just the ability to take data that’s sitting behind firewalls and inside these corporate enterprises and be able to make it ready for things like RAG that you just mentioned.

Speaking about winning, there’s obviously a layer cake that is developing inside the AI stack, and you’ve got NVIDIA and chips at the bottom end, and you’ve got some middleware tools. You’ve got an application sitting on top, and within that, you’ve got LLM models. So, as you think about this layer cake of value being created in this emerging stack, where do you think we’re operating in a winner-take-all or winner-take-most dynamic? And where do you think it’s going to be pretty distributed across a whole bunch of different companies that are going to be deemed as winners as we think about the next three to 10 years?

Brian: It’s really interesting to think about how competition is unfolding. NVIDIA has done some jujitsu on the market, moving from raw performance of individual chips to orchestrating those chips into massive data centers to unlock even more compute capability, not just for training but for inference as well. They’ve changed the competitive dynamics at the hardware level. At the application level, it’s become less about the models, in my opinion, and more about the UX — it’s a horse race on who can deliver the best user experience.

In the middle, it’s a combination of the two. You have verticalized applications relying on proprietary training data or proprietary access to models to deliver outsized value to customers, race ahead, capture market share, and consolidate. And then you have others, like us — we’re a mix: a focus on UX, a focus on having access to the best models and knowing how to leverage them in the best possible way, and also having great relationships with the CSPs, where a lot of this work is being done, which is in VPCs.

Karan: There’s a lot of talk about the trough of disillusionment these days as well. There have been slides flying around from different venture firms about how much capital has gone into AI and how little revenue has come out the other side. And I remember it took over a decade for the modern data stack to emerge. So help us understand the pace of innovation in AI relative to how much capital has been invested in companies like yours and many others, including at the model layer. How do you think about ROI? How do you think about the pace of innovation? Do you believe the current level of investment in AI is justified, or do you think the VCs investing in AI have expectations that are never going to be met?

Brian: We can look back to the Internet, mobile, big data, different paradigm shifts. I think everyone’s in agreement that there’s a paradigm shift underway, the scale of which we haven’t seen before and the velocity of change we haven’t seen before. And you can ask about whether or not folks are moving too much capital too quickly, but I think there’s no going back from this.

Now, if you were to look back at headlines a year ago, there was a lot of talk about like how big our model is going to get, are we going to run out of training data, etc. What you’ve seen is actually this divergent path from the model providers. You’ve seen models not only getting bigger and more performant, but you’ve also seen them getting smaller and more performant, which is really interesting.

And what that means is that these models are going to be showing up everywhere. Apple has really been leading the way here on edge devices, but these generative models — frontier models, foundation models, whatever you want to call them — are going to touch almost every aspect of your life. And it’s really difficult to estimate where this stops, right? When does it actually slow down? If anything, the pace of development keeps accelerating. So if you look purely at productionization and revenue — if you’re just looking at 10-Ks, 10-Qs, and quarterly investor calls — I think you’re going to miss the broader trend that’s going on right now.

Karan: I’ve always said our brains don’t understand the concept of compounding the way they’re supposed to. And you combine that with the fact that we’re literally two years in from when ChatGPT came out. It’s just been two years, so it’s still early days.

And then you combine that with the fact that humans generally overestimate what they can do in one year and underestimate what they can do in 10. You put all of that together, and I can totally see where you’re coming from: it’s early days, but this is unlike anything anyone’s ever seen.

So, I am equally bullish about the potential that this wave represents. I’m curious, with all that is happening with this investment in AI, how are you, as the CEO and founder of Unstructured, navigating it? How are you thinking about capital? How are you thinking about positioning the company?

As this market matures — and sometimes these markets take a little while to mature — what’s top of mind for you right now as you build the company and watch this innovation come in?

Brian: I’ll talk about some of the things that have changed and some that haven’t. One thing that changed: when we started talking a couple of years ago, Karan, we expected to be training our own models for a long time, and we expected that Apache-licensed models fine-tuned on proprietary data were going to be hard to displace for a while. The speed with which OpenAI and Anthropic in particular, but also the Gemini models, have improved is breathtaking. I think that has put a damper on a lot of fine-tuning activity, or the need to fine-tune, and pushed a lot of folks toward investing more in architecture — how they’re leveraging the models within that architecture, or even just how they’re prompting them.

I don’t want to say that caught us by surprise, but we didn’t anticipate the speed of change there over the last two years. What hasn’t changed at all is the focus on DevEx and on software fundamentals. We probably thought we were going to be able to lean more on some of these large models that help with code generation than we have.

We thought they were going to be a lot better than they have been. And so large-ish teams that are just focused on shipping product, talking to customers, shipping product, talking to customers, and closing that loop as rapidly as possible — that hasn’t changed at all. That’s where a lot of the competition is unfolding today in our space.

Karan: Having been involved with you now for a year and a half, there are obviously a lot of things I can point to that make Unstructured special, not least you as the founder. But one thing that is special about this company relative to anything else I’ve invested in — and I’ve been doing this for almost two decades now — is the rate at which you’re being pulled into the public sector.

Historically, if you met a Series A company that was talking about going into the public sector, you would usher them the other way, because it’s like pushing a rock up a hill: it takes a long time, and you’ve got to have the right relationships and connections. But just from the conversations you’ve had with some of the constituents inside the public sector, it’s been really fascinating to watch how quickly the public sector is actually adopting this trend.

So give the listeners a sense of how to think about the scope and scale of the problem the public sector is facing as it relates to AI, and specifically how Unstructured can solve that pain.

Brian: Yeah, on the public sector side, it’s been interesting because a lot of the investments made over the last 10 years set the conditions for rapid genAI adoption. You saw AWS and Azure, in particular, spend a lot of time standing up classified cloud environments.

You’ve also seen a lot of testing since about 2017 — efforts to implement more traditional machine learning approaches, a lot of which failed, quite frankly. On the other hand, there’s rising competition from peer threats like China, and a lot of congressional funding on the defense side for the rapid adoption of AI and ML — tip-of-the-spear type stuff, but also back office. One of the largest programs at the Department of Defense is Advana, which Databricks has a big hand in, on back-office automation. With those conditions set, you had ChatGPT launch, you had Bard, you had a few of these open-source models. And you had a whole bunch of mid-career professionals who had experience, had resources, and had the mandate to go adopt as fast as they could.

But what they also had, which the private sector didn’t have during this period, was a measure of risk tolerance. In 2022, 2023, and early 2024, you saw a shift among corporate leaders in the private sector, from being nervous about a recession and holding back on new-technology spend to not being able to spend fast enough. That fear was never there on the defense side; instead, the opportunity cost of inaction was quite high. So you saw them going into this very early. They also have lots of paper-based and document-based processes that are just ripe for displacement.

Karan: Without getting any of us in trouble, can you give us a sense of how many net new documents does a typical three-letter agency create in a day or any unit of time?

Brian: From a document standpoint, you’re talking about half a million to a million, upward of 2 million per day. And those are documents, not pages. Just to put that in perspective, we’re finding as an industry that vector retrieval starts to break down at around 100 million vectors, right?

And think about how many vectors you’re creating per document — you’re chunking it. You can generate 100 million vectors pretty quickly. Just a few days of organizational production can overwhelm a RAG system, which has put a lot of pressure on companies like us and others in the industry to figure out how to make genAI work at scale for large organizations.
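As a back-of-envelope check on that claim, here is the arithmetic. The chunks-per-document figure is an assumption for illustration only; real chunk counts vary widely with document length and chunking strategy:

```python
# Roughly how fast does an organization producing ~1M documents/day reach the
# ~100M-vector scale at which, per the conversation, vector retrieval degrades?
DOCS_PER_DAY = 1_000_000        # mid-range of the half-million-to-2M figure above
CHUNKS_PER_DOC = 40             # assumption: one vector per chunk
RETRIEVAL_LIMIT = 100_000_000   # vectors, the breakdown point cited

vectors_per_day = DOCS_PER_DAY * CHUNKS_PER_DOC
days_to_limit = RETRIEVAL_LIMIT / vectors_per_day
print(f"{vectors_per_day:,} vectors/day; limit reached in {days_to_limit:.1f} days")
# → 40,000,000 vectors/day; limit reached in 2.5 days
```

Under these assumed rates, a single retrieval index saturates in a couple of days, which lines up with the pressure Brian describes.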

Karan: One of the other interesting things about working with the public sector, the government, and the defense forces is that not only can they be great customers, but they can battle-test your product at the levels you’re describing, which is amazing. That’s something you can’t get from a lot of enterprises, even at the large end of the enterprise.

Brian: They make fantastic design partners, and I think enforce a level of discipline on startups at our stage that we benefit greatly from.

Karan: What are one or two widely held views about AI and the emerging AI stack — things you’ve heard, seen, or read from the places we all visit and the people we listen to — that you think are probably not as true as most people believe? Or some personal view you have about the space that differs from the consensus?

Brian: I’m struck by two completely incongruous narratives that are both prevalent. Narrative one is that we are hurtling toward AGI, that Sam Altman is poised to be the overlord of all as the owner of AGI, and folks are absolutely terrified of that. The other is that this is all a big flop: none of it is producing any value, it’s the largest bubble anyone’s ever seen, and none of it actually works. I know that’s overly simplistic, but I hear it almost every single day.

Stepping back, a story that hasn’t really been talked about enough is that if you look at the past 18 to 24 months, a lot of the progress we’ve made as an industry is thanks to Jensen and NVIDIA, thanks to Sam, thanks to the Anthropic team, thanks to DeepMind and others. But it’s also thousands and thousands of individual engineers who have figured out how to stitch all this stuff together to make it work.

Let me give you an example. Right now, a lot of folks out there are fine-tuning embedding models and doing graph RAG — everyone’s talking about graph RAG in this nerd club that Unstructured and others are in. That’s all bottom-up effort to make this stuff work; nobody would be doing it if the models worked out of the box. And it’s happening partly through open source, but also through Discord channels and Slack groups with thousands and thousands of engineers all working together. We haven’t seen that before — in the move to big data after ImageNet, through the 20-teens, you never saw any of this. So this is new, and I think it’s unique to hubs like the Bay Area, which have created the conditions for thousands of engineers who were disconnected before to work collaboratively and figure this stuff out.

Karan: So play that out and think of the next entrepreneur, in the Bay Area or elsewhere, who wants to build something in AI — as everybody does right now. What advice would you give to that entrepreneur trying to build something of value as the stack innovates by leaps and bounds every week?

Brian: Well, this might be bad news for folks in PhD programs, but I would not start with a technology in search of a problem; I would search for a problem in need of technology. I know there are amazing companies — Databricks, for one — that emerged from technology in search of a problem, but I think that’s a very dangerous game to play these days, with the pace of innovation being what it is. You cannot go wrong if you start with unmet customer problems, orchestrate the best technology available at the time, and bring an extreme willingness to adapt, because that willingness to adapt is absolutely essential given the pace of development in the market right now.

Karan: Totally agree. And I think this is where the mentality, the passion, and the motivations of the founder relative to the market they’re going after are so much more important today than ever before. We’ve discussed this many times over drinks and dinner: the half-life of something being built today is so short in some cases, because the next version OpenAI launches could make what you’ve built redundant. Iterating, pivoting, and figuring out how to weave yourself through that is going to take a lot of work and a lot of skill.

Brian: And part of this, Karan — you made a great joke at the time. You said, “Hey, we’re the Area 51 of startups — only the government knows about us. You guys need to get it together on marketing.” So I went and hired Stefanie and Chris on our marketing team, and we created a brand. And boy, did I learn the value of brand over the last year and a half. Being able to stand out from the crowd, having a sense of what you’re trying to do, of the promise you’re making when you show up or when people engage with your product — this is not a foot race that unfolds purely through technology, through bar charts and scatter plots of your performance. It’s also about the soul of your company, the soul of your brand, and the promises you’re making to your users. That matters a lot for the success of your startup.

Karan: That’s great. Let me end with one last question: as you think about Unstructured’s future, what excites you most about AI’s future and Unstructured’s role in it?

Brian: I think we have an opportunity at Unstructured to be the critical enabling scaffolding between human-generated data and foundation models, and the potential to serve that role is incredibly motivating to everybody at the company. I also think this is a very shallow trough of disillusionment, if you want to use that term, in the arc of generative AI adoption.

The pace of adoption is moving at a rate that is going to surprise a lot of us over the next 12 or 18 months. And, we feel honored to be able to be a part of that and to be a critical enabling layer for large organizations that are trying to make this work.

Karan: I will say the last couple of years of being on your board and being your partner has been a complete privilege and honor, and frankly, a ton of fun as we see this market evolve and see how you’ve been building the company, hiring the team and solving all of these pain points for our customers.

So thank you for your leadership. Thank you for being on the podcast today as well.

Brian: Thanks, appreciate it.

Perseverance Over Speed: Creating a New Category with Impinj’s Chris Diorio

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

In this episode of Founded & Funded, Madrona Managing Director Tim Porter and Venture Partner Patrick Ennis host Chris Diorio, co-founder and CEO of Impinj, the pioneer and world leader in RFID technology. Patrick and Chris have known each other for more than 24 years, since Madrona and ARCH Venture Partners co-led Impinj’s first round of financing in the summer of 2000.

In this episode, these three reminisce about old times and share some fun stories while diving into what it takes to truly launch a new category — and the patience required by both entrepreneurs and investors when doing so. They also discuss developing new speculative technology, navigating channel partner relationships, making it through an IPO during uncertain times, and how Impinj became the least known $4.6 billion public company in Seattle. This is a must-listen for any entrepreneurs out there.

This transcript was automatically generated and edited for clarity.

Patrick: Impinj is a massive success story, but many people may not know about it. Why don’t you just tell us? What does Impinj do?

Chris: We make miniature radio chips smaller than a grain of sand. Really smaller than a grain of sand. We were joking that our radio chips are about equivalent in length to 50 bacteria. With those radio chips, enterprises can track their items, get inventory in stores, know that all the items in a medical crash cart in an emergency room are in the crash cart, and there’s a myriad other uses.

Tim: Impinj holds such a special place in Madrona history and for me personally, so this is particularly fun and gratifying. When I started at Madrona in 2006, Impinj was one of the first companies I got to work on, with our co-founder, Tom Alberg, and our colleague at the time, Suja Vaidyanathan. Patrick was working at ARCH but was in the same office as Madrona — I literally met him on my first day in 2006.

And so, now here we are, and what a huge success story and never a straight line to success, which we’ll get into, but let’s go back to the beginning and the founding and the original deal.

A $15 million round in 2000 — everything that was getting funded then was dot-coms. And here’s Chris, a new-ish star professor at UW CSE by way of Caltech, partnered with one of the industry’s fathers, Carver Mead. I’d love to hear, Chris, a little of the founding vision around disposable semiconductors and RFID. Were those things even thought about then?

Patrick, maybe you can comment on a $15 million round in 2000, which was big. How did ARCH and Madrona come together in the founding of the company in the beginning?

Impinj’s Pivot to RFID

Chris: The founding vision was none of those things. The founding vision was a wireless technology that allowed us to make very, very low-power radios.

The original idea behind the company was we were going to use that wireless technology to improve the power efficiency of base station radios for the new big thing, 3G wireless.

Then the dot-com bubble burst, the bottom dropped out of that market, and we had to find something else to do. And so, we looked around. We did a lot of analysis. We looked at ultrasound and GPS and many things. We settled on RFID because, with our wireless technology, we could make, like I said, miniature radio chips that were tiny. I mean so tiny that with the naked eye you can barely see them.

Those radio chips absorb their operating energy from radio waves, so there’s no battery. We said, “We can make these chips. We can put an identifier in them. We can allow enterprises to get information about their items and then, going forward, ultimately enable people to get information about their items.” We started going down that vision.

Then nine months later, Walmart made an announcement that they were going to track all pallets and cases using this RFID technology. We all high-fived. Woo-hoo. Success. We’re going to IPO in 18 months. We’re done. Well, as we’ll get into, we weren’t done.

About 14 years later, we IPOed with a lot of ups and downs along the way. As I look back now, I can truly say any transformative technology takes time. A long time. It doesn’t happen overnight. That was a learning experience for me.

To go back to the beginning, the plan we initially had didn’t materialize, so we focused on a new one and made it work.

Tim: As is so often the case, and thus the Internet of Things was born. Patrick, how did ARCH and Madrona come together to want to do this deal in that size round and convince Chris? Maybe comment on Carver, too, and his role in those early days.

Patrick: Well, it’s interesting you say that because we’ve looked through the files, and there’s no mention of RFID for the first two years.

Chris: That’s correct.

Product-Market-Fit Wilderness

Patrick: That’s not an exaggeration. There’s everything from medical ultrasound to cable modems and DSL and 3G. We were wandering around the product market fit wilderness for a while, and that can be frightening, and you’re told not to do that. But sometimes it’s a good thing when you have a breakthrough technology. If you pick that application too early, you may not ultimately realize all the value. Now, that’s bad advice in general to entrepreneurs, “Don’t worry about your application,” but sometimes, as long as you keep the burn rate low and you’re smart about what you’re doing, then it’s the right move.

Speaking of the round, as you point out, in the year 2000, if you adjust for inflation, a round of that size — seven and a half million from Madrona, seven and a half million from ARCH — was a fortune.

Chris: Thank you, Patrick.

Patrick: It took a village, but I think in part because we sincerely thought that might be the only round we did. Chris, you had great interest from all the top firms in Silicon Valley, partly because of what you had achieved in your relatively young career, but also due to Carver Mead, who is as consequential a figure as there is in the history of technology in the last 100 years, and that’s not too much of an exaggeration.

How did you view that working with Madrona and ARCH and our founder, Tom Alberg, and working with us versus going with some of the Sand Hill Road firms?

Chris: There’s a lot in those questions. First, on the $15 million: at the time we founded the company, I was a three-year faculty member at the UW, and $15 million was kind of off-scale for me. It’s truly amazing that we were able to raise that amount of money. We thought it would last forever, but of course that amount of money never lasts forever, because you’ve got a ton of work to do, hiring people and everything else along the way.

We ended up doing multiple subsequent rounds and, I believe, raised something in the range of about $125 million over the course of the company before we IPOed. But again, we were truly developing a transformative technology.

Going back to those days, we had Tom Alberg on our board. We had Patrick on our board. We had Carver Mead. It was the collective experience of you guys that settled the company and gave us time to really think and pursue opportunities. Anytime a major problem came along and I was frustrated and worried, Carver and Tom especially would say, “Relax. It’s okay. Why don’t you look at it this way?” It was that kind of wisdom — and more than wisdom, it was that experience and their ability to instill confidence in the paths we were taking, even though at that time we didn’t know which direction we were headed or how we would get there. It was that confidence that really helped build the company.

And so, you go out, hire good people, hire the best people you can, hire only people that are better than you are, and I truly mean that. With a strong team, you will in general find a way to be successful. We have a set of principles that underlie the company, and they are the most important things to the company. When I go to customer meetings, I’ll often start with our principles. They begin with, “Be respectful. Always act with integrity. Always think big. Always trust your instincts. Always be curious and listen,” and they end with a saying by Patton, which is, “Tell people where to go, not how to get there, and you’ll be amazed by the results.” Those are our principles.

Then we have our mission statement, which is separate from that. Our mission in the world is to connect everything, and we’re going to do it.

Tim: It’s so instructive, and you can have breakthrough technology and world-class technologists, but company success comes back to culture and people. That cuts across all of the great companies that we see here at Madrona.

Founders get to that culture in different ways, though. That’s an amazing set of principles, and you’ve articulated it well, and the company lives them every day.

Did those kind of happen organically? Did you and Carver specifically say, “We’re going to write this down early,”? How did you kind of create that originally and then cultivate it all these years?

Principles and Impinj Company Culture

Chris: It didn’t happen early; it happened later in the company’s history. We had a set of ways we behaved, but they weren’t codified as principles until after our IPO.

Tim: What year was that?

Chris: We IPO’d in 2016. The principles came about in, I think, 2017 or maybe 2018, when we sat down and said, “Okay, the company’s growing. We’re getting more and more people. It’s time to write these things down.” They were not new — they were a set of sayings and actions, the way we had been behaving all along — but putting them down on paper was an important step in the growth of the company.

Today, I believe my number one job as CEO is to give every employee at Impinj the opportunity to be wildly successful because, if every employee is wildly successful, the company can’t help but be wildly successful.

I believe every employee’s job, and I tell them this, is to do everything they can to make our customers wildly successful because, if our customers are wildly successful, we can’t help but be wildly successful. That is the vision of the company and how we behave and act. The principles kind of codify that, but it’s our viewpoint and how we go to market, how we act as people in the company, and it is the most important thing to our company, bar none.

Tim: Geniuses take complicated things and make them simple. That’s a great example, but it’s hard to put into practice, and you all have done a great job. Let’s go back to the story. We talked about the founding, and you mentioned Walmart making this big announcement — but then teased that maybe it didn’t play out exactly like everyone thought.

Chris: It did not.

Tim: You mentioned it was 16 years to the IPO. Many fits and starts. We sort of joked RFID, once defined as a market, was still the market that was one or two years away for 10 years, or 15 years.

How did the company persevere, continuing to execute and grow, despite a market that arguably took longer to develop than you thought when you first defined it and homed in on it?

Chris: There are two answers to that question. Part of the answer is: if you have a vision, and you truly believe in it, and you can see it, you know it’s going to be there. Don’t give up. Just do not give up. Persevere and say, “I’m going to make this thing go no matter what. I don’t know how I’m going to get there. I don’t know what the path is, but I’m going to do it, and day by day, I’m going to figure it out.” And not just me — every person in the company has got to say that: we are, together, going to figure it out.

Struggles With RFID Standards

And so, yes, in 2003, Walmart announced that they were going to tag all pallets and cases. This was great. It was fantastic. Except for a couple of small nits: we didn’t have any spectrum we could use, we didn’t have any wireless standard by which to communicate, and we didn’t have any products. Other than that, it was great.

On top of that, we didn’t have any money to buy spectrum, and spectrum is not a cheap thing. Building all the products and creating a worldwide standard for radio communications is not easy. People think of Bluetooth and Wi-Fi — well, we had to create the equivalent. We didn’t have it, and you cannot create a worldwide standard overnight.

We got to work, and we created a standard that was finally ratified in 2005, with an immense amount of work. It was the work of 100 PhD theses — incredibly hard, like doing your PhD thesis while everybody’s fighting. We got the standard approved, but we still had to build products to it, and we still didn’t have spectrum. I personally spent a ton of time on that standard development effort; I was a project editor for it.

Then I went and met with the FCC. I joined TG34 in Europe, the standards body associated with spectrum allocation. I presented in front of METI in Japan, and we did everything we could to get spectrum allocated for free. With the assistance of some large end users, we made it happen. In the meantime, the market hype went way up. But with no spectrum, no standards, and no product, the thing came crashing back down again.

By 2006, it was a mess because we had only just gotten some of the spectrum pieces in place, and everybody was walking away saying, “This RFID stuff’s not going to work.”

It wasn’t until 2010 that we were far enough along from a product perspective, and every other perspective, that a couple of large retailers, Macy’s, Decathlon, Marks & Spencer, decided to adopt RFID for inventory tracking in their stores. That was the beginning of the turnaround. We went through the typical Gartner Hype Cycle. Way up: more than $1 billion of VC money got poured into RFID when Walmart made that announcement. Then way down, crashing down, and only one company from that whole era made it out the other side. It was luck. We were lucky enough that it was us. All the other companies failed or got acquired along the way.

Patrick: Chris, how many chips has Impinj made over the course of time? And where might the average person have encountered them, even though they didn’t know they were encountering them?

Chris: We’ve sold more than 100 billion chips to date.

Patrick: Let’s say that again. You didn’t say 100 million.

Chris: I did not. I said 100 billion. We’ve sold more than 100 billion chips. The industry has been growing at a 29% unit volume CAGR since the year 2010. At this point in time, we’re probably about 0.5% penetrated into the opportunity. The total opportunity is of the order of 10 trillion chips a year. I wish I had brought a vial. I have a vial that’s about the size of the end of my pinky that’s got half a million chips in it, and it’s tiny.

Where Impinj RFID Tags Are

They are effectively consumable silicon. We put the chips into or onto items. In retail stores, for example, say for retail apparel and footwear tracking for inventory visibility, the chips are generally inside the price label. If you hold up the price label to a light and look through it, you’ll see a little black dot. That’s our chip (you’re mostly seeing the glue around it), plus a small antenna laminated into that label. When you get home, you cut off the label. You throw it out. You own the item, and then the retailer goes and buys a new item with a new chip. Every one of those chips identifies the item uniquely. Employees in stores go out with a handheld reader maybe twice a week and do a quick store inventory so the store knows what it has and what it needs to order.

Going forward, we’re doing self-checkout systems, loss prevention systems, and other kinds of systems to make the consumer buying experience much more seamless.

Patrick: On my phone, I have an app where my airline tells me where my bag is. When I do a road race, it tells me what my really bad 5K time is. That sounds like you must be involved in that somehow.

Chris: We’re involved in a lot of those things. If you fly Delta Airlines and use Track My Bags app, if you ever peel back one of the baggage tags, inside you’ll see an antenna with a little dot on it. Delta tracks your bag with what is now called RAIN RFID. RAIN for radio identification.

If you run a foot race, a 10K or something like that, there’s one of the chips with a little antenna in your bib, which you throw out at the end of the race. You don’t have to untie the tag and return it at the end like in the old days. Our chips are on everything. If you think about 100 billion, there are roughly two billion people in the entire Western world. We’ve delivered roughly 50 chips for every one of those people, and we’re just getting going. As I like to say, we’re the most pervasive technology that nobody’s ever heard of.

Tim: I love that. From a company-building standpoint, it’s so fascinating because, again, you’re breaking new ground from a technology standpoint. You had to search around to find this set of use cases and product-market fit, as we say. Even at that point, with Delta and these retailers, there was a lot for them to bring together and write applications or track their inventory, et cetera. You’re building the tech, evangelizing to end customers, and building partnerships to integrate and take these things to market. That’s a hard set of things to pull off. Of course, back to people, you had some great people around the team, across both the technology and the business side. How did you cultivate those end-customer scenarios and the end customers while also working across all these partnerships?

Partnerships

Chris: One of the strange things in our industry, which you don’t see very often, is that our early adopter companies were some of the largest enterprises on the planet. Think Delta Airlines. Think Walmart. Think Macy’s. Think Decathlon, the largest sporting goods retailer. Think the Department of Defense. Think Department of Homeland Security. I mean, it’s these enormous-size companies or institutions. Hospital chains. We learned pretty early on that we needed to work with the large enterprises because they were the ones who could retool their operations and retool their supply chain to use what we were bringing to market.

The advantage there is our customers were huge. The disadvantage there is that they had huge operations, and it takes years for them to fully deploy.

Tim: I think an important lesson there for all startups, even if they’re not targeting Walmart and Delta right out of the box, which you all did, is that you worked backward from the end customer, and then you had to develop the channel partners and integrators. You weren’t printing the hang tags with price tags on them, and neither is Walmart. Somebody else is doing that, and somebody else goes and installs other things, et cetera, but you started with that end customer and said, “What’s the scenario?” You helped them cultivate it, and then the whole solution and the necessary partners came together in that order.

Chris: We went to the end customer and said, “Tell us about the problems you’re having.” It’s about solving the enterprise problem. Tell us the problem you’re solving. We can solve this problem. Here’s what the solution will look like. The most important thing there is establishing a trusted relationship with that enterprise customer. Truly a trusted relationship, where you and they are in it together to solve their problem.

Financing Strategy

Patrick: Chris, I’d like to ask how you thought about the financing strategy, because you and your team have done a great job through the years of raising money when you needed it from value-added sources, and entrepreneurs and CEOs are always thinking, how do we view strategic investors versus financial ones? Early on in Impinj’s lifecycle, I think in what we used to call the Series B, though maybe now it’d be called the Series D.

You brought in three corporates as investors: TSMC, UPS, and Unilever. One of them, in particular, turned out to be an amazing long-term partner, and the other two were good partners also. But I’m wondering if you had any thoughts on that, especially TSMC. Although they’ve been around for 40-plus years, recently they’re in the news because of all the supply chain issues and global issues regarding different countries and who makes what semiconductor. It’s very interesting because Impinj’s co-founder, Carver Mead, is the one who provided the intellectual and technical justification for the so-called fabless semiconductor industry. Morris Chang, the famous founder and CEO of TSMC, has several times publicly credited conversations with Carver in the 1970s for the founding of TSMC. Yet here you are, a big customer of companies like TSMC.

Chris: Actually, two of those companies have had a very significant impact. Obviously, our relationship with TSMC goes back more than 20 years, and we have worked closely with them, and they with us, to develop this technology and develop these chips. We chose TSMC early on because we did a bunch of technology evaluation. Back 20 years ago, TSMC wasn’t very big, but they had by far and away the best technology. We’ve had a partnership with TSMC ever since. Their early investment in us was a financial investment. Today, their investment in us is as a partner: we’re a customer for them, and they do things to help us along as we develop new products.

The other one to look at is UPS. If you listen to UPS’s public statements by their CEO, they talk a lot about RFID, and they talk about tracking packages and the benefits it’s driving for their operations in terms of reducing misloads, reducing misshipments, and getting packages to the right place at the right time, when they say they’re going to deliver them. I’d encourage you to go listen to some of those statements because it really is a testament to a company getting in early, understanding the benefits of the technology, seeing it as an opportunity, and then driving it forward. It’s an example of one of those Fortune 100 enterprises that takes time to adopt but has had the vision for a long time.

And I know, Patrick, you had a chance to meet with Morris Chang, going back to the TSMC.

Patrick: Chris, that was one of the thrills of my professional career. I was on the board of an optical networking company that happened to have a board meeting in Taipei because UMC, which was a TSMC competitor, was an investor. Because of you and Carver Mead, he sent an email, and the great Morris Chang and I had 20 minutes one-on-one at TSMC headquarters in Hsinchu City. I was shaking the whole time because in my mind he was like Bill Gates and Jeff Bezos and Henry Ford all put together, and yet he was talking to little old me, and it was because of you and Carver. He was such a nice gentleman. So smart. He had nothing but great things to say about Impinj even then. And this was 2002, I believe.

Chris: He is in the same mold as Carver and Tom Alberg from Madrona. I have no other way to characterize them other than wonderful people because they think about helping others along their journey.

Tim: TSMC is now one of the most valuable companies in the world, and one a lot of folks hadn’t thought about until AI chips, NVIDIA, all those things. And there, we mentioned AI. Now we can move on. What a story.

I also just want to comment on Tom again. He had such an influence on Impinj and, of course, on us here at Madrona. He did so many things to help us all be successful: work with great people, empower them, take big bets, and then be patient. There is probably more, but those all come through in how Impinj ultimately became successful and in what we try to take into our investment philosophy today.

Tom Alberg’s Influence

Chris: Tom was a wonderful person, and we all miss Tom. He was wise. He was smart. He was trusting. He was helpful. He was patient and I wish he was still around. I really do. He was on our board from day one along with Patrick joining us on the board early on. Maybe at some point, you can tell your story about how he got to that board, but he was on our board from day one, and he stayed with us up until just a couple of years ago. I believe he was on two boards in the very early days. Amazon and us. I don’t remember when he stepped down. Maybe 2020. In that timeframe. His wisdom and the values that he brought to the company were immeasurable to our success.

Patrick: It was amazing because, when we first put the deal together for Impinj in 2000, we talked about this. Tom was on two boards then, Amazon and Impinj, and we were in awe of that. I think we were somewhat intimidated, but we really, really benefited from having someone of his caliber. He was such a great mentor to everyone he touched. Specifically, how we put that deal together combined a couple of things.

Even though I wasn’t working at Madrona, I was sharing office space with Madrona, and Tom was happy to mentor anyone. Tom was one of my mentors. He gave me a lot of guidance for Impinj. I remember when we were negotiating the term sheet, Chris. Tom empowered me, and then my bosses at ARCH empowered me within a range. And then, Chris, you had a unique suggestion. Instead of coming over to your office and negotiating the term sheet, what did we wind up doing? And it’s a tool that I’ve heard you’ve used many times since.

Chris: We went on a walk. I do this a lot. We had obviously gone through the terms, and there were some sticking points. It’s often really hard to settle a negotiation around a table.

Oftentimes, and I do this with our enterprise customers as well, say, “You want to go on a walk?” Or, “Do you want to go on a run?” I’ve actually done deals on a run. And there’s something about going out and walking or running. Be careful if somebody’s a lot younger than you when you do the run one.

I remember it, Patrick. I said, “Patrick, what’s most important to you?” And he told me. And I said, “Well, here’s what’s most important to us.” How did we get to a resolution? We just walked and worked through it. It was probably only 30 minutes, and we came back, and we were done. I’ve done that innumerable times with many customers out in the field.

The Fail-Fast Mantra

I’d like to go back to the investment topic for just a second. I hear this saying, especially coming out of Silicon Valley: “Fail fast.” If Impinj had been held to fail fast, we would’ve failed, because there was no fast. As I said before, any transformative technology takes time. Everybody kind of takes mobile phones for granted today. It’s taken 40 years. I used my first mobile phone in 1985. It had a rotary dial and came in a briefcase, with the handset on a coiled cord. The “fail fast” mantra works for disruptive approaches to existing problems where the foundation’s already built. Fail fast does not work when you’re inventing a new category.

The benefit of having Madrona and ARCH on our board was that they didn’t have the “fail fast” mantra. You guys had the “nurture the company to success” mantra, and Impinj was incredibly lucky. Incredibly lucky and thankful to have Madrona and ARCH as our first investors.

Tim: That’s kind. One of the hardest parts about this business is riding the line between having patience and putting good money after bad or falling in love with a bad idea. Tom was really good at this. I want to tie one other thing in here. Tom was clearly good at identifying talent and working with great people: you and Carver, Jeff Bezos early on. You’ve also done a great job at that. You came at it with a technical background. You were a professor at UW. You had some great business partners over the years: Bill Colleran, Evan Fein. You were not always CEO, but you’ve now been CEO again for a long time.

Any tips for newer founders? What’s your approach to hiring great people who aren’t technical? How did you hire heads of sales, heads of marketing, just in general? Do you have an approach? Is it something you had to learn over time? I think one of the hardest things is bringing on board great people in areas where maybe you’re not as comfortable or don’t have as much experience as a founder.

Chris: Tim, hiring good people is probably the hardest thing in a company. You always make mistakes along the way. Then you have to be able to correct your mistakes relatively quickly, because what you’ll find in a company is that bad apples, and you will hire some, can cause immense damage. They don’t only cause financial damage. They cause cultural damage.

The question is, how do you find the right people? I don’t know that there’s a way. You can write all this stuff down. You know all kinds of things. I haven’t found an institutionalized way to go at it. I meet with people. I spend time with them. I try to probe who they are. I ask them questions about some of the things that were most difficult for them and how they would approach problems, but they’re all basically ways of getting the person to open up and tell you who they are. I don’t like to make a hiring decision until I understand who a person is.

The Best Technical Question For Candidate Interviews

On the technical side, it’s easier for me. I’m going to share my cheating technical interview question. I haven’t figured out a way to do this on the non-technical side, but on the technical side, it’s really easy. I meet with somebody, and I say, “I’m going to give you 20 minutes to ask me any question you want. And then I’m just going to ask you one question. And I’m going to tell you the question right now, so you have 20 minutes to think about it.”

The question is this: I want you to spend the rest of the interview teaching me something. It could be any topic. Anything. I’ve had people go into soil biology. I’ve had people go into neuroscience. I’ve had people go into the details of spectrum analyzer design. Whatever it is, I want you to teach me. We did something in crypto once, and we were deriving equations on the board. You can get a lot from a person by asking them to teach you something. You’re not asking them questions to probe what they don’t know. You’re asking them questions to probe what they do know. That’s the most important thing, because if they truly do know something, and they worked it all out, and they understand it well enough to teach it, you’ve learned something about the person.

Tim: So great.

Chris: Anyway, don’t steal my question all the time.

Tim: Why just technical? That’s a good question for anyone. If you understand something deeply, you should be able to teach it to me.

Chris: It’s harder to do on the sales side or something like that. I haven’t yet figured out an equivalent question if I’m hiring a sales lead or something, but it is a question I use fairly often.

Patrick: It’s such a great tool because, as you know, many tech companies, Bell Labs in the old days, Microsoft, would ask these trick questions, and there’s no correlation between how you do on those and how you perform on the job. I was in an interview on Wall Street once, and they asked me, “If a chicken and a half lays an egg and a half in a day and a half, how many eggs does one chicken lay in one day?” And that’s absolutely ridiculous. If you know the answer, you know the answer, right?

Chris: I would’ve said, “I don’t really like eggs. Next question.”
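For readers who want to check, the brain-teaser Patrick mentions does have a definite answer; a quick sketch of the arithmetic, using only the numbers from the riddle itself:

```python
# The riddle: a chicken and a half lays an egg and a half in a day and a half.
# How many eggs does one chicken lay in one day?
eggs, chickens, days = 1.5, 1.5, 1.5

# Laying rate per chicken per day: total eggs / (chickens * days).
rate = eggs / (chickens * days)
print(rate)  # 0.666..., i.e., two-thirds of an egg per chicken per day
```

One chicken lays two-thirds of an egg a day, which is exactly the kind of answer that, as Patrick says, tells you little about a candidate.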

Tim: We got to talk about the IPO. Patrick had alluded to financing strategy over the years. You went public in 2016, as you alluded to. For those who don’t remember, 2016 was not exactly a go-go year for tech. There was a bit of a downturn in that timeframe for a variety of different reasons. Impinj had already been around 16 years.

Chris: We had outrun the funds that were initially funding us.

Tim: Tell us that story. And how did you manage to get public? And how was that then a springboard for what’s happened subsequently?

Chris: How did we manage to get public? Through a lot of perseverance, again. We had tried previously and hadn’t had success, just due to bad timing or events that happened. In a prior attempt, the bottom dropped out of the market just as we were about to go out.

A window opened in that 2016 timeframe. We decided to go. We put everything into it, and we focused on our story, this idea that we’re going to connect everything in our everyday world from point of manufacturing through the supply chain to point of sale, to consumer use, to end of life, beginning with the enterprise side. In so doing, we’re going to transform business operations. We’re going to transform people’s lives. That was the basis of the story. The industry has been growing at this pace. The opportunity’s here. All the pieces are in place.

We went out and told the story passionately. The IPO was hard. We did not have the most glamorous IPO. You can hear about all these glamorous IPOs. They’re flying here and there on private jets, and it’s easy and fancy. No, we didn’t get to do that. I think in the first week of the IPO, I ate at McDonald’s one night, Au Bon Pain one night, Subway one night, and one night on the train where all they had for food on the train was a bag of M&Ms, and I had a little half bottle of screw-top white wine. And that was dinner. But we did pull it off.

Tim: You ended up raising about $100 million, and the market cap after you went public was about $400 million.

Chris: Yeah, it was. I don’t remember the numbers. It’s been too long ago.

Impinj’s IPO Journey

Tim: That was the order of it. I remember working on that together with you at the time. People talk about IPO windows being open or not open, as if an IPO is this place where billions of dollars rain down. But that was a financing, right?

Chris: It was a financing event, but it was a financing event through the public markets. And what the IPO really did for us is it opened the window to the financial markets so that, as we executed, we had access to more capital, and it basically allowed us to accelerate our trajectory and put more distance between us and our competitors. That was the benefit of the IPO.

Tim: It’s such a great story about IPOs being financings. I mean, in the eight years since, it’s been almost a 10x increase in valuation and market cap. It’s pretty hard to get 10x in any business, but doing it post-IPO in less than 10 years is incredible.

And another Tom story. I mean, he invested individually into the IPO, and that was a bit of a catalyst too and another kind of shrewd opportunity that he saw. 24 years in, still day one. You mentioned earlier 0.5% penetrated. Talk a little bit about the future of Impinj and the opportunity, the scenarios for consumers, for businesses that’s still unfolding around this cool technology that you’ve pioneered and created a category around.

Chris: When you use the term RFID, it’s about as broad as saying radio because there’s all kinds. NFC is RFID. Animal tagging. There’s all kinds of RFID. Key cards are RFID.

A couple of years ago, we wanted to rebrand the industry with a name that was more consumer-facing, because the future is really about consumers getting benefit from connected items. We decided to rebrand the industry as RAIN, for radio identification. That name has been a little slow to take hold, but I believe it will.

There truly is a future coming where not just enterprises but people get the benefit of connected items: where you can locate the items in your house, thanks to readers in the house, and you know where things are; where you can authenticate your medicine or authenticate items, whether ones you’ve already bought or before you even purchase them.

A phone could easily be the reader in the future, and I don’t see any reason why it won’t be. People will be able to get information about their items using their phones, and you can almost think of it as an Apple AirTag-type model writ very broadly. That really is how I envision the future.

Every item that has value or interest to people, we can connect, and we’re going to do it. We’re going to extend the reach of the internet by a factor of 1,000, from powered electronic devices to things. That is the internet of things. We’re going to connect every thing. It’s going to be fun along the way. I’m just getting going. It’s a good thing I’m still young. Not.

Patrick: To that end, Chris, 24 years and counting. Probably 25 because you were putting Impinj together this summer 25 years ago. You have the same physical, intellectual, and emotional energy that you had then. How do you do it? Any tips for CEOs out there?

Chris: Don’t get old.

Tim: Interesting advice from a physicist.

Work-Life Balance

Chris: Just keep going. I work hard, but I also play hard. Enjoy the time with your family. Enjoy the time when you’re not working. Enjoy the time when you are working. I guess that is the advice. If you really like what you’re doing, then it’s not work. I love my time at home. I’ve got a granddaughter now. She’s the joy of my life. I take half a day off work every week, not counting the time I spend with her on the weekends, and I babysit for that half day. That’s my time, blocked out on my calendar, to go babysit. We play blocks. We read Dr. Seuss and all kinds of stuff. I get to do that, but I also love what I’m doing at work. So I guess that’s the story. Do what you love. Love what you do.

Tim: Thank you so much. You’re so generous with your time. You’ve built an amazing company and category.

Chris: No, no. We. We built an amazing company. You guys were on the team. We built a company.

Tim: That’s kind. I know Patrick and I have both learned a lot, and you’ve spent a lot of time mentoring and sharing advice, and this is another great example of that. Thank you so much.

Chris: Well, thank you.

Tim: Here’s to the next 25 years.

Chris: Well, thank you. Thank you.

Patrick: Thanks, Chris.

Chris: Thank you both very much. Thanks.

Startup to Scale: A Mini Masterclass in Efficient Growth and GTM

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

In the latest episode of Founded & Funded, Madrona Managing Director Tim Porter hosts Pradeep Rathinam, a seasoned software executive with over three decades of leadership experience. Paddy was the founder and CEO of Madrona portfolio company AnswerIQ, which Madrona invested in back in 2017 and which was one of the early companies applying machine learning to SaaS that Madrona backed. Paddy sold AnswerIQ in 2020 to Freshworks, where he started out as chief customer officer, helped take the company public, became chief revenue officer, and drove a series of incredible accomplishments for Freshworks.

Today, Tim and Paddy dive into that world of SaaS go-to-market. Paddy shares his experiences on how to not just grow a company, as every company must, but how to grow efficiently, including reducing churn, expanding accounts, and landing new logos. This is something all go-to-market leaders and startup founders have a lot of questions about, so it is a must-listen for founders.

This transcript was automatically generated and edited for clarity.

Tim: So before we jump into the nitty gritty of efficient growth, tell us a little bit about the AnswerIQ story. You founded the company, built it up, and sold it in a relatively short period of time. Give our listeners some insights into what was AnswerIQ and why you decided to sell it.

Paddy: So perhaps the best thing I can do is start by introducing myself. So like you said, I’m old. I have three decades of experience.

Tim: I didn’t mean to highlight that. But I’m going on 30 years here as well. And it’s like, wow, there’s a lot of things that we’ve learned, isn’t there?

Paddy: The last decade is probably the most interesting for listeners here. I founded AnswerIQ, which was funded by Madrona. Thank you. It was a classic B2B SaaS startup focused on using historical records to make predictions in customer service. We were using very early versions of GPT 2.5. And as I reached a pivotal point, which was: is this business going to grow to $100M or more? I reached the conclusion that being an AI layer sitting on top of customer service systems like Zendesk and Salesforce wasn’t necessarily going to get us to that $100M mark. So, in early 2020, prior to COVID, we sold our business to Freshworks.

Tim: Let me interject there quickly as we get into the Freshworks part of the story. I should mention that the AnswerIQ investment was led by my partner, Soma, and you had known Soma from Microsoft through the journey as well.

Paddy: That’s right.

Tim: In making that decision, was there strong alignment with the board? How did you work through that? Sometimes there can be a disagreement. Classically, the investor’s like, “We’re going to go for it,” or it could be the reverse. Was there a good alignment in that thought process that you alluded to?

Paddy: There was very good alignment. Soma was an incredible board member and is a dear friend. One of the things I learned in that experience was the simplicity of the questions he asked: “Hey, where do you see this business in five years? Do you see this business growing to a hundred million or a billion dollars? Do you see the path?” Once those answers became clear, there was alignment with the board to get everyone together and say, “Hey, let’s go out and find a transaction.” Freshworks was starting its IPO journey. I wanted to experience an IPO, and I decided to jump on that bandwagon.

Tim: Freshworks, tell us that story.

Paddy: I joined Freshworks in 2020 as chief customer officer. In those days, churn was the biggest challenge the company was facing, and it was one of the key deterrents to its IPO, along with net dollar retention. That was the first challenge I took on. I built out the customer teams, and by the end of three years, we had reduced churn by over 40%. Post-IPO, in 2022, our growth had slowed down significantly. We had invested significantly in our field sales force, increasing our investment by over 50%, and yet we saw a decline in new business in the field. I took on the challenge of the CRO role to see how I could turn that tide. That was also an incredible year of sales transformation.
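Net dollar retention, which Paddy cites alongside churn as an IPO gating metric, has a standard definition: a cohort's starting recurring revenue, plus expansion, minus contraction and churned revenue, divided by the starting revenue. A minimal sketch with hypothetical numbers (these are illustrative figures, not Freshworks data):

```python
# Net dollar retention (NDR), sketched with made-up cohort numbers.
# NDR = (starting ARR + expansion - contraction - churned ARR) / starting ARR

def net_dollar_retention(starting_arr, expansion, contraction, churned):
    return (starting_arr + expansion - contraction - churned) / starting_arr

# Hypothetical cohort: $10M starting ARR, $1.5M expansion,
# $0.3M contraction, $1.2M churned over the year.
before = net_dollar_retention(10_000_000, 1_500_000, 300_000, 1_200_000)

# Cutting churn by 40% (to $0.72M) lifts the same cohort's NDR.
after = net_dollar_retention(10_000_000, 1_500_000, 300_000, 720_000)

print(f"{before:.1%} -> {after:.1%}")  # 100.0% -> 104.8%
```

Because churned dollars subtract one-for-one from the numerator, churn reduction is the most direct lever on this retention metric, which is consistent with Paddy taking it on as his first challenge.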

In the end, it comes down to, like you said, why founders stay or leave. For me, it was about what a mentor once told me: being professionally rich is about the wealth of experiences you accumulate. It was about taking advantage of the opportunity to garner those experiences and learn from them.

Tim: When companies I work with exit, I always encourage the founders: stay on as long as you are learning a ton, and if you can have a big impact, it’s a great opportunity. Yes, we’re in the startup business and it’s exciting to say, let’s go do it again, let’s run it back. But the notion of not just growing but growing efficiently has really been driven home. For those of us who’ve been working multiple decades, yes, businesses must be profitable and grow. Both are equally important.

There was a period leading up to 2022 where it was sort of grow-at-all-costs, and you led through the shift to, “No, we also need to be efficient.” When you’re a public company reporting every quarter, it’s really glaring, and people are pushing on these things. But it’s incredibly important as a startup founder too. The obvious question is, how long is my cash going to last? So you’re driving runway. Efficiency through the full life cycle is critical in this market, and everyone realizes that now, but it can be hard, especially as you’re also trying to significantly scale. Talk about your framework for thinking about efficient growth. What goes into that? What does it mean to you?

Paddy: Freshworks is a company with over $600 million in revenue, creating SaaS products for IT service management, customer service, and CRM. With over 60,000 customers, we had three sales motions: PLG inbound, field outbound, and partner-led. We were selling to three distinct buyers: customer service leaders, IT leaders, and sales and marketing leaders. It’s a complex business in a lot of ways, right? When we started looking at this business, it wasn’t easy to break it down and say, “Hey, how do you really solve for growth?” A year after the IPO, like I said, we reached a crisis point where we had made a 50% increase in our sales investment, and our new business in the field declined. All of these investments were essentially targeted.

Tim: That’s not efficient.

Paddy: Basically, all of those investments were targeted toward the field. We wanted to grow our ARR and ACV. I took on the challenge of the CRO role, and one of the things I had to do was build a framework for how to solve this problem. The first thing we did was a deep analysis. I truly believe that growth for most companies lies within. If you've grown to a certain point, whether that's $10, $50, or $100 million, there's a reason customers have bought you. You need to go down, really analyze, and be relentless in the analysis, whether it's by product, by geo, by SKU, or by sales motion, looking at both the economics, whether it's CAC or LTV and those types of simple metrics, and the mechanics: what's going on with win rates? What's going on with MQL-to-close and the go-to-market efficiencies and engines?
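The "simple metrics" Paddy mentions can be sketched in a few lines. This is a toy illustration with invented numbers, not Freshworks data, and the helper names are hypothetical stand-ins for the standard definitions:

```python
# Toy sketch of the unit-economics metrics mentioned above (CAC, LTV).
# All figures are invented for illustration; nothing here is Freshworks data.

def cac(sales_marketing_spend: float, new_customers: int) -> float:
    """Customer acquisition cost: total spend divided by customers won."""
    return sales_marketing_spend / new_customers

def ltv(monthly_arpa: float, gross_margin: float, monthly_churn: float) -> float:
    """Lifetime value: margin-adjusted monthly revenue over the expected
    lifetime (1 / churn rate months, the usual simplification)."""
    return (monthly_arpa * gross_margin) / monthly_churn

spend, wins = 1_200_000, 60                     # hypothetical quarterly figures
print(f"CAC: ${cac(spend, wins):,.0f}")         # CAC: $20,000
print(f"LTV: ${ltv(2_500, 0.80, 0.02):,.0f}")   # LTV: $100,000
print(f"LTV/CAC: {ltv(2_500, 0.80, 0.02) / cac(spend, wins):.1f}x")  # 5.0x
```

The point of the exercise is the ratio at the end: a field motion selling tiny deals pushes CAC up against a fixed LTV, which is exactly the problem Paddy describes next.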

Really looking holistically at the problem and saying, "How do we break this problem down to see what's going on in the business?" When growth stalls in a company, the common view is that sales is broken: the go-to-market engine isn't working, or demand generation isn't working. If you ask the sales folks, they'll say, "Hey, the dog is not hunting." The product isn't really fit to what the market needs; it's behind on competitive features, especially when you compare its AI with competitors'.

Tim: Wait, sales blaming product? Yes, that happens. Product blaming sales? Yes, they get into the blame game.

Paddy: It's pretty common, but when growth stalls, it's usually a structural problem that goes beyond go-to-market. Yes, there are go-to-market challenges, but beyond that, you have to look at the product mix: what's going on with product-market fit, pricing and packaging, and competitive dynamics. It's a lot more complex than looking at one dimension and saying, "Hey, we've got to go do a go-to-market transformation and we'll find growth again."

Tim: You inherited a complex go-to-market: three different segments, three different buyers, and three different product lines. That's a detailed matrix. What was the range of deal sizes, if that's something you can share, so the audience can orient? Because there's also the big-enterprise versus smaller-deal-size overlay on this. The first thing I'm taking away is to really know the data and follow the data. What did you do from there as you dug in?

Paddy: When we dug in, we saw two or three common patterns. The reason our field new business had declined year-on-year was that the field was doing a lot of sub-$10K ACV deals. Imagine sitting here in the United States, or in Europe, or in Australia, and building a sales force that goes out and sells sub-$10K deals. That was not cost-efficient, and it was primarily because our segmentation was broken. Our segmentation was such that any company with more than 250 employees belonged to what would be considered field. Looking at that data, the first thing was to break the mold and say, "Hey, we've got to stop selling to the smaller segment of customers from the field." Number two, we set a minimum ACV threshold, which we defined as $30,000, and that became the core ICP.

More importantly, it was all products. Imagine a sales force carrying all products and selling them to three different buyers. It's super complex. One of the things we did was simplify the motion, saying, "Hey, we're going to focus on one product that is going to be the winner." We focused on IT. That was a product growing at 50% year-on-year. We made the sales force learn that one product and moved all our other products to an overlay, specialist sales force. What this brought was focus: focus on IT, and on $30K-plus ACV. We had competitors like ServiceNow, Ivanti, and Cherwell, and we were winning in that marketplace. All we had to do was get the sales force focused on that.

Tim: The most important part of strategy is often deciding what not to do. I can see that focus being critical, simplifying things for your team. I often hear conventional wisdom on the minimum ACV needed to profitably sell direct with an outbound motion. Even at $30K, you have to be quite efficient, right? Sometimes I've heard $50K; it depends a bit. While $10K seems very, very hard, even achieving efficiency at $30K probably took some real discipline and streamlining within the org.

Paddy: Yes, it did, because when you're coming from $10K deals, $50K and $100K deals feel out of reach for a lot of sales folks. Part of this was also transforming our sales force, the sales leadership, and the first-level leaders. First-level leaders in sales organizations are the most critical path to driving this. Once we defined the ACV right, we started transforming the sales force, because no GTM transformation happens without people. Along the way, we brought in more experienced salespeople, and we started selling a lot of $50K and $100K deals; $30K was just the threshold. Another aspect was that our business was hybrid. A lot of B2B SaaS companies still operate with hybrid sellers, hunting and farming.

The reality is hunting is significantly harder than farming, and a lot of sellers had really settled into farming. So we separated hunting from farming, and that brought a lot more precision and focus to the craft of hunting. We changed the incentive structures. Those were some of the things we did from a focus standpoint: "Hey, focus on this ACV, narrow down." The next thing we did was add a lot more fuel to the fire. When you have three products, your marketing budget and capital allocation tend to be spread evenly. We took our IT product from 40 cents on the dollar to 70 cents on the dollar of marketing spend, saying, "Hey, that's our core growth engine."

Tim: Double down on what’s working.

Paddy: Double down on what's working. We saw there were uphill challenges with our customer service product. Customer service as a landscape was going through a lot of change: Zendesk went private, and growth rates pretty much stalled for all our competitors. One thing I've learned in my life is, don't push uphill; if something's got momentum, go double down on it. The normal notion when a product is struggling is to try to revive it: go fix the product, then go back and sell and market it. But a product fix takes 18 months, and product-market fit had changed. AI had changed the landscape of customer service: expectations of productivity were high, and the product itself had become more conversationally oriented. There's a whole bunch of things that happened in the customer service space, and I think it was a really good decision to say, "Hey, let's make that an add-on product that we sell through a specialist, overlay sales force, and focus on IT."

Tim: Can we double-click for a second on the change management aspect here? You looked at the data and made these decisions, but then you alluded to the fact that implementing all of that can be challenging. It's hard whether you have eight people in your go-to-market team or 800; certainly, more can be more challenging. Any lessons learned from the change management, both with your peers around the exec table about "hey, we're not going to do these things" and letting go, and then especially rolling out these changes to the field? You mentioned having to change out some folks. Was it a lot of rehiring? Was it training? Communication? What's your advice on doing that change management effectively? I know a lot of companies are going through this right now.

Paddy: Change management is hard. A lot of people think go-to-market transformation is about changing out people and leaders. The reality is you want to bring them along and give them an opportunity to scale up. From a change management perspective, we did all of these changes in 45 days, so we weren't doing a great job of it. What we did was give people a break for the first quarter and make sure they picked between being a hunter or a farmer. These kinds of accommodations were very, very important, because people suddenly felt like a goldmine of existing customers they had sold to went away.

And so change management has to be gradual. It has to be thought through. There is no simple change management principle you can state; the only thing I can say is to over-communicate, bring people along, and set expectations that it's going to take time for these changes to land. It took us one, two, three quarters; probably in the fourth quarter, at the end of 2023, we had our largest quarter in six quarters, and North America did its biggest business. It takes time for all of these things to land, if you will.

Tim: It can be hard to get people selling $10K deals to sell $30K, $50K, $100K deals, but it wasn't just a wholesale "we have to get rid of our existing sales force and add a new one." You brought people along, retrained, communicated, and, sure, got the comp piece aligned, incentives aligned with the direction as well. That's good to hear. Patience and intentional communication sound like the key. That's often the right answer, isn't it? Even in our family lives.

Paddy: It's easier said than done. People will always say, "Hey, this didn't work well. The mindset isn't right." But overall, I'd say as we went past the six-month mark, there were no questions asked. Everybody was like, "Hey, this is the right strategy. Let's just go double down and win here."

Tim: That's the other thing. Once you start winning, that's what aligns everyone. Everyone wants to win. You alluded to something in that description around hunting and farming. You also said it's much easier to grow from your existing customers than to find new ones, that hunting is much more expensive than farming. That being said, new business is the lifeblood. It's oxygen. You have to add new logos to then do the expansion piece. We can come back to the expansion side, which is super critical. Especially in a down market, adding new logos is that much harder, but you were able to do both. Maybe talk about the new business piece. It's the hardest and most expensive, yet you found ways to do it efficiently. How did you do that, Paddy?

Paddy: New business is probably the hardest part of any enterprise field motion. It's not easily repeatable. It's long, expensive, and time-consuming. People say there's a repeatable sales process, but the magic of new business is hard. One of the things I talked about is focus. We focused on North America. We focused on winning in our ICP, which was $30K-plus ACV, competing with ServiceNow, Cherwell, Ivanti, and others. Once we put all of these dimensions in place, we had to make sure our recruiting changed so we got the right caliber of people. We changed our enablement, which used to take six months, down to one month. There's a set of things you need to do to get your new business motion right. We also changed the incentive structure to favor multi-year contracts and larger deals.

Tim: How did you pay on that? Did you pay for the full multiyear upfront or what was the from-to?

Paddy: There were kickers for multiyear deals, and one of the things we did was set targets as quarterly quotas. You could make a lot of money, but the idea that you survive only on hunting was a new notion at Freshworks. We really made all of these changes. Then, as soon as we put 70 cents on the dollar of our marketing toward IT, we started seeing some repeatability. Now, storytelling is very important, and the alignment between sales and marketing is critical. You get that right, and then you start seeing momentum.

Tim: Did you own marketing as well?

Paddy: No, I did not own marketing.

Tim: Okay. So another tricky alignment piece was working with your peers who run marketing to drive this alignment.

Paddy: This is an area where I'd say there shouldn't be any daylight between sales and marketing, because goal alignment is important. You want to make sure everybody is talking the same language. Businesses are measured on quarterly revenue, and salespeople are running on quarterly revenue, but you'll often find marketing teams misaligned because they're on an annual incentive plan, an annual corporate plan. The reality is an MQL (Marketing Qualified Lead) is no good if it isn't converting into business. Getting the marketing leader's incentives aligned, in my view, ought to be much closer to the sales incentive alignment, on a quarterly basis.

Tim: That is a common one, where the MQL is elusive. You, of course, need top of funnel to get to bottom of funnel, but you can have lots of MQLs that aren't actually qualified. Did you pick sales opportunities? How did you align? What was the marketing-to-sales funnel metric that mattered at Freshworks?

Paddy: MQL-to-SAL (Sales Accepted Lead) and then SAL-to-close, and MQL-to-close was also a number to really understand. In a SaaS business, these are very, very hard metrics to pin down. I do think that once you align the leadership teams on the same goals, you'll start seeing a lot more teamwork and collaboration in these areas.
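As a rough illustration of those funnel metrics, here is a minimal sketch; the stage counts below are invented, not real Freshworks numbers:

```python
# A minimal funnel snapshot illustrating MQL-to-SAL, SAL-to-close, and
# MQL-to-close conversion rates. The stage counts are invented for illustration.
funnel = {"MQL": 1000, "SAL": 250, "Closed-won": 50}

stages = list(funnel)
for upper, lower in zip(stages, stages[1:]):
    # Conversion rate between each adjacent pair of stages
    print(f"{upper} -> {lower}: {funnel[lower] / funnel[upper]:.0%}")
# MQL -> SAL: 25%
# SAL -> Closed-won: 20%

# The end-to-end number Paddy says also has to be understood
print(f"MQL -> close: {funnel['Closed-won'] / funnel['MQL']:.0%}")
# MQL -> close: 5%
```

The stage-to-stage rates show where leads leak; the end-to-end MQL-to-close rate is the one that connects marketing spend to actual revenue.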

Tim: What about the lead-gen motion? I'm sure marketing was doing inbound programs, out doing marketing things, generating leads that get handed off to sales. What about the outbound motion? Was there a BDR function, and were those people under marketing or under you? That's another place where I feel it's six of one, half a dozen of the other; it can depend on the leaders in place, but it's a place where you can often get the alignment wrong.

Paddy: And I think we got it wrong. The BDR team belonged to my organization, the CRO organization. The reality is that if you align your BDR and marketing teams much more closely, everything improves, starting with messaging, because when a customer picks up the phone or responds to an email, the language, the words, the nuances, that's where you learn your messaging, outside of talking to your existing customers about why they bought you and what the trigger points were. BDR (Business Development Representative) teams actually have tremendous synergy with marketing, and putting them together, making sure that everything from MQL to SAL is well packaged together, is probably the better alignment from a sales and marketing perspective.

Tim: What was the dominant lead gen source? Was it the BDR function or was it inbound or was it pretty evenly split?

Paddy: It was the BDR function.

Tim: BDR function. So getting that tuned was critical.

Paddy: Getting that tuned was very critical. The BDR function is hard, especially post-COVID. It's very hard to get people to pick up phones, and emails are hard to get read. So we used new, innovative methods, like trying to understand how to get better on LinkedIn. I would still argue that getting these two functions tightly aligned would really change demand generation. Demand generation is hard. It's a crowded marketplace, and it's hard to get your message out there. I feel for marketing leaders, because simple storytelling is hard today. A big part of what's missing in today's industry is the ability to tell a story that a fourth-grader would understand. That's the compelling message that stays memorable, right? In SaaS, that's very hard in a crowded category. Of course, if you're GitHub Copilot, you've got it easy.

Tim: Let's shift gears to retention. This is the other side; we were talking about new logos, but customer retention is something you have to do. It's more profitable, and it should be easier. On the other hand, coming out of this period where every single dollar in the budget is under scrutiny from the CFO, it can be harder. With some of our companies in 2023, I said, "The theme this year, first and foremost, is retention." Even with that focus, net retention rates have dropped across SaaS. If you look at the overall numbers, seats weren't growing naturally, and consumption wasn't growing as naturally. How did you think about customer retention? It's sort of interesting since your AnswerIQ was in this support and retention area, but it was critical for you at Freshworks. How did you approach that side of the coin?

Paddy: Freshworks had a unique challenge. On one end, because we sold to SMBs, some churn is natural. On the other hand, with 60,000 customers and a PLG motion, you tend to see higher churn rates in those types of businesses. What we did was start simply. I have a very simple equation: retention = adoption + engagement + advocacy. These are the three core pillars, but it all starts with adoption.

Tim: I like that retention = adoption + engagement + advocacy. Let’s decompose that a little bit on those different pieces.

Paddy: That's how I set up the entire team's charter. The customer organization was thinking about this, but the most important thing for SaaS businesses is adoption, and it all starts with product. Understanding product telemetry, and picking out the right telemetry that tells you whether a customer will stay with you for the long term, is critical. Most businesses will collect tons and tons of metrics but can't make sense of what retention really means. We used a framework: let's look at cohorts of customers by revenue size, by industry, and by usage scenario, and ask, what are the top five to seven features that a customer who has stayed with us for a long time is using? That's what we called a package named Essentials.

We created a package called Advanced and one called Ultimate, representing deeper usage of features. Feature usage and adoption, not as an isolated feature but in the usage scenario, is critical, and being able to see and understand it from telemetry was important. We went to CSMs and our customer success and account management organization saying, you've got to move customers from Essentials to Advanced to Ultimate. With Ultimate, we knew all the integrations were in place, and we knew that when people were bringing data into our system, they were less likely to leave. There's a framework you can think through, but it all starts with telemetry, and being strategic about telemetry is very, very critical today in the way you think about product adoption.
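The tiering logic Paddy describes can be sketched as a simple telemetry check. The feature names and package contents below are hypothetical stand-ins, not Freshworks' actual Essentials/Advanced/Ultimate definitions:

```python
# Hypothetical sketch of tiering customers by feature telemetry, in the spirit
# of the Essentials/Advanced/Ultimate packages described above. The feature
# names and package contents are invented, not Freshworks' real definitions.
ESSENTIALS = {"ticketing", "email_channel", "basic_reports"}
ADVANCED = ESSENTIALS | {"automations", "sla_policies"}
ULTIMATE = ADVANCED | {"integrations", "data_import"}

def adoption_tier(features_used: set) -> str:
    """Return the deepest package whose feature set the customer fully uses."""
    if ULTIMATE <= features_used:
        return "Ultimate"       # integrations plus data import: stickiest tier
    if ADVANCED <= features_used:
        return "Advanced"
    if ESSENTIALS <= features_used:
        return "Essentials"
    return "At-risk"            # core features not adopted: a churn signal

print(adoption_tier(ADVANCED))       # Advanced
print(adoption_tier({"ticketing"}))  # At-risk
```

A CSM dashboard built on something like this gives each account a concrete next step: move it one tier deeper.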

Tim: That's fascinating. Most of the companies I work with realize, hey, we need this telemetry, otherwise we're flying blind on whether the customer is actually using it. The product, of course, has to be compelling enough to draw people in, et cetera. But the packaging, the SKUs, as a way not just to drive more monetization but actually to drive more usage: that's a really interesting insight.

Paddy: It was not only usage; it was an upsell strategy too. It was a little bit of both, because we knew that if someone was in Advanced or Ultimate, they would actually move to higher-level plans of our product. The second part of it was onboarding.

Tim: If you don't set people on the right course, it's hard to save them over time.

Paddy: Absolutely. The first hour in a PLG SaaS business, the first day in a SaaS business, is important in a trial period, but the onboarding experience you offer, and what you learn from it, is the Achilles' heel of a SaaS business, because some businesses take 30 days for time to value and some take nine months. The reality is you've got to understand what the customer is asking for and what you're setting up for the customer, and feed that back into the product team. That's where self-service onboarding can become significantly better. We created a package for digital adoption where we offered six hours of onboarding for free to 15,000-plus customers. It was massive in our churn reduction initiative.

Tim: Before that digital pack, was this not offered or did you charge for it?

Paddy: It wasn't offered. It wasn't thought through. A big part of this was to create a simple package: once you had Essentials and Advanced defined, go take Essentials, get it done with customers, and we knew the customer then had a higher likelihood of staying with us.

Tim: Do you do any free trial or POCs before people decide to adopt? And at what point did you layer on this digital pack for the kind of free onboarding training?

Paddy: All our products have a free trial, and for field and enterprise deals, you take customers through a POC. The reality is "go live" is a misnomer. Usually it's a point-one release, an MVP, and there's a lot of work to make the product work after that, so understanding those pieces was critical from a churn perspective. Onboarding was a critical path in making it very easy and doing it at scale. That was one of the things that helped us reduce churn.

Tim: Onboarding needs to start at the free trial or POC, or you probably don't convert to paid. But also, once they've converted to paid, that go-live is a misnomer too, because you have to keep making them successful.

Paddy: That's right. Seeing value takes a lot more than just the go-live; there's more work to be done. The third area was customer service and support, right? Think about it: it's a goldmine of signals, ongoing touch points with the customer. This is where customers reach out because the product isn't working in a certain way, or certain things can't work, even when developers think, it's a simple feature, why can't the customer use it? We learned a lot from the contact rate. Reducing the contact rate, understanding it, and getting the right codification on it is critical from a churn perspective.

You have to get this right. We had an amazing partnership with the product team where we said we want to bring down the contact rate by 5% every quarter for the top contact codes. Contact coding also requires a lot of strategic thinking, in terms of how you do the coding, so you can take that feedback back to the product team. All three of these areas are core to what I call the product and product-telemetry side of adoption.

Tim: To summarize the first part of your retention equation, adoption: first, the product has to be wired so you have the right telemetry in place, and you use it. Second, you have to get the onboarding right to really get customers to adopt. Then there's this ongoing contact rate, which is probably a segue to engagement, the next part of the equation for how you solve retention.

Paddy: Right. Engagement is an overused term in the industry. People do QBRs with the person they meet on a weekly basis, and they'll talk about the health of the system and what have you. The reality is engagement really works when you have a strategic relationship with an executive on the other side. One of the things we did was enforce this thing called the executive business review. In it, we wanted to go to the business leader on the other side with simple things: "Hey, here are the best practices. Here's what other customers are seeing from the product and how they're benefiting." Then ask the straight question: if you had to decide today, would you renew or not? There's no point asking a lower-level stakeholder. You want to take this to the right level.

Executive business reviews are hard. Generally, a lot of sales leaders, account leaders, and success leaders won't go and say, hey, I want to talk to a C-level person on the other side. But the conversations are simple: are you doing something to improve their customer experience? Are you doing something to reduce their cost? Are you doing something to improve their revenues? And if it's none of these three, what practice used by their competitors can you take to them? That is where the executive business review really works.

We had the top 100 customers adopted by execs; every C-suite person inside the company had to own 10 customers. We also had a voice-of-the-customer session once a month, where we would get three customers to come and speak to the management team and just give their verbatim feedback, without any coaching. Those were signals we would learn from on the engagement side, to understand how customers in the top tier, mid tier, and bottom tier perceived their relationship with the company.

Tim: That executive buy-in sounds obvious, but it's so critical. I've seen so many times where the customer's using it, the daily champions love it, but then you get the legs cut out from under you at renewal time because the execs didn't see the value, cut the budget, or were being sold to by a competitor coming in top-down. When you talk to teams, it's like, "Are the executives bought in?" "Oh, yeah, yeah, they are," right? But you might just be listening to your champion, and they might not actually have that buy-in. So this point about enforcing that discipline and creating mechanisms to ensure those EBRs happen, I think that's really good advice.

Paddy: That’s right. Companies tend not to spend as much focus on this, but I do think that being very disciplined around the executive focus is a sure-shot way to make sure that you have customers who are happy.

Tim: We've been talking about change management and org alignment, and there's an interesting question on your side too. You referenced different execs across Freshworks who owned customers, but there's also always this question: was this an account management function? Was it a customer success function? Did you have both? How did those orgs dovetail to accomplish this set of things?

Paddy: We did an interesting exercise, and one of the goals was: how do we put the customer at the center and have these teams work with each other? If you look at what typically happens, it's a blame game: "Hey, this isn't an expansion opportunity, this is a retention opportunity." The CSM says it's a renewal opportunity, and the account managers take it. What we did was build a pod. We put the account management and CSM functions on the same set of customers. The account manager focused 70% on expansion, 30% on churn. The customer success manager focused 70% on churn, 30% on expansion, and they worked like magic. Now, with the customer at the center, EBRs and QBRs were happening, and adoption packages were being sold in. These structural changes make sense, because if the customer is at the center, why would you have two teams in two different organizations working with that customer?

Tim: Okay, the last piece of the retention equation, advocacy. I would guess that's the one where it's like, okay, what does that mean exactly? I get that they've got to use it and we've got to engage over time, but what is the advocacy piece, and how does it bring it all together?

Paddy: Advocacy is the network effect. Customers like to talk to other customers, because being in a community brings a sense of well-being, of being part of something, and of learning from others' experiences. Advocacy became one of our goals: if we could get a company to publicly say they're using us and describe the benefit they're deriving, the CSM would get incentives and points for that. Driving that became a critical factor. One of the best things we found was that when our advocacy worked well, we brought in a lot of net new customers. When prospective customers came into some of these forums and saw other customers talking, that was one of the magical moments for them to say, hey, these guys are transparent, they're talking about the product, the company's listening. The magic happens for new business there.

Tim: Retention = adoption + engagement + advocacy.

That's good to remember. By the way, Paddy has published some blog posts on these learnings, so we can point to those later; you can go read more about some of the details. Let's move to one other key area, which you alluded to. There's new logo acquisition, there's keeping the logos you have, but then how do you expand within them? When you were talking about account management and CSM alignment, you teased the expansion question. There are different ways you can expand in accounts: upsell, cross-selling other products, SKUs, et cetera. What was your approach to driving expansion while keeping customers happy at Freshworks?

Paddy: First principles: expansion can't happen unless you have deep product adoption, advocacy, and a happy customer. Going back at renewal and saying, "Hey, I want to expand my revenue," or planning three months before, is not the best path to expansion. Expansion, in my view, is one of the areas where product managers and product leaders have to be very, very strategic, thinking about the roadmap for growth from an existing customer. It's not so much an account management function; it's really the set of SKUs and offerings. How do you package AI beyond Copilot?

Product leaders need to think those things through, because that's where the rubber meets the road. When I look at product management today, most SKUs are designed around T-shirt sizing: small, medium, large. You look at a competitor and say, "Hey, let's match that." The reality is you can learn from telemetry and think about usage: how do you increase usage and consumption through the way you design your SKUs, right? A big part of how I think about expansion is making sure account managers have a slew of offerings in their armory when they go to customers, and that they can speak to the best practices other customers are using.

Tim: Were there any mechanisms internally to take all those learnings from your go-to-market org and feed them back into product management? Was it as simple as, hey, we communicated a lot, or did you do anything there to make sure those learnings transferred and the right decisions you referred to ultimately got made?

Paddy: A lot of it came through the quarterly product reviews. In our business, we have a quarterly product review where sales, account management, CSM, customer success, customer support, and the onboarding teams all bring feedback to the product teams. While it wasn't very structured, it was a forum every quarter to bring some of this back. The second aspect: when you think about the amount of effort businesses spend, the money, resources, marketing, and sales that go toward new business, expansion marketing teams are relatively underfunded in most businesses. It is a lost opportunity.

It is a lost opportunity in the sense that not only can you drive more advocacy and more adoption, but systematically thinking about expansion marketing is very, very critical. I do feel that it is underfunded in most organizations, because everybody is basically smitten by the new business growth that one wants to get. One other aspect I will add is: how do you make this more product led? The reason I was putting it back on product leaders is because making expansion more discoverable is critical: how do you make more features, aspects of your product, and your cross-sell and upsell more discoverable within the experience of the product? I think there is still a lot of room for SaaS businesses to do that better.

Tim: You brought up the pricing and packaging aspect of this earlier. Is there anything you did around structuring contracts that was also helpful in encouraging expansion, or any tips on that side of things?

Paddy: From a contracting perspective, it’s pretty obvious. One of the things you’ll find is that most SaaS businesses discount heavily in year one, and sometimes end up discounting for multiple years out of desperation to win that deal. It’s very important upfront to start thinking about the expansion rates you apply and the usage limits you apply, and to package and structure the deal in such a way that you continuously gain on renewal rates, ARPA improvements, and new expansion dollars. That is definitely one of the areas I would say SaaS businesses ought to do better. The big guys like Salesforce and Oracle steamroll you and do that. But generally, startups and midsize companies struggle to find that voice and that muscle to be able to do so, and I understand why.

Tim: They panic a little bit and discount, right? So the advice is: yes, as a startup, you have to realize you don’t have the pricing power of Salesforce, but maybe don’t get desperate too quickly.

Paddy: Yes, especially at the point of sale.

Tim: It’s harder than you think to make it up in year two and year three.

Paddy: You have the customer. The customer has agreed to buy. At that point, you don’t give in on some of these areas.

Tim: Wow, what a fascinating journey and amazing success at Freshworks. To tie this all the way back to startup founders, thinking back to your time founding and leading AnswerIQ, and having been a CRO these last several years, what do you think about org building at earlier-stage companies? This is always a question: when do we bring on a CRO? Any advice about that, or about general org building, for early-stage companies that are maybe in that kind of first three-year journey you took AnswerIQ through?

Paddy: Most startups go with a head of sales to start with, and then they bring on a CRO at a later stage. CRO, to me, is a strange title. The title by itself has so little leverage; the customer is only thinking about getting discounts from you. There’s no value conversation you can have with the CRO title because you have the word revenue written into it. Beyond that point, the three things you have to do in a startup are: you have to build, you have to sell, and you have to tell. In the initial stages of a startup, a CRO is the seller and the teller, because you’re learning from your customers and building your story as you go along. It’s a critical function to bring on early, one that works in lockstep with the way you’re building your product and drives growth for your business.

Tim: Paddy, this has been absolutely terrific. The most common set of questions we get from founders covers all of these things we’ve been talking about: go-to-market, the decisions, the organization, the levers, what good looks like. This is incredibly useful to our audience. Again, Paddy has written some blogs that go into more depth, and we’ll have links to those in the episode notes. I can’t thank you enough for sharing your time and your insights with us here today on Founded & Funded.

Paddy: Thanks, Tim. Thanks for having me.