Building Predibase: AI, Open-Source, and Commercialization with Dev Rishi and Travis Addair

Listen to this Predibase episode on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Partner Vivek Ramaswami hosts Dev Rishi and Travis Addair, co-founders of 2023 IA40 Rising Star, Predibase. Predibase is a platform for developers to easily fine-tune and serve any open-source model. They recently released LoRA Land, a collection of 25 highly-performant open-source fine-tuned models, and launched open-source LoRA eXchange, or LoRAX, which allows users to pack hundreds of task-specific models into a single GPU.

In this episode, Dev and Travis share their perspective on what it takes to commercialize open-source projects, how to build a product your customers actually want, and how to differentiate in a very noisy AI infrastructure market. This is a must-listen for anyone building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: To kick us off, maybe we can just start off by telling us a little bit about the founding story at Predibase. Dev, you were a PM at Google in the Bay for about five years. Travis, you were a senior software engineer at Uber in Seattle for about four years. How did you meet and co-found Predibase in 2021?

Travis: The company originally started out of Uber. Our co-founder, Piero, was working at the time as an AI researcher for Uber’s AI team on a project called Ludwig. He and I worked together. He came to me in March of 2020 saying that he thought that there was an interesting opportunity to productize a lot of the ideas from Ludwig around making deep learning accessible and thinking about bringing that to other industries, other organizations, other personas beyond the data scientists, but folks like analysts and engineers as well.

It started getting a lot more serious in the summer of 2020. We started saying, “We definitely need someone who isn’t just another engineer. Let’s try to find someone who knows how to build good products and can help think about some of the go-to-market issues,” and that’s how we ultimately came to meet Dev. Every other person that I had talked to up to that point had been a very bidirectional conversation, like, “What are you guys doing? What do you think the opportunity is?” It was a lot of selling on our side. Dev came in with a presentation, like, “Here’s how I think that you can turn this into a business.” I was like, “Okay, this guy’s pretty serious.”

Dev: On my side, I had spent time prior to Predibase as a PM at Google, and worked on a number of different teams. I saw how Google did machine learning internally. For a while I was jaded on this idea of making deep learning and machine learning a lot easier. I’d seen it and we had tried to do it at Google, and it hadn’t really gone as well as we wanted. I originally got in touch with Piero and Travis through a mutual contact because I got introduced as a skeptic on the space. I think the thing that I had said was, “I’ve seen a lot of people die on this hill of trying to make machine learning a lot more accessible.”

I remember the first meeting I had with Piero and Travis. I didn’t fully understand what the vision was at first, but then I tried the projects that they had open-sourced, and they walked me through a presentation of how they were thinking about it. That’s when it clicked and I knew that there was something here that could be a more differentiated approach. We all got to know each other over the summer of 2020, and we officially started the company at the beginning of 2021.

Vivek: It’s funny how different the deep learning and machine learning space was three or four years ago compared to where we are today. It’s almost hard for people to think, “Oh, people were jaded about it,” when you see all of the optimism today. I’m curious: what was the aha moment when things really clicked? You mentioned, Travis, that the original vision was around Ludwig and these open-source projects. Often, there’s a lot of difficulty in going from a cool open-source project to “Is this a business?” So what was that aha moment, and when did you start to see things really click?

Travis: What convinced me about the idea of making deep learning accessible was an extrapolation of what I perceived to be a trend happening in the industry, which was on two different fronts. One was the consolidation of data infrastructure from data swamps and data lakes and unstructured towards more canonical systems of record for data. Data was getting better and getting more organized. The other was on the modeling side, that model architectures were consolidating as well towards transformer-based architectures. There were only a handful of models that people were using in production and fine-tuning as opposed to having to build novel model architectures from scratch with low-level tools like PyTorch.

My bet was that eventually, these things were going to converge, and the idea of training or fine-tuning models on proprietary data would be like a one-click type operation in a lot of cases where you’re not having to think very critically about how many layers the model has or having to do a lot of intense manual data engineering. That was going to be much more about the business use case, the kind of level of the problem that someone like an analyst or someone like an engineer would work at as opposed to a data scientist.

I definitely believe this has been proven out to some extent over time. Of course, I’m sure we’ll get into it, but I think with large language models, it happened in a bit more of a lightning flash as opposed to a slow burn. But definitely I think that that trend is continuing.

Vivek: It’s interesting, Predibase was founded after you all started talking in the summer of 2020 and officially formed the company. When we think about the start of this most recent AI moment or AI era, it’s really when ChatGPT was released at the end of 2022, and everyone started talking about large language models and AI. It’s so interesting talking to all these companies that were founded before that moment. How has the original vision changed or not changed since ChatGPT was released in this pre-LLM era? What do you think has changed and hasn’t changed at Predibase in terms of how you’re thinking about the future and the vision for the company?

Dev: It’s funny because I think a cheeky answer to this question could be that our vision hasn’t necessarily changed that much, but our tactics have changed entirely. Our vision initially was as a platform for anyone to be able to start to build deep learning models. We initially started with data analysts as an audience in 2021. We had an entire interface built around a SQL-like engine that allowed them to use deep learning models immediately. That’s where the name Predibase came from. Predibase is short for predictive database, and the idea was let’s take the same types of constructs that we’ve brought for database systems and bring it towards deep learning and machine learning.

At the end of that year, we found that analysts weren’t our niche. The people who wanted to use us were more like developers, engineers, the types of people along the lines of, “Hey, just give me some good technical documentation and I’ll figure this out.” That’s where we started to shift from an audience standpoint. Our vision became, how do we make deep learning accessible to this type of audience? When we started pitching deep learning in 2021 to a lot of organizations, we framed the value propositions of deep learning as, “Hey, you can start to work with unstructured data like text and images. You get better performance at scale. And oh yeah, you can use some of them pre-trained, and that way you don’t need as much data to be able to get your initial results.”

The biggest change is that the last value proposition, which was third on our list, has become the most important thing people care about. If I think about what was popular in our platform in 2022, people’s eyes sort of glazed over when we said deep learning is cool, but what they liked was this dropdown menu where you could select different text models, and one of them was a heavyweight model called BERT. You could use it pre-trained, so they didn’t need as much data to get started. They loved the idea that they could actually fine-tune it inside the platform. At the time, it was just one feature among many value props on the platform.

In 2023, when large language models came out in a large way, we started to think about what we wanted our platform to be. One of our very first takes, and maybe one of my very first takes specifically, was that LLMs are just another dropdown item in the menu. You had BERT and all these other deep learning models in the menu, and now we were going to add Llama, as an example. We needed to recognize that the market had changed how it thought about machine learning. It was no longer thinking about training models first and getting results after. It was thinking about prompting, fine-tuning, and then re-prompting a fine-tuned model. Our tactics significantly shifted. We did what we considered a product pivot in 2023 to better support large language models. Funnily enough, it’s still in service of the vision of how do we make deep learning accessible for developers.

Vivek: It’s almost as if the vision has stayed the same, but the market has come to you in some ways, like, “Hey, we were talking about deep learning and machine learning before it was ‘cool’ in the current context.” Large language models have opened up a whole new sphere of who you can market to.

Dev: We definitely got lucky there. We had to meet the market halfway. We had to make sure that we were also responsive and not trying to meet the market using our old form factors or our old tactics. The market did come to us in essentially figuring out, “Hey, of the three value propositions you mentioned, one or two of these really matter. How do you center your offering around that?” That’s been one of the most helpful things for the business. As a startup, I find one of the biggest challenges is getting people to care. How do you get anyone, from another startup to a large enterprise, to spend 45 minutes with you? One of the nice things that’s happened over the last year and a half is we no longer have to explain why deep learning matters. We frame it as LLMs and being able to fine-tune those.

Vivek: The big thing for startups, and a lot of the founders who listen to this podcast will attest to this, is that you have this great idea in your head, and you see the tech maybe before anybody else does, but then there’s the question of, well, why should anyone care? How do you go from “This is a cool open-source project” to “This is something we can commercialize, and people and businesses are willing to pay for it”?

Dev: A lot of open-source projects are very popular on GitHub, and I do think a subset of those are probably best left as open-source software. They’re a framework that makes something easy but doesn’t necessarily need the full depth of a commercial or enterprise-ready product around it. No one knows more about the challenges of taking some of these open-source frameworks and running the infrastructure than Travis. He’s been working directly on it, both for Predibase as well as with users who have tried to do this independently. Travis can talk about the challenges we solve in translating open-source frameworks into a commercial offering, and why we thought there was a real commercial business to be built around these frameworks.

Travis: When it comes to open source, and particularly open-core models, I think the easiest argument to make is that at Uber we had a team of 50 to 100 engineers working on building infrastructure for training and serving models. The cost of that is quite significant, even for a company like Uber. For companies that don’t consider this part of their core business, maybe they consider it core infrastructure but it’s not differentiated IP for them, you could invest in building an entire team around it, or you could just pay a company like Predibase to help solve those challenges. With our most recent project, LoRAX, for example, there’s a good open-source piece of software that can be productized and productionized and used in situations where you need high availability, low latency, and all those sorts of things. That’s sort of the layer on top. We have internal software running on top of Kubernetes and across multiple data centers to optimize the availability, latency, and throughput of the system beyond what’s in the open source.

That’s inevitable when you’re talking about something that’s going to be used day in and day out, thousands of times a day: what start off seeming like long-tail issues, like this request failed or this service was down for some period of time, become mission-critical at certain points. That’s where there’s a good opportunity to appeal to organizations that need that. There’s a good synergy, I think, where they need those particular levels of service, we’re able to offer it, and that’s something that they’re willing to pay for at that point.

Vivek: It sounds like, unlike many other open-source projects where you start the project and then say, “Hey, let’s see if people are willing to pay or not,” this is a case where, at the very start, you knew there was a willingness to pay, given your time at Uber and having seen this at scale. Start with the open-source project, put the product out there, get people to try it, knowing there’s definitely a willingness to pay. It’s a different angle. With many open-source projects out there, you wait for a lot of people to use it, ask, “Hey, are people willing to pay or not?” and then have a different debate at that point.

Travis: Dev actually has a good analogy about the front of the kitchen, back of the kitchen when it comes to this sort of thing. I think that serving definitely is this very front-facing thing where you have to get every detail right, and those minor differences make a huge difference in terms of the overall value of what you’re offering. So yeah, I’m not sure, Dev, if you wanted to maybe speak more to that.

Dev: I have two analogies that I’m going to throw out. The first analogy I think about with open-source projects is: is there a commercially viable business around this? The way I think about it is, for us, Ludwig and LoRAX are sort of the engine, and what we’re trying to do is sell the car. There are some people, maybe advanced auto manufacturers, who just want an engine and want to put it in their tractor or some other kind of setup. Most people want to buy the fully functioning car, something that lets them unlock the doors and have a steering wheel and other things along those lines, which in our world is the ability to connect to enterprise data sources, deploy into a virtual private cloud, and get observability on deployments that you don’t necessarily get if you just run the open-source projects directly, and finally connect that engine to a gas line, which in our world is the underlying GPUs and cloud infrastructure that this is going to run on.

The second analogy is for how we think about the product. The other piece you always have to figure out is how much the core problem that the open-source product is solving really matters to that end customer, and what the visibility around it is. That’s where I think about how some things can be done in the back of the house of a kitchen, where maybe someone has an internal pipeline for doing something. It doesn’t need to be pretty, doesn’t need to be production-ready. It could be so much better if they had a commercial product built on top of the open source, but it’s not necessarily mission-critical, and they’re not going to lose customers and users because it doesn’t work flawlessly. Think of these as especially those internal pipelines.

Then I think about the front of the house, which to me is taking, for example, fine-tuned models and serving them very well, things that are going to go in front of customers and user traffic. This is the part of the restaurant that you want to make sure serves folks really, really well. I’ll need to figure out how to combine the car analogy and the restaurant analogy, but to me, the car analogy is how we figure out the commercial viability around the open-source projects, and the restaurant analogy is a little bit of how you think about whether an open-source project is important enough to justify some of that commercial viability.

Vivek: I love it. Don’t be surprised if we steal both of those analogies for some of our own companies, because it’s a really important distinction: selling the car versus the engine, and the front of the house versus the back of the house. At the end of the day, all of these things roll up to what’s most important for customers.

One of the things that I love when we talk about the front of the house, or even what people see, is that on your website you have a great tagline: “Bigger isn’t always better.” With the explosion of GPT and everything we’ve been hearing, we’ve been hearing about these models with billions of parameters. For a while, it was “bigger is better,” and we’ve got to create the best model, and how many parameters is GPT-5 going to have. In your view, why is bigger not always better? Specifically, related to models and the customers you serve, where do you find that bigger is not always better?

Dev: My favorite customer quote is, “Generalized intelligence is great, but I don’t need my point-of-sale system to recite French poetry.” I think customers have this intuition that bigger isn’t better, and we don’t always even have to convince them that much. They sort of hate the idea that they’re paying for this general-purpose, high-capacity model to do something rote: I want it to classify my calls and tell me whether I requested a follow-up or not. It’s a very common type of task that people might do using GPT-4. Today, they have a model that can do everything from that to French poetry to writing code. There’s this intuition that when you’re using a large model like that, you’re paying for all that excess capacity, both in literal dollars and in latency, reliability, and deferred ownership.

When I talk to customers, they’re very enamored with this idea of smaller task-specific models that they can own and deploy, right-sized to their task. What they need is a good solution for something very, very narrow. The tricky question customers have is: can those small models do as well as the big models? It’s a fair one. If you’ve played around with some of these open-source models, especially some of the base model versions, you have the intuition that they don’t do as well as the big models as soon as you start to prompt them. That’s where we’ve spent a lot of our time investing in research, to figure out what actually allows a small model to punch above its weight and be as good as a large model.

To us, what we’ve unlocked might not be a massive secret, but it’s been around data and fine-tuning. What we found is that if you fine-tune a much smaller model, a seven-billion-parameter model, a two-billion-parameter model, probably one or two orders of magnitude smaller than some of the bigger models people are using, you can get to parity with or even outperform the larger models, and you can do it in a much more cost-effective way and a lot faster, so you don’t have to wait for that spinner you often see with some of these larger models.
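To make the scale difference concrete, here is a rough back-of-the-envelope sketch. The rank, layer count, hidden size, and targeted projections below are illustrative assumptions (common defaults for 7B-class models), not Predibase’s actual training configuration:

```python
# Back-of-the-envelope: trainable parameters for LoRA vs. full fine-tuning.
# All numbers are illustrative assumptions for a Mistral-7B-like model.

hidden = 4096              # hidden dimension
layers = 32                # transformer layers
rank = 16                  # a commonly used LoRA rank
targets_per_layer = 4      # e.g., attention q/k/v/o projections

# LoRA replaces each targeted weight update with two low-rank matrices,
# A (hidden x rank) and B (rank x hidden): rank * (hidden + hidden) params.
lora_params = layers * targets_per_layer * rank * (hidden + hidden)

full_params = 7_200_000_000  # a full fine-tune touches every weight

print(f"LoRA trainable params:  {lora_params:,}")
print(f"Fraction of full model: {lora_params / full_params:.4%}")
```

Under these assumptions the adapter is a fraction of a percent of the full model, which is what makes training (and later serving) many task-specific variants tractable.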

Travis: A big aspect of this that every organization should think about is the type of tasks they’re trying to do with the model. If you’re primarily interested in very open-ended tasks, say a chat application where the user might ask for anything from generating French poetry to solving a math problem, you do need a lot of capacity. That’s why ChatGPT is as successful as it is: you don’t know a priori what type of question the user is going to have in mind, so you need something very general purpose.

When you productionize something behind an API, it’s just an endpoint: you’re calling it, “Classify this document,” and you’re going to call that over and over again, thousands of times. You don’t need all those extra parameters at the baseline. At that point, the capacity of the model you need depends on how complex your task is. The less capacity you need, the smaller the model you can use, and the lower the latency. People should be evaluating these sorts of things on a task-by-task basis.

Vivek: I feel like we are now in the moment of time where we are seeing that the balance swing back to, I might not need this really, really massive model with a trillion parameters and all of that. I need something that works for me, that works for my use case. Dev, you mentioned customers there and what your customers have been saying to you. Take us to the first customer, how did your first customer come through the door? How did you land them?

Dev: I wish there was a very repeatable lesson for founders here, but sometimes your first customer is a little bit of luck mixed with a little bit of internal network and elbow grease. I remember we started the company in March 2021. I had no idea how we were supposed to get our first customer. The common advice people gave us, and I think this is correct, is that your first few customers are probably in your network. Looking at my network, I didn’t really know where to start digging. We ended up getting inbound interest from a large healthcare company based here in the US. They had seen Ludwig out there. They weren’t even active users at the time, but they had seen it solve a very specific use case that Uber had published a case study around, which was customer service automation. They wanted to know if there was something that could be applied with Ludwig inside their organization.

I’ll never forget the very first customer meeting we had, which was with this organization. We started in March, and this customer meeting was in April. It was an hour-long meeting where we walked them through what Ludwig had been for a few minutes, but also what our vision was and what we were building with Predibase. The meeting ended with, “If you guys have a design partner signup sheet, just put our name on that list.” That was the end of the very first customer meeting we had. It came in because of that open-source credibility. I walked away thinking, “Are they all that easy?” They’re not, as a very quick recap, but the first one for us did come partly through network, and really through organic open-source inbound.

From there what we’ve found is helpful is the repeatable lesson of content that attacked a use case that somebody cared about and that allowed them to come to us. That’s something we still see as a pretty effective channel now as we’re landing our next sets of customers as well.

Vivek: See, folks, it’s just that easy, just start the company and a month later someone’s going to ping you. Having that channel, as you mentioned, this is one of the nice great benefits of open source is people can start playing with it. Someone may find intrinsically there’s a lot of value and say, “Hey, how do we go from where we are today to doing even more with this?”

Zooming out a little bit, I would say to the outside observer, at least today compared to a few years ago, the AI infrastructure space has become very, very crowded. It seems like there are a lot of infra companies building at the inference layer, at the training layer, and doing things around fine-tuning. Often, to the outside observer, and even sometimes to the inside observer, it’s hard to tell what’s real, what’s working, what the differences are, and whether we need all these products.

In some ways, it’s really healthy because it gives people a lot of options, and when we’re early in the AI era, as we are right now, you need a lot of those options, and there’s a lot of space to build. How do you both think about it? There’s the day-to-day, maybe hand-to-hand combat against some products more than others, and then there’s the long term: how do we resonate, stay above the noise, and build for the long term?

Dev: I think this market is extremely competitive, and there is a lot of noise that gets introduced into the system. In terms of staying above the noise, the only way that we have found to be effective in standing out as an organization, especially when you don’t have that hour with a customer, is to build a brand, because people are going to look at you for maybe a few seconds or a minute and make an assessment of, “Is this worth my time?” The only way that we’ve seen work is you have to do work that advances the ecosystem yourself.

What I’m saying is I think you need to do something that can be a narrow slice but is somewhat novel or a differentiated take. There are two ways that we’ve thought about doing this. The first is that people have always liked this idea that small task-specific models that are fine-tuned will be able to dominate these larger models. I spoke with a customer who said something that stuck with me: “I want to believe that these small task-specific models actually will be the way my organization goes, and I want to use open source, but I just don’t know if it actually works.”

The world out there today is a lot of anecdotal experiments and memes on Twitter and elsewhere. One of the first things we did was start to benchmark and put out results. We put out a launch in February called LoRA Land, a mix of La La Land as well as a play on LoRA fine-tuning, which is how we did the process, where we took 29 datasets and fine-tuned Mistral-7B against those datasets. We initially wanted to see how much fine-tuning helps against the base model. What we actually found was that fine-tuned Mistral-7B would be at parity with or, in many cases, actually outperform GPT-4. Even when we do some prompt engineering and try to find the prompts that work best for both of them, the fine-tuned model will outperform GPT-4 out of the box.

That became a moment where we went semi-viral. We were at the top of Reddit for a little while. We had a partnership with Hugging Face and Mistral to re-share it; Yann at Meta also, I think, re-shared it. It was a way for us to start to put some data into the industry. We also released all these models as open source. We even built a little interactive playground where people could play around with these models directly and start to see what the model performance would actually look like.

From there, we’ve scaled this out 10 times. We’ve now trained not just 27 models but over 270, because we stopped benchmarking just Mistral. People would say, “What about Llama 2? What about Microsoft’s Phi?” We’ve added a number of different models, like Google’s Gemma, to this list, and we’ve started to build out our own internal benchmarking to understand the fine-tuning leaderboard. We’ve put this content out there, we’ve put these models out there, and there’s going to be more on that very soon. That was one way that we thought about advancing the ecosystem.

The second way, I would say, is honestly building novel frameworks that didn’t exist before. The best example of this is LoRAX. I’ll let Travis speak towards LoRAX as the lead author and creator of the framework, but one of the things that made it very popular was we attacked a problem in a way that no one else had been thinking about, and that really helped us cut above the noise.

Travis: To Dev’s point, attacking a narrow slice of the market is the only way that I’ve found to stay above the noise. The reality is, we talked about how, pre-LLMs, so much of our focus was on getting people to care, even understanding what the value proposition was, and now everyone cares. Therefore, there are tons of people competing for attention in the market, and many of them are much better capitalized than we are. We’re talking about companies that have hundreds, even thousands, of employees working on this stuff in some cases.

The challenge, I think, was definitely on the product and engineering side: to think about ways that we could attack something where, while they were technically capable of doing it, we could do better than them just by sheer focus and execution. We saw an opportunity with this multi-LoRA inference in the second half of last year. It was definitely on people’s radars; there were some early blog posts about it and some research happening at institutions like the University of Washington and UC Berkeley, but no one had productized something in this space. We launched LoRAX in November of 2023 and really tried to make it clear that this was a paradigm shift for organizations: instead of productionizing one general-purpose model, you could productionize hundreds or thousands of very narrow task-specific models, while solving the essential question, which was, how do you do that at scale in a way that doesn’t break the bank? The previous conventional wisdom was that every model needs a GPU at a bare minimum. If you have hundreds of models, that’s hundreds of GPUs, and you’re paying tens of thousands of dollars per month.
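The cost argument can be sketched with some rough memory math. All of the sizes below (adapter footprint, memory budgets, GPU capacity) are illustrative assumptions for a 7B-class deployment, not LoRAX’s measured numbers:

```python
# Rough memory math behind multi-LoRA serving: pack many adapters onto
# one GPU instead of dedicating a full deployment to each task.
# All sizes are illustrative assumptions.

GB = 1024**3

base_model_fp16 = 14 * GB      # ~7B params at 2 bytes each
adapter_size = 35 * 1024**2    # a rank-16 LoRA adapter: tens of MB
gpu_memory = 80 * GB           # e.g., a single 80 GB GPU
kv_cache_budget = 40 * GB      # reserved for activations / KV cache

free_for_adapters = gpu_memory - base_model_fp16 - kv_cache_budget
adapters_per_gpu = free_for_adapters // adapter_size

# Naive approach: one dedicated GPU per fine-tuned model.
models = 100
naive_gpus = models
packed_gpus = -(-models // adapters_per_gpu)  # ceiling division

print(f"Adapters that fit alongside the base model: {adapters_per_gpu}")
print(f"GPUs needed, one model per GPU:            {naive_gpus}")
print(f"GPUs needed with adapter packing:          {packed_gpus}")
```

Because each adapter is tens of megabytes rather than tens of gigabytes, the marginal cost of serving an additional fine-tuned model approaches zero once the base model is resident, which is the conventional wisdom LoRAX set out to break.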

Breaking down that conventional wisdom was the first way we saw to attack this problem, the goal, of course, being to establish ourselves as a thought leader in that particular space of building very task-specific models in a way that’s cost-effective and scalable. LoRA Land was a way of building on top of that, saying, “Now that we have this foundational layer with LoRAX, here’s what you can do with it.” That demo of being able to swap between all these different adapters that were better than GPT-4 at the specific tasks they were doing, and do it with sub-second latency, started to prove to people that there’s actually something real here. Not to diminish research, but it wasn’t just research; it’s something you can be using in your organization today.

Vivek: I love reading the Predibase benchmarking reports and seeing how these various models do. There’s almost a sense of fun every time a model comes out: “How’s it going to do? How well does it perform relative to all these benchmarks and all these other open-source models out there?” And because you guys are so close to this and have a great perspective: Llama 3 just launched, and Meta is crushing the open-source model game right now; obviously they’re spending a lot of money, resources, and time on this, and it seems to be resonating really well initially. I’m curious how you see the open-source model world playing out over time. Does it feel like we’re going to have a handful of providers of large open-source models, Llama, Mistral, Google, or do you think there’s going to be a world with a long tail of developers and many different types of open-source models and providers?

Travis: My opinion on this is that it will break down a little bit by model class. These larger foundational models — particularly the ones that people are open sourcing for general applicability, as opposed to building internally on proprietary data to be IP in some way — are constrained by the resources of these larger organizations. It’s not something that’s generally accessible to a small group of enthusiasts today. That might change in the future, but right now, the two big barriers are that you need lots and lots of compute and lots and lots of data, and both of those things are difficult to come by.

I do think that, at least for the foreseeable future, there’s going to be a requirement to lean on some of these larger organizations, like the Metas of the world, to provide those foundational models. Where I do have a lot of optimism about the ability of the less well-capitalized — the GPU-poor — to make inroads is definitely on the fine-tuning side. As fine-tuning matures from the research point of view and the data-efficiency point of view, there is an opportunity to create much better-tailored models for specific tasks that have general applicability and can potentially be valuable to lots of organizations beyond just the individual that made them.

Once we start seeing that become true, there’s a whole new space of creators — similar to how content creators make art or videos or music — who can create fine-tunes that attack very specific problems and have those be something that people consume at scale. That’s a very interesting opportunity that I believe is coming on the horizon. We’ve seen it to some extent in the computer vision space with diffusion models, where diffusion LoRAs for style transfer are starting to become mainstream, along with communities around finding different LoRAs that help adjust the way the models generate images. I definitely think that moment is coming for large language models as well, where this sort of work moves beyond individual organizations that have lots of data to something that can be even more open, transparent, and community-driven.

Vivek: Well, this leads me to my next question, which is for both of you, what do you think is over-hyped and under-hyped in AI today?

Dev: Over-hyped today is chatbots. A lot of organizations started to see value in GenAI post-GPT-3. The very first thought that organizations had was, “I need ChatGPT for my enterprise.” We were talking to some companies about a year ago, and they were like, “I need ChatGPT for my enterprise.” I asked, “Great, what does that mean to you?” They said, “Don’t know. I need to be able to ask the same ChatGPT-style questions but over my internal corpus.”

A lot of the early AI use cases have looked at how to build a chatbot that can answer questions over documents. One of the main reasons for that is that the interface that went viral was ChatGPT — the ability to do this in a consumer setting. The way I think about GenAI models is that you essentially have an unlimited army of high school-trained humans that can do different workflows. If you had this unlimited army of knowledge workers, is the most interesting thing you’d really apply them to just better Q&A and better chat? I struggle to think that’s the case. Instead, I think a lot of the value is going to be in automation workflows. We’ve used the back-of-kitchen analogy, but there are also the back-office tasks, ones that are repetitive and mundane: how do I automate document processing? How do I automate replying to these items? Now, we’ve started to see this become more of a thing.

That’s where a lot of the future for AI is going to go. The over-hyped sentiment is that all of these organizations saying, “I want ChatGPT for my enterprise,” probably want to start thinking a lot more about, “How do I take advantage of the fact that I have access to this large talent pool that has general-purpose knowledge and can be fine-tuned to do a particular task very well?” I think about it like a college specialization: I can take this high-school-level agent, give it my college specialization in how Predibase does customer support, for example, and put it to work. That’s the biggest delta that I see between what might be hyped in the market today and where I think a lot of the production workloads are going to go over the next 12 to 24 months.

Travis: I liken it to saying that it’s the boring AI that’s really under-hyped right now, but that’s where most of the value is going to come from. In any hype cycle, there’s this very overly optimistic view that we’re going to get 1,000X productivity improvements because we’re going to replace every knowledge worker with an AI system of some sort. Already, we’re starting to see the reality unfold: oh, it’s not that easy. It’s never that easy. By the nature of these things — the 80/20 law — getting the little details right ends up being where the majority of effort is spent, but those things matter.

We’re still quite a way from generalized intelligence and chat interfaces being able to do everything, like replace coders, but certainly, I think that it’s very real that we can get material productivity improvements and efficiency improvements on the order of 20% here, 50% there on very specific parts of the business. It’s going to be through these very narrow applications, to Dev’s point of saying, “We have a system here that requires humans to manually read all these documents. What if we can automate that into something that just turns it into a JSON or turns it into a SQL table for them, and they can just run a few quick sanity checks on it and then send it downstream to the next system?” Those are the sorts of things I can see having a very meaningful impact on the bottom line of businesses, and those are the things that are actually attainable.

Vivek: That’s part of the fun of the hype cycle. Now the initial euphoria has worn off, what are the really interesting things to build from here, right? Let’s get in the nitty-gritty and figure out something that may not have been just the initial like, okay, let’s go build this chatbot on top of GPT. There’s so much more you can do with all of this. Guys, let’s end with this, you both came from iconic companies, as well as Piero, and you obviously have seen a lot of really interesting things and been around some great people, and you’re all first-time founders. If you had to give one tip to a first-time founder, what do you think it would be? And maybe Travis, we’ll start with you, and Dev, we’ll end with you.

Travis: I think the biggest learning for me is you’re always a bit too ambitious when you start out with what you think you can do as an individual or as an organization. Oftentimes, particularly if you spent most of your time being an individual contributor, you have this idea that “If I’m a good engineer and then we hire 10 good engineers, then we’ll be 10X more productive, and we can do all these amazing things.” The reality of doing something and then doing something well enough that people want to buy it and rely on it in production every day is quite a big gap. Definitely being very narrow in terms of the type of problem that you want to tackle early on and say, “Let’s do something very highly specific that maybe doesn’t have a very big TAM in and of itself, but get that working perfectly and then start to think about where we go from there,” that’s definitely been the biggest learning for me I’d say.

Dev: One tip I would say is to be wary anytime someone suggests that you should pick something that’s strategically important to your business, like pick your go-to-market motion or pick what you want your differentiator to be. A real risk for first-time founders is that you sit on a couch, and you’re like, “Hey, what can we do that would be really interesting?” That’s a trap that’s easy to fall into, because it doesn’t take in the customer lens of what customers actually care about. A lot of first-time founders are very smart. They worked at iconic tech companies, and they’re like, “I saw this happen at, let’s say, a Google, or I saw this happen at an Uber, and so the right way for the future is X, Y, Z.”

That’s a really good starting point, but it needs to be grounded in what you’re hearing directly from customers 100% of the time. It’s very easy to pick something that you think would be interesting, cool, and differentiated that customers don’t care about. The reality is the thing you’re suggesting might be the right idea, but in a different framing or a different form factor, and you won’t know whether or not you’re right until somebody is willing to pay an invoice and send you money for it. Make sure you hold that as your primary objective function.

The last bit of advice I liked a lot: everyone who’s done startups has emphasized the importance of velocity. It’s very easy to mix up velocity with, “I need to go ahead and pull 16-hour days and write a lot of code.” To me, velocity is building highly iteratively: how do you get feedback as soon as possible? The easiest way to do that, to Travis’s point, is cutting scope. One of my favorite bits of advice that I’ve gotten, which is a bit controversial, is that nothing takes longer than a week. The reason I’ve liked that advice is that it forces you to think, “How am I going to take whatever I’m working on building this week and make sure that I understand, at the end of the week, is it actually worth doing, is it putting me in the right direction, is it delivering customer value?”

Both are about building in feedback and listening to customers, and understanding that you want to optimize for that feedback cycle. That’s the only way that you’ll get to where you’re going.

Vivek: Great advice from both of you. And it’s super exciting to see everything going on at Predibase, at least from the outside. I’m sure it’s even 10 times more exciting and incredible from the inside. Congrats to both of you on all the momentum and really excited to see where things go. Thanks, Dev and Travis.

 

Read AI’s David Shim on Making Meetings More Efficient With Intelligent Agents

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Managing Director Matt McIlwain hosts David Shim, founder and CEO of Read AI. These two have known each other for over two decades, having worked together at Farecast and then Placed, which David founded in 2011. They came together again in 2021, when David started Read AI, which has raised $32 million to date, including its $21 million round led by Goodwater Capital in 2024.

Read AI is in the business of productivity AI. They deliver AI solutions that make meetings, emails, and messages more valuable to teams with AI-generated summaries, transcripts, and highlights. In this episode, David and Matt examine how leveraging emerging technologies, getting engagement from multiple members of your ecosystem in the early days, and learning your way into a business model have been three themes across David’s startup journeys. They explore the challenges and success of implementing AI in various categories, the benefits of using AI model cocktails for more accurate and intelligent applications, and where they see the opportunity for founders well beyond productivity AI. It’s a great discussion you won’t want to miss.

This transcript was automatically generated and edited for clarity.

Matt: I’m just delighted to be back again with David Shim. Welcome back, David.

David: Excited to be here. It’s been a while.

Matt: It’s been a while, and you’ve been up to some good stuff. Let’s talk about Read AI. Why don’t you take us to the founding and the genesis?

The Genesis of Read AI

David: I was CEO of Foursquare for about two years, and then I left the company. During that time when I was leaving, I was in a lot of meetings where I wasn’t making a lot of decisions at the end, because I wanted the team to make those decisions — they were going to inherit them for the next 12 months. I took a step back, and I just started to watch and saw, hey, there are a lot of people in this meeting. What are they doing? Half of them have their cameras off and are on mute. I made up this term called “Ghost Mode,” where there’s no sound, there’s no video — you’re there, but you’re not actually there. I started to notice that as I left Foursquare and took a couple of months off, bummed around different beaches in Mexico. I started to have a lot of video conferencing meetings, and I started to think about this problem again and say — this is not a good use of time to have this many people in a meeting.

A lot of the time, we know within the first two or three minutes if this is going to be a valuable meeting for me or not. I started to ask, can I notice who is and isn’t paying attention when I’m bored? I looked at one person’s camera and realized they had glasses on, and the colors in the lenses looked very similar to what I had on my screen. So I looked closer at the reflection in their glasses, and they were on ESPN. I was like, there’s got to be a machine learning model that’s able to go in and say, can I identify when someone is paying attention using visual cues versus just audio cues? And then, if I combine those two things together, is that something that’s differentiated in the market? What I found was that no one was doing it.

There were people doing it for different use cases, but not for video conferencing. That was a really interesting thing to dive into. Then I reached out to someone that I know who heads up Zoom, specifically Eric Yuan. I said, “Hey Eric, real quick question.” We know each other from emails every once in a while, but I was like, “Hey, I’m really interested in this concept of engagement and video conferencing meetings. Is this something that you’re working on? And if it is, is this something that I should be thinking about as well?” And his response back was, “We thought about it before Covid, it was one of the features that we were going to invest in, but with Covid, priorities changed. I think you should absolutely go into this category and into this space.” That checked the box to say, “Okay, now the platform is bought into saying this is something that is valuable, and this is something potentially that they might support.”

Matt: There are a couple of things that, if we grounded in Placed, will apply to Read AI. Let’s start with this. What were the emerging technologies at that time that it wasn’t obvious how they were going to be useful in terms of building a next generation company? How did you approach those emerging technologies?

Leveraging Emerging Technologies: The Journey of Placed and Read AI

David: When we started Placed in 2011, smartphones were just starting to get into the mainstream. iPhone 2 had just released, Android hadn’t been released yet, or the dev version of Android had been released. People didn’t know what exactly to do other than, hey, there’s Fruit Ninja, there’s games you can play, there’s calculators, there’s flashlights, and those are all great, but that was version one where people saw some novelty with the applications that were available, but people didn’t know what to do next with that.

Where I started with Placed was on the thesis of: can you actually get some unique data out of the smartphone that you couldn’t get out of a computer previously? It was this concept of, could you measure the physical world in the same way that you do the digital world, and is that smartphone going to be that persistent cookie? For the longest time, it wasn’t the case. I had engineers going to the iPhone and Android conferences and asking, “Can you get GPS-level location data in the background with software?” The answer for a while was, “No, that isn’t possible.” It was only around 2011 that all those things fell into place and it became possible to do.

Matt: You had the mobile phone that was becoming increasingly common. You had location services and GPS data. I remember one of the challenges early on was being able to ping the app, the Placed app that you had often enough that you could get that data without burning up the battery life of the phone. That was a fun adventure, huh?

David: A hundred percent. I had my cousin, I think he was 19 or 20 at the time, use 12 different phones with different models that we had bought for battery life optimization, walk around downtown Seattle, go to different places inside and outside so that we could actually infer what is the best model to last throughout the day where the user isn’t impacted, and also get enough fidelity or context to actually infer did they go into the store or did they walk by the store.

Matt: Let’s talk about Read AI. There were some interesting technological and even some societal “why nows” there. This is early in the Covid era: video conferencing exploded, the cloud had come a long way, applied machine learning and all the different kinds of models. Talk us through some of the technologies that enabled Read AI that were, at a minimum, super-powered versions of things you were using a decade before to start Placed, and maybe some others too.

David: I think the bottoms-up approach to video conferencing was something that we hadn’t seen until Covid came along, and that accelerated things — you’ll see research reports now saying that five years of adoption were pulled forward within the first two or three quarters of Covid, because everybody went remote. You started to see that people were using the best solution available to them. It wasn’t a top-down approach of “you have to use this solution,” like BlueJeans — a lot of us used to have the equipment for BlueJeans video conferencing. Now it was: you’re remote, what is the easiest solution to use? It was Zoom. Then you saw Google Meet come in, and Microsoft Teams come out.

Now you saw this bottoms-up approach where people were adopting video conferencing as the default way you interact with someone. You don’t go in and say, “Hey, I’m going to fly out to see you.” Now the default is, “Hey, let’s set up a call.” And if you say, “Let’s set up a call,” it’s not a conference call number — I can’t remember the last time I had a conference call. It always defaults to video conferencing, and that adoption was pulled forward by five years. That brought up the demand in the market, where people were used to saying, “I’ve now chosen a platform. I am able to use this platform on my own for essentially every single meeting that I have.”

That’s very much like the iPhone and Android devices that came out where, at first, people were like, okay, this is kind of interesting. Then the apps started to catch on, and people started to install them — they connected their email, they connected their calendar — and the businesses weren’t ready; the enterprises weren’t ready for the smartphone and didn’t have policies in place. That was actually a good thing for driving adoption, because there was nothing to block it. The flood of usage was so strong, and I believe we’re seeing the same thing with Read AI and AI — the same level of adoption as we saw for smartphones. I’d even say more so from a mainstream perspective, because the cost is so minimal. It’s no longer sign a one-year or two-year subscription to get a smartphone and then sign up for a data plan. It’s sign up where it’s completely free, or maybe $15 or $20 a month.

Matt: Let’s go all the way back to Placed. You had to learn your way into the business model. I think you had a first assumption, and then you evolved because you listened to the market and you listened to the customers. Tell us about that evolution.

The Evolution of Business Models at Placed & Read AI

David: That was a hard one. Madrona was great for this one. I believed that location analytics was going to be a multi-billion dollar industry back in 2011-2012. I think, ultimately, it did become that; it just took a little bit longer. But the use case that I had was not the right use case. The use case I stuck with for about 12 months was I could pick any business in the United States and give you a trend of foot traffic across time. You could start to see trends like Black Friday, where did people go shop and where did they go shop next. Really cool data.

We got to the point that we were in the Wall Street Journal and the New York Times; it was not a problem getting press. It was also not a problem getting big C-level or VP-level meetings, because they had never seen the data before. They’re like, “Oh, you could tell me where people come after they visit my competitor? Okay, this is really interesting. I want to look at that data.” Or, do you get gas first or groceries first when you go on a trip? We were able to answer those questions, but the downside was there wasn’t a use case for that data. They would come in and say, “This is a great meeting, we love it. Can you give me a little bit more data?” We’d send the data over, and they’d say, “All right, thanks.” And we’re like, “Hey, do you want to buy anything?” And the answer was, “No. We’re out, peace out, we’re gone.” They didn’t know what to use it for.

The use case ultimately came from the customer coming to us — and the customer wasn’t the end consumer. It wasn’t the enterprise clients that we were directly talking with; surprisingly, it was the mobile ad networks and the mobile publishers. They had come to us and said, “Hey, installing games is the ecosystem when it comes to mobile apps today, but we’re trying to get more dollars from brick-and-mortar retailers, because we believe that people are in the physical world and you want to be able to close that loop.” They said no one trusted their data, because they were already selling the advertising — you don’t trust the person that sells you the advertising to tell you that it works. That’s changed a little bit today.

But they said, “We know that you have the cleanest data out there. Can you intersect our ads with your panel’s store visits and actually attribute: did someone hear an ad for Target, and then did they actually go to the brick-and-mortar Target location three days later?” And for the longest time, I said, “No, I believe that we’re a pure-play analytics company and we’re not going to do any advertising.” Then you and Scott and the Madrona team, for the first six to 12 months, were very much like, “If they’re willing to pay you money for this, maybe you should try it.”

Matt: Maybe you should see what customers who are willing to pay you money would actually be willing to do. The rest is history. It’s a very, very well-built, successful company. Let’s talk about Read AI. You’ve got these technology changes, these societal changes, and then you had to get engagement. How did you think about that? Ultimately, getting alignment with different parties, not just the consumer, but even making it work reasonably well with Zoom and these other platforms.

Engagement and Distribution: Partnering with Platforms

David: On the engagement side, we took the approach of: Work with the platforms. They have the control at the end of the day. They’re the ones that can also get you distribution. And I think with a lot of startups, distribution is a problem where you can have a great product, but if people can’t find it, if people can’t install it, that becomes a problem. And so what we did was we took the approach of working with the platforms. We had great partners at Zoom that said, “Hey, we’re launching this program called Essential Apps. And what Essential Apps does is we’ll put it in front of 10 million users on a daily basis where they will see an app on their right-hand sidebar.” So that was an incredible opportunity and we’re like, “Absolutely, let’s get it done.”

And the same thing came along this past year with Google Add-Ons, where Google said, “We are going to introduce apps or add-ons into Google Meet. We would like you to be one of six apps that are featured in that app store.” We’ve been featured over the last three or four months, and that’s driven significant adoption, and Teams has been similar in terms of the promotion that they’ve given us.

Those platforms and discovery opportunities have enabled us to get a lot of traction. The thing I would say is I made a very similar decision with Placed, but I made it a lot faster this time. With Placed, I was wrong the first time: location analytics to understand where people go in the physical world, not combined with anything else — I said that standalone was the use case, and it wasn’t. The use case was attribution. With engagement, the use case was, “Hey, in a real-time call, when I’m talking with Matt, and he’s a venture capitalist, I want to know when he’s disengaged, because when he’s disengaged I can try to recover in that meeting and say, ‘I know this slide’s not very interesting. Let me go to the next one, Matt.'”

The problem was once people started to use it, they didn’t know what to do with it. They saw this chart, and it would go up and down. As it went down, they started to get more nervous: “Well, what’s going on? How do I actually recover from this?” And there wasn’t a knowledge base to pull from to say, “Oh, when engagement drops, you should do this.” So there was a lot of education involved in that process. We found a lot of users, and there were certain use cases that we did really well for, especially large groups and presentations, but the stickiness wasn’t there. Where we found the stickiness was to go in and say: we can measure engagement and sentiment in real time based on a multimodal approach, audio and video — what can we combine that with?

How can we take that really unique data — these are proprietary models that we’ve built out with tons of training data — and apply it to something else to make it even better? What it came down to was transcripts. There were companies that had been doing transcripts for the last 10 years: some charged per minute, some had bots join the calls, and some were built into the platform. What you started to see was they were starting to do some summarization — partially due to OpenAI, partially due to their own models in hand — because people were asking for it: “I don’t read a twelve-page transcript after a call, but I would love to see a summary, and I would love to share that summary with other folks.”

We took that approach of, this is interesting, but everyone can do this. This is a wrapper. This is a commodity at the end of the day where I could take a whole transcript, write a couple prompts and get a summary. And that wasn’t interesting. So we said, “What do we do that is different?” And we applied the scores that we had. So when Matt says this, David reacted this way. When David said this, Matt reacted that way. We created this narration layer that wasn’t available in any transcript, and we played around with it, and we started to see that, “Okay, this is incredibly valuable. This materially changes the content of a summary because you’re taking into account that reaction from the audience.”

Matt: I think moving from trying to give me the assessment of the engagement in the sentiment, you then created what some call an instant meeting recap, but not just a superficial one, quite a robust one because you were actually using a variety of types of data, video, voice, and a variety of models. How did you think about which models are you going to take off the shelf? Which models are you going to customize? How are you going to mix these models together with the data that you have to ultimately produce this output of an instant meeting recap?

The Power of Model Cocktails: Enhancing AI Applications

David: Yeah, that’s a good question. Where we really focused was: we are the best when it comes to measuring engagement and sentiment, and we are the best when it comes to layering that engagement and sentiment on topics, on sentences, on content, on video. Those things were really strong. We then went in and said, “Okay, what is a commodity?” At the end of the day, you’ve got OpenAI — great partners — you’ve got Llama, you’ve got Google, and you’ve got a bunch of other open-source solutions in the market. That is a very hard problem to solve, but it is kind of like the S3 bucket: at the end of the day, it is the commodity that everybody will be using at some point, and you’ll choose between the licensed models that you prefer or the open-source models.

We said, “Hey, 90% is in-house models. For that last 10%, I’ve identified the 14 sentences that do the best job of recapping what the meeting was about, here are the action items, and we think this content had 90% engagement when it was being discussed.” If you can then load that in and just summarize it down to four sentences, that’s what we’re going to use the third-party solution for. It wasn’t about figuring out how they can analyze engagement. It was more: how do we bring our secret sauce into their system? That really did result in differentiated results — on the Coke challenge, we’ve been winning more and more over the last 12 to 18 months, and we’re seeing more traction in terms of market-share adoption. I think the best compliment we’re seeing is that the legacy incumbents are starting to copy our features.
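The 90/10 split David describes — in-house models pick the moments, a commodity LLM only condenses them — can be sketched roughly like this. Both `score_engagement` and `call_llm` are hypothetical stand-ins for illustration, not Read AI’s actual models or any vendor’s API.

```python
# Rough sketch of the "model cocktail": proprietary scoring does the heavy
# lifting, and a commodity third-party LLM only condenses the pre-selected
# highlights. Both helper functions below are illustrative stand-ins.

def score_engagement(sentence: str) -> float:
    # Stand-in for an in-house multimodal model that scores each sentence
    # by the audience's audio/video reaction while it was spoken.
    return min(1.0, len(sentence) / 80)    # toy heuristic for the sketch

def call_llm(prompt: str) -> str:
    # Stand-in for a hosted LLM call (the interchangeable, commodity part).
    return "[4-sentence summary returned by the hosted model]"

def summarize_meeting(transcript: list[str], top_k: int = 14) -> str:
    # Step 1 (in-house, ~90% of the work): rank sentences by engagement
    # and keep only the handful that best recap the meeting.
    highlights = sorted(transcript, key=score_engagement, reverse=True)[:top_k]
    # Step 2 (commodity, ~10%): send just the highlights, not the whole
    # twelve-page transcript, to be condensed into a few sentences.
    prompt = ("Summarize these meeting highlights in 4 sentences:\n"
              + "\n".join(highlights))
    return call_llm(prompt)

transcript = [f"line {i}: " + "word " * i for i in range(40)]
print(summarize_meeting(transcript))
```

The design point is that the differentiated signal (which moments mattered) is computed before the commodity model is ever called, so swapping LLM vendors doesn’t change the product.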

Matt: Always a great source of compliments when people are copying you. Let’s take us back maybe 12-14 months. You’ve learned a bunch of things. You’ve got this product and this ability to deliver these instant meeting recaps. It’s what I like to call an intelligent application. How do you start to get momentum around customers? At the time, I think you had very little revenue, and today, you’ve got tens of thousands of paying customers — incredible momentum in the business. How did you get from essentially zero revenue at the beginning of last year to the great growth you’ve had over the last 12-15 months?

The Momentum of Read AI

David: That was a bit of a journey. In 2022, we had very little traction, like I already talked about. We had a lot of interest, we had a lot of users, but ultimately it wasn’t what we’ve seen in the last 12 to 15 months, and that was an iterative process, to be upfront. We had the summaries that launched, and we got some traction there, but then people started to come in and say, “Hey, can you do video?” At first, we were like, “Ah, we don’t know if we want to do video.” We did the video, and then we started to tune it — we’ve got three different concepts. The full video recording — everybody has that, that’s table stakes. Then we went in and said, “Hey, highlight reel.” Think of it like ESPN for your meeting, where we can identify the most interesting moments by looking at the reaction.

If you think about it this way: if you’ve watched an episode of Seinfeld, you can go to the laugh track, and if you look at the 30 seconds before the laugh track, that does a really good job of showing what people are interested in and what people find funny. We started to build this highlight reel, but then we also took into account content. Now you’ve got the content plus the reaction, and that creates a robust highlight reel. We did something very similar to create a 30-second trailer. The idea here was customers were asking us for video, so we enabled that. The funny thing is what we didn’t do that others did: we didn’t roll out transcription until the summer. We said that is table stakes. Everybody has transcription. That is a commodity service at the end of the day. Yes, you could do it better than other folks, but everyone has it — it’s available in-platform where you hit a button.
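The laugh-track heuristic — the interesting content sits in the roughly 30 seconds before a strong audience reaction — is easy to sketch. The per-second reaction scores below are invented for illustration; in practice they would come from engagement models like the ones David describes.

```python
# Sketch of the laugh-track idea: when the audience reacts strongly at
# second t, clip the ~30 seconds of content leading up to t.
# The reaction scores here are made up for illustration.

def highlight_windows(reaction, threshold=0.8, lead=30):
    """Return (start, end) second ranges ending at each reaction peak,
    merging clips that overlap."""
    windows = []
    for t, score in enumerate(reaction):
        if score >= threshold:
            start = max(0, t - lead)
            if windows and start <= windows[-1][1]:
                windows[-1] = (windows[-1][0], t)   # extend previous clip
            else:
                windows.append((start, t))
    return windows

# Two minutes of flat reaction with spikes at t=45 and t=100.
reaction = [0.1] * 120
reaction[45], reaction[100] = 0.9, 0.95

print(highlight_windows(reaction))   # → [(15, 45), (70, 100)]
```

Stitching the returned ranges together gives the highlight reel; combining the reaction signal with a content score, as described above, would refine which windows survive.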

We said, “We don’t want to deliver another copycat product. What we want to do is be the best when it comes to meeting summaries, the intelligent recaps, the key questions, the action items, the highlight reels.” Then we started to say, “Okay, we’ve got all this, this is great, but how do we activate against it?” That comes from the advertising and attribution background that I had: how do you activate against the data? Interesting data is interesting data, but if you’re able to activate it, it becomes valuable.

So we started to test things like email distribution and Slack distribution where we pushed out the summaries to where people consume the information. We didn’t need to be that central hub for reading the reports. We’re going to send it to wherever you’re used to reading it, and that actually started to gain more traction where people said, “Oh, this is great. After the meeting is done, I get a recap.” Or, “Hey, this is a recurring meeting and you’re going to send me the notes an hour before, so now I can actually review the notes because I forgot why we were going to even do this call.”

Matt: I love that feature. Going back to, I think, what was your original inspiration here is really trying to make me personally more productive and the teams that I meet with collectively more productive. It’s this awesome combination of productivity and action, and I think something you like to call connected intelligence. Talk a little bit more about this concept of connected intelligence.

Read AI: The Future of AI Productivity & Intelligent Agents

David: It really plays in with intelligent apps. With intelligent apps, you’ve got that marketplace set up, but connected intelligence goes in and says, “How do I connect those individual apps so they talk with one another?” If I have a meeting about a new product launch, that meeting will generate a report today, and it’ll send it out to everyone, and that’s great, and there’s some actionability there. But what if that meeting then talked to an email that was sent out with follow-ups that said, “Here’s the deck associated with that. Here’s some proposed wording for the specific product launch and timelines.” If those two can talk together, it creates a better summary and a better list of action items, and now the action items are updated, where it could go in and say, “Hey, David was going to send this deck over to the PR team.”

Did David actually send it over? Well, the email’s going to tell the meeting: yes, David did send that over; check that off the list. So that is a completed deliverable. Now the follow-up in the email is, “Hey, the PR team is going to provide edits associated with this, and it hasn’t been delivered yet,” because we’re connected to email. These entities, at the end of the day, are able to talk with one another, and they act as your team. We tried the concept of a team really early on: it joins your meetings, it does this. That was the early version.

What we’re seeing now with the prevalence of AI is that you can make each one of these things an entity, and these entities can independently talk with one another and deliver content just in time. You’ve got a meeting coming up, and you had an action item for the pre-read, but now we’ll look at your Slack messages and your Gmail messages and say, “Hey, these things haven’t been delivered yet, or the client’s going to ask you these three questions. Keep this in mind.” That’s going to create a much more productive interaction at the end of the day. A shorter, more productive interaction.

Matt: No, I love this. It’s kind of going both to some of the things that you’re already doing and also some of the vision of where you’re going with the company. There are all these business processes and all these workflows, and they’re increasingly digitally based. As you point out, it is interconnected between email or Slack or Zoom calls or whatever it might be. I like to think of them as sort of these crossflows. They’re workflows and processes that cut across different applications that I live in. And effectively all those things are different forms of applications. So maybe say a little bit more about where you see this world going and Read AI’s role in it around this vision of connected intelligence.

David: We’ve got two similar visions, and we expect to get there in the next year. One is, let’s say you’re a project manager, and you have a number of different meetings that occur. You have a number of interactions that happen via email. You’ve got internal follow-ups within Slack and Teams, and then you’re updating Asana and Jira tickets. All of those things today are manual. You have to go in and connect the dots. Now I’m going to look at the meeting report and look at what was delivered to the client. Now I’m going to think about whether this file actually got delivered, and then I’m going to go into Asana and check some things off, and then go to the Jira ticket. Those are a lot of steps, and they’re steps that don’t require critical thinking. All the information is there. That connected intelligence is there.

Where Read AI is going is that we’re going to update all of that for you. If you’re a product or project manager, all of that mundane busy work, that moving around of papers, is taken care of; you don’t need to worry about it, and the ticket is automatically updated. Then Jira is going to send an update to the people watching that ticket and say, “Hey, this got completed. This is the next step,” because that was discussed in one of those entities, one of those connected apps. You actually inform what the status is.

A bigger problem comes up with sellers and tools like Salesforce and HubSpot. I remember leading revenue at Foursquare: I could not get my sellers to update Salesforce. I would threaten them and say, “Hey, you’re not going to get paid unless you update it.”

At the end of the day, it didn’t matter how many times I said it, and I was CEO at the time; people were busy. They don’t have time to do it. They’re going to prioritize internally and say, “I’m going to close deals. I’m not going to update the books.” Where Read AI comes in is that we’re going to update that Salesforce opportunity. We’re going to increase the probability of close based on the information that’s available. By doing that, you’re enabling your sellers to do what they do best and go to market. That’s what Read AI does on the back end: it makes sure everything is up to date and talking with one another, so it can say, “Hey, seller, you might want to go ping that client, because it’s been three weeks, your normal cadence is to ping every three weeks, and we see this is missing.”

Exploring Product-Led-Growth and Market Expansion

Matt: I love those prompts and nudges that allow the individual to be more personally productive. I think that’s been one of the great attributes of Read AI. It’s really been a bottoms-up, product-led growth kind of motion. What’s cool, of course, is that I’m using it all the time, and people are like, “Oh, what’s that?” And I get to tell them about it. There’s a neat embedded virality. But how have you thought about PLG, and what have you been learning about how to be successful as a product-led growth company?

David: The approach that we took is a little outside the norm. When you’re a startup, you want to focus in on one segment. When we started Read AI, we took the approach, with support from Madrona, of going broader; Matt was the one to say, “Hey, we want to go broader.” We want to go in and be mass market when it comes to engagement and sentiment, wherever we can apply it, because this is a new technology, and we don’t know where it’s going to be used. It took us a while to figure out that product-market fit, but by being broad, we were able to see use cases that we would’ve never gotten the ability to experience or get feedback on.

I’ll give one example that’s a little bit more individual and less of a big market, but really impactful. We have someone who has dementia, and they reached out to us and said, “This has changed my life because now when I meet with my family, Read AI joins the calls, it takes the notes, and before I meet with my family again online in Zoom, or in Google Meet, I can actually look at the notes and remember what we discussed.” They actually had us connect with their caregiver to say, “Hey, this is how you make sure this is paid for. This account is up to date and make sure all these things are set up.” Because he wasn’t able to do that, but he said, “This has changed my interaction with my family.” That was awesome. That wasn’t a market that we were going after, but that was great.

We see off-label use cases too. For example, we have government agencies, state agencies, and treasury departments in certain countries that are using Read AI. When they use Read AI, a lot of times it’s bottoms up. They just saw it, and they’re like, “Oh, this would help me out.” What we found is that the bottoms-up approach finds new use cases. This one agency that I won’t name has clients, which we’ll call patients, and they’ll go see these patients out in the field.

The old way to do that was they would go to the meeting with a tape recorder, record the meeting, take notes, and interact with that person, and then they would go to the next client, and the next, and the next. A lot of the time, they would spend about one day a week writing the notes and putting them into their patient and client management solution. That was a lot of work. Then we introduced a feature where you could upload your audio and video, and they started to record on their phones. They uploaded the audio from the interaction with the client, generated these meeting notes, summaries, action items, and key questions, and just cut and pasted all of that into their patient management solution.

They loved it. One person said, “I do not know how to write a report. I did not learn this in college. This is not what I specialized in. Now that I can use Read AI, I can interact with the clients, which is what I want to do. This is my job, and Read will take care of it and upload all that information.” They said this is phenomenal. Then they started to use our tagging feature and say, “Hey, we’re going to tag individual clients, and now we can see how things are progressing across time,” because we don’t just summarize a single meeting but a series of meetings. So hey, are things improving? Did we answer this question that came up last week? They wanted to know what was going on with this; did we actually deliver an update on that? Those are things where, a lot of times, we get lost in the noise with the amount of work that we have. Read AI is able to make sure no one’s lost in the noise — none of those action items are lost.

Matt: That’s fantastic. I remember back to some of the earlier days of cloud collaboration and how Smartsheet, one of the companies we backed, gosh, 16-17 years ago, started out in a very horizontal set of use cases like you’re describing. When you have a big disruption, whether it was cloud back then or, now, all these capabilities and all these different kinds of models you can use in applied AI to build connected intelligence, that’s part of why you can start more horizontally at this point in time and let the market teach you and tell you about different kinds of use cases.

Horizontal v. Vertical

David: The level of understanding is key, especially in this early market, because if we had gone too narrow, we would’ve missed out on these opportunities. I can tell you that 30% of our traffic is international. Outside of the US, that traffic is predominantly centralized in South America and Africa. If you had asked me when I first started Read AI whether 30% of my traffic would come from South America and Africa, I’d have said, “No, that’s not the market that I would expect to adopt AI very quickly and use it in their day-to-day.” What we’re finding is that the adoption has been phenomenal, where we’re covering 30-40% of a university student base that is starting to use and adopt it.

At our peak, a couple of weeks ago, 2% of the population in South Africa was using Read AI, not necessarily as the host of the meeting, but Read AI was on a call that had participants in that meeting. Those things get me really jazzed up to say, “Wow, this is something where AI is not just about the enterprise.” There is a clear enterprise opportunity, but it’s how do you help the people from a PLG perspective? How do you actually deliver ROI to the oil driller in Nigeria who has to write a report and send it back to China, which is an actual use case — and they’re using it.

Matt: Wow, amazing set of use cases there. Then sort of embedded in that is just the ability to do this across languages and there’s all kinds of exciting things that you’ve done and you’re working on. One of my wrap-up questions here is what are the challenges for what I’ll call a gen-native company like yours, and in particular relative to the incumbents, the companies that are trying to enhance their existing software applications with generative AI capabilities, how do you think about native versus gen enhanced?

Gen-native v. Gen-enhanced

David: On the gen-enhanced side, if I were naming competitors for Read AI, a lot of people would say: is it Copilot? Is it Zoom’s AI Companion? Is it Google’s Duet? For us, that’s not really the case; they’re educating the market. I’ve been in a market where I had to educate everyone, and that is a very expensive thing to do. These incumbents are educating the market about the value proposition. People are using it, and 80% of free users are going to say, “This is great, this is good enough.” Then there is the audience that’s a little older, like me. If you remember Microsoft Works, that was the cheap version of Microsoft Office. Works was $99; Office was $300. A lot of people used Works and said, “Oh, this is actually pretty good. But when I need to do a pivot table, okay, I need to upgrade to that next version.”

What we’re seeing is there’s this whole new base of users that understand AI and the value, and they’re going in and saying, “I need more. I need specialization. I need cross-platform compatibility where half of our users use Zoom and some other solution, or use Google and some other solution, or Teams and some other solution.” From that standpoint, it has been great to actually get the incumbents to adopt this technology and evangelize it.

What you’re going to see is the Smartsheets of the world come in when it comes to AI. You’re going to see the Tableaus of the world, where there’s an incredibly large market to be had there. I think it’s just the start, and I think this is where the consumer and the horizontal play is actually really big, is that we are seeing that AI provides value even without an enterprise involved. If you can take that level of education, accelerate it, and show the value of one step above for $15-$20-$30 a month, that’s a slam dunk. We’re seeing that level of adoption today.

Importance of Model Cocktails

Matt: You’ve got this whole set of cross-platform capabilities. I’ve also been really impressed with the way that you’ve used different kinds of models, some that you’ve fine-tuned yourself and others where you leveraged something like OpenAI, and how you’ve brought those together for the transcription you were talking about before. I like to call it model cocktails: you’ve mixed a bunch of models together to create these amazing instant meeting recaps and, increasingly, these kinds of connected intelligence crossflows.

David: That will be key, because if you only rely on one single model, you become a prompt engineering company at the end of the day. We’ve seen some competitors to our solution, great competitors of course, going deep into, “Hey, do you want to pay a little bit more for ChatGPT-4 versus 3.5 versus 3?” For us, that just highlights that you’re too dependent on that solution. You’re not differentiating; you’re not adding enough value beyond the underlying technology that you’re utilizing. It’s been really important to say, “Let’s use a mix of models.”

Take transcription from a language model perspective: there are some really good models out there, open source as well as paid, but relying on a single one is limiting. If you’re able to layer two on top of each other, you can ask, “How do I stop some of the hallucination that comes up where certain words are totally incorrect?” If we’ve got a score between model one and model two, and the variance can’t be more than X, you can start to identify points where a model starts to hallucinate a little bit or goes off the rails. Those are the checks and balances you get when you have multiple models running, and then you bring in your own proprietary model on top of that and say, “Okay, what other secret sauce can I put into that mix?” I think that is where the market is going to go, more than a standalone model.
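The two-model cross-check David describes could be sketched roughly like the snippet below. This is a minimal illustration, not Read AI’s actual pipeline: the function name, window size, and agreement threshold are all invented for the example, and it uses simple string similarity as a stand-in for whatever scoring a production system would use.

```python
import difflib

def flag_disagreements(transcript_a, transcript_b, window=5, min_ratio=0.6):
    """Slide a word window over two transcripts of the same audio and flag
    positions where the two models diverge beyond a threshold. Windows with
    low agreement are candidates for hallucinated or mis-heard words."""
    words_a = transcript_a.split()
    words_b = transcript_b.split()
    n_windows = max(1, min(len(words_a), len(words_b)) - window + 1)
    flagged = []
    for i in range(n_windows):
        chunk_a = " ".join(words_a[i:i + window])
        chunk_b = " ".join(words_b[i:i + window])
        # Character-level similarity in [0, 1]; 1.0 means identical chunks.
        ratio = difflib.SequenceMatcher(None, chunk_a, chunk_b).ratio()
        if ratio < min_ratio:
            flagged.append((i, ratio))
    return flagged
```

In this sketch, identical transcripts produce no flags, while a span where the two models disagree badly falls below the threshold and gets surfaced for review; a proprietary model or human pass could then arbitrate just those spans.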

Matt: I totally agree with you. Even more generally, apart from your specific area of personal and team productivity, this is going to be a big year around these intelligent applications and applied AI. Where do you see some of those big areas of opportunity outside of your particular category? What are things that you’re excited about in this world of intelligent applications?

Opportunities for Founders

David: From an education standpoint, I think there’s a lot of opportunity. I’ve been talking with a few teachers at different grade levels, and some of them don’t even know OpenAI exists. Some of them are starting to say, “Hey, I’ve heard about this. I think my kids are probably using this, but I don’t have a POV there.” I think there’s an ability to provide personalized, scalable education that’s customized to the student. I’m excited about that as an opportunity, especially as an uncle, to go in and say, “Hey, where you’re strong, we can make adjustments, and where you’re not as strong, we can provide a little more focus and hand-holding that the school system might not be able to provide at any given point in time.” That’s really interesting for me.

When it comes to productivity AI, which falls in our space, I think there are some really interesting opportunities around things that we do every single day, like email. Email could be so much better. There’s the concept of a context window, and these context windows are getting larger and larger. If you have intelligent apps with connected intelligence, those context windows aren’t limited to just email; you can start to bring in other things. The ability to bring in different data sets is going to surface some interesting learnings.

Matt: I love both of those points. In the education domain, there are so many opportunities to be helpful to the teachers and more personalized to the students. It’s going to be a very exciting time ahead. As you point out, we just had Llama 3 announced, and no doubt GPT-5 is quickly around the corner. As you say, things like context windows are going to make the capabilities even more robust. You’ve been just an awesome founder and CEO to work with. You’ve got an amazing team, and I’m looking forward to the journey to build Read AI into realizing its fullest potential. Thanks for joining me here today.

David: Absolutely. Matt, you and the team at Madrona have been phenomenal champions, especially for Read AI, and for Placed when we were just an idea and the market was just starting to form. If you’re listening, it’s a great time to build a company. There’s never been a better time. It’s never been faster and easier to build and scale.

Matt: Well, let’s go build. Thanks very much, David. Enjoyed having you.

David: All right. Thanks, Matt. Appreciate it.

 

Transforming Corporate Travel: A Conversation With Steve Singh and Christal Bemont

Transforming Corporate Travel: A Conversation With Steve Singh and Christal Bemont

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Digital Editor Coral Garnick Ducken chats with Madrona Managing Director Steve Singh and Christal Bemont, the CEO of Direct Travel, Madrona’s newest portfolio company. Over the last few years, Steve has been seeking out companies that are transforming the corporate travel ecosystem with the goal of delivering a dramatically better value proposition to business travelers, the companies they work for, and the travel providers that serve them. The acquisition of corporate travel management company Direct Travel on April 2nd is the fourth pillar in Steve’s vision, and that’s what these three dive into today.

This transcript was automatically generated and edited for clarity.

Coral: So, to kick us off, Christal, why don’t you walk us through how this new venture is going to work?

Direct Travel: Transforming Corporate Travel Management

Christal: Sure, I’d love to. I am absolutely thrilled to be partnering with you again, Steve. We spent many years together at Concur, and you’re also a very dear friend, so thank you for this opportunity. As you mentioned, we recently announced the acquisition of Direct Travel, and Steve, you were a critical part of that, as was Madrona, along with a number of other renowned investors. I’m very excited about serving as the CEO and having you as the chairman. This is an exciting adventure that we’re headed into. For anyone who doesn’t have a lot of background in travel management companies, I’ll get into what Direct Travel does. A travel management company works with businesses on corporate travel, focusing on companies with a managed travel program.

What that means is that there’s usually a group of individuals who sit within a company and have oversight of the safety of their travelers and the spend of the program, making sure that they optimize the travel experience for their employees.

This managed travel space is something that’s been around for a very long time.


Direct Travel has been around for many years. They are the fifth-largest corporate travel management company in North America, focusing on mid-market and enterprise customers. Let me give you a couple of examples: Cirque du Soleil just announced that they are going to move forward with Direct Travel. Topgolf is another example, and Chick-fil-A here in Atlanta, where I’m based.

If there is one thing I can say that stands out as most unique about Direct Travel, it is our customer service. That comes down to the people at Direct Travel, who, in some cases, are working with customers who have been with us for 30 years.

So, what are we here to do at Direct Travel? There is a $1.4 trillion business travel market. We see Direct Travel as a critical component of doing a few things: providing incredible value to business travelers, making sure that those business travelers can support the companies that they’re working for, and making sure that we support and take care of the suppliers that take care of that ecosystem.

We feel that there’s a great opportunity, and even a responsibility, to make sure that we show up with the incredible service that we’ve always shown up with, and also that we bring to the forefront some of the new technology that I know Steve’s going to talk about in just a moment.

Coral: Steve, before we dive into Direct Travel and how it fits into this tech stack that you’re envisioning, perhaps you can detail the challenges business travel has today and what you thought needed to be tackled.

The Challenges of Modern Business Travel

Steve: Before I delve into that, I want to touch on a few things Christal said. There are times in life when you get to work with amazing people, and while it’s always wonderful to do great things in life and create great businesses, it is all made incredible by the people that you work with. And Christal, I feel you have no idea how happy I am to work with you again. And obviously, the other members of the team as well: Todd, Scott, Christine, and John Coffman. These are incredible human beings, and that’s what makes life just a blast.

When you think about the travel industry, this is a multi-trillion-dollar industry. There isn’t a business person who isn’t touched by it or who doesn’t take business trips. And it is a very antiquated, fragmented technology stack that serves this multi-trillion-dollar industry. Whether you’re talking about the distribution of content (the GDS layer, or global distribution systems), online booking tools, mid-office tools that are predominantly provided by Concur but also by other companies such as Neko and Dean, or the integration into the back office, all of this is aging infrastructure. More than that, these are closed systems, which basically means that you’ve got fragmented data and disjointed travel experiences — and that’s not a great recipe for delivering delightful travel experiences.

So, even today, basic things are missing: integrating into a traveler’s calendar, providing access to all the travel inventory we want to consume, predicting needs, providing proactive and intelligent responses to travel disruptions, or, frankly, handling simple change requests. Things like checking into the hotel at the ideal time, seamlessly integrating ground transportation into the travel experience, or, ideally, eliminating the concept of the expense report. These are just ideas. We’re sitting here in 2024, and they’re not a reality. And you have to ask yourself, well, why aren’t they a reality? And by the way, that’s normal business travel. The problem is even bigger when you think about group travel, where almost every element is manual. It’s typically Excel spreadsheets and the like.

I’ll just add one more thing about the legacy systems. I think what makes it even worse is that because they’re closed, the level of innovation that you can drive on top of these systems is very, very limited. I would argue that, at best, it’s limited. So, take one step further, advanced concepts like transparency at the point of purchase between the traveler and the suppliers to serve them — that’s completely non-existent and can’t even be enabled on the legacy infrastructure in the industry today. This is a big set of challenges. Now, we’re fortunate that these can be solved with modern cloud-native architectures. Frankly, it’s not just technology; it’s an open platform mindset that’s also critical. Anybody can build technology, but you have to take this mindset that we’re going to be open, that we can allow others to innovate, and that all of us can benefit from the work of others.

Coral: So now that Steve has set the scene a little bit. Christal, I’m sure you remember Steve coined the term “The Perfect Trip” while you guys were together at Concur. Why don’t you tell us how that concept came about, and what that concept really meant for you guys?

The Perfect Trip: A Vision for Seamless Travel

Christal: It is something that I’ll never forget, and I really mean that. It’s a feeling and an experience from when it happened many years ago that just made so much sense. Maybe the best way to frame it, in my opinion, is that it’s something you never stop the quest for. There’s never going to be a perfect trip, because there will be things outside of your ability to control. But even as we were thinking about it back then, and as Steve presented it, it really was about everything that goes into the trip, from the minute you start thinking about taking a business trip to the point where you’re back home. What is that connected experience? How do you take care of the travelers who are such key employees? What are all those gaps or things that could be disrupted that we can anticipate, and how do we get ahead of them?

Look, we all know that travel is a massive grind. It’s probably the least enjoyable thing. People think that travel is fun and all these great places you go. You just groan at the idea of all the things you might encounter. So, back when that first came up, it was an exciting proposition about looking at the complete picture. At the time, I was looking at it, and many other employees were looking at it in a lane of a few critical things we could do. When Steve took those blinders off and said, “Hey, this is really about us showing up from the moment someone thinks about it and every step of the way to the moment when they return home.” There were limitations back then in terms of what we could do, but now I feel it’s why you hear this deep passion from me about wanting to be on this journey again. Because I feel like we’re in a much different place and have a much bigger opportunity to solve some of those things. And I know Steve, you just mentioned a few of them, but I don’t think you ever stop on the quest for the perfect trip. It’s a responsibility we have. And when you’re in this industry, you just see so much opportunity, and I feel like we’re at a perfect time to embrace it.

Coral: So, I know, Steve, from talking to you before that there was a time when you decided, okay, this perfect trip is not going to happen. You’d have to reimagine all these foundational elements of these closed systems that you just explained to us. Why don’t you tell us about that first meeting with Sarosh at Spotnana, when you realized, “Oh wait, maybe we can still do this. Maybe transforming corporate travel is possible.”

Spotnana: Building the Next-Gen Travel Platform

Steve: This speaks to the mindset comment I made earlier. First of all, I also believe in karma, by the way. I had not prioritized meeting Sarosh even though he had reached out to me multiple times. When I did spend time with him and listened to what he was trying to do, I kicked myself for not taking that meeting earlier. He’s just a wonderful human being who also has incredible experience in the travel industry, and, frankly, he has a great technology vision. The part that spoke to me in that meeting was that he understood that there are lots of different ways to build a next-generation travel company. One is to do a better job than the last one, like Concur, and go build something that is Concur 2.0. To me, that’s not interesting. Because all that happens is there’s incremental improvement in the experience. Maybe there is a slightly better user interface.

What Sarosh was talking about displayed a level of understanding of why some of our travel experiences are disconnected or disjointed, or why the customer service experience is so poor. He said, “Look, the thing I want to focus on is fixing the plumbing of the travel industry.” And we spent a bunch of time defining what that is. What does that mean? What is the plumbing of the travel industry? What he was really talking about (it’s a little bit geeky) was that we have to have a data model that is broad enough to encompass a modern definition of what business travel really is.

We have to have an open system that anybody can build on, where anyone can extend that data model. We also need to allow the supplier to know who the buyer is at the point of purchase, so that the two people who are the most critical elements of the trip can collaborate in a way that ends up being a better outcome for both of them. As we aligned on that vision, I quickly changed my mind about wanting to invest in the travel industry again, and I said, “Look, when you find people like that, you want to support them, and you want to help them deliver on their vision.”

And, we’ll talk about Naveen and Dennis shortly, but the same thing is true with Christal. You invest behind incredible human beings who are also solving some very, very big problems that make the experience dramatically better for you and me as consumers of those experiences. So, that meeting with Sarosh led to us investing in Spotnana, which was building out this open platform that could be consumed through whatever services a client of the Spotnana platform would want to use. It was built on the idea that there’s transparency between the traveler and the supplier.

And, that was back in late 2019. We went through COVID-19, and that was a difficult time for all of us, for every member of the travel community. But here we are now in 2024, where Spotnana sits. Some of the biggest customers in the world have moved to the Spotnana platform because they want all the content. They want serviceability on NDC; they want open, extensible platforms they can build on, that others can build on, where they can benefit from that innovation. But it’s not just customers. Suppliers — American Airlines, Copa Airlines, Qantas, and other incredible names in the travel industry you’re going to hear about in the not-too-distant future — have decided they want to partner with Spotnana. They are partnering with Spotnana, right now, on this concept of NDC. I want to spend just two seconds defining what NDC is, because most people are going to say, “Well, geez, NDC has been around for a long, long time.” And it has. But just because it’s been around doesn’t mean it’s been useful. It hasn’t been used because buying the content is one thing; you also want to provide service on the content should the traveler ever need it. What Spotnana figured out was how, in this new technology stack, you take content from the GDS and content directly from the supplier and provide the technology capability to service that content regardless of where it came from. That was a game-changer. That’s what allowed Spotnana to be incredibly valuable as a next-generation platform.

So great customer adoption, great supplier adoption, but there’s also another piece that they really executed well on, and that is a testament to the fact that they’re an open platform. They’ve gotten other leaders on board: companies like Center, which obviously delivers an expense management platform. Center is integrating at the travel policy and travel profile level, so you don’t have to replicate those if you happen to use both services. Troop is extending Spotnana’s services into the group meetings and events arena. And Direct Travel, as we just heard, is integrating all three of these services, in combination with their own and all the data that they have on their customers, to deliver a highly personalized travel experience.

So you’re seeing Spotnana become this open platform that’s being adopted aggressively across the industry. And I think that’s going to lead to not only better experiences for the traveler and for the supplier but, frankly, a new generation of technology companies in the travel industry that will define what this industry looks like in the decades ahead.

Coral: So with Spotnana, that’s resolving this poor plumbing issue that you’ve talked about. And then, of course, you just mentioned Naveen and Dennis, which is Center and Troop. So why don’t you talk briefly about each of those solutions and how they fit into this travel stack that you envision for transforming corporate travel?

Integrating Solutions: Center & Troop’s Role in the Travel Ecosystem

Steve: It was critical to get the travel infrastructure right, and that was the Spotnana platform. But it was just as important that other core services required to deliver a delightful travel experience were also reimagined and reinvented. One of them was the expense report. Christal and I certainly know a lot about this market segment. And I would argue that Concur did a lot to move us past paper-based expense reports. I don’t know if most people would remember this, but it wasn’t more than 20 years ago that all expense reports were done on Avery forms. Well, now we should be thinking about this and saying, why does the concept of the expense report even exist? Why can’t it just go away? Why can’t it effectively be created automatically in the course of my business travel?

And it turns out that’s possible, but you have to reimagine the technology landscape to do it. So Center really said, “Well, look, we can build a tech stack that integrates all the way down to the card processing layer.” At the point of swipe, we can pick up all the information we need to process that expense report. In fact, through a range of AI services that are also built into the stack, it can actually go from swipe through approval, through audit, and then integration into the GL, typically in three seconds or less. Literally, the concept of an expense report is now just a swipe. As you’re using your card, you’re actually creating the expense entry. And that fundamentally changes the user experience. But more than that, they innovated further and said, “Look, we’re going to integrate this with financial products like the corporate card.” So, even the economics of what an expense reporting company looks like are fundamentally different.

So now, let me move over to Troop. I met Dennis, the Troop CEO, a number of years ago. We had this view at Madrona that there has to be a better way of planning, booking, and expensing group travel. To be clear, so that everyone understands, group travel is about half of the corporate travel industry. This $1.4 trillion number that you sometimes hear — about half of that is group travel. All of that group travel is manual. And our view was that we could bring a level of automation and a better client experience to those business processes, the same way that Spotnana is driving a better experience in individual travel and Center is driving a delightful experience in expense reporting. And so we invested in Troop. Much like Spotnana many years ago, Troop has spent the last few years building out the technology infrastructure to solve the planning process, the booking process, and the expense reporting process.

There are a couple of things within that that I think are a further testament to this idea of an open platform. You can plan group travel within Troop, and we’ll manage your itinerary as a group itinerary, by the way. So you can see when your colleagues are arriving and what the group itinerary for the entire trip is. But not surprisingly, when you book, there are API calls into Spotnana to do the booking, and you don’t even know it. It’s just completely seamless. When you file the expense report, not surprisingly, it’s Center that’s doing all the expense reporting. It’s all seamlessly done and integrated into the process. And to me, this is the modern example — this is how modern applications will be built. You’ll consume services from the best-in-class providers of those services. In the case of expense reporting, it’s Center. In the case of group travel, it’s Troop. In the case of core travel infrastructure and individual business travel, it’s Spotnana. All are seamlessly integrated. And then, obviously, we’ll talk about how that’s integrated into Direct Travel, but these are incredible companies that just expand the value proposition.

Coral: Perfect segue, Steve. So we have Spotnana, that layer that everybody can build on top of now. Troop is going to manage the group travel side of things, and Center is going to take away these pesky expense reports that we all just love doing. So then, Christal, why don’t you tell us how Direct Travel fits into this vision of transforming corporate travel and what your vision is now that you’re CEO.

Transforming Corporate Travel: Direct Travel’s Strategic Vision

Christal: It might be helpful to give a little bit of the landscape of the different types of travel management companies. It’s kind of split down the middle. There’s a smaller group that approaches it from a service-heavy, agent-heavy perspective. It’s built on technology that’s somewhat antiquated and closed, very much the kind of aging travel infrastructure Steve just described. These are the traditional travel agencies you might be familiar with: they lead with service while working off this older technology, which is a disadvantage for them and their customers. And then you have another group of companies that lead with technology, in many cases with a technology-only view, where servicing is a very distant second.

So, if you look at what exists in the ecosystem today, you have two very different types of approaches. I think this goes back to the point I was making earlier about why this acquisition of Direct Travel is pivotal to the way forward: it’s not just about the technology. It is also about the service. You have to have exceptional technology and service combined. It’s the perfect coupling, the curated set of technologies that Steve mentioned before, and the very first time that the three technologies will come together seamlessly, serviced just as seamlessly, in a way that, from Direct Travel’s standpoint, allows us to really show up for our customers. These are big changes for customers. These are things that really evolve what it means to partner with a travel management company like Direct Travel.

Being able to bring the technology is certainly part of it, but being able to service the technology as seamlessly and as carefully as we have in the past, on this new tech stack, is really, really important. So that’s No. 1: bringing that together in a way that’s very fluid and very seamless for our customers. Steve also touched on something that’s really important, and it’s about data. It’s about being able to provide insights to our customers. What managed travel means today is likely to change in the future, in terms of the way people think about the programs they have set up. We will be incredible partners in leveraging the data and insights we have to help our customers along the way and to make sure their programs evolve as new opportunities emerge.

But the same thing goes for suppliers. It’s about working with our suppliers to make sure we provide personalized information, which is really the reason why NDC is so important right now: being able to get suppliers’ personalized offerings to travelers and connecting those things. And the last one, we’ve already talked about, but I think it’s important to reiterate. It’s about the open platform and continuous innovation. It’s not just about the innovation that we will certainly set out with AI and some of the other things we’ll be doing, building on top of the application stack that Steve just talked about. It’s about being able to provide value to our customers by having an open platform that others can innovate on top of as well.

We feel like when people decide who they want to partner with from a travel management perspective, it’s going to be a partner that evolves with them and leads the changes in the market. We can not only provide this best-in-class open-architecture technology that brings all the benefits Steve just talked about; we can do that while working with them on service, and we can continue to evolve as their needs evolve and even lead some of those changes in the future as well. Going all the way back to that perfect trip, each one of these components is critical to us on this quest of trying to fully realize it. It is putting them all together very carefully and very thoughtfully so that our customers and their travelers can benefit.

Coral: So as we’ve talked about all of these different pieces and how it all is going to fit together, Steve, when are we all going to be able to live this life of the perfect trip?

Realizing the Perfect Trip

Steve: Well, first of all, you can see why I’m so excited to partner with Christal again. I think that the mindset, and the team that tends to follow Christal, is really what will allow these great ideas to become a reality. So, I’m very excited about the next five years. Now that said, let’s bring it back to today. What Christal and the team are working on is standing up this new stack of Spotnana, Troop, and Center on the Direct Travel platform: everything from our core systems that run our business to how we provide service to our customers. We expect to be done with that in the summertime. We plan to showcase integrated travel and expense offerings, plus what our group travel offering might look like, at GBTA in Atlanta. We are looking forward to any customers who want to stop in and see the products that we’re building. We’d love to have you join us.

We think that sometime in late summer to early fall, we’ll be shipping our first sets of products. Now that said, one of the things that I love about Christal and the team she is working with is that this is just version one. There’s an ongoing innovation cycle and an innovation mindset that makes this incredible. So, literally every single month, you’re going to see new functionality being delivered and new services that we will make available to our customers. So I am very, very excited about what we can do together, not just delivering a perfect trip but, frankly, reinventing the business travel industry to be far more customer-focused, far more supplier-focused, and just a more streamlined industry.

Coral: Well, I know that all of us business travelers are going to be eagerly awaiting this, and we could keep this conversation going between the two of you, I’m sure for hours. But why don’t we go ahead and stop there? I want to thank you both so much for joining me today.

Christal: Thank you very much.

Steve: Thanks for having us, Coral.

Building a Modern Database: Nikita Shamgunov on Postgres and Beyond


Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Palak Goel hosts Nikita Shamgunov, co-founder and CEO of Neon, a 2023 IA40 winner. Nikita is an engineer at heart, with early stints at both Microsoft and Facebook. He previously co-founded SingleStore, which he grew to over $100 million in revenue. Neon is powering the next generation of AI applications with its fully managed serverless Postgres. In this episode, Nikita gives his perspective on the modern database market, building a killer developer experience, and how both are speeding up the creation and adoption of intelligent applications. Congratulations to Neon for recently hitting GA. And if you’re watching on YouTube, make sure to hit that subscribe button. With that, here’s Nikita.

This transcript was automatically generated and edited for clarity.

Palak: Nikita, thank you so much for joining me. Why don’t you share a bit about your background and what your vision is for Neon?

Nikita: I’m a second-time founder, and my core background is systems. After finishing my Ph.D. in computer science (even though I was doing a lot of moonlighting jobs while studying), my first real job was at Microsoft, where I worked on the SQL Server engine. From there, I started a database company called SingleStore. It used to be called MemSQL, and I started as CTO. I wrote the first line of code. I had the experience of building a startup from scratch, went through Y Combinator, and lived in the office for a year until I got a place; we would wake up and code until we went to sleep. Then, I took over as CEO in 2017. The company had about $7 million in run rate, and I scaled it — as a CEO, that’s how you measure yourself — to about $40 million, after which I joined Khosla Ventures as an investing partner. Walking in, I told the famous venture capitalist Vinod Khosla that there was another database idea I was really itching to build, which I couldn’t act on while at SingleStore, because you can only have one engine in the company. I already had the idea, because I had been thinking about it for three years. And he said, “Why don’t you just incubate it here at Khosla Ventures?” Which I did. It took off really quickly. Now that’s the company: Neon is three years old, has raised over $100 million, and we’re running 700,000 databases under management.

Palak: That’s awesome and super impressive. You’ve worn a number of different hats. One of the things you said that caught my attention was that you had the idea for Neon for three years. When did you feel it was the right time to build it?

When To Launch Your Startup Idea

Nikita: I think I’m late. That’s what I think. This idea should have started when AWS introduced Aurora and it became clear that this was the right approach to building and running database systems in the cloud: the separation of storage and compute. It was just such a good idea for the cloud in general; it doesn’t matter what stateful system you’re running. Once that became obvious — and it became apparent in 2015, so I might be seven years late — I was trying to convince some of my former colleagues on SQL Server to go and build it while I was still running SingleStore. I said, “Every successful cloud service inside a cloud could be unbundled, and every successful SaaS service deserves an open-source alternative.” Those are the clear openings for building an alternative to AWS Aurora, which I already knew was a juggernaut, ramping revenue really, really quickly.

That allows you to counter-position against Aurora and have very clear messaging, because people already understand what that is. I was sitting on that for a while, unable to act on it. I also thought somebody else would build it, and no one did. I saw all these database attempts at building shared-nothing systems, such as CockroachDB, YugabyteDB, and CitusDB. None of them were gigantically successful, but Aurora was, and I was like, “This is the right approach.” Because I had been thinking about this for a while, the map in my head was already planned out. The downside is that timing is everything and the opening is narrower, but our execution was very good because the plan was well thought out.

Palak: Technology leaders at companies have a number of different databases to choose from. You mentioned CockroachDB, YugabyteDB, Aurora, Postgres, and even SingleStore. Where does Neon stand out?

Neon’s Bet on Postgres in the Modern Database Market

Nikita: It’s important to map the modern database market. It’s roughly split in half between analytics and operational systems. The company that still reigns over operational systems, or OLTP, is Oracle, but nobody is excited to start new projects in Oracle. If you take somebody graduating from college who wants to build a new system or app, they’ll never choose Oracle. Because of the commoditization pressure, they will choose one of the open-source technologies: MySQL, Postgres, or Mongo. People won’t choose YugabyteDB or SingleStore, which breaks my heart, and people won’t choose a long tail of niche databases.

Fundamentally, there are two major use cases in that very large modern database market. One is “I want to run my analytics,” your data warehouse or big data analytics use case. The other is “I want to run my app,” your operational use case. On the operational side, it’s clear that Postgres is becoming the dominant force, and workloads are shifting into the cloud.

Now the question is who will own Postgres in the cloud and who will own the share of Postgres as a service. Right now, the king is Amazon, which is why we’re not inventing a new database. We are actually saying we’re Postgres, and we will ride the trend of Postgres becoming the most important operational database on the planet. All the trend lines are there; we want to be on the right side of history. The question is, how do you become the default Postgres offering? That’s the question we ask ourselves every day, and the answer is differentiation. What differentiation do you bring to this world? We think that differentiation is cycle speed for developers. Operational databases exist in the service of their users, and their users are developers.

Palak: That makes a lot of sense. You even touched on a consumerization of developer experience. I’d love to get your thoughts on how you build that 10X better developer experience.

The Importance of Developer Experience

Nikita: When we think about our ICP today, it’s a small team of developers who don’t have the skills to build their own infrastructure. That small team wants to deploy daily and optimizes for cycle time and overall speed. Over time, every team is going to be like this. What you want to do is zoom out first and see what set of tools developers use today. Some of them have complete permanence; they’ve become standard. Something like GitHub has a ton of permanence, and GitHub is not going anywhere. Every day, GitHub entrenches itself more and more in developers’ minds.

Standing up VMs and running node servers in those VMs will go away. Then you keep zooming in, and you ask, “What do developers do every day when they build a feature?” That comes down to the developer workflow. In that workflow, people create a branch, Git syncs, they create their developer environments, and they send a pull request. Modern tools plug into that workflow very, very well. If you take something like Vercel, it allows you to generate previews: every pull request will have a Vercel preview. It’s like a copy of your web app, but created for just this pull request, before you push it into production. Guess what doesn’t fit that workflow? The database.

The database is the centralized piece. You could give all your previews and developers access to that centralized piece, but you run this thing in production. “Whoa, this is scary,” right? What if some developer writes a query that destroys the data or does something to the database? So you protect the thing, and databases don’t support that branch-and-preview workflow. We’re changing that. At Neon, you can always branch the database, so that becomes an enabler. Of course, we are prescribing to developers that use Neon; we’re telling them how to go about building features and what the role of the database is as they build features.

We introduced the notion of branching. We understand migrations, and by that I mean schema migrations. We let developers instantly create those sandbox environments with full protection. If there’s PII data, you can mask it and so on. Their local and staging environments are fully functional and have access to a clone of their production database. And you can create those environments easily with an API call or the push of a button. This is an example of zooming into the developer workflow and ensuring developers feel like this is built for them. It follows their day.
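[Editor’s note] As a rough sketch of the “one API call” branching workflow Nikita describes — each pull request gets its own copy-on-write branch of the production database — the request might be built as follows in Python. The endpoint path mirrors the shape of Neon’s public HTTP API, but the project ID, key, and payload details here are illustrative assumptions, not authoritative documentation:

```python
def branch_request(project_id: str, api_key: str, pr_number: int) -> dict:
    """Build the HTTP request that would create a database branch for a pull request.

    Returned as a plain dict (rather than sent over the network) so the
    shape of the call is easy to inspect. All identifiers are hypothetical.
    """
    return {
        "method": "POST",
        "url": f"https://console.neon.tech/api/v2/projects/{project_id}/branches",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # The new branch is a copy-on-write child of the default branch,
        # so the preview environment sees a full clone of production data
        # without duplicating storage.
        "json": {
            "branch": {"name": f"preview/pr-{pr_number}"},
            "endpoints": [{"type": "read_write"}],  # compute endpoint for the preview app
        },
    }

req = branch_request("shiny-project-123", "neon_api_key_xxx", pr_number=42)
print(req["json"]["branch"]["name"])  # preview/pr-42
```

In a CI pipeline, a hook like this would run when the pull request opens, and a matching delete call would tear the branch down on merge, which is what lets the database follow the same branch-per-preview rhythm as Vercel.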

Palak: As you think about building a modern database company, there’s pressure to have good performance and reliability. How do you build the culture of the company? Or, as advice for founders serving the developer market: how do you keep developer experience a first-class citizen in the culture of the company, the people that you hire, and the customers that you bring on? The reason I ask is that I think Neon has done an incredible job of that.

Building a Culture of Reliability and Performance

Nikita: There are a couple of things. One is developer experience, and the other is uptime, performance, and reliability. The latter is incredibly important for a modern database company: if you don’t have it, you will never succeed. The former, developer experience, is your differentiation. If you have it, you’re succeeding, because the latter — performance, reliability, uptime — is a requirement. It’s necessary, but it doesn’t guarantee that you will be able to compete with Amazon, because they have it. Let’s be honest about it: RDS and Aurora are good, reliable services. You can build a business on them. You need both. So how do you get both? Reliability is a function of a few things. Do you have a robust architecture and a good team to develop that architecture? Once you have that in place, it’s a function of time.

You won’t be reliable on day zero. That’s why we took more than a year in preview, and we started by being extremely open and transparent with ourselves and the world about where our reliability stood, showing our status page and feeling a little naked when people can see how reliable your system is. For a modern database company, not being reliable attracts a good amount of criticism, and we got it. A good amount of that came in the fall, when usage went up and we had this hockey-stick growth. We were onboarding, call it, a hundred databases a day at the beginning of 2023, and we were pushing 3,000 a day by the end of 2023. And so all hell broke loose. Then you set the priorities right: you write postmortems, you have a high-quality team, and you slow down your feature development when you need to in the name of reliability.

The architecture is key. If the architecture is off, you’ll never get there. Then, over time, you get incrementally better and better, and eventually reliability is solved. Now that we’re GA, we feel good about the history of our reliability and the systems in place, but we can’t, with a straight face, say we are a hundred percent reliable. I don’t think there is such a thing when you run a service, but you can get incrementally better, and at some point you have enough history to take the calculated risk of calling yourselves GA. When it comes down to developer experience, there are a few things that are important to get right. One is the team: like in Jiro Dreams of Sushi, you must eat good food to make good food.

The tools that we use internally should be top-notch. The tools that we praise out in the world should be top-notch. We love GitHub, we love Vercel, we love Linear; we love super well-crafted, razor-sharp tools that get a lot of developer love. I’m going to compliment my competitor, which is a big no-no, but I think Supabase is doing a great job in terms of developer experience. We’re not shy about talking about this internally, to level up and potentially exceed the developer experience they provide. It comes down to emphasizing what’s important: relentlessly investing in your team, having good architecture, understanding what good looks like, and getting better every day.

Palak: You have a long history of working on SQL Server and SingleStore, focusing on reliability and winning those upmarket accounts and workloads. When do you start thinking about the right time to prioritize that at Neon versus focusing on net-new workloads and a really good developer experience?

Product-Led Growth v. Sales Teams

Nikita: Heroku is an $800 million run-rate business. Half of that is Postgres, and half of that half is PLG Postgres, product-led growth: small teams coming into Heroku and using Postgres. The other half is enterprise. You can make a lot of money providing the best Postgres in the world for small teams, but you don’t want to get stuck in the same place as DigitalOcean. By the way, there is no shame in building a public company that’s still growing very healthily, but the stock does not reward DigitalOcean at this point in time, because DigitalOcean didn’t go upmarket. We want to go upmarket, but we gate it. We gate it on the signal that we’re getting from our own users who are coming through the side door and just signing up.

We have a good number of enterprise customers, companies like EQT and Zimmer Biomet. There’s a fairly long list of small use cases inside large enterprise customers that will obviously keep expanding, but we don’t want to spread too thin and over-focus on that. What does that mean specifically? We don’t have a sales team, and all our growth (we grew 25% month over month in revenue last month) is PLG. That’s what focus looks like. We do have a partnerships team, which helps us with strategic partners such as Vercel, Retool, and Replit, making sure we give them the best quality of service possible. We’re also now in the AWS marketplace and working with some strategic partners there.

Standing up a sales team is not a strategic priority today. We have a very good and thoughtful board that is completely aligned with us on that strategy. So, how do we gate it? We look at the signal of people coming onto the platform and converting, and what kinds of teams those are. We also look at the partner pool. These are the things that will eventually tell us, “Okay, now is the time.” A book that made a very good impression on me was “Amp It Up” by Frank Slootman. When he talked about Data Domain, they first proved the technology, and then they proved their go-to-market motion in the enterprise, because that product, obviously, means enterprise.

They scaled their sales team very, very quickly. I want to live in the PLG world, and my $200 million of Heroku is telling me that there are plenty of dollars to be captured there, but I track very closely what’s happening with larger customers, whether they come directly to the platform or through partners. Once that’s happening, I’m going to scale the sales team very, very quickly. What I want to avoid is a prolonged process of having a sales team and then scratching my head about how expensive it is and how effective it is. I think having a very tiny team early on to prove everything, and then scaling very, very quickly, is the right way to go.

Palak: It’s 2024, and everyone’s talking about AI. You’re in two unique roles: not only are you CEO of a company like Neon that’s raised $100 million, but you’re also a venture partner at Khosla. So you’re uniquely positioned to weigh in. How big do you think this wave is going to be? Is it all hype, or is it something that is here to last?

Nikita’s Hot Takes on AI

Nikita: So I’m a true believer that this will change the world. It’ll change the world of software development, and it will change the world of living. How long this is going to take, and whether we’re going to have a trough of disillusionment sometime in the future, I don’t know, but I’m as bullish as can be on this thing. I even think that we’ll live in a much more beautiful place — as in our planet — because AI will eventually drop the price of beauty so much that it will just make sense to have it. And you know how in the seventies we had all this Brutalist architecture, which is super-duper ugly.

I was born in the Soviet Union and grew up in Russia, where Moscow was very pretty. A lot of cookie-cutter houses were built, mostly in the sixties and seventies. The reason they’re so ugly is that they were cheap to build. I think we will arrive at a place where the cost of building things, whether software or even physical things built by robots with all those models in their robotic brains, will drop so much that we can live in a much more beautiful world. The cost of design is going down, the cost of software engineering is going down, and the cost of construction will go down. We’ll kind of re-terraform the planet through this.

Palak: One of the things that every enterprise is figuring out is that every AI strategy requires a data strategy. How are you seeing this impact at Neon, especially focused on net new workloads and net new projects?

Nikita: OLTP is potentially moving into being more important than analytics. What I mean by that is we just went from zero cloud spend on analytics to, I think, Microsoft alone at $20 billion. It could be 10, it could be 20; you can cross-check.

Palak: It’s a lot.

Nikita: This is your big data; this is your data warehousing; this is your data lakes; this is your model training: old-school models, the ML and AI workflows of the past. We’re going to have more apps. Apps need databases: operational databases, modern databases. That’s not going to go away; we’re going to have more apps, and therefore we’re going to have more databases. The whole inference thing doesn’t belong to the operational database. It’s a new thing, and it’ll be triggered by both. So that’s an additional line of spend.

Some days, I’m bullish on data warehouses, and other days, I think they’ll become older technology, because it feels like we’re babying those data warehouses way too much. I observe this by looking at the data team at Neon, where we’re obsessed with the semantic model, how clean the data is, and how well it represents the business. Data quality is very important because garbage in is garbage out. If you want the data in your data warehouse to reflect the state of your business, you have to be obsessed with it. Today, people are working on that. Tomorrow, that whole thing might be a lot more simplified, where AI can deal with the data being a little dirty because it understands where it’s dirty.

So, you won’t need to make a picture-perfect schema representing your business. I think the center of gravity might shift a little bit toward people calling into AI. I’m going to call out a tweet from my friend Ankur Goyal of Braintrust Data, where he talked about the changes we’re going to see in data warehousing. He’s obviously looking from his standpoint, and he thinks evals will start pulling data closer to the model, start replacing observability, maybe even some product analytics at a minimum, and then, over time, data warehousing. I don’t know. It feels archaic to collect data through ETL jobs and run SQL reports on it, but the alternative doesn’t quite exist today, so it’s still best in class.

But that whole AI thing should, one way or another, disrupt those things, so that the quality and the full history of that data will matter slightly less.

Palak: Totally. Referencing Ankur’s tweet, it is almost like a real-time learning system of how the app can update itself using AI versus doing it offline, the human way, looking at the warehouse, looking at how users are reacting, et cetera, et cetera.

Nikita: Yes, and the engine for crunching the historical data should still exist. I still want to know how my business is performing. I still want to analyze the clicks, traffic, and product analytics. But it’s so tedious today to set all those things up; that tedium should somehow go away. A self-learning AI system that ingests the data like a human brain and becomes the driver for running those aggregations may change the center of gravity. Today, it’s firmly in the data warehouse or data lake; that’s where the centerpiece of enterprise data is today. We’re super eager to partner with all of them because that’s where the data is. You want to be close to that. Is it going to stay there or not? I don’t know.

Every App Needs A Modern Database

Palak: Do you see there being tailwinds on the transactional side or how do you see Neon fitting into some of these new modern AI architectures?

Nikita: Look at who’s doing AI stuff today versus yesterday. In the past, it was a data scientist. I don’t know if we need that many data scientists anymore. Today, it’s app developers doing prompt engineering and building RAG systems. All of these are real-time operational systems. Every app still needs a database. So we’re not replacing operational systems; we are augmenting them. The people driving AI value in a company are mostly product engineers running stuff in TypeScript.

The whole Devin demo broke the internet, and there was an incredible amount of excitement about how Devin can add print statements, debug your code, and run a full task. We’re actually already using Devin at Neon. We gave Devin a task to convert our CLI tool from TypeScript to Go. It didn’t complete it, but it made enough progress for us to see that maybe it’s not quite there, but it’s almost as good as an intern. And tomorrow, well, tomorrow, it’ll just do the task. There are so many tedious things you want done, like: look at the design, it should be pixel-perfect, change this button from red to gray. We ran something like this just today, and a human had to do it, but humans shouldn’t be doing this kind of thing.

Palak: What intelligent application are you most excited about and why?

Nikita: I’m mostly excited about Devin-like systems. In that whole part of the AI world, we’re replacing humans, not just giving humans sharper tools. Software engineering is one of the most elite jobs. Being able to replace a software engineer, obviously starting from a very junior one, represents an incredible amount of economic value over time, because most companies are bottlenecked on the amount of engineering they can ship. That’s probably where my excitement is highest. And we’re certainly doing work internally on that front.

Palak: Beyond AI, what trends, technical or non-technical, are you excited about and why?

Nikita: Obviously, developer experience is very on-brand for us. I think we’re entering a world where there’s just so much technology that the way to stand out is the incredible degree of craftsmanship that goes into creating a new product or tool. I’m both excited about that and in fierce competition with Supabase. They get it; we get it; and we will see. This is a Jiro Dreams of Sushi kind of argument again.

Palak: This isn’t a question, but on one hand, you have AI that’s potentially replacing junior developers, and on the other, you have killer developer experience that’s making software engineering more accessible.

Nikita: Correct. By the way, if you make software engineering more accessible for humans, you are also making it more accessible to agents.

Palak: What unexpected challenge have you had where something didn’t go as expected, and how did you deal with it?

Nikita: Neon was born as a systems project. The developer experience was layered on top, and we couldn’t really move on to anything developer-experience-related until the foundation of that storage was built. Those things were not in the core DNA of the founding team; it took us a couple of iterations to get there, and we’re not quite there yet, but we’re getting better and better, and we see material progress. To a systems engineer, this may look like, “Oh, of course, that’s a lot easier than doing the hard stuff like the storage thing.” But it turned out to be quite hard to get right, and I appreciate the skill sets of the people who go and work on those things.

Palak: Any other parting advice for founders who are just starting the founder journey?

Nikita: The one piece of advice I would give is: don’t wait. I waited on this idea; I should have started it earlier. If you feel like you are ready — and maybe you feel like you’re not quite ready, but close enough — just go and do it. Then, go on the lifelong journey of learning and being passionate about your users and the technology that goes into making delightful experiences for those users. And never look back.

Palak: Awesome. Well, thank you so much, Nikita. I really appreciate you taking the time.

Nikita: Absolutely. Thank you so much.

How Bobsled is Revolutionizing Cross-Cloud Data Sharing

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Investor Sabrina Wu hosts Bobsled Co-founder and CEO Jake Graham for the latest episode of Founded & Funded. Jake had stints at Neo4j and Intel before joining Microsoft, where he worked on Azure Data Exchange. Jake is revolutionizing data sharing across platforms, enabling customers to get to analysis faster directly in the platforms where they work. Madrona co-led Bobsled’s $17 million series A last year, which put the company at an $87 million valuation.

In this episode, Jake provides his perspective on why enabling cross-cloud data sharing is often cumbersome yet so important in the age of AI. He also shares why you can’t PLG the enterprise, how to convince customers to adopt new technologies in a post-zero interest rate environment, and what it takes to land and partner with the hyperscalers.

This transcript was automatically generated and edited for clarity.

Sabrina: Jake, it’s so great to have you with us today. Maybe to kick things off for us, why don’t you start by telling us where the inspiration came from to start Bobsled?

Jake: Absolutely. I’ve always wanted to start a company because I love the competitive aspect of business, the idea of trying to go out and win and bring something into the market. Over the first 15 years of my career, I found that you can get that in pockets in larger organizations, but it’s hard to get that feeling that it’s only the efforts of you and your team that stand between you and success.

I’ve always been on the lookout, but I’ve also always had two tough requirements before deciding to jump in. First, I wanted to believe in the idea: to feel, especially in my space of enterprise data infrastructure, that I understood the problem and that a product in that space deserved the right to exist. Second, it was critical to me to have at least one strong co-founder, in my case a CTO I deeply believed in and thought I could win with.

As far as how Bobsled itself came about, I sometimes joke that life has three certainties: death, taxes, and reorgs. And it was a reorg that made Bobsled exist. I’d been at Microsoft Azure in a role that I enjoyed. I spent about 18 months building the strategy and plan to create an Azure Data Exchange and an Azure data ecosystem, to make data significantly easier to share across Azure consumers. We had finally gotten a significant budget to hire in earnest and start building.

When that happened, a significant reorganization resulted in the birth of Microsoft Fabric. Microsoft Fabric was a fantastic decision. It was and is a product I’m excited about, but it made what I was building not make sense at the time. It’s finally starting to make sense now that Fabric has gone GA. But I remember going for a run and realizing that I’d finally uncovered a problem; in this case, it was around not just data sharing but changing how data is accessed for analytics and evolving data integration into being cloud-native. I was motivated to solve that problem. I’d finally found something that deserved the right to exist as a successful business.

It would also be totally unfair of me not to mention that my wife had been pushing me to start something for a while. She realized that — I wouldn’t say that I would be happier founding, because this is a really hard job — but that I would be a lot more fulfilled. So, my wife, Juliana, was the secret co-founder who pushed Bobsled into existence.

I spent a couple of days maturing the idea. I started thinking, who do I know I can start to bounce this off of? I called the person who is the best engineer I’ve ever worked with. The gentleman I worked with at Neo4j. We hadn’t spoken in probably a couple of years. I gave him this idea, and he said, “Jake, are you asking me to be your co-founder?” And I said, “I mean, I’m starting to think about this.” And his response was, “I’ll quit tomorrow. Let’s do this. How long do you need? I think we should get going quickly.” I was really taken aback, and I said, “Tomorrow feels a little fast, but we can talk again tomorrow.” At this point, I sat back and realized I was starting a company.

Sabrina: I love that, and I love the co-founding story. It’s definitely a testament to you as a founder to be able to attract great talent and build Bobsled. Can you help us set the stage more and explain what exactly Bobsled does? What traditionally were companies doing before Bobsled to access and share data?

Jake: Bobsled is a data-sharing platform that allows our customers to make their data accessible and available in any cloud or data platform where their customers work, without having to manage any of the infrastructure accounts, pipelines, or permissions themselves. Fundamentally, the product is pretty straightforward. You grant Bobsled read access to wherever you store your data, whether that’s files in an S3 bucket or data within Databricks, BigQuery, Snowflake, et cetera. You reach out to our API to say, “This data needs to be consumed by this individual, in this cloud, on this data platform, and be updated in this way.” We manage intelligently and efficiently replicating that data to where it will be consumed, through white-label infrastructure. Whoever’s consuming that data feels like they’re getting a share from ZoomInfo or CoreLogic, or any of our other customers, but they didn’t have to build any of those pipelines. It allows producers to move from putting all the work on their consumers to making it easy, without suddenly having to manage an infrastructure footprint in every platform where their customers work.
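To make the shape of that request concrete, here is a minimal Python sketch. Everything in it, including the function name, the field names, and the validation, is a hypothetical illustration of the kind of call Jake describes, not Bobsled’s actual API.

```python
# Hypothetical sketch of the request Jake describes: "this data needs to
# be consumed by this individual, in this cloud, on this data platform,
# and be updated in this way." None of these names are Bobsled's real API;
# they are assumptions for illustration only.

def build_share_request(source_uri, consumer, cloud, platform, refresh):
    """Assemble the parameters a producer would hand to a sharing service."""
    supported_clouds = {"aws", "azure", "gcp"}
    if cloud not in supported_clouds:
        raise ValueError(f"unknown destination cloud: {cloud}")
    return {
        "source": source_uri,              # where the producer's data lives
        "consumer": consumer,              # who should receive the share
        "destination_cloud": cloud,        # which cloud the consumer works in
        "destination_platform": platform,  # e.g. "snowflake", "databricks"
        "refresh_schedule": refresh,       # how the share is kept up to date
    }

request = build_share_request(
    "s3://producer-bucket/listings/",
    "analyst@consumer.example",
    "azure",
    "snowflake",
    "daily",
)
print(request["destination_platform"])  # snowflake
```

The point of the sketch is that the producer only declares the destination and freshness; the replication pipelines and permissions themselves are the platform’s job.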

It’s still crazy to me that the volume of data used for analytics, data science, and machine learning has grown a couple of orders of magnitude over the last decade, yet the actual mechanisms for accessing that data are almost exactly the same as ten or even 20 years ago. The overwhelming majority of data that drives any form of data pipeline is either pulled out of an API or an SFTP server. That doesn’t make sense in a world where so much of the value generated by modern enterprises is in data, and in which you need that data to be consumed by others, whether inside your organization or outside it, to extract that value. We needed to see a cloud-native data exchange mechanism take off.

The reality is that the right way to solve this problem is generally through the data platform itself. The best way to access data within Snowflake is Snowflake Data Sharing. They’ve built a fantastic experience where you don’t have to ETL data from another platform. You can query live, well-structured, up-to-date data, as long as it’s already in Snowflake and in the same cloud region.

That sharing mechanism was pioneering, and every other major platform followed it. The problem is that it puts a significant infrastructure burden on the producer. We want to move away from a world in which every data consumer has to ETL the data out of whatever platform it’s in. The catch is that sharing protocols aren’t connectors; it’s not the traditional model of tossing data over the wall. They provide a better consumer access experience, but you have to bring the data to where it will be consumed, structure it in a way that’s ready for analytics, and then grant access to it.

I believed data sharing would become how data is accessed for modern analytics and ML pipelines. But to make that happen, data producers needed a way to interact with all of these systems without having to do it all themselves. That’s fundamentally what Bobsled is.

Sabrina: As you alluded to, data sharing is a critically important part of the stack. I love how easy Bobsled has made it for data providers to actually share data, and how, to your point, you’re agnostic across the different cloud platforms. If you’re on Snowflake and your consumer is on Azure, it isn’t easy. But you’re allowing companies to share across different cloud platforms.

You’re flipping this idea of ETL on its head, which is one of the parts I love the most about what Bobsled can do. I’m curious, though, to this point about different cloud providers, what role does cross-cloud data sharing play in this new age of AI and the new way that companies are starting to build?

Jake: Someone recently told me, “I really hope this ML wave sticks this time.” I’ve been working in ML for over a decade, and it gets bigger every year. This just seems kind of like a natural evolution of what we’re doing.

Something you talk about a lot is the idea of intelligent applications. That term applies to an enormous amount of where the market is going, though it’s not something there’s a really strong definition around yet. The way I think of them is as applications that leverage data to better automate whatever workflow they solve for, and that also generate more valuable data as part of that. I think that’s true whether you’re leveraging an LLM, more traditional machine learning, or more human-in-the-loop analytics. To continue moving toward this more data-driven and now AI-focused age, data has to be able to move between the systems and organizations where it’s generated.

One of the things people are starting to realize is that often, in any application, a lot of the value it provides is actually in the data generated by running that application. There’s an enormous amount you can do within that workflow to use that data to improve and further automate it. Another thing we’ve learned about data over the past decade is that it becomes valuable when it’s blended with other data sets. Within data, almost always, the whole is greater than the sum of its parts. If you want to think intelligently and predict, and even more so if you want to do that in an automated fashion using LLMs, you have to be able to bring in the data that represents different aspects of a problem, and that data is never sitting in one system.

We saw a push led by Salesforce around the Salesforce Data Cloud: if we can get everyone to bring all of their data into our application, we can solve all the problems. And the answer is no, you can’t. You might be able to answer many questions, but in reality, data is being generated across the enterprise, its partners, and other vendors. It needs to flow into the systems where it’s going to generate insight. I fundamentally believe that data sharing will be the mechanism to do that.

The way this shift enables the move toward the age of AI is that we’re going to allow every company to create data products and have them be accessible wherever they need to work without, again, having to manage an infrastructure footprint or an army of people who understand how clustering works differently in Databricks versus Snowflake, how the permission protocol works differently for an S3 endpoint versus an Azure Blob store, or how to replicate this data. There are better things for people to be working on. Bobsled is going to be a lot of the plumbing for how the world becomes AI-driven.

Sabrina: Data is key to what everyone has referred to for years as a critical part of the modern data stack. You’ve hinted that you think the modern data stack is at a turning point. We’ve talked about this a little bit, but I’d love you to walk me through your thought process. What is this turning point? How do you think that might impact the tech sector and startups overall?

Jake: My general feeling about the modern data stack is that it’s no longer a valuable term, because it won. There was a time when the modern data stack described a few companies and categories bringing analytical infrastructure into being cloud-native. That was a relatively exclusive set of companies, so it didn’t include any of the previous vendors who weren’t cloud-native, whether that was Informatica or, in a lot of ways at that time, even Microsoft. It also didn’t include any part of data that wasn’t pure analytics. There was a separate data science and ML stack, and there’s been a separate operational stack.

I think what has happened is that those incumbents have caught up and are now cloud-native. The companies that weren’t purely in the analytics stack have started to move in there. The more successful companies in the modern data stack are branching out beyond it. The discrete categories within the modern data stack are blending.

There’s been a lot of unnecessary angst around the death of all the companies in the modern data stack. I view it as a victory. The modern data stack is just now a key part of technology. We’re moving from a purely software-centric technology market to a data-centric one. That’s the idea for me of intelligent applications, or if you want to call it the age of AI. The software is incredibly important, but it needs the data. It’s no longer enough to build for a very specific set of users in a very specific category. We now have a much larger field to play in, but also, it’s a much more competitive field.

I think that’s a little bit of what people have been shying away from in the modern data stack. It is like: But wait, this company we thought was going to be a unicorn isn’t going to get there just by solving this slice of the problem.

I go back to the first thing I said: I love the competitive nature of this. I love the fact that every day you can wake up and figure out, okay, how do we execute our way to win, and how do we make sure that we’re solving real problems and that we can bring those solutions to market, and that we can get that feedback loop going? A lot of modern data stack companies are going to be incredibly successful. I don’t think that term is valuable anymore.

Sabrina: Do you think there’s going to be more consolidation of the players within the data stack? Do you think people are going to start to bleed into the different swimming lanes?

Jake: It’s going to happen for good reasons, because the most successful companies are going to want to keep growing. Again, this is a competition. When you win more every day, you earn the right to expand beyond existing categories. I think that’s really good.

I use a joking term: the enterprise strikes back. The vast majority of software spending in the United States is done by the Fortune 1000, and that’s true in the rest of the world, too. The way in which enterprises buy technology is fundamentally different. The idea of a fully adopter-led motion, tying together many different solutions, is just not working.

Benn Stancil is probably the best writer in our space. I don’t want to parrot his words and take credit for them myself, but he wrote a really great piece on this using the example of Alteryx. A lot of startups looked at Alteryx doing all of these things and could only say that none of them was best in class, so it became easy to say, “We’re going to attack this one part of it.” Until you get in and realize Alteryx is selling to the enterprise, and that complexity is not for nothing. It’s there because they built it to meet their customers’ needs. That breadth is in and of itself a feature.

We’re seeing a little bit of that enterprise-strikes-back dynamic: the way in which software was built and brought to market for a long time still has a lot of value in it. We need to learn to blend some of the PLG and purer forms of software and product development with some of the realities of the enterprise, because the enterprise is what’s going to get you to the promised land. If you can’t add value to the largest companies, you’re putting a pretty low ceiling on yourself.

Sabrina: One phrase that I’ve heard you say before, and we’ve talked many times before, is that “You can’t PLG the enterprise.” Maybe you could talk a little bit about what that means and how you view that statement.

Jake: It means a couple of big things to me. Holistically, as an industry, we’ve lost respect for the enterprise sales part of enterprise software. The pendulum has swung a little too far toward “the product should sell itself.” In some ways, that’s for really positive and great reasons: it has pushed us to think about product design and user experience, and a lot of it has been driven by individuals within organizations being much more empowered to adopt technology. There are hundreds of millions or billions of dollars in truly product-led growth revenue happening every day. I’m not saying that’s not real, but it doesn’t consider how large enterprises make decisions around technology.

If you think about it, often your buyer and your user are not the same person. Generally, if you’re building something of strategic value, you’re not starting small; you need to be attached to a strategic initiative in which there are multiple decision-makers, not just the person who will be using your product. You need to create a sales motion that allows you to get in front of those people, understand their requirements and goals, navigate their organization, and transfer your excitement about your product to them. That’s a big part of what people have missed: the art, craft, and need for actual enterprise selling.

The second part is that you also need a product development process that feeds into that. At every company, but especially every startup, you live and die by your feedback loop. One interesting thing is that, as an industry, we’ve all internalized the idea that our product ideas are not that good: focus on a problem, not your solution; ship quickly, get feedback, and iterate. That is awesome in a PLG world where the cost of getting that feedback is incredibly low. It’s challenging in the enterprise space because there is a gap between your buyer and your user. It’s often easier to get time with executives than with the users who are going to implement, so you’re not getting perfect information. And if you’re building something entirely net new (there is no direct equivalent to Bobsled), even your user will think they understand how they’re going to use your product, and they’ll be somewhat wrong once you actually get into production.

The biggest part is that the gap between “I’m interested in this product,” “I’m evaluating this product,” and actually using it in production is longer and less predictable. If you don’t build a product development process that views your sales process as a key component of feedback, you end up going back to building in a vacuum. You need to get to a place where, one, you trust and listen to your sales team, or your go-to-market team in general; two, you’re much more actively asking your customers questions; and three, you’re willing to ship even more iteratively and course-correct even faster. What I talk about with the team a lot is that we win by fixing the problems our customers identify with our product faster than they think would even be possible.

If you take the existing agile processes we’ve all built over the past five to ten years and try to apply them to this more challenging feedback loop, I don’t think you’ll figure out how to interpret the signals. When I say that you can’t PLG the enterprise, it’s not just that you can’t put a trial on your product and hope people come in, or that you have to figure out how to navigate the enterprise and get to real decision-makers, budget, and stakeholders. If you build your product in a way that doesn’t take into account how feedback is generated from enterprises, you’re not going to build the right product.

If you’re a product like Bobsled, where larger customers experience the problem more acutely (and that’s where we’re starting), you won’t build a product they can adopt. You’ll also run into the thing many companies, especially from the modern data stack age, have run into: we know we solve a real problem, but the product and the go-to-market don’t seem to quite fit where the actual money is. How do we get over that? I really think the answer is to fall in love with enterprise sales again. Really care about it.

Sabrina: Many companies that we talk to are struggling with this because PLG has become so favorable and fashionable, to say the least. But in reality, it sometimes comes back to basics. That’s what you’re alluding to here: How do you get that feedback loop going? How do you listen to your customer, especially when there’s a disconnect between the user and the buyer?

One critical component for Bobsled is being friendly with all the hyperscalers and data platforms, and you partner with many of them, including Snowflake, Databricks, GCP, and Azure. How have you gone about managing these partnerships? As an early-stage company, I think it can be very challenging; these companies are very large. Do you have any advice for entrepreneurs who might be navigating some of these relationships?

Jake: I’ve always been inspired by Apple’s 1984 commercial, where they made clear they had an enemy, which in that case was IBM, to try to motivate the team. I assumed our enemy would be one or a few of these platforms, because they were building walled gardens and we were building a product to bring those walls down and connect the different platforms. I was shocked when none of those platforms saw Bobsled that way. They all reacted to Bobsled with, “Oh, wow, this solves a real problem for our customers, and it’s one that we don’t want to solve ourselves.” Going back to our specific problem: it involves managing infrastructure across all these different platforms, and that’s a hard thing for them to do themselves, although there are plenty of examples of them doing it.

We were pleasantly surprised; there was a warm reception from executives early on to partner. I was fortunate to have been on the other side of partnership motions at both Microsoft and Neo4j, from both the startup and the hyperscaler side. I think partnerships are super dangerous for early-stage startups: they can suck an enormous amount of your time, energy, and mental bandwidth in ways that will eventually pay off, but not in the timeframe that you need.

My advice for the early stage would be to focus on partnering with individuals at hyperscalers. All of these companies have effective machines that move billions of dollars in revenue for partners, and almost none of those billions come from early-stage startups. You need to get to a place where you’re at scale, and then they can help you scale more effectively. At that point, you need to be relevant to the individual AEs across every platform, with an incredibly repeatable sales process, pricing motion, and mature integration. I look forward to that day, but it’s not Series A.

The way we’ve approached it, which has been effective, is that we’re now starting to clip into the machine a bit more and train sales teams at some of our partners, like Snowflake and Databricks. The first two years were about finding executive champions willing to help us navigate, take calls with prospects, propose Bobsled, and individually send us leads. I can think of a few of our first enterprise customers where my head of sales was sharing a cell phone with an executive from, in this case, Databricks, or we were having joint meetings with executives at Snowflake and having them reinforce not just that the problem we’re solving is real, but that we are the company to help solve it.

You’re not winning by convincing 5,000 companies at once or by getting 50,000 Microsoft sellers to sell your product. You’re winning one by one. It’s like a day by day, you go out and convince these customers that your product is worth betting on. Focus on those individuals, get those wins, and that will earn you the right to scale.

Sabrina: You thought about all these different partnerships. Were there any ways that you prioritized them? Obviously, you said they can take up a lot of time, so if you’re an early-stage startup, how do you think about who might be the most valuable partner to you? How do you stack rank as you’re thinking about building? That’s a critically important part of the process for founders, so I’m curious how you thought about it.

Jake: I mean, part of that goes back to the last piece of advice that you’re partnering with individuals, not with organizations at that point. So, where you find individuals who really want to lean in and have influence in their organization, and really the specific part of their organization you want, you should be a lot more opportunistic, I think, than anything else.

For us personally, what we found, despite my coming from Azure and having a lot of close contacts at AWS and GCP, where we are partnered, was that the breadth of their portfolios made it much harder. So for us, it was Snowflake and Databricks, because everything that we do helps them achieve their goals. We’re in one of the few positions where, by working with Bobsled, a customer does two things immediately. It enables them to share their data on every one of these platforms, which directly drives consumption for every one of those platforms. So, getting data shared in is a good thing.

The other thing it does is it allows you to centralize more and more of your data in the platform of your choice because you are now able to access these other platforms, and you don’t need to worry about lock-in. So we’re in a bit of a unique position with most of these platforms where they all win when we win. It’s one of the few times where we could go to an enterprise and say, “Hey, Databricks would like to talk to you with us because they’re going to win if we win. And Snowflake would like you to talk to us because they’re going to win if we win.”

Anything that you can do to focus on where you are driving value they care about helps; it’s similar to enterprise sales in that way. If you’re trying to sell something, you must attach it to a strategic initiative that people care about. You’re not going to close your first half-million-dollar deal on a science project. Over time, it needs to tie to something. It’s the same thing with a partner. Find an individual that you can attach to those strategic priorities. As much as you can, find the actual part of the business, or possibly the entire company, where you’re helping to drive their strategic priorities. You’ll find that pays a lot more early dividends than daydreaming that if you could just get in front of GCP’s 30,000 sellers, you wouldn’t have to hire your own sales team.

Sabrina: That’s great advice and I’m sure founders listening today will find that to be very helpful.

As we wrap up here, Jake, just a couple of questions left for you. One is I’m curious, what’s an unexpected challenge? Maybe one of those oh my gosh moments where something just didn’t work out the way that you thought it would, and how did you deal with it?

Jake: I think the one that’s coming to mind right now is that I believe in executive coaching, so everybody at Bobsled has a coach, and I have two. One of the challenges of being a CEO is figuring out the right timeframe for you to focus your energy on. You are responsible for the strategic vision. Especially if you’re building a VC-backed business, you can’t just be focused on, well, if we just win these customers. It needs to be a part of a larger master plan. Moments where I drove the least clarity were when my mind was focused on what it would look like for Bobsled to win two years out or even a year out and not focused on what exactly we needed to do to win right now.

I have an amazing leadership team that is often much more experienced than I am in the roles they bring. I can’t just say, “You figure out now, and you go figure out where we go next.” That’s not how this works. Make sure you’re actually defining what winning looks like for your team and talking constantly about how to win right now.

Make sure you’re still separating the space for yourself to evaluate: Are we going in the right direction? Are we building ourselves into a corner? Is there enough total addressable market here? When is the right time for me to start thinking about expanding our TAM? When is the right time to think about bringing on that next leader? But don’t get caught up in constantly building and thinking about the future. Make sure you’re focused on your actual win condition today.

Sabrina: I love all that advice. It’s critically important to stay focused on what’s happening in the moment but also have that broader vision, knowing that the 10-year plan is potentially to get some other big wins, maybe that IPO down the road, or whatever else you’re looking forward to.

Jake: You’ve got to be able to convince yourself that there’s a path for you to get there. Otherwise, don’t take this path.

When we had a customer win, I must’ve had at least two minutes of real excitement around that before I thought, okay, we need 100 and some odd more of those to IPO. You can’t celebrate the touchdowns for long. That’s the reality of VC. Madrona didn’t invest in Bobsled because they thought we could get to the Series B; you’re investing in Bobsled because you think we can go significantly further. It makes this decision point harder. I’m building for IPO, and I need to validate that, but if we don’t do what we need to do to get to Series B, it’s all for nothing. How do you constantly live in that time shift? It’s a mental challenge I hadn’t thought about, and I now spend a lot of time thinking about it.

Sabrina: Well, it has been a pleasure and honor working with you, Jake, and the rest of the Bobsled team. We’re excited about what you guys are building and know that the best is yet to come. So thanks for allowing us to be part of the journey. And thanks for joining us today. Really appreciate it.

How Writer CEO May Habib Is Making GenAI Work For the Enterprise

Writer is a full-stack generative AI platform helping enterprises unlock their critical business data to deliver optimized results tailored to each individual organization.

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Vivek Ramaswami hosts May Habib, co-founder and CEO of Writer, a 2023 IA40 winner. Writer is a full-stack generative AI platform helping enterprises unlock their critical business data to deliver optimized results tailored to each individual organization. In this episode, May shares her perspective on founding a GenAI company before the ChatGPT craze, building an enterprise-grade product and go-to-market motion with 200% NRR, and whether RAG and vector DBs even have a role in the enterprise. This is a must-listen for anyone out there building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: So, May, thank you so much for joining us. And to kick us off, we’d love to hear a little bit more about the founding story and your founding journey behind Writer. And speaking of, you founded Writer before this current LLM hype, before ChatGPT and in this pre-ChatGPT era, we like to call it. And so much has changed since then. So what is Writer today? What is the product? What are the main value propositions for the customers that you’re serving? We’ll get into the enterprise customers, this great list that you have, but maybe just talk a little bit about what Writer is actually doing and how you’re solving these problems for the customers you’re working with today.

May: Writer is a full-stack generative AI platform. Most of the market sells folks either API access to raw large language models or productivity assistants. And there’s lots of room inside of companies to use both solutions in different use cases. For team productivity and shared workflows, there’s a real need to hook up LLMs to unstructured data, and to do it in ways where there are real guardrails around the accuracy risks, the hallucination risks, the brand risks, and the compliance risks in regulated industries. And for time to value when you’re building things like synthesis apps, digital assistant apps, and insight apps, there’s a real need to capture end-user feedback.

Putting all of that in a single solution, we have drastically sped up the time to market on folks spinning up really accurate, ready-to-scale applications, and doing that in a secure way; we can sit inside of a customer’s virtual private cloud. And so we’ve had to be able to build that. We’ve had to own every layer of the stack. We built our own large language models. They’re not fine-tuned open source; we’ve built them from scratch. They’re GPT-4 quality, so you’ve got to be state-of-the-art to be competitive. But we’ve hooked up those LLMs to the tooling that companies need to be able to ship production-ready stuff quickly.

The importance of building your own AI models

Vivek: You talk about how Writer has built their own models, and this has been a big part of the moat that Writer has built, I think this is something that we’ll touch on, but we continue to see as so important is being able to own your stack, given how many different technologies we see competing on either end. In terms of building your own models, what did that entail for how you thought about building this company from day one? And how would you describe what Writer has done around building models and this movement from large models to small models?

May: It’s really hard to control an end-user experience in generative AI if you don’t have control of the model. And given that we had chosen the enterprise lane, uptime matters, inference matters, cost at scale matters, and accuracy really matters. We deemed those things pretty hard to do early on if we were going to be dependent on other folks’ models. And so we made the strategic decision to focus on text, meaning multimodal ingestion and text production: we can read a chart and tell you what somebody’s, I don’t know, blood sedimentation rate is because we read it somewhere on the chart, and we can analyze an ad and tell you if it’s compliant with brand guidelines, but we’re not producing imagery. With that multimodal ingestion and text and insight production, we made a strategic call almost a couple of years ago that we’re going to continue to invest in remaining state-of-the-art. Today, our models range from the Palmyra-X general model to our financial services model to our medical model, and they’re GPT-4 zero-shot equivalent.

When you pair that with folks’ data, it’s pretty magnificent. Unlike other powerful and state-of-the-art models, this is a 72-billion-parameter model that can actually sit inside somebody’s private cloud and not require a ton of resources. For us, a whole host of things have allowed us to be state-of-the-art and still relatively small. That’s still a massive internet-scale model, but everything from the number of tokens the models have seen to just how we have trained them has helped us be super-efficient.

Those are all decisions that stem from that first strategic one: the really important problems are going to have to be connected to data that folks don’t want to leave their cloud. And so to do that, we’d have to be in there, and it would have to be a model that could be efficient, so we weren’t going to pack a bunch of different skills into one. That’s why we’ve got 18 different models with similar types of training data, not too dissimilar, but the skills that they are built for are different.

The role of vector databases and RAG in enterprise AI

Vivek: One point you made here makes me think of a LinkedIn post you recently wrote, illuminating in many ways. You talked about unstructured data and where Writer can go using its models. You sit inside an enterprise and take advantage of the enterprise’s data, which is so important. This is something we hear a lot from companies, which is they need to be able to use their own data securely and efficiently when entering data into these models. We’re hearing a lot about RAG, Retrieval-Augmented Generation, and a lot about vector databases, and a number of them have caught fire. We’re seeing a lot get funded. And I’m sure a number of the founders who listen to this podcast have either used or have played with a vector DB. You have an interesting perspective on RAG and vector DBs, especially from the enterprise perspective. Please share a little bit about the posts you wrote and the perspectives that you have about this tech.

May: I don’t want to be the anti-vector DB founder. What we do is an output of just the experiences that we’ve had. If embeddings plus a vector DB were the right approach for dynamic, messy, really scaled unstructured data in the enterprise, we’d be doing that, but it didn’t, at scale, lead to outcomes that our customers thought were any good. For a 50-document application, say a digital assistant where folks are querying across 100 or 200 pages across a couple of things, a vector DB embedding approach is fine. But that’s not what most folks’ data looks like. If you’re building a digital assistant for nurses who are accessing a decade-long medical history against policies for a specific patient, against best practice, against government regulation on treatment, against what the pharmaceutical company is saying about the list of drugs that they’re on, you just don’t get the right answers when you are trying to chunk passages and pass them through a prompt into a model.

When you’re able to take a graph-based approach, you get so much more detail. Folks associate words like ontologies with old-school approaches to knowledge management, but in the industries that we focus on, regulated markets like healthcare and financial services, those have really served organizations well in the age of generative AI, because they’ve been huge sources of structure around the data, so we can parse through their content much more efficiently and help folks get good answers. When people don’t have knowledge graphs built already, we’ve trained a separate LLM that actually builds up those relationships for them. It’s seen billions of tokens, so this is a skilled LLM that does this.

Vivek: You were saying that you don’t want to be the anti-vector DB founder, and I don’t think it’s anti-vector DB; I think it’s that chunking and vector DBs work for specific use cases. What was interesting about your post was that, hey, from the enterprise perspective, you need a lot more context than what chunking can provide. This is helpful because many of the founders or companies working in narrow areas don’t often see the enterprise perspective, where all of this context matters versus some chunking. You probably got some interesting responses.

May: Both customers and folks were like, “Thank you so much. I sent that to my boss. Thank you, God.” I’m a non-technical person, so when I explain things to myself, I try to share them with other non-technical folks so that they can also understand them, and that actually helps technical people explain things to other technical people.

We got a lot of folks reaching out with thanks. Now, of course, our customers know this already. Still, we’re in a market-maturation phase where the underlying techniques, algorithms, and technologies matter to people because they seek to understand. In a couple of years, when this is a much more mature market, people will be talking solution language. Nobody buys Salesforce and asks, so what DB is under there? What are you using? Can I bring my own? But we’re not there in generative AI. And I think that’s a good thing because you go way deeper into the conversation.

People are much better at understanding natural limitations. Nobody is signing up for things that just aren’t possible. The other side to this conversation being so technical is there are people who don’t know as much as they would like to and are now afraid to ask questions. We’re seeing that a little bit, especially in the professional services market, where folks need to come across as experts because they’ve got AI in all their marketing now. Still, it’s much harder to have real, grounded conversations.

Navigating the challenges of enterprise sales in AI

Vivek: The commercial side is interesting because there’s so many startups in AI, and there’s so many technical products and technical founders and companies, but not many of them have actually broken into commercial sales. Even fewer of those have broken into enterprise sales. I know Writer has customers like L’Oreal and Deloitte and a number of others, some of which we don’t really associate with being first movers in tech, and especially first movers in AI. And so maybe you can take us through a little bit of how Writer approaches the commercial aspect of things in terms of getting all of these AI solutions in the hands of enterprise users. Take us through the first few customers that Writer was able to get and how you broke into this. What was the story behind that?

May: Our first company sold to big companies; in the machine translation and localization era, Waseem and I sold to large enterprises. We started off selling to startups, and then, I can’t remember, someone introduced us to Visa, and we were like, oh, that’s an interesting set of problems. That was probably Qordoba circa early 2016. And so for three solid years, we penetrated enterprises with a machine translation solution that hooked up to GitHub repos, and it was great. We just learned a ton about how companies work, and it really did give us this cross-functional bird’s-eye view of a bunch of processes, because when you think about a company going into a new market, they take a whole slice of their business cross-functionally, and that now has to operate in that language. And once you’re in kind of $100 million cost-takeout land, it is really hard to go back to anything else.

Our mid-market deals are six figures, and it’s because of the impact that you can have. Now, it does mean that it’s harder to hire, so yes, we’re under 200 people. I’d love to be 400 people. But we’re interviewing so many folks, dozens and dozens for every open role because you really have to have a beginner’s mindset and just this real curiosity to understand the customer’s business. No customer right now in generative AI wants to have somebody who’s learning on the job on the other side of the phone. And the thing is, in generative AI, we’re all learning on the job because this is such a dynamic space, technology is moving so fast, the capabilities are coming so fast. Even we were surprised at how quickly we got to just real high quality. We launched 32 languages in December, GA in Jan, and it was like, whoa, I really thought it would be a year before we were at that level of quality.

All to say, we need people who can go really deep. Enterprise sales requires everybody in the company to speak customer, and not generic customer: if you’re talking to CPG, it’s a different conversation than in retail, and a different conversation than in insurance. It’s really understanding how our customers see success. And it’s not this use case or that use case; that’s a real underutilization of AI when you think about it that way. But what are the business outcomes they’re trying to achieve? And really, not just tying it to getting the deal done, but actually making that happen faster than it would without you. That’s what the whole company has to be committed to.

Hiring for GenAI

Vivek: How do you find that profile? Technology is moving so fast that we’re not experts, and many of us are learning on the job, learning as things come through. At the same time, you have to find a terrific sales leader or an AE, someone who not only understands AI and the product but understands and can speak to enterprises. So hiring is difficult, but how do you find that person? Are there certain qualities or experiences that you look for that you think are the best fit for the sales culture and group that you’re building at Writer?

May: I would start with the customer success culture, because it was hard to get to the right profile there. We believe in hiring incredibly smart, curious, fast-moving, high-clock-speed people. And we’re all going to learn what we need to learn together. So there’s no like, oh, that was the wrong profile, fire everybody, and let’s hire the new profile. We don’t do that. What I mean by profile is what we need folks to do. And, of course, over time, you refine the hiring profile so you don’t have to interview so many people to get to that right set of experiences and characteristics. On the post-sales side, we’re looking for end-to-end owners. In customer success, it can be easy for folks to be happy that they got their renewal, or that we’re over the DAU/MAU ratio we need to be at; that’s just going through a check-the-box exercise. We have a 200% NRR business, and it’s going to be a 400% NRR business soon.

And that doesn’t happen unless you are maniacally focused on business outcomes. This is a no-excuse culture, and it’s necessary in generative AI because the gates come down all over the place. Matrixed organizations are the enemy of generative AI, because how do you scale anything? The whole point of this transformation is that intelligence will now scale, and most organizations are not set up for that. As a CSM, you have to navigate that with our set of champions and our set of technical owners inside of the customer, and that just requires real insistence, persistence, business acumen, and a relationship builder who’s also a generative AI expert. So it’s a lot. And then on the sales side, it’s definitely the generative AI expertise, combined with the hyperscaler salespeople’s swagger. And yet we don’t hire from the hyperscalers.

We interviewed a bunch of folks, but it’s like a guaranteed budget item and a guaranteed seven-figure salary for those sales folks. Obviously, the brands are humongous, and the events budgets are humongous, so it just hasn’t been the right profile. We have loved the swagger. When you can talk digital transformation and you’re not stuttering over those words, there’s just a confidence that comes across. Interviewing lots of different profiles has helped us come up with ours, and it is growth mindset, humility, but real curiosity that does end up in a place of true expertise and knowledge about the customer’s business and the vertical. We have built that up in terms of verticalization, and that’s going to extend all the way into the product as well soon. My guess is most folks who are building go-to-market orgs in generative AI companies are doing more in a year than other companies do in five because our buyer is changing, and the process is changing. It’s a lot of work streams.

Vivek: It’s a lot. And I think I heard you drop the number 200% NRR or something in there. I want to make sure I heard that correctly because that’s really incredible. And so hats off to the team that’s-

May: 209.

Vivek: That’s basically the top 1% of 1%. It’s interesting to contrast that with other AI companies that we’ve seen in the last 12 or 18 months. We’ve all heard stories of others where, probably not enterprise-focused GenAI products, but the term GenAI wrapper has been thrown around, and a lot of them have focused on more of the consumer use cases. They’ve had incredible early growth, and then in the last six to 12 months, we’ve also seen a pretty rapid decrease or a lot of churn. Is that something that you all had thought about early on in terms of the focus of Writer? Did you think about that early as a founder trying to see what worked?

Creating a Generative AI Platform

May: Around ChatGPT time, I think there was a fundamental realization among our team, and we wrote a memo about it to everybody and sent it to our investors, that real high-quality consumer-grade multimodal was going to be free. It was going to go scorched earth. That was clear, and it has come to pass. The other truths that we thought would manifest, which we wrote about 15 months ago, were every system of record coming up with adjacencies for AI, and the personal productivity market being eaten up by Microsoft. And so for us, what that really meant was, how do we build a moat that lasts years while we deepen and expand the capabilities of the generative AI platform? What was already happening in the product around multifunctional usage right after somebody had come on, we were able to use to really position as horizontal from the get-go.

That got us into a much more senior conversation, and we worked to buttress the ability of our Generative AI platform to be a hybrid cloud. We’ve got this concept of headless AI where the apps that you build can be in any cloud. The LLM can be anywhere. The whole thing can be in single tenant or customer-managed clouds, which has taken 18 months to come together. We will double down on enterprise, security, and state-of-the-art models. That’s what we’re going to do, and we’re going to do it in a way that makes it possible for folks to host the solution. I think even those companies have reinvented themselves. A lot of respect for them. But the difference is that in a world of hyperscalers and scorched earth, all the great things OpenAI is doing are super innovative, and every other startup is trying to get a piece. The bar for differentiation went way up 15 months ago for everybody.

Vivek: Hats off on navigating the last 15 to 18 months in the way that you and the team have, because it’s incredible to see compared to a lot of the other companies, both on the big side and the small side, incumbents and startups that are all challenging different parts of the stack. Two more founder-related questions for you. Let’s start with the first one: what is a challenge that came up unexpectedly, call it in the last six months, that you had to navigate, and how did you navigate it?

May: More than half the company today wasn’t here six months ago. Six months ago we had just closed the series B. And I think in the last few months, it’s been just this transition from running the company to leading the company — if that makes sense. Then working with the entire executive team around just naming the behaviors, the inclinations, the types of decisions that we wanted everybody to be making and to be empowered to make, and then running a really transparent org where information went to everybody pretty quickly.

We’ve got a very competitive market that’s really dynamic, that is also new. Signal-to-noise has to be super high or else everybody would end up spending 80% of their day reading the news and being on Twitter. We needed folks to make the right decisions based on the latest insights, the latest things customers and prospects were telling us, the latest things we were hearing, latest things product was working on, and all those loops had to be super tight.

Vivek: Execution and speed matter, especially in this space.

May: Yes, and execution while learning. I think it’s easier if you’re like, all right, Q1 OKRs, awesome, see you at the end of Q1. But this is a really dynamic space, and the hyperscalers are militant. This is really competitive.

Vivek: All right, last one for you, what’s the worst advice you’ve ever gotten as a founder?

May: The worst advice that I have taken was early in Qordoba days, hiring VPs before we were ready. It felt like a constant state of rebuilding some function or other on the executive team. That’s such a drain. We have an amazing executive team, we’ve got strengths, we’ve got weaknesses. We’re going to learn together. This is the team. And it’s why we spend so long now to fill every gap. We’ve got a head of international, CFO, CMO. We’re going to take our time and find the right fit. But those were hard-won lessons. The advice that we got recently that we didn’t take, was to not build our own models. And I’m really glad we didn’t take that advice.

Vivek: I was wondering if that might be something that came up because you’re right; we see so much activity around saying, hey, build on top of GPT, build on top of this open-source model. It works for some sets of companies, but as you say, thinking about moats early on in technology and IP moats from the very beginning is only going to help you in the long run. Well, thank you so much, May. Congrats on all of the success at Writer so far. I’m sure the journey’s only beginning for you, and we’re excited to see where this goes.

May: Thank you so much, Vivek. For folks listening, we’re hiring for literally everything you might imagine. So I’m May@writer, if this is interesting.

Vivek: Perfect. Thanks so much.

Dust Founders Bet Foundation Models Will Change How Companies Work

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Madrona Partner Jon Turow hosts Gabriel Hubert and Stan Polu, the co-founders of a 2023 IA40 winner, Dust. Dust enables information workers to create LLM-based agents that help them do their own work more efficiently. Gabe and Stan go way back, having started a data analytics company that was bought by Stripe in 2015, where they both stayed for about five years. After stints at a couple of other places, like OpenAI for Stan, they decided to come back together in 2023 to work on Dust.

In this episode, Jon talks to Gabe and Stan about the broader AI/LLM ecosystem and the classic adoption curve of deploying models within an enterprise. They talk about Dust’s bet on horizontal versus vertical, the builders of tomorrow, limitations of AI assistants, and so much more.

This transcript was automatically generated and edited for clarity.

Jon: You have made a fundamental product choice that I’d like to study a little bit, which is to build something horizontal and recomposable. Depending on what choice you made, I imagine we’d be having one conversation or the other. On horizontal, on composable, you’re going to say, “There’s so many use cases. Flexibility is important, and every different workload is a special snowflake.”

If you had decided instead to build something vertical, it doesn’t have to be AGI, but you’d be saying, “To really nail one use case, you need to go all the way to the metal.” You need to describe the workflow with software in a very, very subtle and optimized way. Both of these would be valid. How do you think about the trade-off between going vertical and going horizontal? How have you landed on that choice that you made?

Stan: It comes with complexities and risks to go horizontal. At the same time, if you assume that the product experience will evolve very fast, it’s also the best place to do that kind of exploration; it’s the playground for product exploration. It’s at the intersection of all the use cases and the intersection of all the data. During the current age of GenAI, where we are in a conversational agent interface, we believe we are making the right bet. It means we are on par on the UX, but we have access to much more data and much more flexibility.

Empirically, we have checked that hypothesis. Some of our users have been confronted with verticalized solutions, and we beat the verticalized solution in many cases because we provide more customizability. Customizability comes in two flavors. First, being able to provide instructions that better match what the user wants to do in their company, depending on the culture or existing workflows. Second, being able to access multiple sources of information.

On some use cases, having two sources of information, one of them being the one associated with the verticalized use case, leaves you much better off, because you have access to all the information that lives in Slack and all the information that lives in Notion that’s related to the use case. Many of them could be sales and customer support use cases. That enables people to create assistants that are much better, by tapping into information that the verticalized SaaS products will never be able to tap into. The verticalized product will be either the incumbent or somebody building on an incumbent: somebody building on a customer support platform, somebody building on the sales data platform, whatever it is.

In that interim, where the UX is being defined, we have an incredible advantage in being horizontal. That may close in the future because, as we learn how to use models efficiently in a particular use case, it’s not going to be about getting text back, but about getting an action ready to be shipped. It’s about being able to send an email to prospects automatically from the product, etc. There, we might have a steeper curve to climb with respect to the verticalized solutions, because they’ll be more deeply integrated and will probably have the ready-to-go action built before us. It’s still an open question. As it stands today, being horizontal creates a tremendous advantage.

We’re still in the process of exploring what it means to deploy that technology within companies. So far, that bet has been good, but it’s the most important product choice we’ve made. We’re checking it every week, every month, almost every day: was that the right choice? We can comfortably say it is the right choice today. We also realize we’re inside an ecosystem that is on moving ground. That’s something that has to be revisited every day.

Gabe: Some of the conviction that data access is a horizontal problem rather than a vertical one when you distribute to a company has helped with that choice. Convincing a CISO of the validity of your security measures and practices is just as hard whether you need one sliver of a data set or all of it, so your bang for the buck is better when you go broad. To play with a concrete example of having access to different sources of information: imagine you had access to a company’s production databases. You could generate very smart queries on those production databases any day of the week. That’s a product you can already see today with these SQL editor interfaces that everybody on the team is supposed to use.

But where in that product is the half-page document describing what an active user is? Or what a large account is? Or what we consider to be a low score on customer support? Those documents live in a different division as the company grows, a division that doesn’t even know the codebase exists. Their definitions are updated in meetings, in a very business-driven setting. That definition of what an active user is lives in a separate set of data.

For somebody within the company to be able, at a low cost, to ask an agent a question about the number of active users in Ireland, or the number of active and satisfied users, you have to cross those data sets. That’s almost systematically the case. A lot of the underperformance you could find if you audited companies today comes from these siloed conversations. For us, being excited to start another company and build an ambitious product company, with the experience we’ve had at companies that have seen traction, that have grown fast, that have burnt out incredible individual contributors who start seeing the overhead, the slowness, and how tricky it is to get a simple decision pushed through because nobody has the lens to see through all these divisional silos, it seemed more exciting to build a solution to that problem. When we pitch it to people who’ve seen and felt that pain, that’s the spark that goes off.

It is a risk. But the people excited about that type of software existing in their team are the people we’re excited to build for in the years to come. Come back in five years, and let’s see if we were right.

Jon: Oh, come on. It’s going to be one year.

Gabe: Yeah. That’s true. Fair enough. Fair enough.

Jon: It’s so tempting to project some Stripe influence on the horizontal strategy that you guys have taken. Before I hypothesize about what that would be, can you tell me what you see? What is the influence of how Stripe is put together on the way you’re building Dust?

Stan: In terms of how Stripe operated, what influenced us in defining what we would build with Dust was the idea of creating a company OS where people have the power to do their job at their desk, which means having access to information and a lot of trust. Some of that has carried through to our product decisions. We’ve built a product that’s not a great fit for a company with granular access control over who can see that very small folder inside that Google Drive, with people added only manually, on a case-by-case basis.

We are optimistic that the companies that will work the best in the future are the ones that make their internal information available. That is, at the same time, a great fit for building AI-based assistants.

Gabe: Regarding the product, there’s a ton of inspiration from what Stripe did for developers. It gave them the tools to have payments live in the app before the CFO managed to meet with an acquiring bank to discuss a payments pool. It was as if the developer came to the meeting and said, “It’s live. We’ve already generated a hundred bucks in revenue. What are the other questions?”

I think if we can build a platform that puts to bed some of the internal discussions about which frontier model provider we should trust for the next two years, a builder internally can say, “I just built this on Dust, and we can switch the model in the back if we feel like another will improve over the next months.”

In that scenario, the aggregation position is a good one. It requires you to be incredibly trusted. It requires composability. It does mean a set of technical decisions that are more challenging locally. But it enables, optimistically, to Stan’s point, some of the smarter, more driven people to have the right toolkit. That’s something we take from Stripe. Stripe was not a fit for some slower companies when we were there, and that ended up okay.

Jon: When we think about the mission you folks are going after, there’s so much innovation happening at the model layer. One thing that we’ve spoken about before is there’s a lot you can accomplish today. When we start talking about what it is that you can accomplish with Dust, can you talk about the requirements that you have for the underlying tech?

Gabe: One of the core beliefs we had in starting the company, which dates back to Stan’s time at OpenAI and his front-row seat to the progression of the various GPT models, is that the models were pretty good at some tasks and surprisingly great at others. We will face a deployment problem for that type of performance to reach the confines of the online world. The opportunity was to focus on the product and application layer to accelerate that deployment. Even if models didn’t get significantly more intelligent in the short term, and if you’re a little hesitant about calling an AGI timeline in the years to come, there’s an opportunity to increase the effectiveness of people whose jobs involve computers using the current technology.

In terms of requirements for the technology: for us, it’s about taking the models that are available today, whether they’re available via API as commercial models or they become available because they’re open source and people decide to host them and expose them via API, et cetera, and packaging them such that smart, fast-moving teams can access their potential in concrete day-to-day scenarios that are going to help them see value.

Stan: The interesting tidbit here is that the models will get better as research progresses. On their own, they’re not enough for deployment and acceptance by workers. At the same time, the kind of use cases we can cover depends on the capability of the models. That’s where it’s different from a classical SaaS exercise, because you’re operating in an environment that moves fast. The nature of your product itself changes with the nature of the models as they evolve.

It’s something that you have to accept when you walk in that space — the hypothesis that you make about a model might be changed or might evolve with time, and that will probably require changing or evolving your own products as that happens. You have to be very flexible and be able to react quite rapidly.

Jon: There are two vectors we’ve discussed in the past. One is that if you stopped progress today with the underlying models, there are years of progress we could still make. The other is that if we go one or two clicks back in history, to, say, mobile apps, we saw there were years of flashlight apps before the really special things started to arrive.

Where would you put us on that two by two of early models versus advanced and how much it matters versus not?

Gabe: What’s interesting is to talk about people who’ve been on the early adopter side of the curve, who’ve been curious, who’ve been trying models out, and who’ve probably been following the evolution of ChatGPT as a very consumer-facing and popular interface. You get this first moment of complete awe where the model responds with something particularly coherent: I’m asking it questions, and the answers are just amazing. Then, as I repeat use cases and try to generate outputs that fit a certain scenario, I’m sometimes surprised and sometimes disappointed.

The stochastic nature of the models’ output is not something people think about at the very beginning. They attribute all of the value to pure magic. Then, as they get a little more comfortable, they realize that it might still be magic, or that they might be unable to explain it technically, but the model isn’t always behaving in a way that’s predictable enough to become a tool.

We’re early in understanding what it means to work with stochastic software. We’re on the left side of the quadrant. In terms of applications, the cutting-edge applications are already pretty impressive. By impressive, I mean that they fairly systematically deliver, at a fraction of the cost or at a multiple of the usual speed, an outcome comparable with or on par with human performance.

Those use cases exist already. You can ask GPT-4, or a similarly sized performant model, to analyze a 20-page PDF, and in seconds you will get a summary in several points that no human could compete with. You can ask for a drill-down analysis or a rewrite of a specific paragraph at a fraction of the cost of the same task on a Fiverr or Upwork marketplace. We already have that: 10x faster, 10x better.

In terms of broad adoption, especially by companies: if you take ChatGPT, with a few hundred million users, that still leaves 5.4 billion adults who have never touched it and don’t even know what it is. If you go into the workplace, very few companies that were not created around the premise of generative artificial intelligence being available are employing it at scale in production.

Some companies created in the last few years do, but most companies that existed before that timeline are exploring. They’re exploring use cases. They’re rebuilding their risk appetite and policies around the upside and downside a stochastic tool might bring. We’re still very early in the deployment of those products.

One indication is that the conversational interface is still the default that most people are using and interacting with when it’s likely that it shouldn’t just be conversational interfaces that provide generative artificial intelligence-powered value in the workplace. Many things could be workflows; many things could be CronJobs. Many applications of this technology could be non-chat-like interfaces, but we’re still in a world where most of the tests and explorations are happening in chat interfaces.

We still want the humans to be in the loop. We still want to be able to check or correct the course. It’s still early.

Stan: One analogy I like to use, to recall what Gabriel just said about the conversational interface: it really feels like we are in the age of Pong for models. You’re facing the model. You’re going back and forth with it. We haven’t yet started scratching the multiplayer aspect of it, or interacting with the model in new and more efficient ways.

You have to ask yourself, what will be the Civilization of LLMs? What’s going to be the Counter-Strike of LLMs? That is equally important to dig into, compared to model performance. The mission we set for ourselves is to be, for our users, the people who dig in that direction and try to invent what’s going to be the Civilization V of interacting with models in the workplace.

Jon: Can you talk about the mission for Dust in the context of the organization that’s going to use it?

Stan: We want to be the platform that companies choose as a bet to augment their teams. We want it to be a no-brainer that you must use Dust to deploy the technology within your organization, do it at scale, and do it efficiently. This is where we’re spending our cycles. It’s funny to realize that every company is an early adopter today: the board talks about AI, the founders talk about AI, and so the company itself is an early adopter. But once you get inside the company, you face a classical adoption curve. That’s where product matters, because the product can help carry companies through that chasm of innovation inside their teams. We want to be the engine of that.

We’re not focusing on the models; we’re trying to be agnostic of the models, getting the best models where they are for the needed use cases. Still, we want to be that product engine that makes deploying GenAI within companies faster, better, safer, and more efficient.

Gabe: One of the verbs we use a lot, and that is important, is augmenting. We see a tremendous upside scenario for teams with humans in them. We don’t want to spend our time working for companies that are trying to aggressively replace the humans on their teams with generative artificial intelligence, because that’s shaving a few percentage points of operational expenditure. There’s a bigger story, a bigger play here: if you gave all of the smartest members of your team an exoskeleton, an Iron Man suit, how would they spend their day, their week, their quarter? What could happen a few quarters from now if that opportunity compounds?

When we decide at Dust which avenues to prioritize, one factor that consistently comes up is whether we are augmenting or replacing. With replacing, there’s a high likelihood that we’re providing faster horses to people who see the future as an extrapolation of the present. It’s like, “I need to replace my customer service team with robots because robots are cheaper,” when the entire concept of support tickets as an interface between users and a company is to discuss how a product is misbehaving and how it may be challenged in the future.

It’s a tension for us, because there are quick wins in the deployment scenarios many companies are considering. It pushes us to explore and spend time on the cars instead of the faster-horses scenarios in front of us.

Jon: I think it has implications not just for the individual workers but, to your point, Stan, and to your point, Gabriel, there’s going to be a difference in how employees interact with one another. Let me put it to you one way: if I’m deciding whether to join your company, and you tell me, “You should because I have Dust,” what would be the rest of that explanation?

Gabe: It’s a great point. That’s an example we sometimes use to describe an ambitious outcome for the company, or an ambitious state of the world, a few years from now. Take a developer today, a senior developer getting offers from a number of companies, who in the interview process gets to ask questions about how each company runs its code deployment pipeline: how easy it is to push code into production, how often the team pushes code into production, and what the review process looks like.

I can read a lot about the culture of that company, and how it respects and tries to provide a great platform for developers. Today, developers are at the forefront of being able to read, in the stack a company has chosen, how it prioritizes their experience. If you do not have version control software that allows for pull requests, reviews, and a cloud distribution that works and is fast, I don’t think you’re very serious about pushing code.

We think that in the future, more of the roles within a company will have associated software. You could argue that, to a degree, Slack created that before-and-after aspect. If you’re applying at a company today, and you ask how employees check in with each other informally to get a quick answer on something they’re blocked on, and the employer says, “We have a vacuum tube system where you can write a memo, pipe it into one of the vacuum tubes at the end of the floor, and you’ll get a response within two days,” that should tell you something.

You’re like, “Okay, great. I don’t think real-time collaboration is prioritized here.” We think there’s a continuum of those scenarios that can be built. We would love for employees to say, “Hey, we run on Dust,” and for that to be synonymous with, “Hey, we don’t prioritize or incentivize busy work.” Everything that computers can do for you, and really should have been doing for you decades ago: we’ve invested in technology that helps that happen. We’ve built technology that burns through the overhead and time sinks of finding information, understanding when two pieces of information within a company are contradictory, and getting a fast resolution as to which one should be trusted. Being the OS of that smart, fast-growing team is something we hope to be part of.

Jon: That’s such an evocative image of the vacuum tube. I actually bet if there were a physical one, people would like that as long as there was also Dust.

Gabe: It could be a fun gadget that people actually use to send employee kudos notes at the end of the week, and team praise updates.

Jon: What we’re talking about, though, is the metaphor of the agent, which, in our 2023 and 2024 brain, we think of as another member of the team. Maybe a junior member of the team at the moment. I think it was something you said, Gabriel: that the binary question of whether it’s good enough or not is actually not useful. But rather, how good is it? How should I think about that?

Gabe: Yeah. I stole it from the CIO of Bridgewater, whose communication around GPT-4 compared it to a median associate or analyst; I can’t remember what the role was called. “We believe that it performs slightly higher than a junior-level analyst on these tasks.” Bridgewater is a specific company that has an opportunity to segment a lot of its tasks in a way that maybe not all companies can.

As soon as you’ve said that, a number of logical decisions can be made around it. We often get asked about specific interactions people have had with Dust assistants, like, “Hey, why did it get that answer wrong?” Assistants get a tough life in those companies, because a lot of your colleagues get a lot of stuff wrong a lot of the time, and they never get specifically called out for one mistake.

That’s part of the adoption curve we’re going through. At the level of an interaction, you’re looking at a model that might be, on average, quite good and sometimes quite off. Instead of turning your brain off, you should probably turn your brain on and double-check. At the level of the organization, you’re getting performance that, in specific scenarios, is potentially higher than the median, higher than if the task had been pushed to another team member.

As models get better, as the structural definition of the tasks we give them gets clearer, and as the interfaces that support feedback mechanisms get more and more use, those scenarios will progress: the number of times you feel the answer is good enough, better than what you would’ve come up with on your own, or better than what you could have expected from a colleague, will grow.

One of the things we systematically underestimate here is latency. Ask a good colleague to help you with something; if they’re out for lunch, they’re out for lunch. That’s two hours. General Patton is the one who said, “A good plan violently executed today beats a perfect plan executed tomorrow.” If, as a company, you can rely on that compounding edge in execution speed, the outcomes will be great.

Jon: What we’re talking about is assessing agents not by whether they’re right or wrong, but by their percentile of performance relative to other people. Yet there’s another thing you both have spoken about: the failure modes will be different. It’s easy for a skeptical human, especially, to say that one weird mistake means this thing must be written off. I don’t think it would be the first time in history that we’ve mis-evaluated what a technology would bring us by judging it on some anecdotal dimension.

Stan: Something interesting, to jump back on your 2023 brain and how we might not be foreseeing things correctly: there’s a massive difference between having an intern-level or junior-level human, where you want to leave the task to them entirely and leave them some agency, with the shape of tasks defined by their capability and the fact that they’re junior, and having assistants where the agency is kept on the side of the person who has the skills. There’s a massive difference between what you can do with junior-level assistants where you keep the agency, versus simply adding juniors to the humans.

It will be interesting to see how that automation and augmentation of humans plays out. It might be very different from adding 10 interns to a human versus adding 10 AI-based assistants to a human. It may well be the case that 10 AI assistants augment a human much more than 10 interns would. There’s an interesting landscape to discover.

Jon: Depending on how you frame this, I’m either going forward or backward in history from unlimited manual labor to spreadsheets. A Dust agent reminds me in many ways of a spreadsheet.

Gabe: In terms of capability and the level of abstraction versus the diversity of the tasks, that’s not a bad analogy. It’s unclear if the primitives that currently exist on Dust are sufficient to describe the layer and space that we really intend on being a part of. If we are successful, Dust will probably retain some properties of malleability, the ability to be composable, programmable, and interpretable by the various humans that are using it, which does remind me of spreadsheets, too.

Jon: One thing you see in your product today is a distinction between the author and the consumer of a Dust agent. It’s reasonable to expect a power-law distribution, with more people consuming these things than creating them. Were there some way to really measure spreadsheet adoption, I’m quite sure we’d see the same: a handful of spreadsheets, especially the most important ones, get created by a small number of us and then consumed by many more of us.

These things are infinitely malleable, and many people can create a silly one that is used once and thrown away.

Gabe: We see that today with some of our customers. I had a product manager admit that they had created a silly assistant mixing company OKRs and astrology to give people a one-paragraph answer on how they should expect to do in the quarter to come. They admitted it was a distribution mechanism: “I just want people to know how easy it is to create a Dust assistant, how easy it is to interact with it, and how non-lethal it is to interact with it, too.” There’s always that fear around the first use case, the first usage scenario.

The reason we believe the builders are not going to be developers is that the interface has become natural language in many cases; you’re essentially just looking at the raw desire of some humans to bend the software around them to their will. I think the builders of tomorrow, with this type of software, have more in common with the very practical people around the house who are always fixing things, who won’t let a leak go unattended for two weeks, who’ll just fix the leak with some random piece of pipe and an old tire. It just works, and it’s great. That is seeing opportunity and connecting the Lego bricks of life.

One of the big challenges for companies like us is how to identify them. How do you let them self-identify? How do you quickly empower them so that the rest of the team sees value rapidly? One of the limitations of assistants like Dust within a company is access to the data the company has provided to Dust. The number of people controlling access to the data gates is, in some cases, even smaller than the number of people who can build and experiment with assistants. How can a builder reach out to somebody at the company with the keys to a specific data set and say, “Hey, I have this really good use case. This is why I feel we should try it out. How could I get access to the data that allows me to run this assistant on it?” Those are all product loops.

They have nothing to do with model performance. They have everything to do with generating trust internally: in the company, the way the product works, the technology, and where the data goes. All of these are product problems.

Jon: If I move half a click forward in history, you start to think about data engineering and how empowering it was for analysts to get tools like dbt and other things that allowed them to spin up and manage their own data pipelines without waiting for engineers to turn the key for them. That created a whole wave of new jobs, a whole wave of innovation that wasn’t possible before. To the point that now, it’s impossible to hire enough data engineers.

There’s this empowering effect that you get from democratizing something within a company that was previously secured — even if for a really good reason. I’m connecting that to the point that you made, Gabe, the data itself that feeds the agents is often the key ingredient and has been locked down until today. Based on the use cases that you’re seeing, this is going to be a fun lightning round. My meta question is, has the killer agent arrived? Maybe you can talk about some of the successes and maybe even some of the fun things that aren’t quite working that your customers have tried.

Gabe: If the killer agent means a product-marketable concept that you can slap on your website so that 90% of people who visit upgrade, regardless of their stage of company, et cetera, I don’t think we’re there yet. People ask questions that Dust, let alone an LLM without a connection to the data, would have no chance of answering.

Those are some interesting cases where I think we’re failing locally and tactically because the answer is not satisfying. Where I’m seeing weak signals of success is that people are going to Dust to ask the question in the first place.

For some of the use cases we’re incredibly excited about, it’s almost the same situations but with satisfactory answers: people asking surprisingly tough questions that require crisscrossing documents from very different data sources, and getting an answer they unmistakably judge as way better than what they would’ve had by going to the native search of one SaaS player, or by asking a team member, et cetera. In some cases, the number of assistants that midsize companies generate on Dust is high. Do you see that as a success or a failure? Does it mean you’ve been able to give the tools for a very fragmented set of humans to build what they need, and you interpret it as success? Or have we essentially capped the upside they can get from these two specific assistants? That’s still one of the questions we’re spending a lot of time on today.

Jon: If we go back to our trusty spreadsheet metaphor, there are many spreadsheets. They’re not all created equal.

Gabe: Yeah, maybe it’s fine. Maybe not all spreadsheets need to be equal.

Jon: Thank you so much for talking about Dust and your customers. I think customers are going to love it.

Gabe: Awesome. Thank you very much for having us.

Stan: Thank you so much.

Coral: Thank you for listening to this IA40 Spotlight Episode of Founded & Funded. Please rate and review us wherever you get your podcasts. If you’re interested in learning more about Dust, visit www.Dust.tt. If you’re interested in learning more about the IA40, visit www.IA40.com. Thanks again for listening, and tune in a couple of weeks for the next episode of Founded & Funded.

 

From Creating Kubernetes to Founding Stacklok: Open-Source and Security with Craig McLuckie


Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Managing Director Tim Porter chatted with Craig McLuckie, who’s best known as one of the creators of Kubernetes at Google. Madrona recently backed Craig’s company Stacklok, which is actually the second of Craig’s companies that Madrona has backed (Heptio in 2016).

Stacklok is a software supply chain security company that helps developers and open-source communities build more secure software and keep that software safe. Tim and Craig discuss Stacklok’s developer-centric approach to security, the role of open source in startups, the importance of open source and security, and they also hit on some important lessons in company building — like camels vs. unicorns – and where Craig sees opportunities for founders. It’s a must-listen for entrepreneurs out there. But, before I hand it over to Tim to take it away, don’t forget to subscribe wherever you’re listening.

This transcript was automatically generated and edited for clarity.

Tim: I’m very excited to be here today with Craig McLuckie. It’s the second time I’ve had the privilege of working with Craig on a startup. So, Craig, it’s awesome to be with you here today, my friend. How are you?

Craig McLuckie: Oh, I’m doing great. Thanks for having me on.

Tim: Absolutely, it’s our pleasure. Well, I didn’t do it justice. Tell us a little bit about Stacklok and what you’re building now and a bit of the founding story, and then we’ll double back and talk more about how some of your experiences with Kubernetes and Heptio, et cetera, led you to build this company now.

Stacklok & Securing Open Source

Craig McLuckie: The story behind Stacklok is it’s a little company, a series A company, Madrona backed, that was started by myself and my friend Luke Hinds, who was the founder of a project called SigStore. The story behind Stacklok goes back several years. I’ve been known for my work in the cloud native computing space, and I had some success with open-source efforts like Kubernetes and many other projects that we built on the back end of Kubernetes to operationalize it and make it more accessible to enterprises. Open source has served me incredibly well as a professional, and I’ve spent a lot of time in open source building open-source communities and navigating those landscapes.

One of the things that occurred to me is that, as obvious as it seems, open-source is important. It is the foundational substrate for what is driving a substantial portion of human innovation right now. We spend a lot of time talking about generative AI, and you look at something like ChatGPT, and when we dig into what’s behind the scenes, there’s Linux, there’s Kubernetes, there’s Python, there’s PyTorch, there’s TensorFlow. All of those were precursor technologies that existed before any of the ChatGPT-specific IP even lit up. That’s a significant stack, and it’s all open-source technology.

The question that went through my mind and continues to echo with me is: we’re so contingent on open-source, but how do we reason about whether open-source is safe to consume, as a fundamental building block for almost everything we consume? Historically, the currency of security has been the CVE, a critical vulnerability associated with a piece of technology. However, it’s been increasingly challenging for organizations to deal with this. My interest in this space predates things like the SolarWinds incident, which got people thinking about supply chain security. It also predates the Log4j incident, which continues to plague enterprise organizations. It comes down to this: we are contingent on these technologies, but we don’t necessarily have a strong sense of the security or sustainability of the organizations producing them. We’re not necessarily creating the right amount of recognition for organizations going above and beyond to produce more secure open-source.

What we’re doing with Stacklok is two things. One is we’re building a set of intelligence that helps organizations reason about open-source technology, not just in terms of whether it looks safe, whether it has a low number of vulnerabilities or static analysis being run against it, et cetera, but increasingly whether the community producing it is operating in a sustainable way. Producing that signal is a good starting point, but we also want to make sure that on the consumption side, when organizations are looking to bring that technology in-house and continue to invest in it and build around it, we have the apparatus to create control loops that enable folks who are both producing and consuming open-source to institute policies that keep them within the lines of safe consumption. It’s about bringing those two things together in a set of technologies backed by open-source sensibilities to help organizations navigate this incredibly critical landscape.

Tim: Stacklok is built on open-source, or at least around Sigstore. Companies and developers ingest lots of open-source, put it together to build their product, and then have to keep it updated over time as the open-source projects that fed it continue to be updated themselves. So you get to track that whole chain. Talk a little bit more about how that works and how Sigstore plays into the strategy here.

Knowing where open-source tech comes from

Craig McLuckie: One of the most important things to know when consuming a piece of open-source technology is where it came from. It’s less obvious than people might think. If you go to a package repository, download a package, install it, and run it, you’ll look at the package repository’s metadata for where that package was published: hey, it was built in this repo by this organization, et cetera. But how do you know that that’s the case? Most of that information is self-reported; it’s not being cryptographically generated. In many cases, folks aren’t even publishing signatures, let alone the public keys associated with those signatures.

You need something that can deterministically tie a piece of software back to its origins. This is what Sigstore has done. It’s effectively a bridge between an artifact and the community that produced it, and it’s a set of tools that make it easy for a community that’s producing something to sign that thing and say, hey, we produced it, here, and this is what it contains.

Now, Sigstore is one little piece of the story. It ties the software to its origins, but it doesn’t tell the whole origin story. It does not necessarily give you insight into whether the community is behaving responsibly or show what the transitive dependency set will look like. We’ve created this technology called Minder, which helps communities producing software, and organizations consuming it, operate in more intrinsically sustainable ways. Let’s say, hey, I want all of these policies applied to all of the repos associated with producing this piece of technology, and I want to make sure that those repos stay in conformance.

When a new merge request comes along with something sketchy, let’s block that merge request and recommend an alternative instead. If someone misconfigures one of your repos so that branch protections are off, let’s catch that in the moment and make sure it can be remedied. In so doing, you’re producing valuable information about how that community produced that piece of software, which then feeds the machine: all that information becomes context that can get written into a transaction log like Sigstore, so that the next person coming along to consume that piece of software has that intelligence and can make informed decisions about consuming it. It’s turtles all the way down, because when you look at a piece of software, it’s composed of other pieces of software that are composed of other pieces of software. Enabling the organizations that are in the process of composing things together and enhancing them to document their practices and write them into a record that someone can then consume subsequently is an incredibly powerful metaphor.
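The "catch drift in the moment and remediate it" pattern Craig describes is a classic reconciliation loop. As a rough illustration only, the sketch below compares each repo's observed settings against a desired policy and reports the drift; the setting names, repo names, and functions here are invented for the example and are not Minder's actual schema or API.

```python
# Hypothetical sketch of a Minder-style control loop: compare each repo's
# observed settings against a desired policy and report any drift.
# Setting names and repo data are made up for illustration.

POLICY = {"branch_protection": True, "secret_scanning": True}

def drift(observed, policy=POLICY):
    """Return the settings where a repo has drifted from the desired policy."""
    return {k: v for k, v in policy.items() if observed.get(k) != v}

def reconcile(repos, policy=POLICY):
    """One reconciliation pass: map repo name -> drifted settings (empty = compliant)."""
    report = {}
    for name, settings in repos.items():
        d = drift(settings, policy)
        if d:
            report[name] = d
    return report

# Toy observed state standing in for what a forge API would return:
repos = {
    "org/api":  {"branch_protection": True,  "secret_scanning": True},
    "org/site": {"branch_protection": False, "secret_scanning": True},
}
# reconcile(repos) -> {"org/site": {"branch_protection": True}}
```

A real system would run this pass continuously and, instead of just reporting, call back into the forge to restore the desired setting, which is the "catch it in the moment" behavior described above.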

Tim: You mentioned supply chain security and talked about some of the major exploits that have occurred, like SolarWinds. For the audience, put Stacklok in the context of the broader supply chain security market. To the extent it’s a market, it’s an umbrella term that gets used by a lot of different companies, which might be a little confusing for a lot of people, and confusing for customers, too. Can you help frame this?

Craig McLuckie: First, let’s look at the security landscape and consider how the world has worked. Historically, it was a world where hackers were effectively like burglars sneaking around your neighborhood; they would look for an unlocked window, open it, sneak into your house, and steal your silverware. That was the world that existed.

With the SolarWinds incident, we’ve seen that it’s not enough to lock your windows and turn on the alarm system with some of these active security practices. The burglars are now getting jobs at window manufacturing companies or are breaking into window manufacturing companies and tampering with the apparatus that produced the latches on windows so that the windows that are being produced and installed are intrinsically insecure, so when everyone’s out at a town banquet, they can go and clean out the entire neighborhood. That’s the sea change that we’re seeing in the security landscape.

It’s not enough to look at a piece of software today and say, hey, this thing has no known vulnerabilities, it’s good. No known vulnerabilities doesn’t necessarily mean anything. It could be that it’s good, or it could mean that it’s perfectly malicious: produced by someone sufficiently sophisticated to make it look exactly like what you want to consume, but with a little naughty something added that will ruin your day when it gets deployed into your production system. This idea is about understanding and knowing where your software is from and its origins, like understanding the farm-to-table journey of your software.

In terms of our positioning, where does this start? It starts with your developers. It’s not enough to say we will insert this as a control in the production environment, because by the time you put something as a control in a production environment, your organization is incredibly invested. If a failed quality scan in the production environment is the first time you discover a Log4j instance in something you’re trying to deploy, that’s very painful, because you must go back to the development team. You have to figure out who produced that thing and say, hey, this is not okay. They then have to reproduce it, revalidate it, redeploy it, and get it back to you, and it takes an inordinately long time to deal with.

You want to intercept developers not just when they’re submitting a merge request but when they type that import statement, and give them signals saying, yeah, what you’re doing is good, or, what you’re doing is probably going to hurt you later down the pipeline. Next comes instituting further controls, starting with the Git repository, moving into the pipelines, the container registry, and the production environment. You have these controls along the entire journey, and you can look back on the farm-to-table story of a piece of software you’re looking to deploy.

The Role of Developers in Security

Tim: You mentioned being focused on the developer. Stacklok’s a very developer-first, developer-centric company, and that’s the product you’re building. A lot of times, you think of security software as something that’s more top-down; it’s a necessary evil being imposed on you from above by management. Talk more about why it’s important to start developer-first and how security can instead become something a developer wants to embrace, ultimately creating a better experience for their users.

Craig McLuckie: One of the things that is true about developers is they generally want to do the right thing. It’s not like developers are sitting there saying, you know what? I want to produce insecure code, or, I want to mess with my ops team’s day. They don’t want to go back and forth with the operations team, but it’s also worth recognizing that, at the end of the day, they’re primarily being metric-ed on producing software that creates business outcomes. The thing going through their head is: I need to create customer delight through my code, and in creating customer delight, I’m going to create money for the company I’m working for. If you start to produce capabilities that inhibit that, they may still want to do the right thing, but they will find ways to work around you.

The later you leave the discovery of something in the left-to-right flow of a developer’s workflow, the more intrinsically painful it’s going to be, and frankly, the less likely people are to accept what you’re doing. That’s the pragmatic side of getting this technology adopted. This is a constant challenge for the CISO: hey, I want to introduce this, but the minute I do, my business counterparts yell at me because I’m slowing down their production world. So starting with technology that’s developer-friendly and developer-engaging is a good story.

Now, it’s important not to confuse developers with buying centers. Developers don’t have budgets, by and large, but increasingly, when you look at the IT landscape, developers are a disproportionately significant cohort, not because they’re necessarily going to buy your technology directly, but because all of the people who do buy your technology care about them and want to enable them, and they’ll see you as a more intrinsically valuable capability if you appeal to the developer.

To Open Source or Not

Tim: A lot of the listeners are thinking about: hey, I’m building a new company. Should I open-source? Should I not? How do I capture developers’ attention, hearts, minds, and community? I’ll back up just a little bit. Craig, you co-created Kubernetes at Google, left a while later, and started Heptio, which was all around helping enterprises adopt Kubernetes. Kubernetes became arguably the most popular open-source project in the world; there was a point where only Linux had more daily interactions. Of course, you then built Heptio for two years, sold it for $600 million to VMware, and it continues to live on there. I know there’s a lot here and a lot of founders come and talk to you about it, but what are some of the principles you thought about in building out Kubernetes, and are now thinking about with Stacklok, or with other companies that are focused on developers and using open-source to build this adoption flywheel and community?

Craig McLuckie: There’s a hard truth here, and this is important for the listeners to embrace: open-source is effectively mortgaging your business future for lower immediate engagement costs. You get lower activation energy. It’s much easier to get the flywheel turning with open-source, particularly if you’re a small company, and it’s going to create a virtuous cycle with a lot of the individuals you want to engage. They may even contribute directly to the project. They’ll certainly make it their own. They’ll give you feedback in near real time; they’ll build the project with you. It’s a wonderful way to build technology. It reduces your barriers to entry in enterprise engagements, particularly if you’re a rinky-dink little startup. If you have an Apache-licensed open-source project, particularly if the IP is housed in a foundation, you’re far more likely to get through procurement because, at the end of the day, they know they can just run it themselves if things get weird. So your path is easier.

When I was thinking about Kubernetes, what distinguished us at Google was that I didn’t need to commercialize Kubernetes. I just needed it to exist, because I had something to commercialize it with: Google Compute Engine, which had decent margins. I needed to disrupt my competition with a technology that I could run better myself on Google’s infrastructure, and that motivated us to drive the Kubernetes outcome. With the success of that project, Heptio was about enabling enterprise organizations to achieve self-sufficiency, to bridge the gap between them and the community, and to fill some of the operating gaps associated with consuming the open-source technology, and that worked out well.

When I look at what I’m doing with Stacklok, I recognize that over time, if I’m successful with the Minder technology, I will have to accept a lower value extraction narrative than if it were proprietary technology. But realistically, the probability of me succeeding and getting something that’s consumable out there is far higher if I can embrace the community. You have to have a plan. What is your plan for commercialization?

In the case of Stacklok, I’ll be very open with our customers and the community. Our plan is to create incredibly high-value contextual data, which we’re manifesting as Trusty right now, to support the policy enforcement that Minder does. That represents a differentiated thing, hopefully something our customers will value over time as we bring more and more richness to that data set, and something that’s differentiated from the community open-source technology. That’s my broad plan. I’m very open; I wear my heart on my sleeve. I don’t plan to change the licensing model; I have to stand by my commitments to my customers and to the community. But the point is, I do have a plan for commercialization. It’s not just, I’m going to be the Red Hat of this thing, because it turns out Red Hat is a pretty singular creature.

There is another story here, and we see a lot of this, which is that people are happy to pay you to operationalize something. If you’ve built a system that’s reasonably complex, you’re able to operationalize it better than other people, and you’re able to set the boundaries of where commercial starts and ends and open-source starts and ends, you can navigate building a SaaS business around single-vendor open-source technology. We’ve seen great companies emerge in that sort of context.

Tim: You built Heptio over a couple of years and it was bought, a great journey, faster than probably anyone anticipated when you started. What were some of the things you learned? It could be about some of these open-source threads we were just talking about, or a whole host of other things about building startups successfully. What did you take from that experience that you’re making sure to bake into Stacklok? Were there any things you didn’t want to repeat, having done it once before?

The Importance of Culture in a Startup (Camel v. Unicorn)

Craig McLuckie: I enjoyed the Heptio journey, and it’s hard to complain about the outcome. It’s also worth recognizing that we were riding on the momentum of Kubernetes; it was a very fortunately timed undertaking. I don’t claim to be able to create that kind of luck for myself again. We need to approach this from a bottoms-up perspective.

What is different with Stacklok is that I definitely have a bit more of the unicorn-versus-camel mindset. A common narrative around my leadership circles is: no, that’s a unicorn thing; that’s not a camel thing. We’re camels, not unicorns. We’re building something that is incredibly lean and purposeful, that’s going across the desert to an oasis, and if we don’t make it, we die. That’s how we think. We know we have to get this far on this much money, and there are no alternatives.

Opportunistic hiring, that’s a unicorn thing, not a camel thing. Getting crazy with swag, that’s a unicorn thing, not a camel thing. One essential delta is just the reality of the environment we’re in: you have to have a plan for commercialization. The days of being able to raise on promises and a winning smile are over; you’d better have a business plan. We’re more thoughtful in our approach. We’re much smaller than Heptio was at this point. We’re twenty-one engineers, but they are crazy efficient and really hardworking. I’m very proud that we shipped two products in nine months. So that’s one difference.

The second difference is that I’ve approached this with what I think of as a grow-hard model. Heptio was a linear growth model, which resulted in a lot of fog of war; we struggled with folks feeling disconnected because we were growing so quickly. What we’ve instituted with Stacklok is: grow, run the team really hard, produce an outcome, then grow; run the team really hard, produce a result, then grow again. I haven’t seen that referenced or talked about much, but I’m finding it incredibly beneficial to the culture and the organization, because you establish the team by establishing the norms. Then you use those norms to direct your next wave of hiring.

The other important thing is that Heptio was a very culture-forward company. I’m retaining that culture-forward perspective with Stacklok, but I’m now very purposeful that the culture isn’t the mission; the mission’s the mission, and the culture supports the mission. As we embrace our hiring practices, we’re very, very diligent about not just hiring the kind of people we want to work with, but hiring people who have that sense of purpose, that sense of mission, that willingness to engage in the way that we need to. That camel mindset is very powerful.

Tim: I love the camel-versus-unicorn mindset. It’s absolutely essential in today’s market; the days of the unicorn are distant memories in many respects. You’re talking about culture and how you’re intentionally and thoughtfully building it at Stacklok. You wrote this great blog early on in the company called The Operating System of the Company. I like your point about how culture supports the mission but is different from the mission. You have these great company tenets that you sort of alluded to. Maybe say more about how you’re building culture and how you thought about those tenets. This is the one thing all startups and all founders have in common: you’re building this culture, building it in a way that’s authentic to you and your team, and how it carries forward for the long run is an absolutely essential foundation for the company.

Three Jobs of a Founder

Craig McLuckie: Yeah, I heard a founder say this, and I’ve adopted it as my own story: I have three jobs, right? I have to set the strategy for the company, I have to execute the strategy for the company, and I have to define the culture of the company. And of those, the third is probably the least understood and the most important. Culture for us is our operating system. It’s not just the warm and fluffy things you write on the whiteboard to make you feel good about yourself. It defines how we hire and how we make decisions, and it’s indistinguishable from things like our brand. Our culture is our brand, and our brand is our culture; it defines us in many ways.

My previous company, I built with Joe Beda, and Joe and I are old friends. We’d worked together for ten years and done impossible things together. You would not find any air between us. You could present us with the same data, and we’d always formulate the same conclusion, so it was easy. When Joe and I started Heptio, we first sat down to write the culture and then went off.

Now, with Luke, an amazing human being whom I’ve come to respect in the most fundamental ways possible, we were relatively new. We had spent many hours together and gone hiking, but we had yet to work together. So, we both did this exercise and wrote down what we believed, drawing on our lived experience: these are the things that define us and how we make decisions. Luke has these little statements, like: I always have short toes. You can’t step on my toes because I have short toes. You can say anything, and I’ll just take it at face value; I’m not ever going to get defensive. That’s an example of something he came up with.

We both wrote those down, and then I did a day-long exercise where I built an affinity map and went through all of the tenets, everything that Luke and I felt. I tried to distill it into a set of five things I could articulate that would represent the way we operate, because you can’t fake culture. The minute you try to fake culture, at some point a hard decision is going to come up, you’re going to make a decision that’s not aligned with your culture, hypocrisy is going to creep in, your culture’s dead, and you have to start from the beginning.

I wrote down the five tenets. One: We’re a team. No one gets lifted by pushing someone down. The organization is invested in community-centricity; community is an asymmetric advantage. Two: We’re mission-oriented. We’re a startup, and we’re a camel-based startup. There will be some long, dry patches, and you’d better have the will to make it to that oasis. The only thing that will get you there is if you believe in the mission. I want people who feel that burn, who really want to engage in this mission with us. Three: We’re a startup, so we have to be data-centric and find truth in data. We cannot gloss over a hard truth staring us in the face because we refuse to engage with and believe the data. Four: We have to be humble but determined. The camel is a humble but very determined creature. Some of the best leaders I’ve ever worked with have embodied this. Five: We’re a security company, so we have to stand vigilant. Then it’s about bringing that all together and operationalizing it. When we’re discussing hiring candidates, guess what exists in our candidate management system? These five cultural virtues. Our interviewers provide feedback on those five things. We talk about how to assess them in the people we’re talking to. When we’re talking about promoting individuals, we reference the cultural elements. Whenever you’re having a complicated conversation and you have to make a decision, you point back to these tenets. These are the things informing our decisions. That’s how I’ve approached building a culture.

Tim: Fantastic articulation. It’s energizing to hear you talk about it, Craig. Of course, you referenced your fabulous co-founder, Luke Hinds. He was a long-time senior engineer at Red Hat, co-creator of the Sigstore project that Stacklok draws on significantly, and he’s also based in the UK. So you also have a geographic and time-zone spread that you’re working through. In this world where companies are at least to some degree hybrid, this cultural blueprint and operationalizing it is that much more critical, compared to when everyone can just be together in the same office.

Craig McLuckie: In this kind of remote first world that we find ourselves living in, canonizing that and expressing it and being very deliberate in your communications is so critical as a startup founder.

The Future of Stacklok and the Cloud-Native Ecosystem

Tim: Awesome. Let’s come back one more time to Stacklok. You founded the company early last year and have built at an incredible pace: this great team of predominantly engineers that you referenced, and the launch of your first two products, Trusty and Minder. What’s up for 2024? Without giving away anything confidential, what are you looking forward to from a company standpoint this year? And if there’s a call for customers, who should be interested? What type of developer, or what type of enterprise and its developers, should be looking to Stacklok for help in the areas we were talking about earlier?

Craig McLuckie: There are a couple of things we’re looking to bring to market. One is Minder, which is a system that helps you mind your software development practice. This year, we want to help organizations embrace and engage with Minder. We want to make it free to use from an open-source community perspective; we want to support and engage open-source communities that are looking to improve their own security posture by providing this technology. They should feel like, hey, Stacklok’s bringing us something that’s really useful. If you are an open-source project with, say, 500 repos, and you’re worried about licensing taint showing up, having something that can run reconciliation loops across those 500 repos, reason about the transitive dependency set, and make sure that nothing’s showing up in your world that shouldn’t, is useful. We want to engage with communities to help support their use of this technology.
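The licensing-taint check Craig mentions can be sketched as a small loop over repos and their dependencies. This is a toy illustration only: the repo names, dependency data, and the license allowlist are all invented for the example, and the lambda stands in for whatever forge or SBOM API a real tool would call.

```python
# Hypothetical sketch of a license-reconciliation pass across many repos.
# Repo names, dependency data, and the allowlist are made up for illustration.

ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def find_license_violations(dependencies, allowed=ALLOWED_LICENSES):
    """Return (name, license) pairs whose license is not in the allowlist."""
    return [(dep["name"], dep["license"])
            for dep in dependencies
            if dep["license"] not in allowed]

def reconcile(repos, get_dependencies, allowed=ALLOWED_LICENSES):
    """One pass of the loop: map each repo to its violations, if any."""
    report = {}
    for repo in repos:
        violations = find_license_violations(get_dependencies(repo), allowed)
        if violations:
            report[repo] = violations
    return report

# Toy data standing in for 500 real repos:
deps_by_repo = {
    "org/service-a": [{"name": "left-pad", "license": "MIT"}],
    "org/service-b": [{"name": "gpl-thing", "license": "GPL-3.0-only"}],
}
report = reconcile(deps_by_repo, lambda r: deps_by_repo[r])
# report == {"org/service-b": [("gpl-thing", "GPL-3.0-only")]}
```

Run continuously, a pass like this catches a disallowed license the moment it enters any repo's transitive dependency set, rather than months later during an audit.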

We think of this as a way to give to these communities to enable them to start operating securely but also to be able to show their work so that they can produce Sigstore-based signatures and generate the attestation signal that the people who are looking to use their projects are starting to expect. We’ve focused almost exclusively on GitHub integrations, but through the year, we’ll add other critical integration points, such as Bitbucket, GitLab, Kubernetes, and pipeline integration.

On the other hand, we wanted to get the flywheel going with some high-value information. While Sigstore’s gathering a head of steam and these communities are taking time to start producing consumable attestation information, we wanted to seed the segment with some very high-value intelligence. We started by doing some statistical analysis using Trusty, which is data science against open-source packages, looking for signals that tend to indicate both vulnerabilities and health. You can expect us to continue to enrich that. I’m not ready to announce anything, but keep watching this space. We’ll start to introduce some very sophisticated and cool ways of thinking about open-source technology that complement a lot of what’s already out there in the ecosystem, and we’ll make it all well integrated into the Minder capabilities so that you can define and enforce policies based on those signals.
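The idea of combining statistical signals about a package into a single actionable number can be illustrated with a toy scorer. To be clear, Trusty's actual signals and model aren't described here; every field name and weight below is invented purely to show the shape of the approach.

```python
# Toy illustration of scoring a package from statistical signals.
# The signal names and weights are invented for illustration; they are
# not Trusty's actual model.

def health_score(pkg):
    """Combine a few hypothetical signals into a 0..1 health score."""
    score = 0.0
    score += 0.4 if pkg["has_provenance"] else 0.0       # e.g. a Sigstore attestation exists
    score += 0.3 if pkg["active_maintainers"] >= 2 else 0.0
    score += 0.2 if pkg["days_since_release"] < 365 else 0.0
    score += 0.1 if not pkg["typosquat_suspect"] else 0.0
    return round(score, 2)

suspicious = {"has_provenance": False, "active_maintainers": 1,
              "days_since_release": 900, "typosquat_suspect": True}
healthy = {"has_provenance": True, "active_maintainers": 5,
           "days_since_release": 30, "typosquat_suspect": False}
# health_score(suspicious) -> 0.0 ; health_score(healthy) -> 1.0
```

A score like this is exactly the kind of signal a policy engine could then act on, for example by blocking a merge request that introduces a dependency below some threshold.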

Tim: Awesome, looking forward to this year and beyond. A lot of people wonder about your point of view on Kubernetes and the cloud-native ecosystem. You’re firmly focused on building the security company now, but of course there’s lots of interrelated development work that takes place around Kubernetes. What’s your view on the state of that community, Craig? Is it at a maturity phase? All the big companies have their hosted and managed services. Do you still see room for more innovation for startups broadly across the cloud-native ecosystem? Are there any pain points you’re continuing to hear about? Any advice for founders who are looking to continue to build in your old world?

Opportunities for Founders

Craig McLuckie: There’s tremendous opportunity in that space. Call me crazy, but I don’t want to be that ’80s band with the one hit song, singing at corporate events for money in my 70s. I want to branch out and try some new things, and the supply chain stuff is something I’m very passionate about.

Honestly, one of the things I’ve observed is that bringing platform sensibilities to the security space introduces a very novel way of thinking. Look at the security ecosystem: why isn’t everything just running reconciliation loops, when they work so damn well in the platform world? Why is this not just a Kubernetes-esque integration platform? Why doesn’t this exist? We should just build it.

I think there’s a lot of work to be done. Obviously, generative AI is hot, and from firsthand experience of building and running the large language models that we use behind the scenes to produce some of the value that Trusty offers, there’s a lot of fragility and brittleness there. There’s an almost unfettered demand for capabilities that simply operationalize and deliver a service-like experience for some of these community-centric large language models. In the gold rush of generative AI, the operationalization pickaxes are going to sell very, very well. That’s something I would certainly be interested to see where it goes, and I would certainly be looking to consume it myself, because right now, we just find that operationalizing those models is brittle and can be a bit of a challenge.

I think there’s still unfinished business in the gap between the platform-as-a-service layer and the Kubernetes layer. The gap between the Herokus, the Google App Engines, and the Pivotal Cloud Foundries of the world and the world of Kubernetes still exists. We haven’t seen a lot of really great turnkey experiences. The work we started doing with the Tanzu portfolio was a nod in the right direction, but I definitely think there’s a wonderful opportunity to continue to explore and play with the idea of the service abstraction, basically looking at service dependency injection for modern workloads.

I’m also acutely interested in the way WebAssembly is going to shape up and represent ways to bring new atoms of computational resource into new form factors, borrowing a lot of the distributed-systems sensibilities that Kubernetes created. There’s tons of opportunity. If I hadn’t met Luke Hinds and fallen in love with supply chain security, I can think of three or four great startups that I’d be happy to do tomorrow, but I’m very happy with the one we’re doing.

Tim: We are as well. Thanks so much, Craig, great insights. We could talk about these things for days, but I really want to thank you for spending the time and sharing these insights. It’s a great pleasure and a lot of fun to work together here and do a little bit to help you and Luke and the team as you build Stacklok. So, thank you.

Craig McLuckie: Thanks, Tim, really appreciate it.

 

The Evolution of Enterprise AI: A Conversation with Arvind Jain

Madrona investor Palak Goel has the pleasure of chatting with Glean founder and CEO Arvind Jain on the evolution of enterprise AI

Listen on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today Madrona investor Palak Goel has the pleasure of chatting with Glean founder and CEO Arvind Jain. Glean provides an AI-powered work assistant that breaks down traditional data silos, making info from across a company more accessible. It should be no surprise that Glean is touted as the search engine for the enterprise because Arvind spent over a decade at Google as a distinguished engineer where he led teams in Google search. Glean has raised about $155 million since launching in 2019 and was named a 2023 IA40 winner. Palak and Arvind talk about Glean’s journey and the transformative power of enterprise AI on workflows, the challenges of building AI products, how AI should not be thought of as a product but rather as a building block to create a great product, the need for better AI models and tooling, and advice for founders in the AI space, including the importance of customer collaboration in product development, the need for entrepreneurs to be persistent – and so much more!

This transcript was automatically generated and edited for clarity.

Palak: Arvind, thank you so much for coming on and taking the time to talk about Glean.

Arvind: Thank you for having me. Excited to be here.

Palak: To kick things off, in 2023, VCs have invested over a billion dollars into generative AI companies, but I can probably count on one hand how many AI products have actually made it into production. As someone who’s been building these kinds of products for decades, why do you think that is and what should builders be doing better?

Arvind: People are doing a great job building products. I’ve seen a lot of really good ideas out there leveraging the power of large language models. Still, it takes time to build an enterprise-grade product that can be reliably used to solve specific business problems. When AI technology allowed people to create fantastic demos that could amazingly solve problems, the expectation in the market went up very quickly: here’s this magical technology, and it will solve all of our problems. That was one of the reasons we went through this phase of disappointment later on. It turns out that AI technology, while super powerful, is extremely hard to make work in your business.

One of the big things for enterprise AI to work is connecting your enterprise knowledge and data with the power of large language models. That is hard and clunky and takes effort, so I think we need more time. It’s great to see this investment in the space, because you’re going to see fundamentally new types of products get built that are going to add significant value to the enterprise. I expect we will see a lot of success in 2024.

Palak: What are some of those products or needs that you feel are extremely compelling or products you expect to see? I’m curious about the Glean platform. From that perspective, how are you enabling some of those applications to be built?

Arvind: If you think about enterprise AI applications, a common theme across all of them is that you connect your enterprise data and knowledge with the power of LLMs. Given a task that you’re trying to solve, you have to assemble the right pieces of information that live within your company and provide it to the LLMs so that the LLM can then do some work on it and solve the problem in the form of either providing you with an answer or creating some artifact that you’re looking for.

From a Glean perspective, that’s the value that we are adding. We make it easy to connect your enterprise data with the power of large language model technology. We want to take on all the work of building data ETL pipelines, figuring out how to keep this data fresh and up to date, and setting up a retrieval system where you can put that data in so that you can retrieve it at the time a user wants to do something.
We want to remove all of that technical complexity of building an AI application and instead give you a no-code platform, so you can focus more on your business or application logic and not worry about how to build a search engine or a data ETL pipeline. We will enable developers inside the enterprise to build amazing AI applications on top of our platform.

The use cases I always hear about from enterprises are, first, for developers: you already know how code copilots add a lot of value. Statistics show that about 30% of all code is written by copilots, so you’re starting to see good productivity wins from AI for software development. You can use these models for test-case generation, and there are a lot of opportunities.
But developers are not spending most of their time writing code. Most of the time, they’re figuring out how to solve a problem or designing the solution. So far, AI focuses on the 10 or 20% of a developer’s time that is spent writing code and brings some efficiencies there.

Next year, we will see many more sophisticated tools to increase productivity in that entire developer lifecycle. Similarly, for support and customer care teams, AI is starting to play a significant role in speeding up customer ticket resolution and answering tickets that customers have. These are some of the big areas in which we see a lot of traction today.

Palak: As developers move from prototyping to production, do you think it’s a lack of sophistication in the tooling around some of these models or is it the models themselves that need to get better?

Arvind: It’s both. AI models are smart, but everybody has seen how they can hallucinate. They can make things up because, fundamentally, they are probabilistic models. All the model companies, whether they build open-domain or closed-domain models, are making incredibly fast progress to make these models better, more predictable, and more accurate. At the same time, a lot of work is happening on the tooling side. For example, if I’m building an AI application, how do I ensure it’s doing a good job? When we built an application earlier this year, all the evaluation of that system was manual. Our developers were spending the majority of their time trying to evaluate whether the system was doing well because there was no way to automate that process. Now, you’re seeing a lot of development in AI evaluation systems.

Similarly, there is infrastructure around the models: making sure you’re not putting sensitive information into the models or taking sensitive output and showing it back to the user. That privacy and security filtering, that plumbing layer, is getting built in the industry right now.

There’s a lot of work on post-processing of AI responses, because when the models are unpredictable, how can you take the output of the models and apply technologies to detect when something has gone wrong? If the model hallucinates, then you suppress those responses. The entire toolkit is undergoing a lot of development. You can’t build a product in three or six months and expect it to solve real enterprise problems, which was an expectation in the market. You have to spend time on it. Our product has worked because we have been on this journey for the last five years; we didn’t start building it in November of 2022 after ChatGPT and suddenly expect it to work for our enterprises. This technology takes time to build.

Palak: As somebody who has gone on that journey from prototype to production, what advice do you have for founders that are starting similar journeys and are looking to be part of these conversations with big enterprise customers?

Arvind: A generic piece of advice for folks doing this for the first time is that building a product is always hard. There are a lot of times when you’ll feel that, oh, maybe this is a bad idea, and I should not pursue it, or it’s too difficult, or there’s a lot of other people doing it, and they may do it better than me. I constantly remind people that a startup journey is hard, and you’ll keep having these thoughts and just have to persist. The thing that you have to remember is that it’s hard for anybody to go and build a great product. It takes a lot of effort and time, and you also have that time. If you persist, you’re going to do a great job. That’s my number one advice to any startup or founder out there.

The second piece of advice concerning AI is if you start to think of AI as your product, then you will fail. We don’t see AI as fundamentally any different from other technologies we use. For example, we use infrastructure technologies from cloud providers, and that’s a big enabler for us. I have built products in the pre-cloud era, and I know how the cloud has fundamentally changed the quality of the products we built and how easy it is to build scalable products.
AI should be no more than one of the building blocks you use, and you still have to innovate outside of the models. You have to innovate to make workflows better for people, to do something beyond, “hey, AI can do something better, and therefore, I’m going to build a product.”

Palak: Yeah, I think that makes a lot of sense and I love the customer centricity there, really figuring out what their needs are and building a product to best serve those needs rather than taking more of a technology-first approach and taking a hammer and sort of looking for nails.

To keep on this AI trend a little bit more, I think every Fortune 500 CEO this year was asked by their board, what is your AI strategy? And we’ve seen companies spin up AI skunkworks projects and evaluate a lot of early offerings. Naturally, a lot of founders and builders want to be a part of those conversations. I’m curious how you approach that at Glean and if you have more advice for founders looking to be a part of those conversations.

Arvind:
In that sense, 2023 has been an excellent year for enterprise AI startups because you have this incredible push that you’re getting from the boardrooms where CIOs and leaders of these enterprises are motivated to experiment with AI technologies and see what they can do for their businesses. We have found it very helpful because it allows us to bring out technology. There’s more acceptance and urgency for somebody to try Glean and see what it does for them.
I’ve heard from enterprises that many of these experiments haven’t worked out. A lot of POCs have failed, and so my advice to founders is to have an honest conversation with your customers, with the leaders you’re trying to sell the product to. If you create an amazing demo, which is easy to create, and sell something you don’t have, you lose the opportunity and the credibility. It’s hard to bounce back from that. Enterprise leaders understand that, hey, this technology is new, it’s going to take time to mature, and they’re just looking for partners to work with who they feel have integrity and the ability to be on this journey with them and build products over time. That’s my advice to folks: be honest, share the vision, share the roadmap, and show some promise right now, and that’s enough. You don’t need to over-promise and under-deliver.

Palak: That will become the conversation in 2024: what’s the ROI on our AI strategy? From the enterprise leader’s perspective, who had mixed results trying out a POC, do they double down and stick with the effort? How do you think about that, and how are you seeing it from the Glean perspective?

Arvind: AI is a wave that you have to ride. For example, when cloud technology was new, there was a lot of skepticism about it: “I’m not trusting my data to a data center that I don’t even have a key or a lock to; I’m not going to do it.” Some companies were early adopters, and some companies adopted it late. Overall, the earlier you adopt these new technologies, the better you do as a company.

The AI technologies are now bigger in terms of the transformative impact they will have on enterprises or the industry as a whole than even the cloud. It’s an undeniable trend, and the technology is powerful. You have to invest in it, and you have to, as an enterprise leader, be willing to experiment. You don’t have to commit to spending a lot of money, but you have to see what’s out there, and put in that effort to embrace and adopt this technology.

Palak: Absolutely. I’d love to double-click on AI being a bigger opportunity than the cloud. Where do you think that opportunity lies, and what are some of these amazing experiences beyond our imaginations that you think will result from this new wave of technology?

Arvind: A decade ago, overall technology spend was about $350 billion. It’s probably double that now. The cloud alone is worth $400 billion, which is more than all the tech spending that used to happen 10 or 15 years ago. AI impacts more than software; it impacts how services are delivered across industries. For example, think about the legal industry and the kind of services you get from it: what impact can AI have on those services? How can it make those services better? How can it make them more automated?

If you start to think about the overall scope of it, it feels much larger. It will fundamentally change how we work, and that’s our vision at Glean. The way we look at it today, take any enterprise: the executives get the luxury of having a personal assistant who helps do half of their work. I have that luxury, too, where I get to tell somebody to do work for me. They look at my calendar, they look at my email, they help me process all of it. I feel like it’s unfair that I have that help but nobody else in our company does. But AI is going to change all of that.
Five years from now, each one of us, regardless of what role we play in the company, how early we are in our career, we’re going to have a really smart personal assistant that’s going to help us do most of our work, most of the work that we do manually today. That’s our vision with Glean, that’s what we’re building with the Glean assistant.
Imagine a person in your company who’s been there from day one and has read every single document that has ever been written inside this company. They’ve been part of every conversation and meeting, and they remember everything, and then they’re ready for you 24/7. Whenever you have a question, they can answer using all of their company knowledge, and that’s the capability that computers have now. Of course, computers are always good at processing large amounts of information, but now they can also understand and synthesize it.

The impact of AI is going to be a lot more than what all of us are envisioning right now. We tend to overestimate the impact in the next year, but we underestimate the impact that the technology will have in the next 10 or 20 years.

Palak: You know, Arvind, I remember when you were starting Glean, because I was working on the enterprise search product at Microsoft, and you were cold messaging people on LinkedIn to try out Glean. One of the people you happened to cold message was my dad, who also went to IIT. It’s a funny story.

Arvind: If I’m trying to solve a problem, I want to talk to the product’s users as much as possible. For example, at Glean, I was SDR No. 1. I spoke to hundreds and hundreds of people, whoever I could find, whoever had 10 minutes to spare for me, and asked them, hey, I’m trying to build something like this, a search product for companies. Does it make sense? Is it going to be useful to you?

The reason it’s so powerful to do that exercise yourself, and to not stop after you hire a couple of people in sales but keep going, is that it generates immense conviction in your mission. We talked earlier about how the journey is hard and how you start to question yourself on a bad day. But if you have done that research and talked to lots and lots of users, you can always go back to it and remember, hey, I’ve talked to many people. This is the product that they want. This is a problem that needs to be solved. That’s what I always find very helpful.

Palak: Arvind, you’ve been in the technology industry for a very long time and have been a part of nearly every wave of technical disruption. What have you learned from each of these waves and how have you applied those learnings to AI?

Arvind: With each of the big technology advances that have happened over the last three decades, we’ve seen how they fundamentally create opportunities for new companies to come in and bring products that are better than what was built before, when that technology was not available to anybody. That’s one thing I’ve always kept in mind. Whenever a big new technology wave comes, that’s the opportunity for any company. Whether you are starting a new company or you are an existing startup, you have to figure out how that wave is going to change things and give you opportunities to build much better products than what was possible before, and then go and work on it. My approach has always been to see these big technology advances as opportunities as opposed to thinking of them as disruptions.

Palak: I’d love to get a sense of your personal journey as you’ve gone through each of those different waves. What are some of the products that you’ve innovated on and what are you so excited about building with Glean?

Arvind: I remember the first wave; I was in my second job, and we were starting to see the impact of the Internet on the tech industry and the business sector. I got to work on building video on the web. It was incredible to allow people to watch video content directly on their machines over the internet. We started with videos that were tiny because there was no bandwidth available on the internet for us to provide a full-screen experience. Regardless, it was still fundamental; there was no concept before of, hey, I can watch a video whenever I want.

Then, the next big thing we saw was mobile, with the advent of smartphones and the iPhone, and it fundamentally changed things again. At that time, I was working at Google, and it changed our ability, for example, to personalize Google search results and maps for our users at a level that was not possible before, because now we knew where they were and what they were doing. Were they traveling, or staying in one place? Were they in a restaurant? You could use that context to make products so much better.

We’re in the middle of this AI trend now, and our product is a search product for companies. AI technology, especially large language models, has given us the opportunity to make our product 10 times better. I think back to when somebody asked Glean a question: we could point them to the most relevant documents inside the company that they could go read to get the answer to whatever questions they had. But now we can use LLM technology to take those documents, understand them using the power of LLMs, and quickly provide AI-generated answers to our users so they can save a lot more time.
It’s been really exciting to have that opportunity to use these big technology advances and quickly incorporate them in our products.

Palak: Yeah, I think that’s one thing that’s always really impressed me about Glean. As far back as 2019 or even before ChatGPT, Glean was probably everybody’s favorite AI product. I’m curious, how has Glean’s technical roadmap evolved alongside this rapid change of innovation over the last 12 months?

Arvind: We started in 2019, and in fact, we were using large language models in our product in 2019. Large language model technology was, in some ways, invented at Google, and the original purpose of working on it was to make search better. So when we started, we had really good open-domain LLMs that we could use, retrain on the enterprise corpora of our customers, and make work for them to create a really good semantic search experience. But those models were not good enough to put directly in front of users. That all changed in the last year, when the models finally became good enough that you can take their output and put it right in front of users.
This has allowed us to completely evolve our product. What used to feel like a really good search engine, something like Google inside your company, has become a lot more conversational. It’s become an assistant that you can go talk to and give complicated instructions, because we can follow complicated instructions using the power of large language model technology and then solve the problem the user wants to solve. We can parse and comprehend knowledge at a level that we never could before, and we can go beyond answering questions to actually doing work for you.

One thing we realized is that AI technology is so powerful, and there are so many internal business processes it can make much more efficient and better, that we’re not going to be able, as a company, to go and solve all of those problems ourselves. Our job has now become: how can we use all of the company knowledge we have at our disposal and give tools to our customers so that they can bring AI into every single business workflow and generate efficiencies that have never been seen before?

Palak: I think that’s one of the things that, from an outsider’s perspective, has made Glean such a great company: just how big the vision is and how you start with the customer and work backward. I’m curious how you internalize some of those philosophies within the company, and how the product is evolving toward this bigger, broader platform vision that you just alluded to.

Arvind: In an enterprise business, it’s very important for us to spend a lot of time with prospective customers, understanding the different use cases they’re trying to solve as business problems. All businesses are very different in some ways, and a big part of building products that can help a wide range of customers is spending time at the forefront, working with our customers, really understanding their scenarios and their data, and then extracting common patterns and common needs to drive our product roadmap.

Our product team spends a lot of time on this process. Every quarter, we look at the top things we are hearing from our customers, and, of course, we have our own vision of where the world is headed with all these new AI technologies. We combine those two things to come up with our quarterly roadmap and then execute on it.

The reason we started working on exposing our underlying technologies as a platform that businesses can build applications on was exactly that. As we talked to all these enterprises, one would tell us, hey, I want to bring efficiency to my order workflow process, and this is how it works. And somebody else would tell us, hey, I’m getting a lot of requests to my HR service team, and I want to help people build a self-serve bot for all the employees in the company whenever they have HR questions. As we listened to all of them, we realized that we can’t possibly anticipate all the things they want to go and solve.
Our job then became: can we give them the building blocks that make it easy to take this platform and build the value they were looking for? They do a little bit of work on top of the product and platform that we provide to solve those specific business use cases.

We started building our AI platform for that reason, because AI is so broadly applicable across so many different businesses, so many different use cases, and it became very clear to us that we need that collaboration from our customers to really get full value.

Palak: Awesome, Arvind. So I have a few rapid fire questions as we wrap up. The first is, aside from your own, what intelligent application are you most excited about and why?

Arvind: GitHub Copilot is one of those applications. I see how our developers are able to use it, and there is a clear signal from them that this is a tool that’s truly improving their productivity.

Palak: Beyond AI, what trends, technical or non-technical are you most excited about?

Arvind: I’m really excited about how the nature of work is evolving rapidly: distributed work, the ability for people to work from wherever they are, and the technologies that are helping us become more and more effective working from our homes. That’s the thing I’m really excited about, and we’re going to see a lot more of it. Hopefully we’re going to see some things with telepresence that make working from anywhere the same as working from the office.

Palak: Yeah, that’s a good one. Arvind, thank you so much for taking the time. It was really a pleasure to have you on.

Arvind: Thank you so much for having me.

Coral: Thank you for listening to this week’s IA40 spotlight episode of Founded & Funded. We’d love to have you rate and review us wherever you listen to your podcasts. If you’re interested in learning more about Glean, visit www.Glean.com. If you’re interested in learning more about the IA40, visit www.ia40.com. Thanks again for listening and tune in a couple of weeks for our next episode of Founded & Funded.

 

How To Not End Up In A Board Governance Situation Like OpenAI

Managing Director S. Somasegar and General Counsel Joanna Black discuss startup board governance and the role of a board in a startup's growth.

In this episode of Founded & Funded, Madrona Managing Director S. Somasegar and General Counsel Joanna Black discuss the fundamental role of a board in a startup’s growth and development with Madrona Digital Editor Coral Garnick Ducken. They touch upon the importance of aligning strategic views with board members, managing disagreements, and effective board governance to ensure that the organization is run efficiently. The duo offers numerous insights into the intricacies of board structure at different stages of a startup lifecycle, drawing parallels from recent events at OpenAI. The conversation covers the need for transparency, both in sharing good and bad news, and the necessity for a functional board reflecting a functional culture.

This transcript was automatically generated and edited for clarity.

Coral: What we’re trying to do here is set the stage before we dive into all the advice and startup board governance and structure and everything that we’re going to talk about here. So, to kick it off for us, Joanna, can you give us the quick and dirty with the problems surrounding the board structure that we saw at OpenAI not that long ago?

Differences between Nonprofit and For-profit Boards

Joanna: I think what’s interesting about OpenAI is that it is really a nonprofit. And the nonprofit owns a for-profit subsidiary. That for-profit subsidiary is fully controlled by the OpenAI nonprofit. Most people don’t realize that a nonprofit board is very different from a for-profit board. We all understand that a for-profit board tries to maximize shareholder interests. However, in a nonprofit board, they are guided by a stated mission. So that’s a very different output. The nonprofit board does have the same duty of care and duty of loyalty, but they have this separate duty that’s not found in a for-profit organization, which is a duty of obedience. They have an obedience to the laws and the purposes of the organization.

Coral: Before we unpack a lot of that and then apply it to startups and founders more broadly, why don’t we just define the term governance a little bit? I know that in a lot of the media reports and other things that we’ve all been hearing and reading, it’s gotten thrown around a lot, and I don’t know that people fully understand what that means. Okay, governance, yeah, okay, we all get it, but I don’t think a lot of people do.

Defining Governance and its Importance

Joanna: Governance, from a legal standpoint, really refers to how an organization is managed or governed. So, by who and how. For most entities, governance starts with their governing documents. These are the foundational rules for the entity. They’re typically a charter and the bylaws. The charter is where you will find the rules by which everyone must follow when it comes to governing the company. Now, the charter is typically what sets up a board of directors or whoever is going to manage and govern that organization. They have the ultimate management authority over the organization. And then, the authority of the leadership team, the CEO specifically, derives from the authority of the board.

Coral: So then, Soma, if we pivot to you a little bit, you’re obviously the guy with all of the board experience here. I’ve heard you say before that, basically, a board is half company building and half governance, especially in the early stages. So why don’t you break down for us? What’s the purpose of the board and some of the experiences you’ve had in early-stage board building?

The Role of a Board in Early-Stage Companies

Soma: In general, I think a board is all about how you provide the right level of checks and balances to ensure that an organization is being run or managed in the most appropriate way. You follow the laws. You ensure that the right things are happening. You also keep in mind the fiduciary responsibility that the board has in terms of putting the shareholder’s interests first. As much as I say that, you know, governance is important right from day one. Usually, what happens is in a very early-stage company, the responsibility of the board is twofold.

One: Focus on ensuring that the right startup board governance is in place. But it’s also being a trusted partner to the CEO and the founding team and helping with what I call the company building — whether it’s building the team. Whether it’s helping with product building, product strategy, go-to-market plans or go-to-market strategy.

There are a bunch of things that need to happen in early-stage companies, where everybody is learning as they go along, and it’s the board’s responsibility to be a trusted adviser and a trusted partner to the founding team and to the CEO. In the early days, probably more time is spent on company building. As the company and the business scale, more of the governance comes into play. By the time a company reaches a level of scale and becomes a public company, the board is predominantly about governance and about what I call strategic alignment.

Coral: You made a good point there. Startups are growing — it is basically their purpose. And the board needs to grow along with it, while making sure all of the missions and everything are aligning, which obviously is where we got into a little bit of trouble with the OpenAI stuff.

So why don’t we talk through the board at those different stages? When you bring in independent board members and that sort of thing, and I think you can, both of you can tackle that a little bit. Joanna, why don’t you start by setting that stage and tell us how things evolve as more funding comes in.

Board Evolution with Company Growth

Joanna: I do think, exactly what Soma is saying, although the ultimate role of the board is oversight, that oversight changes over time — just like when you’re a parent. Your parenting duty over a toddler is very different than your parenting duty over, say, a teenager or even an adult child.

I think very much that, at the end of the day, the board’s role is oversight. They oversee strategy. They monitor risks. So, when you launch a startup, you might have a sole director on the board. When you don’t even have any investors, you literally have a board of maybe one, maybe two, maybe both of the founders, if there are two founders. At this stage, it’s literally the founders’ company, and the board directors and officers are often exactly the same people.

Once you raise some money, the board is often comprised of three people. One is the CEO. One would be what we’d call the preferred director, which is usually the representative from the lead investor at that Seed stage. And then the third really depends. It could be an independent member. It could be another founder. It could be another investor representative. Something to note here, Coral, and I think most people know this, but since decisions are made on a majority basis, for the most part, it is good practice to have an odd number of board members. Having five members at this stage is a bit unwieldy, so we usually don’t go to five. As a company raises more money and adds more investors, those investors may ask for a board seat. We might want to change the composition as the company grows and tasks change. And so, the number of board members might increase, but we usually keep it odd-numbered and balanced.

Coral: And Soma, what should founders and CEOs keep a lookout for? Any sort of markers or moments that, okay, maybe it’s time to reevaluate what the board looks like. Do we need independent directors? Are there moments that you look for specifically when it might be time for some of that?

Soma: I think it’s twofold, Coral. It’s a balance, right? You don’t want a nine-member board for a company that has two employees. You also don’t want to have a thousand people in a company and like two members on the board.

But having said that, they should focus initially on getting people on the board that are really, really aligned with you — both from a strategic perspective and from a creating value perspective. The reason the firm that leads your Seed round of financing wants a board seat is — they are putting their money where their mouth is, so to speak. And, they are responsible. They have fiduciary responsibilities to their LPs. And they want to make sure that they have the right level of oversight, governance, and visibility to ensure that all the right things happen.

So every founder has to think about which venture capital firm they’re taking money from and which partner is going to be on their board. And is that partner somebody that they’re excited to work with? Because there is a meeting of the minds, so to speak.

The closest story I can tell you, Coral, is the following. If you are sitting in Seattle, let’s say. And, let’s say you’re the board member, and you have a CEO who wants to take a road trip to San Diego. Both of you need to be aligned that you want to go from Seattle to San Diego. Then you can argue over whether to take I-5 South. Should I take 101? Should I take some other road? But let’s say the CEO wants to go to San Diego, and the board member wants to go on I-90 East. Then you’ve got a problem.

Coral: So, how do you navigate that? Obviously, you do your best to pick people who are aligned with you and who are the best partners. But at some point, if you get into a place where you’re not aligned anymore, like we saw with OpenAI, what’s the actual route that should be taken to handle that sort of conflict?

Choosing the Right Board Members

Soma: That’s why it is important for you to know who you’re going to be working with, and to know, hey, this is a person I’m ready to work with, because it’s not like you’re always going to be in agreement. There are going to be disagreements. There are going to be what I call debates. There are going to be some ferocious arguments along the way. But as long as you know that you are aligned on doing the right things, even when people have different perspectives and different points of view, then I think you can work it out. Sometimes a board and the CEO might just decide, “This is one thing where we disagree.” But they have enough trust in the other person that they are going to disagree and commit.

That, I think, is a good way to solve things sometimes because it’s not like everybody’s going to say yes to everything all the time. That’s not what a board is all about. And that’s not what a CEO should be expecting. And if a board expects a CEO to behave that way, then probably they’re not the right CEO in the first place anyway, right? So having arguments and having differing points of view is okay as long as you can work through them and get to a common place of understanding about how you’re going to move forward. But if you realize that the relationship is so strained for whatever reason and you can’t navigate through it, then some change needs to happen.

Coral: Is there a process by which that change has to happen whether in terms of what the rules are that have been put in place, Joanna, or otherwise?

Navigating Disagreements and Conflicts within the Board

Joanna: There are usually rules in the charter and the bylaws about how you would remove a board member, add a board member, or switch out board members. But, again, all of those rules, like almost all legal documents, are there just as a fail-safe for when you have to figure things out. Ultimately, it’s about the relationship, like Soma is saying. You have to work things through. A very functional board, I think, reflects a very functional culture. And the startup does best when there’s a functional board that knows how to work with its management. Ultimately, it involves both sides figuring that out.

Where I have seen it not work out great in my past is when the CEO or the management don’t want to bring things to the board. They don’t think the board will understand. They don’t think the board will consider it. And I think oftentimes that can cause some issues when the board does find out. It’s really helpful for the founders and the board to have a good trusting relationship, and vice versa. The board having honest, open conversations with the leadership and the management goes a long way toward good startup board governance, and toward avoiding sudden decisions that come out of nowhere and surprise either the leadership or the board.

The Importance of Transparency and Communication in Startup Board Governance

Soma: The thing I would add to that is that it is a two-way street. As much as you could argue that the CEO reports into the board, the CEO and the board should really think of themselves as partners. And partnership works when both sides operate with the same level of what I call integrity, transparency, communication, and willingness to work together. The other thing is, as a CEO, sometimes you have good news to share, sometimes bad news, sometimes lousy news. Rather than try to be super nuanced about it, ensure that communication happens as close to real time as possible, whether the news is good or bad. Because if you want to build a trusted partnership, it’s really important that you communicate as and when something happens that you think the board needs to know. And the things you share with the board should have a different velocity for good news versus bad news. In some sense, I would say the velocity for bad news should be higher than the velocity for good news.

Coral: Never wait to tell any news, just keep it open and transparent, all lines of communication always.

Well, I think that this will be really helpful for people, you know, thinking about starting a company or those sitting in companies right now. And I thank you both so much for joining me today.

Joanna: Thank you.

Soma: Absolutely, Coral. Thanks for doing this. And I think this year has been particularly interesting with what happened with FTX and what happened with OpenAI, and all these conversations about, hey, how do we ensure that the right startup board governance is in place before we realize it’s too late. So this is a good reminder for every founder and every CEO, no matter whether they are at day one, a 1-year-old company, or looking to go public tomorrow: governance and oversight is really, really important. Have the right level of energy, focus, and attention on that. Don’t go overboard, and don’t underinvest in it.

Coral: Perfect. Thank you guys both so much.

Karan Mehandru and Anna Baird on Navigating Sales, Growth, and Leadership

They dive into the evolution of the CRO role since COVID, building high-performance sales teams, go-to-market strategy, and so much more.

This week, Madrona Managing Director Karan Mehandru hosts Madrona Operating Partner Anna Baird. The operating partner role is a new one for Madrona, and we’re excited to have her on the team to advise our founders. Before transitioning to board positions in the last couple of years, Anna was most recently the CRO at Outreach. Before that, she was the COO at Outreach, and before that, she was the CFO at several companies. That trifecta of C-Suite experience gives Anna a unique perspective to help founders navigate any blind spots they may have. Anna and Karan, who is on the board of Outreach, dive into how the sales profession has changed, how the role of the CRO has changed since COVID, how to talk to customers, how to build high-performance sales teams, go-to-market strategy, AI’s role in sales, and so much more. These two tackle it all, and it’s a must-listen.

TL;DR

  • Maintain focused consistency: When running a business, especially in the sales domain, avoid changing strategies too frequently. Find a core focus and maintain it for sustainable growth.
  • Adapt and stay close to customers: The market might change but always understand the pain points your product solves. Continuously evolve by staying close to the customers and their changing needs.
  • Balance growth and profitability: Understand the balance between growth and profitability. It’s crucial to stay focused, not just on expanding but also on maintaining a profitable approach.
  • Hire for traits, not just experience: When hiring for sales teams, seek traits like intellectual curiosity, a desire to win, and collaborative skills. These traits often matter more than specific experiences when navigating new or challenging markets.

This transcript was automatically generated and edited for clarity.

Karan: Hello everyone. My name is Karan Mehandru. I’m one of the managing directors here at Madrona out of the California office. It is my pleasure and privilege to welcome Anna Baird to this podcast, and Anna and I go back a long way. I’ve had the pleasure of calling her my friend for almost a decade, and I’ve had the privilege of working with her for almost five years out of the 10 at Outreach. So, super excited to have you with us, Anna. Thank you so much for making time.

Anna: Hey, Karan. Super glad to be here and part of the Madrona family.

Karan: Awesome. We’re excited to have you. We have a lot of great topics to cover with you. Maybe we start with how your journey and your career took off and how did you get into tech in the first place?

Anna: I’ve been in tech since I started. I was an accounting major in college and got into one of the big accounting firms, KPMG, but I wanted to do tech. I wanted to do the accounting side and help tech companies. So, I moved to Silicon Valley, San Jose, at the time and was with KPMG for 17-and-a-half years.

I started by helping startups go public. I worked with Google pre-IPO, and then Intuit, and a bunch of others for the next seven years. There’s probably not a boardroom in Silicon Valley I haven’t sat in at one point or another after 17-and-a-half years. I moved to the consulting side partway through that and really loved helping set those foundations to operate effectively.

I left and became a senior vice president at McAfee, running finance, governance, and risk. When they were bought by Intel, I decided to become a CFO. I like to do a lot of different things. I became a CFO and then a COO and then a CRO. Those are all the C’s I’m covering. No more. I’m done. That was it.

Karan: Great. Not many people we encounter have held the CFO, COO, and CRO roles, which makes you unique in and of itself. Tell me why you transitioned from being a CFO to wanting to be a CRO. Did you just wake up one day and say, well, I’m done counting the numbers, and now I’m going to start driving the numbers? How did that mental switch come about in your head?

Getting into tech from accounting

Anna: Well, it’s funny. People don’t realize this, but as a partner at KPMG, I had a $6 million quota a year, so I was already selling.

Karan: We’re all salespeople.

Anna: That is so true. As a CFO, you’re trying to get investors. You’re selling the company. I loved that aspect. I loved customers. I loved understanding what the customer pain was. My favorite thing, even when I was in accounting, was what is product building and how are they solving the customer pain?

I loved understanding those things. I think it’s what made me a good consultant and an accountant at that time. I think it’s also what made me a good CFO, but even as a CFO, I kept ending up taking on other things because I loved the operational side. Everybody’s like, oh, Anna, take this, and Anna, take that. Then I said, you know what? I’m going to do the COO side. I have more flexibility in the breadth.

Then, of course, in 2019, you, Manny, and others asked if I would take on the CRO role. When you are building a company, and it’s all hands on deck, and you’re growing like crazy, and there’s a change that you need to make sure stabilizes, it’s the right thing to do, right? Sometimes, there are things that are just the right thing to do for the business.

One thing I’ve been pretty good at in my career is always putting the business first. I was helping a lot with sales anyway as a COO, so it wasn’t like some crazy transition, but I loved the team, and I knew we needed to stabilize and make some different changes, so I took that on from 2019 until 2022.

Karan: We’re so glad you did, because of what you did for Outreach. I remember, when you came in, we were still under $20 million, and you took it all the way to $250 million, so it was a wonderful journey. I’d love to ask you about that, but just to clarify for listeners, the decision for us as the board was simple because we just got you to ask for the budget, approve the budget, and then drive the numbers, so we didn’t have to do anything. So that was awesome.

Anna: I was the COO and the CRO for too long. It took us a while to find a CFO. That was the easiest job I ever had. I was like, is this a good idea? Yes, it is. Let’s go do that.

Karan: That’s awesome. Let’s talk about how the sales profession has changed over the last decade. I mean, there’s so much that has happened, new tools in tech. There are so many changes in the actual way things are sold, whether it’s product-led growth or sales-led growth. Then, things changed even more when we went through COVID.

I’d love to talk to you about your experience in this profession. You’ve been around CROs, you’ve been a CRO, you’ve been a CFO, you’ve been a COO. So, from your purview, what are some of the biggest changes that you’ve seen play out in the market in the sales profession, and then how did that get either accelerated or exacerbated when COVID hit?

Changes in the sales profession since Covid

Anna: It’s been a fascinating journey, especially these last few years, right? Everybody says, oh, how did you lead through a pandemic? Well, no one’s ever done it, so there’s no playbook. We were all figuring it out as we went and trying to make the best decisions.

I think one of the things that changed, and I’ll start at the top, is the CRO role in general. It expanded significantly in terms of what you need to understand and the skill set you need to have. I always explain that the CRO became part CHRO from a human resources side, part CIO, and part CFO.

CHRO, to deal with mental health and remote work: how did you make sure that you kept your team focused and effective, and addressed issues quickly, so that everybody was healthy and in a good place? CIO, to understand the technology available to you to work with remote employees: what tech was available, and how did you think about utilizing that tech for your go-to-market strategy?

Then CFO, for the data. Data is so key when you have the technology. You have to understand what data you need to be able to run the business, how you are viewing it, and what the rhythm of your reporting and operating is, so that you could make sure you were making data-centric decisions, because you couldn’t see everything anymore.

That changed for CROs, obviously, but it also changed for the teams in their understanding of customer engagement, what was happening, and how customers wanted to be educated. When you think about the world being so remote when we first hit COVID, and still quite a bit remote today, there’s so much self-education happening with customers.

We’ve talked about how the number of touchpoints you need to get a deal approved went from 10 to 17 to 21. You know what I mean? It was insane. They’re all working in different locations, and they’re not sitting in some conference room talking about your deal, your product every day. So, how do you educate that group? How do you create central locations where they can access business cases and videos on the product and those sorts of things?

If they’re already a customer, how do you do some product-led growth with showing them what’s available? Those things became way more critical than they used to be. For the account executives and the go-to-market teams, people didn’t want to wait to have three calls to get their question answered anymore either. They didn’t have the time. They were back-to-back with scheduled meetings because that’s how we’ve had to operate and still do.

So, they can’t sit there and go, yeah, let me get back to you. Let me get back to you on that question. Let me come back. We used to do things like that, I think, with go-to-market strategy, and it was okay. It was accepted as part of the ecosystem, and it’s not anymore. How can you educate a customer before they show up on a call?

Can you send a video to give them some visibility into the product? You show that you understand their role and who they are: “Here are some things we hear from leaders like you. We’re experts in the industry. How could we help you, and how can we make this next call the most effective? Here’s a video of what our product does.” Those kinds of things are game-changing now.

Karan: It’s interesting. As I listen to you, it makes total sense, and it sounds coherent and logical. That said, put it in the context of companies that are starting out and the founders and their experiences. So many founders that we work with these days are product founders or tech founders, and a lot of times go-to-market isn’t a skill or experience that comes naturally to them.

A lot of times, they don’t have any go-to-market people around them, so help us understand, when you advise founders now in this role as an operating partner at Madrona and being on boards of companies and you’re looking at early-stage founders that just got their product out, how do you advise them to interact with their go-to-market teams?

How do you think about advising them about go-to-market strategy? At what point should they start thinking about it? If you build it, they will come, which is usually the model that a lot of them operate on. So, help us understand what do you advise them when you’re looking at a new founder or a founder of an early-stage company that’s starting to think about go-to-market strategy.

Go-to-market strategy

Anna: I think there’s a couple of things. Getting your go-to-market organization, getting them to have a foundation, a process that is consistent across the teams, you have to build it, right? You got to build it from that, again, that foundation up. Part of it is, you’ve got to make sure you let them focus. Don’t change strategy every quarter. You cannot have new messaging, new strategy. You need to make sure you’re getting the message out to the market and watching how the market responds, and that can be pretty fast, but don’t change strategy every quarter. I think, especially when you are a product founder, you can change product sometimes quickly, and nobody knows. They don’t see what’s happened in the background.

When you’re changing your messaging all the time, you definitely see it, right? Each of those go-to-market people is the best marketer you have. They’re the voice on the street every day, talking about who you are, what you stand for, and what pain you solve.

I think that is so critical because, especially as product founders, you get so engaged in the engineering and the tech, and you want to talk about the features that you built and the functionality that you have, and your customers don’t care about that. They care about the pain you solve for them, the problem that you make less painful. So, how do you make sure that you are creating the language and keeping that consistent?

Here’s why we exist as a company, here’s the pain we solve for you, and here’s why we know your pain. We’re totally empathetic because we see other leaders like you all the time because that’s who we talk to every day, and we know that these are the types of issues that you run into. Then you get that whole empathy, I’m in your shoes, you’re selling with them, not to them, because they’re trying to solve a problem in their organization, not buy a feature or functionality.

Karan: Love it. I love that framework — why we exist, what we do, and how we do it — which is a great framework for all of us to remember, and more so, I guess, in a time when there’s massive change, just to remind everybody to rinse and repeat that message. You used the word focus, and I want to come back to that.

Obviously, scarcity does breed focus, but then companies grow up, and then they raise a lot of money, then they become multi-product, and then they hire a lot of people, then they have multiple locations, and then they have the tyranny of choices in front of them.

We went from scarcity to abundance in the market, and now we’re going back to a little bit of scarcity because the market is correcting in front of us. One of the things I’d love to understand from you is, you manage the sales team through scarcity, then to abundance, then back into scarcity.

You’ve been a CRO, and you’ve been a COO, and you’ve been a CFO, so you understand the difference between growth and profitability more than most CROs who are just thinking about growth. It would be great to hear some of your anecdotes or lessons learned as you manage that massive growth engine within Outreach and Livongo and all these companies. How do you balance that growth and profitability? How do you breed focus when there’s so many choices in front of you?

Balancing growth and profitability

Anna: We say it all the time: it’s what you say no to, right? You do have to say no to things. You can’t have 20 key priorities for the business, for a go-to-market team or for engineering. There have to be three things that we are trying to accomplish in the next 12 to 18 months, whatever your timeframe is, and you have to be maniacal about that focus.

That’s where it’s not changing every quarter because it takes time to build it. If you’re looking at building competitive moats and you’re looking at solving that customer pain, whether the market has an abundance or in scarcity, you control what you can control, which is why do you exist? It’s back to that again, right? Why do you exist as a company? Because you solved something, and don’t ever forget that.

Even when it’s abundant, you must stay focused on solving a pain. How do we solve it even better? How do we make it more effective? How do we make it faster? How do we solve the next piece of the pain, and building that roadmap of understanding? We talk about our product roadmaps and really needing something that is 12, 24, and 36 months out of what would we do next. If we said this is the customer’s pain, and here’s what we’re seeing, what would we take next?

Always stay close to the customers, always stay close to understanding what’s happening in the market, and make sure you are controlling what you control, which is what your company does, and what you build, what you put in the marketplace. That is always a recipe for success. What happens is people get scared, and I get it. When you watch your child suffer, it is hard to be objective.

You must also surround yourself with leaders who will help you. It’s too personal. So, when you have leaders around you who can help you go, let’s all breathe for a minute together and let’s talk this through. What will be the most impactful thing for our business and our team?

Always remember the most impactful thing for your customers is going to be the right strategy to go after, and obviously, based on the tech that you built and where you have the assets to take those major next steps.

Karan: That’s great advice. I want to talk about culture and, as part of that, hiring in particular. You mentioned one of the first things you can do is build leaders around you who embody the cultural values that you and the company espouse.

When you think about building a high-performance sales team and when you think about hiring your next-generation leaders, maybe even the first rep or the first five reps, what are some of the organizing principles or traits that you look for when you’re lighting up a sales team?

Building a high-performance sales team

Anna: That’s a great question. There’s something that a lot of people don’t do. Everybody’s like, I just hired a salesperson, and they were great, and they came from this other company and they did great there, so they’re going to do great here. Anybody who’s been in this long enough, you know that’s not true.

I still remember when I first came to Outreach. We hired some of the top salespeople from companies that had been crazy successful, but they’d never introduced a new category. They’d never done an educated sell because the company had a great product that people came to them, or they just needed to talk about it. When you’re trying to educate for something that never existed before, that is a whole different selling strategy.

So, it’s about what core attributes you’re looking for in your salespeople, and understanding the business you have and what they need to do in the marketplace is the first step. What are the traits that you need? I’ll say three traits that are always critical when you think about the skill sets you’re hiring for.

The first is intellectual curiosity. You want them to understand the customer’s pain. That’s critical. Understand the customer’s strategy. How do we align our product so that we’ll win every time? The second is a desire to win, which comes from a lot of different places: overcoming adversity in their background, for instance. Big families are a good one. They’re one of seven children. They had to compete to stay ahead.

Not just athletics, but competitive arenas in general, like chess, and lots of different areas where people are like, I really like winning. That adrenaline is that desire to overcome and to get to that outcome. Then you also need somebody who knows how to quarterback but is an incredible collaborator because one of the things they know is how to bring the power of a team to a customer, not just themselves.

If they’re all about being the hero and winning every time on their own, that works okay in some of your early days sometimes. That is not a strategy for long-term success. When you bring the power of the team, you win faster; you win more efficiently; you win bigger.

So, somebody who knows how to get out of their own way and they don’t have to be the star. They know how to take a step back, bring the right people in, and prepare them for a great customer conversation. I think those three.

Karan: I remember you telling us all at one of our sales offsites at Outreach, when you got up on stage and said, if you want to go fast, go alone; if you want to go far, go together. That’s an African proverb you reminded everybody of, and I still remember it.

Anna: Yeah, and it’s totally true. It, just like history, repeats itself over and over and over again. We see that. You forget sometimes. You have to remember the basics.

Karan: So, Anna, now let’s fast-forward to today. You are our first operating partner at Madrona. You’ve collected a wealth of experience, lived every role in a company from early to late, and worked in different industries, such as sales technology and healthcare. I would love to hear your reasoning for why you joined Madrona and why you’re an operating partner here today. And then secondly, to the next Anna who is just graduating from college or getting out of KPMG and aspires to be a CEO, CRO, COO, what have you: what would you tell the 22- or 23-year-old Anna that’s graduating today?

Why Madrona

Anna: Oh, great questions. I joined Madrona because I’m a big believer in this, and it fit this stage of my career. I think it’s important, and you see this a lot at Madrona: how do we help fill in the skill sets and the gaps that some of our founders might have? They’re not all coming from 10 different backgrounds, so how do we bring to the table skills and capabilities to make them even more successful?

When you’re a founder, and you’re looking at, especially those ABC rounds, you need a partner who will bring to the table things that help you figure out those corners you can’t see around because you’ve never been around them.

This stage of my career was about giving back. It was about teaching all those things that I learned in all those roles you mentioned and how to avoid the mistakes and take advantage of the opportunities faster because you do learn a lot of that. It’s like getting a mortgage. By the time you get good at it, you don’t do it anymore. So, how do I make sure I help other people avoid that mistake or those mistakes?

In this marketplace in particular, when you are looking for a great partner, firms like Madrona are thinking about bringing prior executives onto their teams, and we have quite a few at Madrona. Those people have your back. They’re there to make sure it’s one phone call away to say, hey, can I talk about X or Y?

At one of our CEO’s all-hands the other day, I talked about this maniacal focus on focus. Stop looking at the market and focus on the pain to solve, what you’re building, and how you’re taking that to market. If you do that, you will get through this. Getting distracted by all the noise is one of the challenges that you’ll deal with, and we’re learning it well right now. That’s really critical.

One of the other things that is key, and you hit on this earlier, is the culture you build. We were talking about performance culture, and I just wanted to hit on it for a second. One of the reasons I joined Madrona was because it had an incredible backbone of ethics, culture, and incredible skills on this team.

That is important as you think about the partners you’ll work with because part of something that your board and your investors will do for you is help be culture bearers and think about what’s going to make your company successful. When I look at founders and some of the founders that I’ve worked with and said, okay, what are the things that help lay some of that foundation?

It's okay to say, "I don't know." As a leader, it's okay to say "I don't know," but it is not okay not to try, not to come with a perspective, and not to come with a point of view. When you start to create that openness, you get the diversity of thought. I always try to make sure I say it: I don't know the answer to this, but let's talk it through. Let's work this out.

As a founder, when you do that, it’s even more powerful because you are the one who created this product, this company, so bringing that openness and opening the floor to diversity of thought gives people the freedom to do that. Fear is a tactic, not a strategy. When you put fear in play, it’s because you are trying to emphasize how dire a situation might be.

Sometimes that’s okay, but you use it at a point in time, not as a strategy every day, because people who respect you and want to please you and want to make sure that they’re working hard for you work 10 times harder if they don’t have that fear because fear makes them do things that are not thoughtful and are not strategic and are the opposite of really what you want.

When they admire you, they are going to work that much harder, and I think it’s a foundation you have to build from the early days of who you want to be, what you want your company to be known for, and when you have those things, you also recruit incredible talent. That is something I see with some of our founders, and what would I tell the 22-year-old Anna?

Karan: Yeah, 23-year-old Anna. I mean, that’s only five years ago, so you should have no problem remembering.

Advice: don’t try to make failure look pretty

Anna: I wish it were only five years ago. I think I’ve been pretty good at this, but being an executive and a founder is about taking risks. It’s about saying no to things, but it’s also about what you say yes to. But don’t try to make failure look pretty. Call it a failure and move on.

It is one of the most critical things that we all can do, and it is easy for us to say it wasn’t that bad. It was okay. If we just do this with it, if we just do that with it, or maybe we just need to try harder. Sometimes, it is just failure, and that’s okay. You learn so much from that, but you don’t if you don’t pivot and take another direction quickly. If I’d known that earlier, it would’ve been super helpful.

The other thing that I just hit is, what pain do you solve in the marketplace? It’s why you exist. Don’t ever, ever forget that. I say this to our founders, too. You have to sit in front of customers, no matter what leader you have on the team. I’ve heard CROs go, oh, when you get to be a CRO, you don’t have to do customer meetings anymore. I was like, that’s insane. Everybody should be doing customer meetings, everybody.

Heads of product, heads of engineering, heads of marketing, obviously CEOs and CROs every day, right? That is how you make sure you are staying in touch with what you’re solving in the marketplace, and is the market changing on you because, if you miss that, you miss those cues, then it will cost you. It’s going to cost you time. It’s going to cost you money, so stay close to that pain, and don’t make failure look pretty.

Karan: I love it. Talking about failure reminds me of a wonderful speech that JK Rowling gave at the Harvard commencement a few years ago about the power of failure. Your comment around failure reminded me of that, and I would encourage everybody to listen to that, as well.

Anna: You are the king of commencement speeches. I know that is one of your things. It’s funny, I just read something the other day. Apple was talking about one of the things they look for in all their interviews, and I was like, okay, I’ll click on this. What do they look for in every interview? It’s for somebody to say, I don’t know.

Karan: That’s great. Awesome. Well, I have two other questions that’ll hopefully be short. I don’t think I can do a podcast in 2023 if I don’t talk about AI. So, is AI going to take all the sales jobs away, Anna?

Is AI going to take all the sales jobs away, Anna?

Anna: No. Hopefully, it just really helps. I think we’ve talked so much about how do I enable you to be faster, better, and smarter. That’s what customers want. They want you to be faster, better, and smarter, so how do we do that with AI? AI is going to be game-changing in really improving time-to-value for customers and in time-to-value for go-to-market teams, as well, right?

On both sides of that, that’s a win-win. Will there be changes? Absolutely, and you see it already, but I think there’s so much there that will be positive, and AI is not as sophisticated as we’d like it to be yet. We all want it to be the end-all-be-all.

As you all know, you have to tell it the 10 things to get to the answer you want, and it's only as smart as what it can find out there. That will obviously improve. That will get better. It will learn, and so will we. We are going to get into a culture where go-to-market strategy is a lot more effective than it used to be.

Karan: That’s great. Well, I’m sure many people will breathe a sigh of relief after hearing that. Are there any books you’ve read recently that you would recommend to our listeners?

Book recommendation: “Simplicity”

Anna: Oh, wow. The one I just started is called "Simplicity." It's by Edward De Bono, and it's about how you boil things back down again. How do you get back to simplicity? It resonated for me, and I'm just starting it. Somebody had recommended it to me, so I'm going to recommend it out. It's about the world being so complicated, and we try to do 25 things to have an impact, when sometimes it's not 25 things, sometimes it's three.

We over-engineer ourselves. We over-engineer the problem sometimes. I loved this concept because, as I step back and look at my career and the not-five years that it has been, it comes down to the same three key lessons. I come back to the same core, the same principles.

The problem is we forget to focus on them, and we don’t focus on them and weave that thread through everything we do, and that’s what creates more of the challenges. We over-complicate the environment versus focusing on what’s really critical here for us to win, to be successful, and to move our company to the next stage.

Karan: That’s great. I love it. Well, on that note, I want to thank you, Anna. This has been awesome. Obviously, I was sad when you left Outreach, and I figured we’d never get a chance to work again, but I’m so glad you chose to come to Madrona as an operating partner. Our founders are lucky to have you as an advisor.

We are lucky as investors to have your insight and experience guide us in our investment decision-making, and I’m just so happy that we get to have lunch right after this in the office. So, thank you again for your time. Thank you for sharing all these pearls of wisdom with our audience today.

Anna: It has been such a pleasure. I love working with this team. Thank you.

Coral: Thank you for listening to this week’s episode of Founded & Funded. We’d love you to rate and review us wherever you get your podcasts. Thanks again for listening, and tune in a couple of weeks for our next episode of Founded & Funded.

Tigera CEO Ratan Tipirneni on Commercializing an Open-Source Project

Today, Madrona Partner Aseem Datar hosts Tigera CEO Ratan Tipirneni. Tigera is in the business of preventing and detecting security breaches in cloud-native applications, and its open-source offering Calico is one of the most widely adopted container networking and security solutions out there. Aseem and Ratan dive into whether or not you should open source, the three business models founders should evaluate when thinking about commercializing an open-source project, how to compete with free, how product-led growth can really kickstart your business compared to traditional go-to-market, and so much more.

This transcript was automatically generated and edited for clarity.

Aseem: I’m incredibly excited today to talk about open-source in the context of one of our portfolio companies, and I’m excited to have the CEO of Tigera, Ratan, with me. So, Ratan, very excited to have you and a warm welcome. Would love to maybe just get a little bit of your background in your own words and tell us how you got started with Tigera.

Ratan: Thank you, Aseem. So first, I have to say I’m very excited to be here. Thank you for the opportunity. My background is — I’m a software entrepreneur. I’ve been working in the Valley for a while, and that’s what I enjoy doing: building businesses and software. At Tigera, we are in the business of preventing, detecting, and mitigating breaches in cloud-native applications. We started several years ago with some really smart engineers and architects who got together and decided to solve some pretty tricky and gnarly problems around container networking and container security. That was the company’s genesis, and since then, we’ve been on a really exciting ride.

Aseem: This is a common pattern we see today — really smart people hunkering down on a potential challenge they see in the ecosystem and going after it. The question that comes to my mind is, how should founders and engineers seeing this set of problems think about it in the open-source context? When you start building something, how do you decide whether to open-source it or not? How do you think about real demand, and what are some of the early signals that folks should watch out for as they're thinking about going down the open-source path?

Ratan: The best way to get started is to get out of the starting gate, build something, put it out there, and get some conversations going with the community. The conversations are super critical to building that sense of camaraderie within the community and, more importantly, to getting feedback and participation, so the community feels like it has a voice in providing feedback and contributions. Engage the community, see where that goes, and let the community actually steer the project. That's probably the best way to get started.

Aseem: Are there any specific metrics to look at? Like people track all sorts of things. They track stars, they track the number of contributors, and they talk about how many repos are getting created. What are some concrete metrics to sort of get behind and understand if there’s real demand?

Ratan: Sure. So, I'd say there are probably quantitative and qualitative metrics. In terms of the quantitative metrics, I heard you say the stars — this is a personal opinion — but that's a vanity metric. I've literally seen it being gamed, where I saw one of the founders at KubeCon offer to hand out gifts to users for going and starring the repo.

But signs of daily usage of your software are probably one of the best metrics and most objective metrics you can get. Remember that the metrics are as much for you, as they are to help you build some credibility in the community. You really need to know that someone’s actually using it and getting some value from that.

In terms of the qualitative stuff, it is equally important to have the dialogue and get some feedback from users on how they’re feeling about it. That could give you plenty of clues on perhaps what you may want to build in open-source or perhaps what you want to build in your commercial products. But honestly, just being curious about it and engaging with users at every single opportunity is probably the easiest and fastest way to get some of the qualitative feedback.

Aseem: Continuing down the path of commercializing an open-source project, Ratan, what’s the point you arrive at when you really decide, “Hey, how do I commercialize that?” Is there a stage gate that one should think about? So, talk to us a little bit about that journey and how Tigera got to that stage.

Ratan: There are two parts to my response to that question. First is when you start to think about it, and second is when you actually pull the trigger on that.

When do you start to think about it? You start to think about it on day one, right? Even when you're launching the open-source, you have to be thinking about your commercial offer also. It doesn't mean that you'll launch the commercial offer on day one; in fact, you should not. But you certainly have to be thinking about it because any decisions you make about product functionality and packaging in open-source will have an implication on your commercial product strategy. So you've got to do that right away.

The second part of the response is when you actually pull the trigger and when you start to launch a commercial product or monetize it, I’d say you have to wait until you have some minimum critical mass in your community where you feel like there’s enough momentum where you hit escape velocity before you actually think about launching and monetizing your commercial product.

The other interesting concept, at least my mental model, is you have to be thinking about building two separate and distinct operations or businesses from day one. You have to think about building that momentum in open-source: the open-source project, the community, the users, the value they get, the accelerators, and all that. That's one part of the operation. You also have to simultaneously think about how you actually build a commercial business around that. From a timing perspective, it's difficult, but you have to be doing both in parallel because if you over-index on either one of them, it may not be an optimal outcome for the team and for the company.

Aseem: Pulling on that thread a little bit, at what point would you say a company is not ready to transition over? At what point do you say, "Hey, you're not there yet, let's not turn commercialization on?" Is there a moment like that, or is there a gate that you absolutely must cross, which could be a binary one? Or is it more art than science to figure out, "Hey, do we turn commercialization or revenue on?"

Ratan: One simple heuristic I'll offer is that if you're struggling to build community and are not seeing acceleration in the number of users month-over-month, you really have to question whether this model is even worth pursuing. That's a moment of truth where one may have aspirations of building an open-source-based company and a go-to-market model based on that, but if you aren't able to get acceleration in the open-source adoption, perhaps it's not even feasible, and maybe it's time to go try something else. It would be a little bit premature to launch a commercial product layered on top of the open-source product.

A good time to be thinking about launching a commercial product or figuring out how to monetize your open-source is when you hit a point where you feel like the open-source community is growing, there's some word of mouth, and you see a level of acceleration month-over-month in the open-source usage.

Aseem: Shifting gears to a topic relevant to the commercialization chasm is really around business models. What are some of the business models? What are some of the considerations that one must make to transition over to commercialization, especially in the open-source world, that the founder needs to be aware of? How should they think about monetizing?

Ratan: From a historical perspective, there have been three business models which have evolved over time. The first generation of business models in open-source was you give everything away for free, and then you try to monetize it through commercial support. There’s been one company that has done it extremely well, but it’s yet to be replicated since then. I’m not quite sure I would recommend that to anyone at this point. Although I do see some companies in open-source make the mistake of putting everything in the open-source bucket, and, at that point, they have no other option but to try to monetize it through commercial support. That’s pretty dangerous.

I've seen someone in our ecosystem do that recently, and that's a one-way street. You're eager to build community and momentum, so you dump everything into open-source because you want to try to do one better than the other companies in the ecosystem, but you can't recover from that. Your only option is providing technical support, and it's not a very defensible way of building a great business. Even if you succeed initially, what you'll end up seeing is quite a bit of churn in later years. You just will not get that cumulative momentum.

Aseem: How do you compete with free, and how should you think about competing with free? Or is there something where you compete with free that’s viable?

Ratan: I think the broader uber issue behind your question is, "Hey, how is a company able to do this for free?" I would offer that it goes back to capital markets. Suppose the capital markets are throwing money at companies, and there's an endless supply of capital. That is when we saw a lot of these companies adopt this approach of putting everything into open-source, giving it away for free with no accountability, and some of them maybe attained unicorn status.

It’s safe to say that those days are past, and we are back to fundamentals. Even if a company does something like that, it will only sustain for a short time. Trying to compete with free by offering more free stuff is a losing battle.

The other part is it also depends on the quality of customers you're trying to win, because the really high-quality logos that you're trying to build a business on actually want to pay you money. They want to see you successful because they understand that with open-source companies, they can't really abuse the system. If they do, it's not sustainable for those companies, and the customers are going to end up losing in the long term.

Even when you're competing with free, I would say that you want to go find customers like that, where, even though they may have a choice between something free and an open-source company offering a paid version, they know that the right long-term option is the paid version because that's the only sustainable solution.

Aseem: Absolutely. I think that’s very helpful. Let’s get back on track. I think we talked about the first business model. I know you had a couple of others, so maybe let’s just continue on that path.

Ratan: The second business model, which was tried about a decade ago, was layering a commercial product on top of open-source. Essentially, you have an open-source project that you offer, and then you provide some additional commercial features on top of it that a customer can buy. Several companies have done a stellar job monetizing using that approach. If I were to pick a few names, GitLab did a brilliant job, and MongoDB built a very successful business using that model. I didn't give you an example of the first generation, but Red Hat is the most obvious example of the first generation of business models we talked about.

The third generation, which we have seen more recently and has had a lot of success, is offering a SaaS version to be able to offer a service as a commercial product. The best example in recent history that certainly had a big impact on our decisions was Databricks, where they started at the second-generation business model and then they switched to the third-generation business model I talked about and started to offer a service. They’ve seen really, really good success.

To any entrepreneurs who are starting today: jump to the third generation right away, out of the gate. It's difficult to switch from the second-generation to the third-generation business model I described. If I were starting today, I'd go directly to a SaaS model because it gives you so many levers and a better ability to experiment rapidly, and it really sets you up for success if you've got this motion of leading with open-source to create a wedge and following through with the SaaS offer.

Aseem: That's very well articulated, and I'm sure a piece of gold right there for some of our founders as they think through commercializing an open-source project and which model to go after. I think there's a pretty obvious one in our discussion, but it's not a one-size-fits-all kind of thing.

What surprised you as you were building Tigera and going through the motions about commercialization, signing enterprise customers, maintaining the open-source, and working with partners? What’s the one thing that has been a little bit like, “Gosh, didn’t expect that?”

Ratan: There's perhaps a little bit of a fallacy in the assumption that because open-source is free, users will adopt it. That's just something to rethink.

Another way of saying it is, what is the cost to a user to adopt your software: to install it, to deploy it, to derive value from it, to maintain it, to operate it? You've got to think about that entire life cycle. What is the cost involved? If that's not a good equation, the odds of building a great community and a rapidly growing adoption base are going to be pretty low.

You have to apply the same level of rigor to your open-source community that any entrepreneur would apply to building a commercial business. Just because you give it away for free, you don't get a free pass on all those other things.

What makes it challenging is that you’re going to do all that in a very cost-effective manner because, to state the obvious, you’re not getting paid for any of that stuff. Unless you’ve got an incredible amount of leverage in your operating model to accomplish all those things, it’s going to be very difficult to pull off.

Aseem: Well, that’s certainly very true, at least from the perspective of surprises that you want to avoid.

Tell us a little bit about getting scale. I think we all think about new motions around PLG, self-serve channel. Like from an open-source perspective, is there one go-to-market type that works better than the others? What’s your perspective on that?

Ratan: It depends on the space you're working in, so I'd be very reticent to generalize a single playbook that would work for everyone. It depends on where you are in the stack or what domain you work in. The models work very differently, but since you touched upon PLG, let me talk about that.

I'm a big proponent and a big fan of PLG for a few different reasons. The first is that, fundamentally, buying behavior has changed dramatically compared to even five years ago. Most users really want to try out your product, use it, and test it in production before they decide to make a long-term commitment to it. The best way to make that happen is to offer them a free tier of the product that they can activate, start to use, derive some value from, and evangelize to others. So that's the first thing.

The second aspect of buying behavior I feel that’s changed dramatically is that the traditional purchasing model of top-down, with the CIOs making all the decisions, has shifted quite a bit. The power is now in the hands of the people in the trenches: the DevOps engineers, the security engineers, the DevSecOps engineers. They are the first to have an intimate understanding of the challenges, and they are the ones out there looking and shopping for tools. They have a very tight-knit community of peers who they talk to to get information. They’re certainly not talking to famous analysts to get recommendations for any of the tools they use, and there’s a disproportionately large amount of power in the hands of those people.

And the third is a lot of these buyers, the new generation buyers, the DevOps engineers, DevSecOps engineers, security engineers, they would rather not talk to anyone. They would rather go self-serve, educate themselves on the product, and enjoy the process of getting to know new tools. They enjoy the process of going through tutorials, reading stuff, downloading stuff, playing with it, and using it. For all those reasons, PLG or product-led growth is a really natural motion that aligns with that buying behavior.

Now, that’s easy to say. It’s extremely hard to operationalize the PLG model. It is not incrementally harder. It is harder by a few orders of magnitude compared to a traditional go-to-market model. The reason is you may have a lot of warts in your go-to-market model that you can hide using people. In a PLG model, it’s all out there. There’s no place to hide, and so it’s extraordinarily difficult to do that. If you can pull it off, some of the fastest-growing businesses that we have seen — that’s the one common denominator. They’re all using a PLG motion, and if you actually nail that part, the operational part, the business is off to the races.

Aseem: Two interesting things to sort of draw out from that. One is this notion of — so much of PLG is of the essence of how open-source started. It’s community-led growth because you’re letting the community tell you more about what works and what’s not working in the product.

The second thing is in a traditional go-to-market, your limit or your bottleneck is going to be people. In a PLG type of motion, there is no physical bottleneck that you’re going to hit. For better or worse, I think that’s where the fastest growth is going to come from, or that’s where the fastest learning is going to come from.

Ratan: Absolutely. There’s a tremendous amount of learning. I’d flip it around and say there is no single way to do PLG. You have to think about it as a process, a process of learning, and you’re learning through some rapid experimentation. The only playbook is how rapidly you can learn with a series of small experiments, so the solution may manifest slightly differently for each company.

Aseem: So much of what you’re saying is resonating with signals that we see across the board. What is something you wish you knew going into all of this? Is there a fun story there? I always think that there are nuggets of brilliance that come out of this question, so I have to ask you that.

Ratan: You know, that's an interesting question, so my response may not be what you're looking for, Aseem. You have to be slightly insane to be an entrepreneur, so let me just state that it's not a path for rational people.

Aseem: So I am taking notes, Ratan.

Ratan: One of the reasons I think people like me chose this path is because of how interesting it is on a day-to-day basis and the breadth of challenges that you have to deal with, which keep you in a state of flow and engagement. It's been a lot of fun decoding everything that we have done here over the last several years. I don't wake up any day and say, "Hey, I wish I knew this before," because if I knew it before, it wouldn't be fun, and, honestly, I wouldn't be doing it.

Aseem: I am taking away a “no regrets” learning there, which is fascinating. Ratan, any parting thoughts or advice for founders who are playing in this space, who are thinking about open-source, who are along this journey of decoding, as you call it, like the challenges they’re faced with?

Ratan: Here are a few things I’ll share because we are seeing some of that play out in the ecosystem that we compete in.

The single most important thing is to stick to fundamentals. What I mean by fundamentals is to be clear about who your end-users are and what their pain points are, and to serve them in a high-integrity manner; everything else will take care of itself. Trying to misrepresent things to either users or customers in the short term may feel like it'll help some companies, but it really backfires in a big way in the long term.

For anyone trying to pursue open-source, my advice is to reiterate what I said earlier: there are two businesses or two operations that you're building in parallel, and you don't get bonus points for nailing just one of them. You've got to nail both of them together. From a timing perspective, you've got to be thinking about both simultaneously.

The third part, which may sound a little redundant but is worth stating, is to not over-index on one aspect, especially the open-source. In the early days, especially if you're trying to come from behind (we are seeing someone in our ecosystem do this), it's tempting to throw everything into the open-source bucket, and it feels really good for a while. It's like eating sugar; it feels really good for a while, and then you pay a heavy price afterward because you don't really have a repeatable monetization strategy.

Other than that, it’s a fantastic time to experiment with this go-to-market model. If you are, I’d say jump to the third-generation business model, which is to take your open-source, layer it on with a SaaS service, and if you have a strong stomach, try PLG with it.

Aseem: That's awesome. I think you summed it up really well. Ratan, thank you so much. I'm sure that our listeners will enjoy every moment of this conversation, and there is so much to learn from it. I'm taking away a few keywords that I will repeat: insane, watch your appetite for too much sugar, and don't be obsessed with commercializing an open-source project.

Ratan: Thanks, Aseem. It has been fun. I appreciate the opportunity. Thank you.

Coral: Thank you for listening to this week’s episode of Founded & Funded. We’d love you to rate and review us wherever you get your podcasts. Thanks again for listening, and tune in a couple of weeks for our next episode of Founded & Funded, where we’ll get to know Madrona Managing Director Tim Porter in one of our investor spotlights!

Troop Travel Founders on the Startup Journey and Relationships That Matter


Today, Madrona Managing Director Steve Singh sits down with Troop Travel co-founders Dennis Vilovic and Leonard Cremer. Madrona invested in Troop and its corporate travel meeting management platform in 2021 and was particularly interested in how Dennis and Leonard were effectively creating a new technology for a market segment that was largely served through manual processes. These three talk about looking for the right problem to solve to launch a company, and how you decide if it’s the right time. They also discuss when you need to switch from bringing on generalists to bringing on experts, how to identify the right investors to work with, why working closely with customers and potential customers is so important, and so much more.

This transcript was autogenerated and edited for clarity.

Steve: Welcome to another episode of Founded & Funded. As a direct result of a global pandemic, the nature of the workforce has changed. Where we live and where we work has evolved. What has not changed is our desire to get together to collaborate and improve the quality of life, so how do globally distributed teams come together to collaborate? To explore that question, we’re going to be speaking with Dennis Vilovic and Leonard Cremer, the co-founders of Troop Travel, a next-generation meetings management travel platform.

Dennis, Leonard, why don't we start with just speaking to your experiences prior to Troop? What was it, as far as a pain point, that inspired you to create Troop? I'd love it if you could explain how two people who were perhaps an unlikely pair came together to solve this problem.

Dennis: A quick introduction about myself. I'm Dennis, one of the founders, German originally, economist by profession. I'm based in Spain, and I spent my whole career working as a consultant, driving efficiency in how government institutions, or organizations in general, operate — introducing new business processes and technology, etc. There were two key moments in the time before founding Troop that basically led to identifying the big challenge we are solving right now. One was in 2014, when I was actually trying to meet up with a friend who was living in a different country. I hadn't seen this person for a year, and we said, "Hey, let's just come together somewhere." It didn't really matter where. We just wanted to spend a weekend together.

So I remember I literally went into Google, and I put in, “Okay, I’m based in Madrid. My friend is based in Munich, where do we meet?” And I thought there would be this service to help us — to tell me, “Well, if you meet in Paris, that’s the cost, and if you meet in Lisbon, it would look like this, etc.” Well, it didn’t exist, so I manually gathered these data points in a Google Sheet. It took me about a week to figure out how to bring two people together, and in the end, we realized, Milan would be the best option. So we met in Milan, and it was great. Fast-forward a few years, and the second part of the story is that I started to work for a global organization, which is a membership organization, having a lot of members all over the world, bringing them together frequently, and I was part of a remote team at the time already. I was wondering, whenever we organize our internal team meetings, why would we always go to locations that, in my opinion, didn’t make any sense.

For example, we kept going to Paris. Paris is a beautiful city, but it’s very, very expensive. And I thought, why are we not going to Prague, for example? And that’s where I had this frustration in that organization, and that’s where I met Leonard, who was a member of that organization, that’s where we realized we should maybe try to solve this.

Steve: Fantastic. And Leonard, what’s your background, and how did you get interested in solving this problem?

Leonard: Now, Steve, my background is from a technology perspective. I studied engineering, and then I worked in telecoms for a number of years. Our previous business that we founded was integrating with mobile operators and doing high-volume transaction switching for them, and we grew that business across Africa and into a number of countries. But then that industry started to change, and I joined the organization EO, Entrepreneurs’ Organization, to network internationally and to look for other opportunities. I think it also shows you the power of in-person meetings: I met Dennis in person two times at some of the events organized by the organization. And it’s really through those events and discussions with Dennis that he introduced me to the idea.

Initially, I didn’t know if there was an opportunity or not. But after the second time, when I came back to South Africa, I started doing some software around it to test the scenarios, and then we saw that there was actually an opportunity to save quite a lot on the cost side of the travel, but also on the travel time. And from there, we started exploring the idea properly and talking to funders, etc.

Steve: That’s incredible background. One of the things that is common across both of you is that you both worked at fairly large organizations in the past, had very comfortable jobs, and could meet your needs and your family’s needs easily. It’s risky. It’s challenging to go start a business. How did you get comfortable with that idea? What drove you to say, “Yeah, this is the right time to go do it, or this is the right thing for me to do?”

Dennis: For me, it was really that I was working in this organization, which was working with entrepreneurs. I had direct contact with these entrepreneurs, who were kind of a different breed, and it was fascinating working with them, the energy and always solving problems. So I really got excited about being an entrepreneur myself, but I was looking for this opportunity, I was looking for this one idea, and I couldn’t find it. And the light bulb went on at an event in Bangkok in 2016, I think, where I was having a drink with the founder of Trivago, and I was talking about this frustration I had in that organization with how we were organizing our meetings. And I asked him, Rolf is his name, “Rolf, why does a service not exist where you put in your starting locations and it tells you where you should meet?” And then he looked at me, and he said, “Well, what a fantastic idea.” I was like, “Wow, oh really?” So was that the idea I was looking for? And then, the next evening, I was having dinner with Leonard, and I was telling him that story.

Leonard got very excited, and then two weeks later, Leonard was back in South Africa. I was back in Kenya at that time. Leonard gave me a call and said, “Hey, I actually looked into that idea, and I think there is something.” So there were some key moments which led us into, well, at least spending our free time working on this concept. And then the real game changer came a few months later, when the founder of Trivago said he wanted to invest in the idea we had, and I said, okay, if someone gives us money to build this business, then we have to go all in. And that was, for me, the reason to say, okay, let’s actually become an entrepreneur, and I have enjoyed it very much so far.

Leonard: For me, the journey was a little bit different. Because I’ve gone through the journey myself, I understand the adventure of building a company. In my previous company, I’d been there for 15 years, and it was an established company, and I think that excitement was gone, and I was really looking for something to get involved in again. So at the time when I met Dennis, I was actually exploring a few different ideas around where you can apply technology to make business processes better. When we started exploring this idea, I was still continuing with a few of those other ideas, but as the idea crystallized and we saw there was a real opportunity, and also when we raised the funding from Rolf from Trivago, that’s when I stopped all the other opportunities, went full time into this, and started the journey to where we are now.

Steve: And so far, I’m assuming it’s been a fantastic experience.

Leonard: I think for both of us, it’s really, every time you do it, you learn new things, and I think it’s just such a unique opportunity that we have here that we’re excited about the future.

Steve: So you gentlemen started the company in 2017, and a couple of years later, there was a global pandemic. Now, most founders, especially those operating in the travel industry, would’ve looked at the impact of the pandemic and said, boy, maybe we should rethink our strategy and maybe think about a different area to focus on, but you saw it as an opportunity. Tell me a little bit about why.

Dennis: So, actually, the time before the pandemic was quite exciting. 2019 was the first year where we had paying customers, and immediately we signed Fortune 100 companies. So it was quite exciting, and we went into 2020 really with a lot of expectation, a lot of hope. We were at the business travel show in London just two weeks before the pandemic started, and you could feel at that show already like, wow, something is happening. People heard some news from China, and then some people got worried. But at least on my side, I never really thought that it would impact us so much, because what I’ve always believed in is that we need in-person meetings, we need connection. So the pandemic is a temporary thing, and as human beings, we have to come together more often, and it’s way more complicated now to meet in person. That’s a bit of what we saw.

As the pandemic started, people stopped traveling, people stopped having meetings, but people were already thinking, okay, what will the future look like, and will it be different? There were some additional data points that no one thought about pre-pandemic, like entry requirements when you travel to a country, for example. And so those were all functionalities that we were able to bring into the platform. And then, as we were getting a bit out of the pandemic, that was one of the main reasons why we were able to grow the business in that time. So in 2020, we tripled the business, we tripled the customer base, and we continued to grow in 2021 as well.

Steve: Incredible. So when you think about the impact of the pandemic, it seems like, coming out of it, it was actually a potential growth vehicle for the company. Is that a fair way of thinking about it? Leonard, what do you think?

Leonard: Yeah, for sure, Steve. As Dennis said, the data that we brought in about Covid restrictions was a big positive for us. It really complicated the whole travel experience, especially for larger corporates, where compliance and the safety of people are important. And the other thing that happened is that during Covid, employees couldn’t travel, so the people responsible for the travel program actually looked more strategically at the products they used and their strategy going forward.

But a third thing, maybe, just to mention: during that first lockdown period, we also called some of our customers and industry players together for a strategy session around the future of travel. We included people from the flight industry, the hotel industry, and some of our customers, just for a workshop around it. For everyone, it was so uncertain where things were going to go that everyone was looking for answers and for people to discuss this with. Some of the ideas actually came from that session, which created some thoughts on our side about how to move forward with the strategy of the business.

Dennis: And just to add to what Leonard was saying, pre-pandemic, there was already some remote work happening. Actually, in our case, I have been working remotely since 2016, but it wasn’t that common. But then, obviously, with the pandemic, where within a couple of weeks companies had to move from working on-premise to working fully remote, people actually started to enjoy the benefits of it. Obviously, there are some implications to it, but it really showed that, as we’re coming out of the pandemic, there are certain things people don’t want to give up anymore. People don’t want to give up that flexibility of deciding, “Hey, do I go to the office, or can I work at home?” There are a lot of things we can actually do virtually, but then there are very key things we can only really achieve when we come together in person.

And one of the key things is building this connection, building culture within a company, which is the backbone of any organization, the connection between people. And I think a lot of companies have realized now, as they’re coming back, maybe turning to a hybrid work arrangement, that, hey, we have to facilitate building these connections between our people, and one way of doing it is by having many more internal in-person meetings.

Steve: So before Troop Travel, how did companies, whether you’re talking about some of the largest companies in the world or even small and medium-sized companies, how did they determine where to meet? And when you think about that now in the context of Troop, what’s the value that they’re getting out of it? Maybe you can give me some examples.

Dennis: So it’s quite interesting that when we talk to potential customers and tell them about the value of actually asking yourself where you should meet, people have never thought about the fact that it would make sense to ask that question. And then we share some concrete data with them. And I think there’s a very interesting use case we have with a technology company based in San Francisco. They contacted us, and they wanted to do their first all-team meeting after the pandemic — 1,000 people roughly distributed over 76 different places from all over the world. And they said, okay, we want to bring everyone to San Francisco, and they asked us to help them by basically letting them know the impact in terms of cost, etc. So we ran it through our technology, and we told them, okay, if you send 1,000 people to San Francisco, it’ll be roughly $2.5 million in cost. But then the really interesting data point no one ever thought about is that it would take the group roughly 14 years of working time — they would spend 14 years of working time on planes traveling to San Francisco and traveling back home. And that event would actually produce 2,500 tons of CO2. So we said, hey, if you now do the same meeting in Paris, you would actually be able to reduce the costs by $400,000, reduce the travel time by four years, and avoid 800 tons of CO2.

Steve: That’s incredible. So massive economic savings, massive savings of time, and frankly, lower production of CO2 or greenhouse gases. So one of the things that I find interesting about this market is that unlike some markets where there are existing technologies and cloud-native technologies are delivering oftentimes 10X better value props, here we’re talking about a business need that really wasn’t solved for. Why do you think that need was not solved for before? Obviously, it’s wonderful to be able to go after what effectively is a greenfield opportunity. Why do you think the need wasn’t addressed in the past?

Dennis: One key thing is really that technology was not able to bring all these different data points together and manage them, and I think Leonard can talk to how much is really happening in the platform when we run those analyses. And as well, quite simply, in the past, people were going to their TMCs, the travel management companies, and asking them, hey, we want to do a meeting, can you give us a location? So, in the end, the people in the agency were doing a manual exercise, which is extremely time-consuming and extremely difficult to do. I think those are a couple of reasons why people haven’t addressed this in the past.

Leonard: Let me just add to that quickly, Steve. The one Dennis mentioned about the technology, that’s a big point. It’s a massive amount of data that you need to manage, process, and make sense of. And on both the back-end and the front-end side, the web browsers and the laptops that we use nowadays are much more capable and can process that amount of data much more easily, whereas previously it was a challenge. Secondly, there’s access to the data. We saw that players like Skyscanner and Booking.com exposed APIs that weren’t previously so easily accessible. So I think a few factors came together and made the opportunity possible, maybe right at the time when we started exploring it.

Steve: Yeah, what’s interesting is that, even in the legacy model, these are totally disconnected processes. Determining where to go was a very manual process, to your question of where do we meet. Then booking it was effectively re-keying a whole bunch of information into a booking tool, and then there’s no real itinerary management. Talk to me a little bit about the vision for Troop’s product suite. It’s obviously more than just answering the simple question of where do you go?

Leonard: Yeah, exactly. I think initially our idea was more around finding the best meeting place, which was very much a manual process, and as soon as the meeting got too complex, it just didn’t happen because there was too much data to process manually. So people didn’t do it, and they picked a spot, like Dennis mentioned, like Paris, and you just go there again and again, and it’s personal opinions. I think we’re filling that piece of the need with our tool, Troop. But then the interesting thing is that, when you do that exercise, we sit right at the beginning of the process of executing a meeting. So it’s exactly what you say: you need to book it, you need to manage it, there are expenses associated with it, etc. And because we’re right at the beginning of that process, you can then mold the product out to service the whole journey of a meeting planner and also the attendees. So I think it’s just a very interesting position.

I think our strategy is really to develop our product to cover all aspects of meeting planning. But it’s not only the product or the platform side. I think for us, it’s important to build productivity tools for the ecosystem so that the different players within the system can collaborate. For instance, the person organizing the meeting should be able to collaborate with meeting planners in the city where they want to go, get proposals, and work with them around those proposals. Also, the data providers in the industry, like meeting space providers, visa providers, and local transport: we want to enable them to become part of this ecosystem through a marketplace integrated into our platform, making their services available either to our customers that are planning the meeting or to the local meeting planners that want to do proposals. So I think for us, it’s the platform, the whole journey, but also this whole ecosystem, enabling them all to be more productive and solve the meeting planning process in a better way.

Steve: So, in effect, what you’re doing is you’re building the booking and a full travel experience for meetings as opposed to individual travel, which obviously has been automated by companies like Concur in the past, but there’s really not been anything to automate the group travel experience.

Leonard: Exactly. I think the group planning process is quite a bit different from individual travel. Traditionally, the complexity for a meeting planner was that different people were booking in different places, and then you had to collate all that and make sure who’s booked, who hasn’t booked, what the status of the hotel bookings and flight bookings is, etc. So we enable all that with the future vision of Troop Travel, which just makes the whole process a lot easier.

Steve: And maybe Dennis, you can expand a little bit upon the meeting planner concept that Leonard just mentioned. Certainly, the way I interpret it is that, hey, if we’re picking a location for the group to meet, you want local experts to be able to advise not just on locations to use, but also maybe even coordinate as part of the trip. Is that the right way of thinking about it? And then how do you see that ecosystem expanding?

Dennis: So I think the big challenge, and the difference between a meeting and normal business travel, is that there are a lot of different components you have to consider. It’s not only the flight or the hotel; it’s the meeting room, activities, restaurants, local transport, a lot of different things. So when you look at a hotel option for your meeting, it basically defines the other components around it, depending a bit on the size. Let’s say you have 30 people and you look at a hotel: well, ideally, you don’t want to have dinner on the other side of town, because then you have to add local transport. So it’s the connections between these different data points, which really is almost like the secret sauce, that makes a meeting successful.

And I think the way we look at local experts is that there are people in every city around the world who focus on organizing professional meetings in those cities, and they know them exactly. If tomorrow I want to do an executive meeting in Seattle, they will tell me, well, there are really only these three hotels that will work for you, and if you take this hotel, then you should take that restaurant, and you should take that activity, etc. And I think that’s the fantastic knowledge we are bringing into the technology to actually help you make your meeting successful.

Steve: That’s incredible. So you end up getting a far better experience in each location than you could if it was centrally managed through an organization that may or may not know the details of that particular city or that particular town?

Dennis: Yeah. Just to give you a concrete example, we are planning our next all-team meeting. As I said, we are fully remote, and we’re planning our next all-team meeting for next year. We started running it through Troop Travel and came up with a short list of destinations we actually want to get proposals for. So we shared them with one of our internal meeting planners. We were looking, for example, at Zanzibar, an option in Tanzania, and she immediately said, “Hey, Zanzibar will not necessarily work for your meeting because there’s really only one hotel which can host, let’s say, a group of 60 people. When you do your next meeting, you might be around 80 people, so you really don’t have accommodation there. And secondly, the internet is very slow there.” So there are these gold nuggets, so to say, of knowledge that are crucial when you plan those meetings, because they could really completely change the outcome of your meeting.

Steve: So look, let’s shift gears a little bit here. So you’re effectively creating a new market or technology for a segment that was largely served through manual processes. One of the things that we’re seeing across other companies is how AI can be used to really aid in the customer service or customer experience part of the process. In the case of Troop Travel, post-booking or the in-trip experience, AI can be used to help. Maybe you can share some examples of how Troop is incorporating AI into that experience.

Leonard: Steve, I think it’s a very interesting position. Again, because we know the data from the planning side, we know the attendees who are going to attend and what flights they’re going to be on, so we see opportunities to use AI to monitor the flights, the bookings, delays, etc., and it can be used on both sides of the planning process. One is for the attendees: we could notify them of potential delays and other suggestions, like alternative flight options, to make sure that the meeting can still go on. The other side is the meeting planner: I think there are some interesting use cases there as well, where they know about all the attendees but are not going to monitor everything all the time. So I think there’s a service on both sides of the offering where we can use AI around that type of servicing.

Steve: In fact, if you are a large global TMC that’s partnering with Troop Travel, your cost structure to service that group travel experience is, in effect, a lot lower than in today’s model.

Leonard: Yeah, exactly. I think the interesting thing there is because we are integrating a lot of data providers, it could either be flights or hotels or risk that comes up, the duty of care. I think there are so many scenarios where we could make this process more effective and, as you say, reduce cost and the human effort to service that.

Steve: Fantastic. Let’s maybe shift gears for just a second and talk about the team. Dennis, you mentioned that Troop Travel is a completely distributed organization. How do you see the company continuing to evolve, and frankly, how do you think about the team, what you need, and how you’ll be able to bring those individuals in, in what is a very different model of building a company?

Dennis: Yeah, so over the last 16-17 months, we have grown the team from literally 6 people to 60 now. So we made massive hiring efforts, and it was very important for us to really find the right people, people who have the right mindset and, specifically, the right core values. And I think there are a couple of things that have worked for us. One thing I think is actually quite beneficial for us is that we are basically a Spanish-South African startup. As I said, I’m in Spain, Leonard is in South Africa, and these kinds of opportunities are not very common in our markets, while in the US there are more companies similar to us offering this kind of opportunity. So it’s quite interesting for us to capture talent, because it is a very unique opportunity. People understand the value of it, and we are able to really get great talent here.

And at the very beginning, not so much on the technical side but on the business side, we were adding more generalists, because there are so many different things you have to work on, and you have to have this entrepreneurial mindset, basically, because you have to build it. You have to build the sales motion, we have to get the first customers, we have to figure out how we manage those customers, etc. And that obviously has changed as we were growing, and now, especially after our latest funding round, we really focus on bringing in experts in those different areas. We have reached a certain level, and now we bring in people who have built sales motions and grown them into millions of ARR, built marketing functions, built customer success, to really focus on those areas.

And the biggest thing for us, I think, is our core values. A key value of ours is trust, and I think that’s crucial. I don’t want to monitor how many hours someone is working, because I believe that you have an interest in doing the best for the company, because it’s beneficial for you as well. Values and culture are really, really important to us, and that’s why we have a very strict recruitment process, where we go really deep in those culture interviews and where people sometimes are not able to pass to the next level.

Steve: So your recruiting process is really optimized and designed around a distributed and global workforce. Is that different, do you think, than more traditional settings?

Dennis: I think so. I really think so. Again, not everyone likes to work fully remotely, and we understand that obviously in-person interactions are crucial, but luckily we have our own technology to help us figure out how we can come together. That’s why, once a year, we bring the full company together, and then even throughout the year, teams have their smaller meetups. So it’s really a combination of things. It’s in the moments when you have this in-person connection that the magic happens, but then you go back home, and there are a lot of things you can obviously do fully remotely, and it actually allows you to bring your life and work closer together. I feel it’s almost like we can’t look at them separately anymore. We shouldn’t think, okay, this is my life, and this is work. No, this is your life, and you should spend your time on things you’re excited about. So we are bringing people on board who are really excited about solving the problem we’re trying to solve.

Steve: Yeah, I love that. I love that. Hey, maybe let’s hit a couple more questions, and then we’ll go ahead and wrap up. You’ve obviously raised a few rounds of funding. For folks who are just starting out, as they think about capital raises, what advice would you give them?

Dennis: I always remember when we actually met the first time in person, Steve, and I asked you, “Steve, why did you actually invest in us?” And I’ll never forget what you said: “I invested in you guys because I see that you’re obsessed with solving the problem, and I know that whenever there’s a challenge, you’re going to find a way to overcome it.” And that’s really, I think, the basis of any business, to be honest. Build a business because you want to change something or solve a problem, not because you want to become rich or whatever. All these things will come if you solve a problem; they will be the side effects. But if you are not excited about what you’re building, about what you’re solving, then you will see another opportunity, and then you will follow that opportunity, but in the end, you will not be able to build what you want to build. So I think that’s one of the key things for me, the starting point, the basis.

Secondly, work with your customers as closely as possible. It worked for us at the very beginning. We didn’t have any customers, but there were so many people in this industry who were helping us out, just talking to us and giving us input, because they were excited about innovation. And all this feedback is very, very interesting and very important. Still today, it’s crucial for us. Our customers, again, are large corporations, and they tell us that they love that we are listening to what they say. Now, in our quarterly business reviews, we actually bring our developers into those meetings to listen to the comments of our customers, and people really appreciate that.

Leonard: So maybe just to add something on raising money from investors: I think Rolf’s investment really kicked off this journey for us, but along the way we raised money from a few different investors, and what is really unique about the investment from Madrona is the active involvement, particularly from Steve on the Madrona side. For us as founders, that’s a really incredible value add to the whole journey, to the decisions we make, and the context and the network we’ve got … I think that when raising money from investors, it’s not only the money that you should look for; the type of investor or the particular investor group that you raise from is also very important in this journey.

Steve: Fantastic, fantastic. When you think about Troop Travel, when you think about it 10 years from now, what is it? What does it look like? What’s a measure of success?

Dennis: I think almost every company in the world has an empty spot that we should fill, because even companies like us, small companies, need technology to manage our meetings. So I would love to see that whenever someone thinks about organizing a meeting, they think about Troop Travel.

Leonard: Yeah, I agree with what Dennis said there. Maybe just to add, from a product perspective, what would be great for us is that, across this value chain, we’ve got the meeting planner, the attendee, the local expert, the data providers, the partners, and all of these players see value in what Troop Travel is and want to work with the Troop product rather than being forced to, for instance, by a Fortune 500 telling their local experts to work with it. You want all the different players in this ecosystem to love working with the product and to want to use it. And then, yeah, I think what Dennis said covers the other side of it.

Steve: I’ll just add one thing to both your comments. So, Dennis, you’re a hundred percent right. What really drew me to both of you gentlemen was your obsession to solve this problem, and having built a company myself in the past, it’s that maniacal focus on solving the problem better than anyone else in the world can solve it. That really takes you through to the realization of that opportunity. That said, the other thing that I was truly amazed by is that this is an area that really hasn’t been addressed, the simplification of how you plan a meeting, how you book it, how the expenses are accounted for, the analytics, and then managing the itinerary all the way through the process and notifying everybody about where things are, where other travelers are, and when they’ll be there and coordinating across an entire group experience. It’s amazing to me that this problem hasn’t been solved.

And so we saw an opportunity, we at Madrona saw an opportunity to partner with two incredible founders to go build a multi-billion dollar company to solve this problem, and to be fair, it’s not so much that hey, we want to go build a multi-billion dollar company, which of course we certainly do, but solving a problem that becomes the fabric of how business is conducted, that’s compelling.

So we’re really honored to be on this journey with both of you and with the 58 other members of the Troop Travel team today. Gentlemen, thank you very much for being a part of Founded & Funded, and I look forward to seeing you in the near future at our next meeting location.

Leonard: Thank you, Steve.

Dennis: Thank you, Steve.

Coral: Thank you for listening to this episode of Founded & Funded. If you’re interested in learning more about Troop Travel, visit trooptravel.com. Thank you again for listening and tune in a couple of weeks for our next episode of Founded & Funded.

Statsig Founder Vijaye Raji on Product Building, Launching a Startup, and AI

Today, Madrona Managing Director S. Somasegar talks with Statsig founder and CEO Vijaye Raji, who spent 20 years at Microsoft and Facebook before launching Statsig as a way to bring the powerful tools he was used to using inside Facebook to all builders.

Statsig's experimentation platform and automated A/B testing helps companies make product decisions in a data-driven way at scale, which means shipping the right products faster. Vijaye and Soma go way back and take a little trip down memory lane, talking about Microsoft’s Small Basic. But then they dive into what it’s like transitioning from big tech to founder, the importance of data-driven decision-making, integrating AI into those decisions, identifying the ideal customer profile, and learning how to sell your product, something that did not come naturally to Vijaye. He and Soma talk about all of this and so much more.

The 2023 Intelligent Applications Summit is happening on October 10th and 11th. If you’re interested, request an invite here.

This transcript was automatically generated and edited for clarity.

Soma: Hello, everybody. I’m Soma, one of the managing directors here at Madrona, and today, I’m very excited to have Vijaye Raji, the founder and CEO of Statsig, here with me. Before I get started, I want to go down memory lane a little bit. I remember meeting Vijaye for the first time when both of us were at Microsoft. He was working on this tool called Small Basic, and he was really passionate and focused on figuring out, Hey, how do I make programming a whole lot easier and simpler to get started with for the next generation of developers? And since then, throughout whatever Vijaye has done, there’s always been a focus on how do I make developers and development teams successful and effective, how do I make their jobs easier, and how do I make them more agile in the process. Vijaye, you were at Microsoft for a while, then you went to Meta, and now you're at Statsig. Can you shed a little light on the journey that you’ve taken so far leading up to Statsig?

Vijaye Raji: Yeah, absolutely, Soma. And thanks for having me on your podcast. Really excited to be here. You touched on Microsoft’s Small Basic, which is something near and dear to my heart, primarily because I was so passionate about building a tool for kids and early programmers to cut their programming teeth on. And particularly so right now because my 10-year-old son is also learning programming, and he’s using Small Basic, and it’s really awesome because on the weekends, we sit down and we jam together, come up with games, and then he codes it up and shows it off. So really, really relevant. Thanks for bringing that up.

Yeah, so we met, I think, sometime in 2005 or so. I was working on Small Basic, and we were talking about how you wrote the first blog post announcing the release of Small Basic from Microsoft. I remember that. Thank you for doing that. I’ve always been passionate about the developer space and how we can get more productive tools out to developers. And then, in 2011, I left Microsoft. I was working on Windows at the time, actually on developer tools for Windows 8. I joined Facebook, which at that time was still considered a startup; it was only about 1,300 people. Then, as an engineer, I continued to build various different products and, throughout the process, was also enamored with the tools that Facebook had invested in, which helped all the engineers and product builders inside Facebook build and release products really fast and precisely.

Those were all formative years. What I learned there, the perspective I gained there, and what I took away from it all led me to leave Facebook in 2021 to start Statsig, which is basically a culmination of the tools I was so excited about and wanted to build for everyone outside. So that’s the journey, from back in the day at Microsoft with Small Basic, as you remember, all the way to Statsig.

Soma: Thanks, Vijaye, for sharing that. We at Madrona have been talking about intelligent applications for many, many years now. In fact, I would go so far as to say that we are strong believers that every application that gets built today is an intelligent application. And by intelligent application, what I mean is that it takes into account the data that the application has access to and uses AI and ML in some fundamental way to build a continuous learning system that, in turn, helps you deliver a better service to your customers.

So if I look at the world through that lens, I look at Statsig and say, Hey, Statsig is a classic example of what I would call an intelligent application enabler, one that allows product teams to experiment with different features and ensure they create best-in-class experiences for their end users. But before we talk about Statsig and what it is doing, particularly in the world of generative AI today (you can’t have a conversation anymore without talking about generative AI in some way, shape, or form), tell me the origin story of Statsig. How did you come up with this idea, and how did you get together with your founding team?

Vijaye Raji: So if you trace software development all the way back to the ’80s and the ’90s, a lot of the process that led to software being shipped came from hardware. You would have this waterfall model where you’d go talk to your customers, come up with a set of requirements, and then that goes into a design document. The design document gets reviewed, and then engineers pick up that design document, turn it into an architecture document, and then write code. The coding is done during milestones, which will then be packaged and sent off to the QA team. The QA team will run the tests and validate it, it’ll get released, and it’ll go to, I don’t know, back in the day, Best Buy or Circuit City and sit in boxes, and people will go buy that, take the CD, and insert it.

And that’s how software development happened. Then V2 will happen, V3 will happen. And then over time, with the internet, a lot of that process got faster and faster. People got into this build-measure-learn loop, where the continuous deployment of new features enabled a lot of this fast iteration. And along with that came the data aspect of it: you were instrumenting your product, understanding what users were using, which products or features they did not care about, and how much they used them. That subsequently led to this really fast iteration, which in turn led to experimentation-based software development, which is, I think, the latest trend: for every product that you build, every feature that you release to your users, you want to understand its impact. Without understanding the impact, you don’t always get your product intuition right.

And that is an interesting progression over the last couple of decades. I was in the middle of all of that when I went from Microsoft, which at that time, in Windows, was primarily following a waterfall model. When I went to Facebook, in my first six weeks of boot camp, I got to release a bunch of features that reached 600 million people within days. And that was fascinating. First of all, I have to say that when I first got that experience, it was a little scary. I thought something was going to break all the time, and from everything I knew about software engineering, this was not how software should be built. But over time, I started to understand and appreciate that kind of fast iteration, and when I dug deeper, the tools underlying all of that software development process were fundamental to keeping everything going in the right direction.

They were fundamental in capturing data. They were also instrumental in understanding the impact of every single feature. Now, what was interesting was that the tools shaped the culture that came down from them. The downstream culture of product development was one where everyone felt empowered to look at the data and make product decisions, both good and bad. If the numbers were not looking good, they decided to cut the feature. And there was no personal attachment. So the downstream culture was one of very objective, data-oriented decision-making, distributed among all teams.

That was fascinating. That was really, really powerful. And so the journey of Statsig is basically that: how do we bring that kind of product-building culture to everyone? When I looked outside of Facebook, not every company had the luxury of building these tools, or the team sizes or the engineers to go build all of these tools by themselves. So that was the opportunity that I thought would be useful to go and solve for the world. The mission really is: how do we improve software development using data-driven decision-making, and how do we bring that to the masses?

So in 2021, I left Facebook. At that time, I was the head of Facebook Seattle and head of the entertainment division, and I felt so strongly that this was an important problem to go solve. So did seven other people; they all came with me, and from day one we started building, and we’ve been building ever since. It’s been two and a half years. It’s been a fascinating journey. I’ve learned a lot. So happy to share.

Soma: That’s fantastic, Vijaye. Hearing you tell that story reminded me of why I was so excited to be a part of the developer division at Microsoft for many years. Because when you end up building products for people like yourself, there is a special excitement around that. For the last many years, as companies and developers have been building what I call analytic tools and AI tools for the rest of the world, I’ve wondered when developers are going to start building these tools for themselves. And I look at Statsig, and Statsig is doing just that, right? Building tools for people like us who are building next-generation products, and that’s a whole lot of fun. But Vijaye, as you mentioned earlier, you were at Facebook for 10 years and at Microsoft for 10 years prior to that. You’ve worked at two large technology companies, because even though Facebook might still have been called a startup when you joined, by the time you left, it was a fairly large company. What was it that made you decide to leave a large company environment and start an 8-person startup? And related to that, what experiences, whether from Microsoft or from Facebook, do you feel prepared you to launch your own company?

Vijaye Raji: I have to say that anyone who is starting a startup from scratch has to be a little bit crazy. Some amount of irrational decision-making has to be there because, look, if you are completely rational, every formula, every expected outcome that you calculate should indicate that you go join a large company. The only reason you would start a company from scratch is that you feel so strongly about solving a particular problem in the world that you’re willing to leave behind all the comforts and go do this thing.

So for me, that happened about three or four years before I started Statsig. For me, the journey was more around what are the necessary conditions for me to be able to be successful in a startup environment. What kind of skills would I need? Am I good at hiring people, coaching people, and growing people? Am I good at understanding business, the profit and loss for a business, how we sell, and how to market the product? Am I good at finding product market fit for a new idea? Am I good at getting it to market and actually having people understand the problems that the product is solving?

Those were all the criteria in my mind, and I was like, okay, how do I learn all of that? So I spent my last three years at Facebook putting myself in positions where I would learn a bunch of that stuff. And then finally comes the point of, okay, who would want to leave a great job and pursue this journey? A great entrepreneur has to have a set of followers, people who are crazy enough to follow an entrepreneur who’s about to jump off a cliff. That’s the image you want to think about. And if you can convince a set of folks to come and join you, that is already a validation. Those are the kinds of things I prepared myself for.

Now, there’s another aspect of it — I am married, and we have two kids, a 10-year-old and a 7-year-old. Family is every bit as involved in your startup journey as you are. And so a lot of the commitment has to also come from the family side. If I think about my wife and my kids — every day, we talk about the startup. I feel like they’re also bought into the idea of this startup — Statsig. Without their help, I wouldn’t be here.

Soma: So Vijaye, I know that you’ve been asked this question a ton by other people, whether from Facebook or from other companies, but if I have to take a step back and ask you: Hey, what advice do you have for someone who’s thinking of making a similar transition, leaving a larger company environment to say, Hey, I’m going to launch my own startup? Are there any pearls of wisdom or key learnings that you want to share?

Vijaye Raji: So I’ve only done this one time, so take this with a huge grain of salt because there are a lot of biases here. The one thing that helped me quite a bit is that once I understood my next journey was going to be a startup, it really came down to: what kind of skills can I pick up? What kind of connections can I create now? What kind of relationships can I build right now? And then, what kind of team can I bring on day one to the startup? If you walk backward from that, it becomes very clear what needs to be done for the startup to get up and going as quickly as possible and start building.

So most companies won’t let you write code for your startup while you’re continuing to work at the other company, but you can always think about the problem. You can talk to other people about the problem. There’s a lot of validation that you can do even before you start the startup, even before you embark on the journey. And that should all inform what day 1, day 90, and day 180 are going to look like.

And then the other one that most people don’t really talk about is your financial safety net. Because when you’re going into a startup, you’re kind of committing several years of your life to the startup, and during those periods, there’s going to be a lot of instability and a lot of ups and downs, primarily on the financial side. And if you have a family, you want to make sure that you build a good financial nest that supports you during that journey. And so those are all necessary components as you think about starting a startup. For me, it took me three years or so to ready myself to leave behind a comfortable job and get into Statsig. And sometimes, it could take longer.

Soma: In looking back at your origin story for Statsig, I think you were lucky enough to start with a founding team of eight people, all of whom you had known from before, as you had all worked together in one way, shape, or form at Facebook. Some people have that luxury and some people don’t, but every entrepreneur thinks about, Hey, I want to make sure that the founding team is the best team I can put together, and I need to start thinking about culture from day one and all that fun stuff.

From your vantage point, how did you decide what your founding team should be and, more importantly, what kind of culture you wanted to set? On the one hand, you all had what I call the fortune of working together at a company like Meta, with a particular culture, but here you were starting off on your own, and I’m sure you had some ideas of what you wanted the culture to be, what you wanted to prioritize, and what you wanted to be below the line. Talk to us a little bit about how you found the right people and instilled the right set of culture from day one.

Vijaye Raji: I think culture is a very, very important aspect of a company, especially as you’re starting from scratch, because there are only a few things that you can hold onto strongly as the company grows and as you add more and more people. So you want to hold onto the things that you really care about. And those could come in the form of: what is the environment that you want to build, that you want to still be working in 10 years from now? If you look at it through that kind of lens, then it becomes really clear. What I’m doing is building a place that is exciting for me to come into and work at every single day.

So early on, we set out to pick a set of values, and when we said we were going to pick values, we decided we were going to pick just a handful. Those are the ones that we really care about and never want to compromise on. You don’t want to have 15 or 20 values because nobody can even remember that many. So we picked four, and even when it came to the set of values, we decided these have to be trade-offs. They cannot be truisms like motherhood and apple pie. You can’t say things like, oh, be honest or be transparent or be happy or be friendly, because those things are all given — those are all table stakes. A value should have a downside and a trade-off. Every day you should use them in debates, conversations, and decision-making processes, thereby making the right trade-offs, and that sets the tone for the whole company.

I’ll give you one example. We have one value that says, “No sacred cows.” No sacred cows is actually a way to say: do not take anything as a given unless it’s logical, unless it’s reasonable. If you have a set of processes or traditions that people are following, and nobody can explain why, then it’s okay to change it. The trade-off here is that there’s a lot of overhead in changing things, but the outcome is that you will never end up in a position where things are illogical. Nobody will ever come in and ask, why the heck are you doing this thing when it doesn’t make any sense? So that’s the trade-off. And those are the kinds of values we ended up picking early on.

And I remember having exercises with the early team of seven or eight people where we all sat down together and agreed that these are the values we’re going to adhere to as a company, and we have continued to maintain that.

Now, over time, that becomes your culture, that becomes your identity, and when you hire more people, they opt into the culture, so you want to make sure it’s something that attracts good people. So yeah, all of that is important. Like I said, you want to pick a handful of things that you can hold onto, because every time the company grows, culture evolves, culture shifts, and if you try to control too much, it doesn’t work.

Soma: If I take a look at the progress you’ve made, first of all, Vijaye, you guys have made some great progress in the two and a half years that you’ve been around. More than anything else, I continue to be impressed with the velocity of product development. You’re building a set of tools to enable the rest of the world to do exactly that, and by virtue of that, you’re in a position to lead by example, which is a fantastic place to be, and that warms my heart when I think about Statsig. But in the last couple of years, you’ve gone through a phase of learning what it means to identify the right customer, or, as people call it, the ICP or ideal customer profile. You now have both small companies and large enterprises as your customers. You’ve got companies that are delivering consumer-facing services. You’ve got companies that are delivering B2B or enterprise-facing services. So you’ve got a wide array of customers today.

Can you share a little of your perspective on how your thinking has evolved in terms of who your ideal customer is, and anything you’ve learned in the process of deciding, Hey, this one works well, that one doesn’t? How do you define yourself in terms of going after and landing customers for Statsig?

Vijaye Raji: This is a really, really good question because it was a journey. It wasn’t intuitive, and it wasn’t very clear on day one. I come from an engineering background, and I had lots of exposure to product roles, but I had very little exposure to sales or marketing. So I didn’t have intuition for sales or marketing; that was something I had to learn in the early days of Statsig, and something I made a lot of mistakes in while trying to sell the product and learning how not to sell. When it comes to product building, you mentioned the velocity; we take pride in the fact that we help other companies build fast, and it is important for us to be an example of that. If we can’t even build fast ourselves, then we can’t claim that we can help other people build fast.

So part of that is asking: what is the set of conditions that would make engineers, product managers, designers, and all the creative people blossom and flourish? Removing all kinds of friction, removing overhead. One of the key elements is transparency, radical transparency. Everything is available to everyone. All the code is transparent, all the data points are transparent, so people don’t get stuck in process and can move fast. Creative people should be allowed to move as fast as they can. And so we’ve established an environment where product building can happen really fast. Now, what happens after you build the product? You have to go and convince a set of customers to try and use the product. That took a long time.

For the first eight months, we couldn’t even convince people to use our product for free. How do we sell this product? We knew there was value in the product. We believed very strongly that this was an important platform that others should use, but we were not having much success convincing people to try it. One day, we saw an ex-Facebook engineer start using our product and sending a lot of events, and those events came from Singapore, Malaysia, and Indonesia. At first, we thought, oh, somebody’s DDoSing us from various different countries by sending lots and lots of events. And then, later on, we realized nobody cares about Statsig, so they’re probably not DDoSing us. And then we figured out it was the ex-Facebook engineer using the product.

That was when it hit us: okay, there’s a set of people who left Facebook who all had access to the tools inside Facebook. They probably all miss those tools, and those are the people we should be talking to. Then we started talking to them. I think Headspace was our first customer; we talked to an ex-Facebook product manager at Headspace, and it resonated very, very strongly, much more strongly than what we were trying before.

And so that was our first ICP: not big companies or small companies, not the stage of the company or what series they’d raised. It was really the ex-Facebook folks who missed the tools they’d had access to. And that’s how we almost stumbled onto our ICP. Since then, we’ve had plenty of traction, and that was, for me, a pivotal moment in understanding the value of our own product.

Soma: That’s super, Vijaye. In the two and a half years since you started the company, since you became an entrepreneur and went from zero to wherever you are today, I’m sure not everything has gone smoothly. Can you highlight one or two unexpected challenges that you faced? The things you did not expect, where you said, oh my God, what is this new thing? And more importantly, how did you deal with them?

Vijaye Raji: Every single day is a learning. There are hundreds of things that happen during the day, and for me, there are a bunch of new things that I learn and take away. If I were to think about the big ones, understanding how sales teams operate was a key thing for me to unlock. One of the earliest hires we made was Sam, who came from Segment and is our head of sales. He knows sales like I don’t. I learn a lot from Sam every single day. The way he thinks about problems. The way he thinks about customers’ problems. This is one of the areas where founders make a mistake — customers don’t care about your vision. They care about their problems.

And so when I sit down and talk, I get so excited about sharing Statsig’s vision with this poor customer, who’s thinking, I have a problem to solve, and I’m not here to listen to your vision. And then you hear how Sam talks to the customer, and there’s so much to be learned from that. So that was a huge area of figuring things out, slowly making some mistakes, and then learning from the people around me who are so much better at what they do than I am.

Then the next one was marketing. That’s another one of those things. Our tool is generally used by data scientists and engineers, two groups of people who don’t want to be sold to and don’t want to be marketed to, so how do you make sure they understand the value of Statsig? The way you approach that is through communities and content, where you actually make the product useful for them and have them pick it up and learn what they can do with it, versus being out there shouting.

And so those are nuances that I had to learn about marketing. And then here’s another one. It feels so naive when I think about it now: when everyone says marketing, they combine about 10 different types of marketing into one bucket. That was another one of those revelations: oh, there’s content marketing, performance marketing, demand gen marketing, product marketing, events marketing. I used to lump everything into “marketing,” when it turns out there are like 10 different kinds. Even that I didn’t know and had to learn. And once I hired the marketing folks around me, they started doing a great job, and I learn from them every single day. So yeah, every day is a day of learning, and it’s been great.

Soma: That’s awesome, Vijaye. You’ve been working in the technology industry for a couple of decades now. As you know, we are in the midst of a huge platform wave. People call it AI, people call it generative AI, and there seems to be an announcement about a new generative model, a new AI application, or a new piece of innovation literally every other day. It’s almost like a tsunami is upon us in terms of the rate of innovation as far as generative AI goes. With the speed at which the market is evolving, how do you decide what is the right set of features to prioritize and focus on? And then, more importantly, what would you tell your customers? How should they prioritize?

Vijaye Raji: One of the things Statsig was fortunate about is that we built a set of tools, the primitives of experimentation, that were applicable to AI. Some of the biggest AI companies use Statsig to optimize user experiences that are based on AI. When you present an AI-powered experience to your users, you want to understand if it’s actually improving things and, if it is, by how much. That’s impact quantification. And so Statsig became a tool for AI producers and AI consumers to validate their AI applications. When they bring in a new model, when they bring in new prompts, or when they tweak parameters like temperature or frequency penalty, they use Statsig to validate whether the change is positive or detrimental to their product. And so we found ourselves in that space, and we started doubling down on it.

So we have partnered with companies like LangChain, where we provide tooling for everyone using AI in their product to just start running experiments. In the world of experiments for AI/ML, there are offline experiments and there are online experiments. In offline experiments, you take a set of training data and you verify and validate: okay, I’ve gotten it to a pretty good place. Now I want to put it in production. But production is an entirely different story. How the model operates at scale, and how it actually improves your business-level metrics, is something that you need to measure and validate, and that’s where Statsig’s abilities really shine.
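
To make the offline/online distinction concrete, here is a minimal sketch of what the online half can look like: comparing one business metric between the control and treatment arms of a live experiment. The metric, the sample sizes, and the simple two-sample z-test are illustrative assumptions on my part, not a description of Statsig's actual statistics engine.

```python
import math
import random

def evaluate_online_experiment(control, treatment):
    """Compare a business metric (e.g., revenue per user) between the two
    arms of an online experiment with a two-sample z-test. The normal
    approximation is reasonable for the large samples online tests collect."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v

    mc, vc = mean_var(control)
    mt, vt = mean_var(treatment)
    lift = (mt - mc) / mc                       # relative improvement
    se = math.sqrt(vc / len(control) + vt / len(treatment))
    z = (mt - mc) / se
    p = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return lift, p

# Hypothetical data: treatment has a ~3% true lift on a $10 average metric.
random.seed(0)
control = [random.gauss(10.0, 2.0) for _ in range(5000)]
treatment = [random.gauss(10.3, 2.0) for _ in range(5000)]
lift, p = evaluate_online_experiment(control, treatment)
print(f"lift={lift:+.1%}, p={p:.2g}")
```

The point of the online side is exactly what Vijaye describes: a model can look fine against offline data, but only a measurement like this tells you whether it actually moved a business-level metric in production.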

And then the second aspect is: how can Statsig use AI within our own product to bring more insights to the customer? For example, one of the things the Statsig experimentation platform tells you is that the new feature you launched yesterday, maybe a registration feature, is improving your conversion metrics by 4%. With AI/ML, it would be easy for us to dive deeper and tell you: okay, on average, you’re seeing a 4% improvement in your conversion. However, on Android phones in non-English-speaking countries, you’re seeing a 20% drop. And that’s a question you wouldn’t even have thought to ask.

So, if AI is able to identify these anomalies and patterns, bring them back to you, and provide insights, you can go fix the problem or the bug that is affecting all the Android, non-English-speaking phones. Then your 4% improvement can actually become a 5% improvement, which is a huge, huge win. And that’s the area that we’re starting to investigate and invest in.
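
The Android example above amounts to computing the lift per segment instead of only in aggregate, and flagging segments that regress significantly. The segment names, the conversion counts, and the plain two-proportion z-test below are all hypothetical illustrations, not Statsig's actual anomaly-detection method.

```python
import math

def flag_segment_regressions(results, alpha=0.05):
    """Given per-segment (control_conversions, control_n, treatment_conversions,
    treatment_n) counts, return segments whose conversion rate dropped
    significantly, using a two-proportion z-test per segment."""
    flagged = []
    for segment, (cc, cn, tc, tn) in results.items():
        pc, pt = cc / cn, tc / tn
        pooled = (cc + tc) / (cn + tn)
        se = math.sqrt(pooled * (1 - pooled) * (1 / cn + 1 / tn))
        z = (pt - pc) / se
        p = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
        lift = (pt - pc) / pc
        if lift < 0 and p < alpha:              # significant regression
            flagged.append((segment, lift, p))
    return flagged

# Hypothetical experiment: conversion is up in the English-language segments,
# but the Android / non-English segment has quietly dropped 20%.
results = {
    "ios_english":         (2000, 20000, 2140, 20000),
    "android_english":     (1800, 20000, 1910, 20000),
    "android_non_english": (1500, 15000, 1200, 15000),
}
for segment, lift, p in flag_segment_regressions(results):
    print(f"{segment}: lift={lift:+.0%}, p={p:.2g}")
```

Surfacing that one regressing segment is what turns a headline "+4% overall" into an actionable bug report, which is the kind of insight Vijaye is describing.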

Soma: When I think about Statsig, Vijaye, I think about it as a product experimentation platform, a product observability platform, but really something that helps you build the right set of features and capabilities irrespective of whether you are building a generative AI application, a large language model, or what I call an intelligent application. So I think about Statsig as, Hey, you are an enabler for generative AI. Is that a good way to think about it, or do you position Statsig differently in a world of generative AI?

Vijaye Raji: So anybody who is using AI or ML today will need a set of tools to understand its impact. Statsig is an enabler of, in general, better products and better features, of knowing which of the features or products we launched to customers are doing well. So that, to me, is an enabler of better product building. One of the things we understood at Facebook was that unless you’re a Steve Jobs, you’re going to be about 33% correct in your product intuitions. It’s a humbling fact, and it brings a lot of humility: when we have product intuitions, it’s been validated that about 33% of the time you’re going to be absolutely right, about 33% of the time you’re neutral, and the rest of the time you’re actually hurting the product. You may not think that is the case, but the data shows it.

But just by knowing which 33% of the products are actually hurting your company's business metrics, and by turning those off, you're going to get a bump up. Being able to know that starts with measuring, then understanding, and then actually removing your personal attachment to those features. Those are all the ways in which you end up building better and better features, and Statsig is one of those enablers of building better features.

Soma: In my conversations with you, particularly about generative AI and large language models and the like, it feels to me that, hey, you have a little bit of a contrarian view on what many people think of as the hallucination problem that exists with generative models and large language models. Can you share your thoughts or your perspective on that? Because I think it's a fascinating line of thinking.

Vijaye Raji: Yeah, absolutely. At first, it started out as, let me take a contrarian view, and then I started doing some research and then actually believing in it. When people talk about hallucinations as a big problem, I think it's a little bit overblown; it's actually not that bad of a problem. In fact, if you think about it, it's creative: when you give two inputs and there's a gap between them, the model fills that gap with something made up. That's what we do all the time. Humans do this all the time, and our brains do this all the time. When we do it, we call it creativity. When AI does it, we call it hallucination. And over time, it's going to get better and better.

If you think about how AI is generating art and music, those are non-deterministic. Every time you run the same model, you don't get the same output: you don't get the same image, you don't get the same music. It's a little bit different. I think it's a form of creativity, and a lot of times, if we think about evolutionary processes, over time these mutations start to produce better results. So if those are able to be absorbed back into the system and the system gets overall better, I think there's a potential upside to all of these hallucinations. That's my take on it. We'll see where things go. Hallucinations are not that bad, and over time, they will get better. That's my feeling.

Soma: No, I agree that they’re going to get better over time. I know that we are coming up on time here, Vijaye, but before we wrap up, I want to ask you about your own usage of generative AI, whether it is for personal reasons or for professional reasons. Do you use generative AI on a day-to-day basis? And if so, what do you use or how do you use that?

Vijaye Raji: I use it all the time now. In fact, if there are no temporal data points, I prefer using AI to ask questions over searching on Google or Bing. Just last week, I was trying to remember the name of this electrical thing, and I couldn't express what it is. So I went to OpenAI's ChatGPT and started typing, hey, look, I'm looking at this thing. It's an aluminum box. It's rectangular in shape, and it's got conduits coming into it. I see electrical wires. I don't know what it's called. What is it called? And it said, "Oh, it's a junction box." Oh, thank you. And then I went to Home Depot, searched for a junction box, found it, and bought it. So I think it's really cool. It's just crept into everyday usage. I love it.

Soma: Vijaye, thank you so much for spending the time with us today on this podcast session. I do want to take this opportunity to congratulate you again on all the success that you’ve seen so far, particularly the progress that you and the Statsig team have made in the last couple of years. And looking forward to seeing what is possible and potential with what you’re building. Thank you so much.

Vijaye Raji: Thanks, Soma. Thanks for having me here.

Coral: Thank you for listening to this week’s episode of Founded & Funded. If you want to learn more about Statsig, visit Statsig.com. If you’re interested in attending our intelligent applications summit, visit ia40.com/summit to request an invite. Thank you again for listening to Founded & Funded, please rate and review us wherever you get your podcasts, and tune in in a couple of weeks for a conversation with Troop Travel’s co-founders.

NYSE’s Lynn Martin on Capital Markets, IPO Trends, and the Role AI Is Playing in the Markets

NYSE President on IPO Trends, Capital Markets, and the Role of AI

In this week’s episode of Founded & Funded, Madrona Managing Director Matt McIlwain hosts NYSE Group President Lynn Martin ahead of our Intelligent Applications Summit on October 10th and 11th. Lynn will be speaking at the Summit, but we thought it would be great to have her on the show to talk more about her background, the NYSE, capital markets, IPO trends, and, of course, the role data, AI, and large language models are playing in the markets and in companies broadly! This conversation couldn’t come at a better time now that we’re seeing a number of tech companies teeing up IPO plans.

Lynn thinks we’ll be back to a more “normal” IPO environment in 2024 – but you have to listen to get her full take!

The 2023 Intelligent Applications Summit is happening on October 10th and 11th. If you’re interested, request an invite here.

This transcript was automatically generated and edited for clarity.

Matt: I’m Matt McIlwain, and I’m a managing director here at the Madrona Venture Group. I’m delighted to have Lynn Martin, who’s the president of the NYSE Group, which includes the New York Stock Exchange, and she’s also Chair of ICE Fixed Income and Data Services. Lynn, you started your career coding; you were a computer programmer back at IBM out of college, and now you’re leading the world’s largest global stock exchange and a bunch of related businesses. Before we dive into things like the capital markets and the businesses that you run, and of course, the topic of AI that’s on everybody’s mind, I’d love to jump into how you got from programming at IBM to the financial markets.

Lynn: So first, Matt, thanks so much for having me. It’s always good to spend time with you — great question as to how I wound up here. As you pointed out, I was a programmer. I was trained as a programmer, I have an undergraduate degree in computer science, and it was around the time of the dot-com revolution, and markets, in particular, were starting to have technology integrated into them.

I was at IBM, and I was really looking for something that spoke to my passions. My passions at the time were financial markets, financial market infrastructure, and the math that underpinned financial markets. It’s what drove me to get my master’s degree in statistics. I happened upon an opportunity in 2001 with a derivatives exchange based in London called the London International Financial Futures Exchange. They had a tremendous number of people in their organization who could speak about the products they had listed, really European short-term interest rate and equity index futures, but they didn’t have anyone who could talk to the new breed of trader: people who were writing to the API as markets started to go electronic.

I interviewed with them, and what really interested them about me was the fact that I was a programmer. I had conversational knowledge of some of the models and some of the products, but I could talk to a programmer who was writing to this thing called an API at the time.

Matt: Yeah, before APIs were cool.

Lynn: Exactly.

Matt: Yes.

Lynn: At the time, the whole management team of that exchange was like, “What’s an API?” So the right place and the right time, and the interesting skillset that led me to financial markets back in 2001.

Matt: No, that’s fantastic. To fast-forward a little bit, you ended up at NYSE Group and have had successive opportunities and broadening roles. I don’t think most people really have an appreciation for the breadth of the NYSE Group and then the related relationship with ICE. Can you paint that broader picture, and then we can talk specifically about the areas that you have responsibility for?

Lynn: Yeah, I’ll fast-forward through a little bit of history. In 2012, I was with NYSE. I was CEO of a futures exchange that NYSE was a 51% owner of, and 49% was owned by a variety of institutions on the street, and a company called ICE came along and acquired the New York Stock Exchange. ICE was an electronic group of exchanges, a company that was very much driven by the implementation of cutting-edge technology and the application of that technology to financial markets. That acquisition was completed in 2013, and I found myself in a very interesting position in that I was in a directly overlapping business unit.

I wound up being offered, and then taking, a position to run one of their clearing houses. I got to know the founder of ICE, a gentleman by the name of Jeff Sprecher, who is still our chair and CEO of ICE, and who really got to know my background both from an academic standpoint and a business standpoint. He’s the type of guy who’s always thinking about two years, three years, five years down the road, and he’s been a great mentor to me in that regard.

At the time, he was thinking about, “Okay, a really important output of financial markets and equally an important input to the most liquid financial markets is data.” And he was already thinking at that time, “There’s something around data that I want this ICE group to be a part of.” It led us to form ICE Data Services in 2015, which I was named president of, and it rapidly grew into ICE’s largest single business unit through a variety of acquisitions. The way you should think about that group is pretty much what I just said. It’s all down to the premise that the most liquid markets have a tremendous amount of data as outputs. In order to make a market — and a market can be broadly defined — liquid and actionable, you need data to form what value is going to be.

That can apply to everything from the U.S. equity markets to the fixed income markets, and it can apply to far less traditional markets, such as the U.S. mortgage market, where ICE Group is now entering the next generation of its evolution and using data to allow for a more informed decision.

Matt: You chair this whole group, the ICE Fixed Income and Data Services. Can you give us one tangible example of the more popular data services you provide today and how that works? We’ll come back later to talk about how AI is changing or giving you opportunities to enhance some of these areas.

Lynn: AI plays an incredibly important role in the Fixed Income and Data Services vertical that we have at ICE. You can look at AI and large language models from an efficiency standpoint: how do you do more with fewer people? How do you cover a broader universe, as I like to think of it, in the fixed-income markets? There, that manifests itself in us providing valuations for 2.7 million securities globally, and also providing terms and conditions data, the underlying aspects of a bond such as when it matures and what its coupon rate is, on 35 million securities globally.

We do that through the implementation of natural language processing, large language models, and all different forms of artificial intelligence. Importantly, though, we have a strong human overlay that knows whether what those models are producing is good information or bad information. You don’t want erroneous information to continue to perpetuate through a system, because then it winds up polluting the system.

Matt: You go from a virtuous to a vicious cycle, and these capital markets are dynamic. Before we dive more into some ways you use ML and AI, what is the state of the capital markets today? I’m thinking about the public and the fixed-income markets generally, and how do you think about that market space, the context beyond all the services that you provide?

IPO Trends

Lynn: They’re operating as you would expect them to operate. In 2020 and 2021, if you think about the equity markets, you had a tremendous number of IPOs coming out because the system was flush with cash, and people were taking advantage of high valuations. Then in early 2022, you saw volatility start to creep in, volatility instigated by multiple factors: the war in Ukraine, rising interest rates, monetary policy, how do we pay back some of that money that’s been injected into the system? That caused the IPO markets, the people who were tapping the U.S. equity markets, to grind to a halt, as we saw in 2022.

As we moved into 2023, you’re now starting to see the effects of the Fed’s rising interest rates work their way through the system. Effectively, you’re starting to see inflation come down, and you’re starting to see the desired effects of the Fed’s monetary policy take hold. The way we’ve started to see that manifest itself in our markets is, number one, the volatility has come way down. The key barometer that I always look at is the VIX. The VIX has been 13, 15, somewhere around there, definitely below 20 for a sustained period of time.

That’s really important for a company that’s thinking, “Okay, is it time for me to tap the public markets from a capital perspective?” What you’ve now seen, at least in Q2, is the IPO markets start to reopen. You saw Kenvue, which was a spin-out of Johnson & Johnson, raise $3.8 billion in an upsized deal priced at the high end of the range. You saw CAVA, Savers, Fidelis, and, more towards the end of Q2, Kodiak. What encouraged me about everything we saw in Q2 is that you saw both U.S. domestic deals and international deals get done, and you saw multiple sectors come to market. That’s the first time you’ve really seen a good, healthy cross-section of activity tap the public markets since 2021, certainly.

Matt: Well, I’m sensing some cautious optimism in the return of the IPO market. It’s been a year and a half since Samsara listed on the NYSE back in December of 2021, since you’ve really had a technology IPO in the sense of how we think about it in the tech venture business. How are you seeing the back half of this year in terms of technology IPOs? And how do you think about the trade-off in the specific situation where some of these IPOs might price lower in the public markets than their last private round?

Lynn: I think the investors have gotten their heads around the fact that valuations are going to be different from 2021. 2021 was a bit of an anomaly in terms of valuations. I’m very optimistic that we’re going to start to see deals come out in the second half, particularly in technology. Ultimately, someone’s got to test the waters, though, so there’ll be a couple of deals that will probably come out in the second half of 2023 that are going to do that. And we’re working with a variety of companies in our backlog who are now saying, “Yeah, we’re going to go in probably the early Q4 timeframe,” and there may actually be one or two that go prior to that.

Very optimistic that those deals are going to come out, those deals are going to get done, and then that’s going to really set the stage for 2024, which is when I think we’re going to be back to a more normal IPO environment.

Matt: That’s encouraging. With that in mind, we work with private companies at all different stages. Some of them are getting close to being IPO-ready. What kind of advice do you and your teams give to these kinds of private, rapidly growing companies about what it means to actually be IPO-ready?

Lynn: Yeah, and I use the term IPO-ready a lot. What 2021 showed was that the market was giving value to growth at all costs: for every dollar of growth, the market was rewarding you. The companies that are coming out now have balanced growth with profitability, not sacrificing growth, but coming out with some discipline on the expense side. Ultimately, an investor is going to ask you about expenses during your earnings call, and they’re going to ask you about growth. Being already of the mindset that you’re balancing growth with profitability is a good philosophy to have going into your IPO, because whether you’re a company that’s a year old, five years old, 10 years old, or 100 years old, you’re going to get asked those questions on your earnings call.

Matt: It’s a tricky balance, and things got a little out of kilter in 2020 and 2021. I agree that we’re coming back to more of a balanced outlook there. Maybe let’s turn just a little bit: you’ve got this whole area of fixed income you’re responsible for. Give us a view of the fixed-income markets. Treasury prices have been volatile; I think they’ve stayed more volatile in general and are trending downward, and the rates on the ten-year have popped up a good bit again. What’s your outlook on the fixed income side, and how does that influence the timing of this return to the IPO market?

Lynn: It’s all ultimately tied together. When the equity markets are a bit more volatile, there can be a flight to quality, with more people investing in treasuries. We’ve seen a lot of volatility not just in treasuries; we’ve also seen it in the muni markets, the U.S. municipal markets. Investment-grade and high-yield corporate debt have been bouncing around a bit as well. The treasury market, though, has been incredibly volatile, and typically when the treasury market is volatile, the muni markets are volatile too, because those two trade at a spread to each other. To the extent that there continues to be treasury market volatility, you’ll see it manifest itself across those two asset classes in particular.

Matt: There’s just so much input data and output data in all these different capital markets and related data services areas. Tell us about the groups that you are responsible for and how you guide and direct them around what more they can do with this data. What can you do with AI in general, and generative AI in particular?

Lynn: One of the things I’ve always been focused on, on the data side, is observable data. Good data is gold, and bad data, when it gets into a system, is really challenging to expunge, so I tend to focus on how we harness the good data and build the right models off of it to extrapolate other pieces of information. One of the areas we’ve been super excited about is how we can take data and these large language models and add efficiency to the trader’s workflow.

One of the big mantras I have is to always stay close to your customers and talk to them about their pain points. One of the pieces of functionality that we’re incredibly proud of, which we built almost a decade ago in our data services business, is our ICE Chat platform. That chat platform connects the entirety of the energy markets ecosystem, everything from a drilling company to a producer, to an investor, and even to the people who carry the cargo between ports for delivery. Within that chat platform, we had developed a proprietary large language model, which knows whether I’m talking to you about what day of the week it is or about the price of crude oil.

If it senses that we’re talking about the price of crude oil, it’ll pop in some analytics alongside, including a fair value for what it detects we’re talking about. If it further detects a price and quantity that we’re potentially transacting on, it underlines them and allows a seamless submission to our exchanges and our clearing houses. It’s adding a tremendous amount of efficiency to a trader’s workflow, and using this mechanism, we’ve actually seen a 60% uptick in volume on our benchmark energy markets on the ICE side of the business during the first half of this year.

I think that’s because people are becoming more and more conversant in large language models, how to use them, and how to have them add efficiencies to their workflows. We see it manifest itself in our energy markets and then, more recently, in our utility markets.

Matt: Now, it’s very interesting. It’s sort of a capital markets chatbot copilot, as it were. We’re having a conversation, we’re trying to understand things, and you are providing relevant, real-time information that facilitates the exchange of ideas, which ultimately can facilitate the exchange of financial transactions.

Lynn: Yep, that’s absolutely right. And it all comes back to our north star, which is data adds transparency to markets, and information adds transparency. You think about what all of these large language models are doing, they’re adding transparency, they’re giving you pieces of information. That’s why the underlying data that underpins them is so critically important to have correct.

Matt: Yes, I agree. It seems like much of the world agrees now, whereas even nine months ago, I think the dots had not been connected for people, but of course, we had ChatGPT, and that was a moment we’ll all look back on. We recently did some analysis here at Madrona, working with some AlphaSense data, about just how many people are talking about generative AI. We specifically looked at Q2 earnings; we’re not quite through the Q2 earnings season, but we’re getting close. What was amazing was that we looked at both software-as-a-service companies and non-SaaS companies. There were about a hundred of these software-as-a-service companies, and they mentioned generative AI or AI on average seven times per company, with over 700 mentions on their earnings calls just this quarter.

And what was maybe an even more amazing statistic was that the non-SaaS companies, in this case over 300 of them, mentioned generative AI or AI-related topics on average five times on their earnings calls. And these are companies like Warby Parker, Lululemon, Warner Bros. You can go down the list: a whole bunch of amazing companies that you all, of course, have helped and that trade on the New York Stock Exchange. What are you hearing from CEOs and CFOs of tech and non-tech companies in this GenAI area?

Lynn: Well, I agree with you. AI seems to be the letters of the year. I grew up in a generation where we had a letter of the day; these are the letters of the year. Every company I talk to talks about the importance of data, and unsurprisingly, it’s a topic that I love to talk about. There is so much data floating around in this world that it is really hard as a human to process all of the information you have at your fingertips, petabytes of data on any given day. You need things like AI and large language models to help parse through the data in an efficient fashion that’s going to give you additional insights. For some of the names you just mentioned, that means good insights into what your customers are doing, what they’re saying about your products, and their purchase behavior.

That’s going to help drive your own investments; it’s going to help you decide where your next investment dollar is going to go. In the case of Warby Parker, it’s going to be on a pair of sunglasses. They recently rolled out sunglasses that flip between two different colors. What are people saying about that? Has it been a hit? If it’s been a hit, should they make more of those types of products? It really helps the investment flywheel in a company accelerate, because you’re getting that piece of information a lot quicker than you normally would.

Matt: I think that’s well said, and it makes me think back to the FinTech market specifically and, 10 or 12 years ago, the rise of cloud computing and computing at massive scale, which, as you’ve mentioned, is part of what ICE took advantage of architecturally as a business. How are you seeing, within FinTech, the potential for leveraging GenAI and LLMs? I know you’re already doing things, but how are you all strategically thinking about where this might go and how it might evolve your business over time?

Lynn: We’re looking at ways within our own business across ICE to make processes that are ripe for efficiency plays much more efficient. One business I mentioned earlier that ICE is keenly focused on is our growing mortgage technology business: how to make the process of homeownership in the U.S. much more streamlined and, therefore, more efficient for the consumer, and how to put good data in the consumer’s hands so they’re making a more informed purchase or getting a better interest rate.

If you look at the tremendous amount of data that’s available within that vertical in particular, we think there’s a tremendous amount of insights that can be unleashed to make the process more efficient in terms of buying a home, securing a mortgage, refinancing a mortgage, even the securitization process. It’s an area that we’re keenly focused on at the moment.

Matt: Given the leading-edge work that you all at the NYSE Group are doing, as well as the fact that you’re really looked to from all these public companies or companies that are considering going public, are they also asking you for your perspective and advice in these areas? And how so?

Lynn: Yeah, and we like to be a good partner to our firms. The first time we meet companies, whether early on the startup side or when we’re pitching them to go public on us, I always describe it the same way: we’re in it to be your business partner. And as part of being your business partner, we’re here to share our experiences, our learnings, and our tools. I think it’s important to zoom out on issues, because there might be a micro-type issue that someone is struggling with around data privacy, for example.

But if you zoom out just a little bit, and while you may think a consumer business doesn’t have anything to learn from a financial services business, they wind up having a lot to learn from each other because of customer information, for example, and the way people handle customer privacy information. There are a lot of learnings to be shared across our companies, and we’re sort of the clearing house, if you will, for that type of information. I sometimes feel like my job is to help issue spot and to spot macro trends for our companies and make connections within our community to allow companies to have those conversations on topics that they may think they have nothing in common on, but they actually do.

Matt: It brings to mind your own experience: you’ve now been president of the NYSE Group for approaching two years. What has been a big learning or two for you in this leadership role, something you’d want to share with others in similar leadership roles? I’m curious what you’ve been learning over the last couple of years.

Lynn: The thing that excites me the most about this job is the platform that NYSE has and the role that we play in financial markets. Certainly thought leadership in financial markets globally, but also the role we can play as a convener of people to have really important, in-depth conversations on a variety of topics. The topic of this year is clearly the two letters we’ve been talking about, AI. But AI is one thing, and if you double-click on it, you wind up with a variety of sub-issues, data privacy being one, and what the application of AI to different industries looks like being another. It’s such a meaty topic that, given the role we have in financial markets, we have a very good opportunity to bring people together to discuss it.

Matt: It’s a topic that we’re going to be unpacking more, and we’re just delighted that you’re going to be coming out to join us for the Intelligent Applications Summit on October 11th in Seattle. It’s going to be a very special event again this year, and we’re looking forward to you sharing your perspective and learnings on what you’re doing in the AI area, as well as what’s happening in the capital markets.

Thinking back again to where we started in this conversation, the wonderful rise from being a computer science major and a programmer at IBM to being president of the NYSE Group with all these other responsibilities at ICE, tell us what advice you would give to college students today who are trying to start their career journey off on the right foot.

Lynn: I was fortunate in that I had the opportunity to start in a company, IBM, which gave me the ability to interact with multiple different companies. I started in a consulting role at IBM, and that gave me the opportunity to see a variety of different industries. The one thing that I think was incredibly important in my first few years was that I fell in love with a specific area, so the advice I always give to college students is you’ve got to love the field you’re in. You have to wake up every day wanting to learn more about it. I wake up every day and I come to 11 Wall Street, and I learn something new because that’s just what you should be doing. The world is constantly changing, constantly evolving, and if you have the opportunity to join an organization where you could love the subject matter and continuously learn, you’re in the right place.

Matt: It’s such good advice. Having that feeling that you wake up every day wanting to learn more, being at a place where you love the subject matter, you enjoy the people, you respect the people, completely inspiring to me as well, and I’ve been out of college for a few years.

Lynn: So have I.

Matt: I can’t wait to spend more time together, and it’s incredibly impressive what you’ve done in your rise through NYSE, this broader role, and what you and your teams are doing on the front foot to innovate in areas like AI and generative AI in particular. So Lynn, thank you very much for joining us on the podcast, and I look forward to seeing you soon.

Lynn: Thanks for having me.

Coral: Thank you for listening to this week’s episode of Founded & Funded. We’d love you to rate and review us wherever you get your podcasts. If you’re interested in learning more about the New York Stock Exchange and what they can do for companies, visit their site at www.nyse.com. And if you’re interested in attending our IA Summit and hearing more about IPO trends from Lynn Martin, visit ia40.com/summit. Thank you again for listening, and tune in in a couple of weeks for our next episode of Founded & Funded with Statsig Founder and CEO Vijaye Raji.