Fireside Chat | Nathan Lile (CEO, Synth Labs), Michael Stewart (Managing Partner, M12)

    6:35PM May 30, 2024

    Speakers:

    Keywords: ai, models, applications, data, technology, generative, startups, investor, microsoft, tooling, systems, question, customer, enterprise, investment, people, companies, nathan, large, alignment


    Okay, I think we'll get started.

    Test, test. Awesome. Ready to get started.

    Let's do it.

    Well, good morning, everybody. Thank you for coming. My name is Nathan Lile, and I am the CEO and co-founder of Synth Labs. We're doing research into frontier AI alignment and building continuous evaluation, alignment, and customization systems. I'm pleased to be joined today by Dr. Michael Stewart. He is an investor at M12, where he focuses on disruptive tech and the future of AI computing. He is also an inventor with over 40 patents, I believe, which is just phenomenal. So thank you for joining me today, Michael.

    Thank you for the kind introduction, Nathan. I'm Michael Stewart. I'm a managing partner at M12, Microsoft's venture fund. The areas I cover are thesis development in generative AI, gaming, and deep tech.

    So I guess we should start with AI. It's not new, right? It's been around for quite some time, but it's definitely evolved as a concept in waves, and I imagine that your investment themes and the way you think about AI investing at M12 have had to evolve with these waves.

    Yes, exactly, Nathan. So M12, being Microsoft's venture fund, has a mission to find signal in the commercialization of new technology: the junction between emerging technologies and disruptive business models that can grow quickly and become potential partners to Microsoft sometime after we invest. Long before I joined M12, there had been a very strong thesis around AI investing in a prior era of AI: convolutional neural networks, computer vision, predictive analytics. This was really about finding new ways for AI to improve human workflows, to add to the insights that humans were deriving using software, but doing calculations that were more complex, synthesizing from large amounts of data; that was literally the Big Data era. For us, the theme started to shift around the time we were confronting the inflection of the metaverse. And I know everybody remembers the metaverse, right? The metaverse, to us, is still something that needs technology and a great experience to make it happen. Those of you who are gamers, who spend a lot of time in the video game world, have experienced realities in the virtual sense that are expansive. They may not be as deep as they are wide; they often have a procedural generation aspect that is interesting for some period of time, and then it quickly saturates or runs out. And one of the theories I had was that technology can really make a difference in this type of presence by providing a bottomless, surprising, but also integrated experience in creating and then also serving games. That was really the insight that led us to invest in two different companies in our portfolio. One of them is d-Matrix, which serves transformer inference at unprecedented efficiency. The other is Inworld, the leading provider of NPCs for video games.
    And I just want to say, these investments were made toward the end of 2021. This was around the time that GPT-3 was in general availability, and there was not, at that time, really a strong pull, a strong motion around what generative AI might mean in the startups-and-investing sense. It took startups, it took entrepreneurs, to connect the technology to an application that customers wanted. And to me, that's the central observation we made: there is probably much more to come. There are many more unions like this of technology and use case that would be mutually beneficial. Great applications provide the direction for new technology, and the progress of the new technology improves the experience of the application. That was really the theory behind the investment in Inworld, and it led to a great partnership with Microsoft. But during the period when we saw these new generative AI startups emerge, we have seen a couple of different waves of understanding how the go-to-market motion and the business model instantiation matter tremendously, alongside the technology architecture underneath. So in short, Nathan, the main thing we came away with from that rise of generative AI was that you need both components to make a new market. If you want to make this large TAM with generative AI, we need the people out exploring and building great business models in tandem with the technology development in order to sustain the investor motion.

    It's incredibly fascinating that generative AI was kind of a necessary solution to supercharge the metaverse. And I think that's one sort of category. Would you call that an application, a consumer-facing type of category, the metaverse, or is it its own category?

    Yeah, and great point, Nathan, because I think for the large part, the question is: is generative AI really a B2C-first motion, or is it an enterprise-first motion? This is still a key question that is not answered. So I'll tell another story, and I think some people might know this. At the time of the introduction of ChatGPT as a product that people could play around with using an interactive, free-to-use chat window, there was no indication that there could be 100-million-plus MAUs waiting out there to adopt it. It was seen as more of a progressive ramp of making it easier and easier to use. But what we found is that we actually don't know that much about the public's enthusiasm for generative AI. All we can see is that with every step forward in making it easier to access and easier to use, there's another large group of users waiting to be activated. And that's why, at times, our thoughts as investors still gravitate toward strong enterprise-aligned business models, because the risk of customer churn is lower. These are things we learned during the rise of SaaS and vertical SaaS: once you have product-market fit with enterprise customers, the risk of churning those customers is lower than it is in a B2C setting, where there's obviously a much larger numbers game, but it's harder sometimes to retain the customers you acquired. So I think the jury is still out as to which is the larger and enduring piece of this, but certainly we at M12 are very enthusiastic about the B2C opportunity.

    Awesome. Well, I think that's a great segue to dig in a little bit more into your investment themes and some of the categories we can define here. You've got tooling, you've got the application layer, and you've got infrastructure. Do you have an investment theme that's consistent across all three, or do you like to break these down into subcomponents?

    So in the beginning, I think...

    This is what happens when you talk for a living: you lose your voice sometimes. You know, in the beginning, one of the natural inclinations is to look for investment opportunities in such a rapidly emerging space where there's a lot of entry of startups, a lot of startups building new applications. Sometimes the common theme for investors is to go back to the picks-and-shovels model, which focuses on backing the best infrastructure that enables many, many applications to be built. If you track stats from CB Insights or PitchBook, it's something like 10,000 to 12,000 new startups formed in the time frame since we've had this generative AI Cambrian explosion. So are picks and shovels the strongest play here?

    Yeah, do picks and shovels have a greater chance of finding product-market fit? Do they have a sustainable advantage? I think those are more technical questions that M12, being backed by Microsoft as an LP, is usually pretty well equipped to assess. And in the tooling space, whether it's to build the largest foundation models for generative AI, to assist application developers in improving them, or even to help integrate them into applications, what we've learned is that it's easier than ever to get developers to try new tools, because they are hungrily looking for whatever gets them an advantage as an application builder over their competition. So in this era of picks and shovels, substitutability is probably higher than what we may have seen in the app development era or before. For that reason, we're careful to invest in AI tooling companies where we can see a sustainable competitive advantage, especially where we can bring in Microsoft as an eventual partner to integrate into tooling platforms.

    So there's a big appetite to try new things. And as part of that, from my perspective, a lot of the companies at that layer are also trying to invent new form factors for how developers should interact with these brand-new systems. I think we see something very similar happening at the lower infrastructure level as well: we're having to rethink the most primitive components of what we've abstracted away for so long. Are we entering a brand-new era of fundamental building blocks here?

    Yeah, I think that level of abstraction is where people should be thinking, because if you are focused on the day-to-day advancements occurring in generative AI, whether in tooling, applications, or infrastructure, you could go crazy trying to keep up with who is in the lead. And that's true not just for people who are buyers or users of this technology, but for investors who are doing this as their everyday job. There is simply no way to back and guarantee that one horse will win the race, especially in tooling. The other thing to say about working on tooling as a startup is that you need to be conscious of the market served, the size of the market served. In some of these cases, we see tooling that is rapidly embraced by developers, visible in your GitHub stars count or other traction metrics. Yet the thing you don't want to be doing, and as an investor I look for this in our opportunity cost, is to be 10% better, or even 50% better, than the previous leader. You need to be going in a different direction, where you can move quickly and establish a lead of your own. And for that piece of the puzzle, it's certainly something where we hope Microsoft can make a difference for our portfolio companies. But for the whole ecosystem, it's where we see the role of AI influencers greatly affecting how these recommendations are adopted, evaluated, and then at times dropped for alternatives. So these are risks that both entrepreneurs and investors should take into account.

    The space is moving incredibly quickly, and in just the past three months, I think a lot of us have had to adjust some of our priors based on changing form factors, on innovation, and on entrepreneurs taking some big risks to try to bring new products to market. It's a really exciting time. I imagine it's a little bit stressful as an investor to try to keep up with that pace. I know you have invented an interesting framework for how you think about potential investments. And we're having some microphone issues here. Are we all set?

    Is this a little bit louder? Okay.

    Perfect, perfect. So within that space, I know you've come up with a framework that helps to generalize, that you like to use to categorize the way you look at principles of investing in these areas.

    Yeah, and let me just close out on some of the categories too, because for investors that are focused on tooling, sometimes it's in the persona of the investors themselves: as technical personnel, we sometimes identify most strongly with other engineering-like tools. But I think if you really look at the unit economics of the situation, applications garner the most value. It has to be the case, right? You cannot pay off the large investments in model development unless applications are more profitable, thereby driving usage of the models. Yet, to the point you were just asking about, that is the area with the most competition. The ease of building on top of these APIs leads to more startups being formed, leading to more competition. So what are the elements that could help an application survive? Because you could be first to market in a vertical, and these verticals could be anything from healthcare to legal services to creating code. There are four elements that we recommend people take into consideration for a sustainable competitive advantage in applications, and the first one is clearly the data decision. Enough has been said about data being important, data is the new oil, there's no reason to belabor it further. But specific, accurate data takes on a new, almost asymptotic importance in this era that can't be stated enough, because in the architecture we work in, with multi-head attention, you're largely unable to escape the intelligence or the ideological scope of the data you're using to train. Now, there are ways to improve that in post-training, but architecturally, with the current paradigm, it's going to be really important to do a lot of curating of the data that goes into training the model.
    When you're doing fine-tuning, a lot of work and thought needs to go into what synthetic data goes into fine-tuning the model, on and on, to the point where you really start to circle back and think: can I redesign the model in the first place to have more relevant data as a training corpus? This is the basis of families of models like Phi-3 that we've seen Microsoft develop. But even large language models will adopt this. Even much larger-scale LLMs will adopt the same kinds of practices because of the overhead costs. So I think this is an inevitability, but it will lead to a necessity for strong data acquisition and management strategies for any model builders and startups that incorporate models into their business, and the variety and diversity of these kinds of companies is where we're really the most interested. A couple of our recent investments in the space, like Unstructured and Datology, speak to the direction we think they'll go. Beyond that, the next feature we really look for is in the dividends that pay off for using the AI application. This is part of the four Ds framework we've talked about in public a couple of times. This payoff is also essential, because, back to this question of the investment in creating the model and in incorporating the APIs into a business model: if the customer is not seeing an immediate financial payoff from using AI, they will be on the fence as to whether to keep using an application or to keep hunting for something new. That's an important moment to not lose.
    If you're an application developer that has captured the customer, you want to be at this early stage of the learning curve to build this data flywheel, but also to capture those customers' preferences, because that will further feed into the valuable data that's far more important than the generic, bulk data that has been talked about, maybe too much, compared to the accurate data. The conversation over the next year is going to shift dramatically toward: what is the actual data that you have? What is it worth? What does it relate to in the end use of the model? The last two factors for an application developer to consider are distribution and delight. I just want to mention them both. Distribution is a factor I think we can help with tremendously. At M12, we have Microsoft as a partner for our startups to reach enterprise customers at a much earlier stage than they might otherwise be able to, and we can keep them in the right conversations with those enterprise buyers who are saturated with choice, who are unable to make decisions these days because there is too much noise out there. That is something we want to take out of the equation for any of our portfolio companies and any of the platforms we collaborate with. And then last, it's kind of up to you. It's up to you, Nathan, and it's up to any of you who are founders: are you really generating the delight that makes the customer want to stay where they are? This is the case whether it's enterprise or B2C, because, again, the tailwind behind us all is a continued curve of technology development to improve performance and drive down costs. There will always be a factor pushing to substitute new technology. It will come down to customer preference every time, and that's not a rule unique to AI; it's an old rule of business.

    But I think AI has a unique capability to elicit new types of delight. Is that something you've noticed, that the type of delight, the flavor of delight, is different with generative AI?

    I was just entranced with some demos a few weeks ago for speech-to-speech. I don't want to mention the companies, but some of them are partners to us, some of them not. This is a moment I've been waiting for for a long time, because I think it's one of the times when we think about our relationship to computers and how old some of the structures we have are. Do I need a programming language? Do I need a mouse and keyboard? Do I need this kind of peripheral technology to interface with the computer? Or could it be entirely natural? This is a tantalizing possibility that's been out there for quite some time, and I think it might go beyond just the technology developers' focus to really show an easier and easier pathway to "I can make the computer do what I want without really knowing anything about what's operating underneath," and to unlock something quite disruptive.

    Yeah, delight almost blurs the line with what is magic here, and these systems do feel very magical. And the consequence, in some ways, of a magical, delightful system is that you may not know exactly how it's working. The security, the safety: how do you think about those topics within the scope of alignment, of being confident that these systems are actually behaving the way you want them to, with behaviors that are aligned with your expectations, when they're doing these delightful, magical things?

    Part of our obligation and responsibility when we're investing cash on behalf of Microsoft, who give us money to invest in things that are a bit ahead of the strategy, is that we need to return more than we were given and also show them something they didn't know about. It's a pretty fun but difficult proposition in today's world, but it comes with expectations that everything we invest in, everything we bring into the portfolio, must be within guardrails of safety, security, and alignment. In some of these cases, we've made direct investments in startups building application security, building LLM security; HiddenLayer is a great example of this. As we move further into many AIs at scale, and being able to do this type of alignment work on many different AIs customized to different people, that's absolutely where we feel there's a great opportunity to grow new markets and new technologies. And I wanted to say that is the basis of our investing in Synth Labs. I'd like to invite the audience to ask questions after this, but I want to give you a chance to explain exactly what this type of RLAIF, RL at scale, could mean for a world of many AIs that are also steerable using the technology we develop in our platforms.

    Absolutely. I think we fall into that forward-looking category when we think about alignment. We also think about alignment with respect to systems that are becoming more and more capable, and as those systems begin to take more and more cognitive workloads off of our plate and carry them out independently, how can we continue to be confident in those systems' behaviors? But also, how can those systems learn from the human in the loop? How can they benefit from learning about our preferences? How do we bridge the gap between really large models and the consumer, the member of the organization? Bridging that is really a continual problem, because our preferences change, our objectives change; all of these things drift, evolve, and mature over time, and these AI systems need to be able to capture that and learn from it. So this does come back to data in many ways. This is a continuous data problem. It's a filtering problem. It's an evaluation problem: how can we scalably run evaluations in parallel on these massive systems that then align them with the objectives you care the most about? These are deep research questions that we're working on, from an algorithms standpoint as well as a fundamental architecture standpoint, but we also see a lot of immediate alignment concerns that we can build systems for today, and on that note, we are actively building systems that increase AI alignment. I believe we have some questions from the audience now.

    yeah, so we wanted to make sure the audience had a chance to ask questions. I'm not sure if they get a microphone or how does it work.

    Maybe raise your hand if you have a question.

    Please raise your hand if you have questions for the speakers.

    Hi,

    A bit of a question for both of you guys. So, looking at how AI has developed over the last couple of years, the last couple of months, and how it's grown exponentially: do you guys have anything that you would want to see in AI, or want to be cautious of, as it continues to develop?

    Was the question: are there some things we would like to see, and some things we're cautious about?

    Yeah, so looking at how it's grown, and all of the things that we want to be careful about: is there anything that you guys would consider being more cautious about as AI continues to develop?

    Yeah, let me answer from the investor point of view, and then I'll let Nathan comment too from Synth Labs. Certainly the most important thing I see developing over the next couple of years is a continued decline in the cost of serving tokens. For those of you who are financial model builders, I've just been delighted to see essentially a hyper-Moore's-Law develop around the cost of serving tokens, through the variety of not just different hardware that's available, but inference engines and decentralized networks. One of our investments is in Foundry, which provides a decentralized AI cloud. All of these provide tailwinds to application developers, for whom an emerging price discrimination will appear, which is exactly what you should expect to happen in an efficient market: those users who have a very high value and a high willingness to pay should start to get access to more highly specialized and more expensive AI services, and they'll choose those services for their applications. And then at the other end of the spectrum you go to lower and lower cost. That is actually one of the most important things we all must be aligned to, because the moment you stop being able to see a line of sight to lower costs, that's actually worse than anything else about the technology's progress. The flip side of that, what we'd be concerned about, or proactive about, would be trying to prescriptively say what the AI system of the future will be like. What will the applications be? Is it going to be more chatbots? Is it going to be more information retrieval, log summarization? These are all applications that are making money right now. And I can see at times the thought that what we imagine is just those applications that are making money now, but much, much larger.
    To use some analogies, that would be missing the Ubers and the Lyfts of this era; it'd be missing them by focusing on phone calls, voicemail, and SMS. So the thing I'm really worried about is applying the clamp too early to what the space of this is, because I'm far more convinced that it will reach this human-interaction piece sooner, and that it will be more explosive.

    So I'm very optimistic about the future, because I don't know how to live in the world and not be optimistic that we can solve the problems we sometimes create for ourselves when we're inventing brand-new technologies. On a lighter note, a lot of times we talk about existential problems, but a very real concern of mine is that as these systems become more and more agentic, and they're creating more and more synthetic outputs, if they run off the guardrails, we're at risk of creating a huge amount of low-quality data pollution that leads to continual bloat in broader systems: in data warehousing, in GitHub issues, in code bases. So the quality of these agentic systems' output is very important, and it's very important that you can evaluate that quality to decide whether to throw it away or actually keep it. I think that's a somewhat under-addressed problem, and I've already seen it happening with some of the AI-powered workflows, specifically with code: you do get this bloat in total volume. And the only real solution for this, in my opinion, is probably more AI, because no human can actually go through all of that data. I know we had a couple of other hands up.

    Great conversation. For some of us that are building AI-native enterprise applications on top of foundation models: what are some of the legal challenges that we should be aware of, and how are some of your startups thinking about commercializing these apps while they are built on foundation models and some of the legalese around them is not very clear?

    Yeah, this is a very important question. Just to paraphrase what you're saying, if I got it right: what are some of the elements of the playbook that win in enterprise gen AI? Because in some ways these are the most prized customers, but also the most saturated. So what can a startup do to incorporate elements of sustainable advantage there? Back to the data question, this is essentially the essence of our investment in Typeface. It's very important to design your products to think end to end at the time when the technology is appropriate for it. If you remember the first waves of "here's a way to generate text, here's a way to generate images": in a sense, these are extremely flat applications that occupy one part of the experience, of the technology stack, the experience alone. But you need to really reach back to the relevant data to ensure the output is high quality, and for that, you have to have experience building end to end. In other words, you can't just say it like I'm saying it out of a microphone or on a slide deck. You need to have experience with how those secure file systems that enterprises use actually work, because you're going to need to convince this enterprise buyer that using generative AI and its output isn't going to risk your brand equity, risk security, risk getting fired, sued, prosecuted, what have you. These are no joke. This isn't like the image with seven fingers on one hand; this is potentially crossing regulatory, legal, or ethical boundaries of a corporation. So right now is when you start to see more of these end-to-end, what some people call workflow-oriented, applications, because you're going to have to convince the customer that the data you're taking, whether you're training a model, fine-tuning, or just prompting, is secure from the minute you touch it to the output. And there is no excuse.
    It's fine to wait if you can't show that, right? There's no pressure to immediately jump in and use any of these applications without that understanding being set up front. For that reason, for some of the technically oriented teams I've met in gen AI that want to tackle enterprise but have more or less said, "Well, we were doing B2C, and now we're going to do B2B": you need to staff your teams with experienced enterprise go-to-market and sales personnel. This is a non-starter without them. You can't even bring that kind of company into a room and get a serious sales conversation going without that kind of talent. So that's my advice: balance your technical acumen with experience in selling to enterprise as a minimum requirement to getting the conversation started.

    Hey, Michael, thank you for this chat. My question is on the foundation model side: how do you think about building a sustainable competitive advantage in an open-source world? Microsoft obviously is invested in OpenAI, but is also invested in other open-source companies. And the second, kind of related question is: how do you see competition between large language models and small language models? It looks like the OpenAI view is that larger is better, but what is the M12 view on that conversation?

    Yeah, and I'm sitting next to somebody who can definitely comment on open source and open science. So let me just say, at a high level, as M12 we are totally interested and free to invest in companies with both closed-source and open-source frameworks. In fact, we have a dedicated GitHub Fund for the purpose of backing very hot, promising, but early-stage open-source-repository-based startups. And our ethos has been, again, to think about where the largest profit pools are, which is in the application layer. As you get to the right product-market-fit volume and the customers start to understand the total cost of ownership they need, open source provides a viable alternative to using closed-source models and their API costs. It's up to the application developer to make that choice. The more choice, the better, and frankly, this provides a pushing force on the closed-source model companies to stay ahead; there's no other way to get the premiums that they get without it. So we want to magnify and amplify the means to use open-source tooling, models, and frameworks maximally, to provide that choice to application developers. But Nathan, talk about your background; I think you've got a better perspective than me.

    Well, my co-founders and I have been involved with open science for quite some time, and we define open science as doing research and extending research artifacts and research questions to the broader community, so that these cutting-edge AI research projects that often happen behind closed doors can become a conversation and can have community input. We think that's incredibly important, specifically for black-box systems, specifically for alignment; we think that's a conversation that should be extended to a global community. On the other hand, these models are like engines, and a frontier model like GPT-4 is a really powerful engine, and it doesn't necessarily need to be applied to every single problem. So there are very specialized use cases for very specialized models. What we see with the open-source models is that you can do an enormous amount of customization with them, and you can distill them down and use them even more efficiently. So I think the industry just needs to continue to mature, and these use cases will become more and more nuanced.

    Let's take Nan's question.

    Thank you, Mike, and I really appreciate it, Mike and Nathan; you gave a really great talk. I have a question regarding data. You know that all the model builders are racing towards AGI, and it's not surprising that for next-generation models, the publicly available data will be exhausted for pre-training. Meanwhile, there are petabytes of proprietary data on the enterprise side. For example, JPMorgan has about 150 petabytes of proprietary data that is not being utilized yet. So how can the hyperscalers, such as Microsoft, absorb this proprietary data into the next generation of model training? I guess this is a question for both Mike and Nathan.

    Boy, you asked the easy question. So I've actually said on a number of occasions that, from a business and investing point of view, we should be careful about worrying too much about these Chinchilla ratios, or chain-of-thought reasoning; they need to become part of the mix at the right time. And certainly, the future of this is probably going to be some alternative to multi-head attention, to transformers, emerging when you have the scale and whatever data is needed to make it viable. And we're definitely going to embark on a new data capture strategy for the built world, for our everyday environment. That's going to require sensors and robots (there's a robot walking around the show) that can just see things and make sense of them, without having to do the RLHF and the labeling in these gigantic jobs over and over. That is definitely thinking ahead, but it's certainly a wall we can already see with regard to text. Now, what about the 3D environment? What about modes of data that we don't consider text-like? I'm talking about things like physical data, vibrations of the Earth, swings in temperature, and combining them all together in a multimodal way. That's a problem we need to approach slowly, but we're going to get there with text first. Even though we haven't maximally exhausted the amount of text we can see, the differences are going to become less and less, and it'll become more and more important to look for these asperities in the quality of data that lead to great-quality models. Automation is going to be needed to do that. And again, whichever sub-quadratic model it is, whatever it is that's going to be the next mixture to do that: this is back to the "too prescriptive" comment I made earlier. It's too early to say right now what it'll be, and I don't want to make a bet. Maybe Connor knows. Okay, I think that's it. But thank you very much for your attention. It's been nice chatting here on stage. A round of applause for Michael.

    We have one more question for you, but thank you, thank you for your time. You asked very good questions. So before we close things up today, we have some questions for you. A lot of people here are looking for sponsors or investors. AI is evolving very fast; how do you identify opportunities?

    Yeah, so for our own fund, we look for companies that have a sustainable advantage, such that it's not a major problem if there are dozens of startups in the space. It could be a technological advantage that helps them succeed; it could be simply executing faster. It's always been a combination of all those factors. But for our fund, we hope that an affiliation with Microsoft, partnering and understanding what an enterprise co-sell go-to-market could mean, and coming to us first with that idea in your head, is a giant step forward in how you would make use of tools like that. And we think that's going to be a durable recipe for success, no matter how much competition is out there.

    I see. And earlier you talked about responsibilities. So for the entrepreneurs and engineers here, what responsibilities must they have?

    Certainly. We're talking about data quite a lot, and what I said about getting access to this valuable data means being respectful, careful, and secure with that data; that's also something we look for proactively in our criteria. And then there's the use of the AI: is it really solving a customer need? Is it within legal, ethical, and moral boundaries? Because there's a lot of choice in what we can invest in, but it needs to abide by our responsible AI principles. Again, anything you might think would be out of scope or not cool for Microsoft is likely not going to be investable by us.

    Thank you very much. I know a lot of people here have patents, like yourself, and you have 40 patents to your name. So what's your advice for people on how to protect their patents and monetize them?

    Inventing, for me, was more about how to capture some of the usefulness, the utility, in what I was working on in a technical setting. It remains to be seen how much IP thickets are a part of this new reality, because they're rarely valuable one by one. When we say thicket, I mean many patents that interlock and provide a competitive moat against others using the same technology. I think it's TBD as to how much that matters in this world. In the atoms world, where I was before, it was a little bit clearer: you had a reduction to practice, you had the ability to analyze and reduce to the physical reality of things. That may arrive for AI. Let's see.

    Thank you. Okay, so one last question, but this question is not asked by me; I would like to give it to you. If you could ask a question of the people here, what question would you want them to bring home today?

    What's the most impactful thing AI could do for your life? If you're a busy person trying to juggle many tasks throughout the day, is there one thing where AI could save a lot of time, money, or both, that would make a big difference in your life? The technology is probably going to meet you there, if not fully, then at least halfway. Something that is a really big burden on you in your daily life is probably a burden for many, many people. I would say the technology will meet you along the way if you find the market first. So market discovery, in this case, is the gift to all of us who have different individual lives. Think deeply about whether your AI is serving something that is already known in the world, or whether it's serving an unaddressed need that people have.

    Thank you very much. Thank you, everyone. Thank you for sharing your wisdom with us; I hope to see you again. Thank you, everybody, for coming. Thank you, David Morris from Bloomberg. And I also want to thank one very important person here: I want to thank Winnie. Please come up on stage. She has been helping since the beginning of the morning. I was actually very impressed when I saw her, and I found out she's not just a regular volunteer running around helping everybody; she's actually a professor. Thank you.

    Thank you everybody for coming here.

    So I know you were supposed to be a speaker, but they did not arrange for her to speak, so now I want to give this opportunity to you. What would you like to say to the students and people here? I want to give the microphone to you. Let's give her a round of applause; thank her. Her story touched me the most today. I actually thought she was just an intern, and I was calling her around to do this and do that, and then I found out she's actually a professor teaching science. It's a great story. Thank you very much. You're a hero, and Gen AI is the future. Thank you.

    Thank you very much for bringing me up; I'm not ready for it. I'm just a normal volunteer. I'm a computer science professor from Cal State University, so I teach computer science. Many of my students went into industry, and some became professors as well. But today I'm very happy and pleased to see so many innovations here, and so many new graduate students joining the innovation and building their own startups. I think everyone here has the same task: to inspire our next generation, and to make ourselves better than who we were yesterday, so that we make ourselves better and better every day. Thank you, everybody, for coming here. We will have our afternoon session starting at 1pm; I hope to see everyone here again. Thank you.

    Thank you so much. Thank you for contributing every possibility. So, ladies and gentlemen, as we close the session today, please enjoy your lunch. Tomorrow I will continue to host the Alliance session; please come. Enjoy your lunch, and I hope we continue to inspire and make some new friends. Thank you. Thank you.