I just want to record my thoughts about ChatGPT and all the developments of the recent few weeks, especially the partnership between OpenAI and Microsoft. To recap what has happened in the past two to three weeks: OpenAI launched the GPT-4 API; Microsoft launched Microsoft Copilot, which integrates GPT-4 into the Office applications; Microsoft also announced Copilot X, which brings GPT-4 into GitHub Copilot to help people write code; and OpenAI launched the ChatGPT plugin system, which essentially becomes an app store. So there has been a lot of movement in the past two or three weeks that could impact the whole industry over the next few years. Since our company is also in this space, the development of the technology and the ecosystem around large language models will have a huge impact on storytelling. And here are my thoughts
about the future of storytelling and how we will find our position in the current space. Before we start to answer that question, the first bit of terminology I would like to clarify is: who are the different players in the AIGC ecosystem? At the bottom layer is cloud infrastructure: Google Cloud, Microsoft Azure, and Amazon AWS. This layer provides the computational power required for training and serving the large language models. Adjacent to cloud infrastructure there are also the AI compute hardware companies like Nvidia, which produce the GPUs used for both training and inference. But in the value chain, a hardware company like Nvidia will very likely sit behind the cloud infrastructure providers, so we can treat cloud infrastructure as the foundational layer. On top of that, there are companies that leverage those cloud infrastructures to train and serve models; these are traditionally called MLOps companies, and their tools are used to train many different types of models. OpenAI was a different kind of company even in the previous paradigm. Back then, the assumption people had was that every company needed to train its own model, which made it very hard to standardize on any one model; the best you could do was to provide a set of MLOps tools that the engineers in each company could leverage to train their own models. Because of that, there is now a new generation of companies that, instead of providing
MLOps tooling, provide pretrained models and expose them as APIs. OpenAI is an example of that, and so is Google with PaLM. These are all important players. This is a very important layer, called the foundation model layer. On top of the foundation model layer, a lot of companies are trying to figure out applications; Microsoft, for example, is leveraging the foundation models heavily across its products.
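To make "expose it as an API" concrete, here is a minimal sketch of calling an OpenAI-style chat completions endpoint over plain HTTP. The payload shape follows OpenAI's public chat completions API; the model name, system prompt, and helper names are placeholders for illustration, and a real API key is required to actually send the request.

```python
import json

# Minimal sketch of consuming a foundation model through an API instead
# of training your own model. Payload shape follows OpenAI's public
# chat completions API; model name and key below are placeholders.

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Build the JSON body for a chat completion call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def chat(prompt: str, api_key: str) -> str:
    """Send the request. Requires network access and a real API key."""
    import urllib.request
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of the sketch is the shape of the interface: an application company never touches training or MLOps, it just sends a prompt and pays per call.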
Our company is also in that layer, building applications. Right now, because the whole ecosystem is very, very early, there are a lot of new players in each of these areas. What we know so far is that the cloud infrastructure layer is likely to be the most stable one: Google, Amazon, and Microsoft will most likely keep their places in that role.
On the other hand, the foundation model companies are also very eager to go after applications, because the application layer is user facing, and that is where the money inherently flows in. Many new startups, in fact most startups, are also at the application layer, so the number of companies there is the largest at this point. OpenAI is trying to leverage that interest: it launched one of the first killer apps, ChatGPT, and it is also opening up a plugin system to capture the interest and creativity at the application layer, so that it can solidify its hold on the value in the foundation layer. That is how it can protect itself against the cloud infrastructure providers coming after it. As the ecosystem develops further, some foundation layer companies may go after killer applications, and some application companies may start to build their own foundation layer. So this is a fast-changing situation. My take is that it is very natural at this early stage for everyone to still be figuring out their positions. But there is a very clear value flow: users find applications that are powered by large language models, and value flows in; the application companies pay API fees to the foundation model companies, which in turn pay the cloud infrastructure providers. That is essentially how this ecosystem has evolved and how value moves through the layers. My belief is that eventually there will be companies that can do both, strong on the model side as well as on the user side, and those companies can keep strengthening their positions. Eventually a winning application will be extended into adjacent applications, building a platform around the whole thing. I think that is likely what happens here.
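The value flow described above can be sketched as a toy model: money enters at the application layer and a share of it flows down through foundation model API fees to cloud infrastructure. The layer names mirror the discussion; all pass-through rates are invented purely for illustration.

```python
# Toy model of the value flow: users pay applications, applications pay
# foundation model API fees, foundation model companies pay cloud bills.
# The pass-through rates below are made-up numbers for illustration only.

LAYERS = ["application", "foundation_model", "cloud_infrastructure"]

# Hypothetical fraction of each layer's revenue paid to the layer below it.
PASS_THROUGH = {
    "application": 0.20,       # e.g. API fees paid to the model provider
    "foundation_model": 0.50,  # e.g. compute bills paid to the cloud
}

def value_flow(user_spend: float) -> dict:
    """Trace user spending down the stack; returns gross revenue per layer."""
    revenue = {}
    inflow = user_spend
    for layer in LAYERS:
        revenue[layer] = inflow
        inflow = inflow * PASS_THROUGH.get(layer, 0.0)
    return revenue
```

Under these made-up rates, $100 of user spending leaves $100 of gross revenue at the application layer, $20 at the foundation model layer, and $10 at the cloud layer, which is why every layer is tempted to expand into the one above it.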