nikolai_subs

    7:59PM Jul 9, 2025

    Speakers:

    Razib Khan

    Nikolai Yakovenko

    Keywords:

    AI development

    OpenAI

    ChatGPT

    thinking models

    data labeling

    Meta acquisition

    AI hype

    AI efficiency

    multimedia understanding

    AI in business

    AI in education

    AI models

    AI innovation

    AI training

    AI applications

    AI

    self-driving cars

    Tesla

    Waymo

    LiDAR

    fleet turnover

    quantum takeoff

    genetic research

    human driving

    automation

    job transformation

    Uber

    AI chat therapists

    brain rot

    new products

    This podcast is brought to you by the Albany Public Library main branch and the generosity of listeners like you. What is a podcast?

    God, daddy, these people talk as much as you.

    Razib Khan's Unsupervised Learning.

    Hey everybody, this is Razib with the Unsupervised Learning podcast, and I'm back with a returning guest, Nikolai Yakovenko. He is the CEO and founder of Deep News, which is a news website that uses artificial intelligence to get you at the ground truth, and he's been doing that for a while. He also has experience with Twitter, now called X. He worked at Nvidia, however you want to pronounce it. He's got a lot of experience in ML and that sort of field. He was also a professional poker player. Those of you who've listened to this podcast know about his various peregrinations and journeys, career-wise, in his life, but mostly we talk about AI, artificial intelligence. And before we got on, I told Nikolai, we've been talking about this for three years, which is basically, you know, a generation in artificial intelligence time, in terms of what's happened, the expectations, the hype, the bursting of the hype potentially. And so I just want to say, you know, there were crazy people, and Nikolai definitely thought they were crazy, that I knew in various group chats who were like, oh yeah, the world is not going to exist in one year, because the singularity is going to happen, and God will be born, and we will all be turned into, like, super slurry, or, I don't know what they thought. So there have been these AI doomers who have basically been arguing that the intelligence explosion is happening, and all that stuff that Ray Kurzweil predicted, and it doesn't look like that has happened yet. AI is much, much better than it was when Nikolai and I first started talking, but my personal impression is it is starting to become ubiquitous in certain professions. I think a lot of coders use it for some basic low-level things, but vibe coding can't really create a whole application for you very easily, at least a robust one. And then there's a few jobs, like maybe translators, I don't know what they do now.
    Maybe they're still around, but translation got really good. I do have to say, you know, when it comes to video and music, AI, which is a somewhat different thing than the text-based LLMs, makes a splash every six months with some new release. But we still don't have our AI feature films yet, so I think we're still mostly in the potential stage. We're seeing change. We're seeing that the newest version from OpenAI is really good, I think it's o3 now, and they're noticeably better. But you know, when ChatGPT first came out, it was pretty amazing to people, because that was their first exposure. They went from zero to something, and now we're in the more incremental stage. And we're waiting to be wowed by the new gods, and we haven't been yet. I think we're still in the early stages. Yet the end of the world is not imminent. What do you think, Nikolai, of all that?

    Yeah, that's sort of a lot of stuff there. But

    As usual, yeah, I start off that way

    Yeah, it's hard to get into AI without sort of opening up a bunch of, you know, a bunch of parallel threads. But look, the big thing that's going on, obviously, is that you see the battle of the foundation model companies. There's more of them than I expected there to be, but they're there. And I think a few interesting things happened. One, OpenAI is clearly winning. Similarweb has really good stats that they release about this, I think every week, of what share of LLM traffic they're getting, and OpenAI has something like 85 to 87 percent, below 90, while everyone else is stuck in the low single digits. So, you know, OpenAI, despite losing all of their early people, other than Sam and Brockman, I guess everyone has left, you know. And yet, what is it, the Ship of Theseus, what's the analogy? You know, it's no longer the same ship, but it's -

    The Ship of Theseus yeah, yeah,

    It's a big, beautiful ship, it's been rebuilt. The lizard has, you know, shed its skin and grown larger. Whatever analogy you want to use, OpenAI is absolutely crushing it. I'm not saying,

    But I thought OpenAI was people. No, I'm just joking. Those of you who don't know -

    That's right, yes. Well, and then, you know, I was having this conversation earlier today with a mutual friend. And I think that if OpenAI, for example, fixed their corporate structure, became a normal company, and got floated on the stock market, I mean, I think it would go to a trillion right away. Just giving an idea of the impact, you know?

    Well, okay, so we've talked about this multiple times, but, you know, I do want to reiterate. So I pay for, what do I pay for? I pay for OpenAI, I pay for, you know, X Premium, Twitter Premium. I pay for Grok, I pay for Gemini, I pay for Claude, and I use them all. You know, they all have their strengths and weaknesses. OpenAI is probably the one that I use the most, their ChatGPT interface, and at this point, they are still the only one that has the distinctive brand. So when you're talking about artificial intelligence, they're the McDonald's. But Gemini, which is Google's, that's being deployed within Google search, right?

    That's why these numbers are confusing, because it's like, well, you know, yes, we're all exposed to Gemini because Google sort of rams it down our throats with search. And that sometimes is good, you know. In theory, we're exposed to Llama because, you know, you're on Instagram and it's giving you, like, Llama results. But you don't think about that. You don't think, hey, wow -

    It's not a brand.

    No, there's no brand, you don't think about it that way, you know. Whereas, definitely, you know, OpenAI is the McDonald's, the Chipotle, or whatever. You think about it like, yes, it's a burrito, it's a hamburger, but it's branded. You're using it. And I think a lot of people are coming around to what you're saying, which is that even the advanced users that sort of tried them all, even people like that, you know, it's not just the people who've only heard of one company. I mean, a lot of people are coming back to OpenAI, it's good stuff. And it doesn't mean the other models aren't good. But I don't think, other than Claude for coding and somewhat for API usage, I don't think any of them have really found a strong, consistent user base, but it doesn't mean they're not good. And some of these things haven't even come out yet, you know. Like, you have Thinking Machines, right? You know, Mira Murati's company that's supposed to come out with great stuff. And, you know, who knows? Like, there's DeepSeek. I think the interesting thing is, of course, DeepSeek is the only one of these companies that isn't sort of on the DeepMind-to-OpenAI coaching tree, so to speak, which is another interesting thing we should probably get into later. But look, OpenAI is dominating, and I think also, technology-wise, you pointed out the o3 model, the thinking models are also dominating. Early on, they were kind of slow and a bit expensive, and that's still the case. But I think people who are using a high-quality model are, generally speaking, using some sort of thinking model, unless they're using GPT-4.1, that's probably the one high-quality non-thinking model that's still really good. Obviously you can use a smaller model for something else, and there's video generation, which is a separate issue. But -

    What is a thinking model versus a non-thinking model? I know what it looks like in the AI interface, but in the guts, how does it work?

    Yeah. So the simple idea there, if we're being a little wonky, is the bidirectionality of it. The original GPT-3, right, the one that sort of started this whole show, it's just left-to-right thinking, just generating the next token. And maybe there's a little bit of look-ahead, there's tricks, but basically you're writing, and that's it. You're writing in one direction, like you're having a conversation. Whereas with the thinking models - and what people like us would do at Deep News early on, everyone sort of ended up adding their own thinking prompt. It's like, well, actually, you know, write a draft, write down the facts, try to rewrite it. We were basically all writing our own prompt. Like, hey, don't just write the first thing you're going to say. If you're going to write a poem, right, think of what it's going to be, and draft it and edit it, right? So at a high level, that's what the thinking models are doing. They're generalizing that. So it gives you an output that it, quote unquote, "thought about." But what it really means is that it writes a draft, edits it, thinks about it, and tries to fix inconsistencies, etc., without you having to specify that. And the benefit of that is, one, you don't have to specify it. And two, it's trained to do that natively, you know, as opposed to just giving the model instructions, if that makes sense, because -
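    For concreteness, the hand-rolled "thinking prompt" described here amounts to a two-pass call: ask for a draft, then ask for a revision of that draft. A minimal sketch, with a hypothetical `call_llm` stub standing in for a real hosted-model API (its canned replies exist only so the sketch runs offline):

```python
# Two-pass "thinking prompt": draft first, then revise the draft.
# call_llm is a hypothetical stand-in for a real LLM API call; it returns
# canned text so the sketch is self-contained and runnable.
def call_llm(prompt: str) -> str:
    if prompt.startswith("Draft:"):
        return "rough first attempt"
    return "revised, self-checked answer"

def answer_with_thinking(question: str) -> str:
    # Pass 1: write down the facts and a first attempt.
    draft = call_llm(f"Draft: list the facts, then attempt an answer.\n{question}")
    # Pass 2: check the draft for inconsistencies and rewrite it.
    return call_llm(f"Revise: fix inconsistencies and rewrite.\n{question}\n{draft}")

print(answer_with_thinking("Write a short poem about ships."))
```

    Thinking models bake this draft-and-revise loop into training, so the behavior happens natively instead of through an explicit two-call prompt like the one above.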

    My perception as the end user is less hand-holding.

    Say again?

    My perception as the end user is that there's less hand-holding, like the model just kind of runs, the thinking models, you know. And sometimes kind of in creative ways, actually. Whereas with the older ones, I don't know, it's like with the old Google search, you had to be very concrete and specific in your terms to kind of push it in the direction and get the results that you want, you know? Whereas with the newer thinking models, it's almost like people are having - I don't do this too much, but people are basically having conversations, you know. So, yeah, they're -

    Having conversations, although, for that one, I would argue you don't really need a thinking model, because the conversation will tend to be kind of short responses, and that's fair. And that is a big part of the usage as well. But I think most of the stuff, when we think about LLMs, is going to be like, hey, do some research for me. You know, edit this resume, answer this question, solve this problem, look up these facts, give me the 10 most popular X, Y, Z. And I think for those, you could write it in the first shot, you could sort of one-shot it, right? But gathering the information, thinking about it, drafting something - I think it makes sense. Like I said, the one issue with the thinking models is, because they already are doing a chain-of-thought kind of prompt, if you try to give it your own chain of thought, it often fails. So an ironic thing, which makes sense, is that if you do want to tell it how to think, you may be better off with something like GPT-4.1, you know, that is more susceptible to that sort of guiding, because it's a little bit more of a directed model. Whereas here, part of thinking is not necessarily following your ideas directly. You know, if you give it information, it's trying to, sort of, work around it. And you see that a lot, right?

    Yeah. And thinking about what it does, okay, I do have to say, in academia, which is not everything, but, you know, a lot of people go to college and stuff, that has really been kind of punched in the face by AI, because of the whole idea of writing a research paper. So what I do, and I've mentioned this before, and Nikolai, you know this: I do ask these models to write an essay like me. They're all like C+, maybe, you know, but that's fine for an undergraduate essay for someone that wants to pass. And so, you know, academics are having to bring back blue books and, like, rethink their rubrics and all these things. So I do want to mention that, because I think I pooh-poohed a little bit the transformative impact. But, you know, academia, higher education is a big deal. I mean, lower education too.

    It's a huge problem. Yes. I mean, we have this system of testing, and the point wasn't the test. The point, I think, was: do you understand the subject? Can you answer questions about it? And that was something that worked really, really well for decades, if not centuries, and now it doesn't work anymore, because you can just plug it in. You can use Cluely, right? You can do the "cheat on everything" thing. And that makes sense to me. Like, I remember when, some years ago, I was playing a lot of backgammon against the computer. And you could play backgammon two ways. You could just play, or you could have it suggest the best moves or fix your mistakes. And the truth is, it's really hard to learn if it's just constantly, immediately fixing your mistakes, because you're not training your brain to actually really think hard about the answer and figure out why. Like, there's something good about making the move and then analyzing it, as opposed to just being given the answer. But anyway, look, let's also not devolve this into our personal AI experiences, because I think there's a lot of interesting news and information about this. And I hear this a lot, even with Ben Thompson, you know, people will just get into their own personal use cases. And, you know, telling personal stories - well, it is what it is. I think the interesting thing, again, is that these thinking models are winning, OpenAI is winning. And then, obviously, the big news is Zuckerberg spending just billions on, like, hey, we're gonna have the best models. Like, Llama 4 sucks. You know, it was kind of an embarrassment, including internally, and they just decided that they don't want to do that anymore.

    Yeah, talk about what's going on with Meta. Meta, you know, used to be Facebook. Zuckerberg, he's like raining cash, like there's just green storms all around him right now. He's dropping, like, tens of millions, hundreds of millions, on people, right? Is what I'm hearing. He's poached, you know - if OpenAI is people, is OpenAI Meta now? No, just, you know, talk about what's going on there, why this is happening. Also, is this even rational?

    Well, I think it certainly makes sense, right? So one thing is, you see, in these big build-outs, whatever is being done with the AI, with the LLM models, it definitely keeps going up. Now you're talking about multi-trillion-dollar build-outs. It's going to something. We can have another interesting discussion - is it productivity? Is it brain rot? Is it conversations? Who knows, but it's happening, right? And they want a piece of it. So I think, to be competitive, they want a good model, right? OpenAI has the best model, but, you know, Claude has a good model, Gemini has a good model, xAI has a good model, though, you know, word on the street is Grok 3.5 had a lot of issues and probably isn't going to come out. So, I think what's kind of interesting about it - I mean, I do think that, okay, 14 billion for Alexandr Wang still doesn't make sense to me. I think the way we got there, the story was basically that they tried to bring in Ilya for 32 and he said no.

    Talk about Alexandr Wang and Scale AI, because it's a big acquisition. You know, now he's, like, the Chief AI Officer at Meta, I think.

    That's right. And you still have the Chief AI Scientist, Yann LeCun, who is still technically there, and then you also have Nat Friedman, who's, I guess, the manager in charge. I think there'll be more of a shakeout. But, I mean, the Alexandr Wang one was strange, because, again, why would you pay 14 billion for a person? You know, it's like, well, you're not paying for their ideas. You're paying because they're a recruiter, they're going to get their people. And what exactly does Scale have that's valuable, right? I mean, okay, anybody would have been happy to have Scale, it's worth it. They do the data labeling, which is incredibly important for post-training for these models. They did it for a lot of organizations. But at the end of the day, I mean, they had, what, a billion in ARR, which now is quickly going to zero, now that they're in Facebook and people don't want to work with them, you know.

    Why don't people want to work with them? With Meta?

    Well, if you're Google, if you're training Gemini and Facebook is labeling your data, then - they just don't want that, right? Yeah, they don't want them to know their tricks. But the bottom line is, it's like, still - I mean, I think it was like 1 billion on the path to 2 billion ARR. It's not that much, and you're paying a huge multiple on that revenue. But the point is, for 14 billion you could have done your own data labeling for, like, 7 to 10 years, as opposed to getting the company. So it's a little bit - that one is confusing to me.

    Yeah. So you mentioned Ilya Sutskever, who was the chief scientist at OpenAI. He had a big falling out. He was the guy that actually helped push Altman out, before he un-pushed him out and all that stuff, okay? And he's got kind of a distinctive hairline, shall we say. Okay, so he turned down all that money. What is your hypothesis for why he turned down all that money? That's a lot of money.

    Yes, so that's interesting, because their company's Safe Superintelligence, something like that - super safe superintelligence, I don't know, there's a lot of supers in there. But, yeah, they got offered, the story was, 32 billion. And again, they've not released anything publicly, because their whole claim is that we're not going to get distracted by releasing models and APIs, we're just going to go for superintelligence, straight shot. So maybe, look, maybe he doesn't want to work for Zuck. Maybe they think that they have something that they really like, you know. I mean, I think, obviously, going there would be sort of against the thesis of what they were trying to build, right? Zuckerberg very much wants a model for Facebook, to do Facebook's needs. I mean, they could also just be a sponsor, and sponsor Ilya to get the best model, but, you know, that's not what they wanted. They wanted the people. And in fact, Ilya's partner for that was Nat Friedman, who I would imagine wanted the deal. So then he just went directly to Facebook and left. I mean, he was Ilya's co-founder, so there was clearly some disagreement there, you know. And the rumor, I mean, I don't know if it was public or not, you read it in the news, is that he got, like, a billion, or something like that. Well, also buying out the fund.

    Wait, Nat Friedman got a billion?

    Probably. I mean, they're also buying out the fund, so he has a share in that.

    Well, I mean, good for him. Good for him. Because, uh, I've heard that he's been pretty butthurt that he isn't in the billion-dollar club. Well, I mean, look, there are all these smart people in Silicon Valley. And, you know, as you guys know, there's like a stochastic, random element in who's worth, like, 100 million versus 1.5 billion and all this stuff. And anyway, I've just learned that.

    And that's the thing that's sort of interesting about the Alexandr Wang thing, and then we can sort of move off the billionaire discussion. But, I mean, yeah, he's a super young guy who kind of does these obnoxious interviews. And I know a lot of people in this space don't really like him very much, but how much of that is just jealousy, right? How much of it's like, hey, I've been doing this for 10 years, I build things, you don't even know how to train a model, and you're getting billions. So it is very weird. I mean, Zuckerberg has the money, and it's not his money, it's the company's money. But, yeah, paying 14 billion for a person or a small organization does seem perplexing to me, because that could buy a lot of GPUs, a lot of people, a lot of data, you know. But again, it goes back to the question, why do you do this? And actually, I think some of the researchers they hired were a great deal, because, hey, before, they were struggling to build the models, and, you know, we kind of -

    Yeah, what happened to Llama 4?

    It just wasn't good. Um, look, it's a weird thing, because at a high level, everyone's sort of doing the same thing, right? There are a lot of sports analogies that I think tend to apply here, where, you know, someone comes up with, hey, we're gonna shoot more three-pointers, and then everyone starts hiring players who shoot a lot of three-pointers. And how do we do it? You know, there's a lot of imitation. So I think that OpenAI, or the people there, had this idea of the thinking stuff and how to do post-training a couple years ago, it just took a while to launch. So I think that if you're Zuckerberg, you don't necessarily need the innovation. It's not like you need to be better than OpenAI. You're just like, hey, can we take their systems and replicate them internally at a high level? Because it's all in the AI papers, you can read about it. But there's still a risk if you haven't done it before, you know. Like, they're not open source, and they're not giving you all the secrets. But at a high level, everyone kind of knows what they're doing, you know. So one thing that's notable is that most of the people that they hired from OpenAI worked on the Omni part of the project, you know, the o, like in the 4o, and things like that. So there were obviously some tricks for combining a model that operates on text and images and video. And that's actually maybe something that OpenAI has backed off of a little bit. A lot of people like you are just having a text conversation - why do you need this model to understand video? But at Facebook, that's something you really want, right? Yeah, the multimedia understanding. So they just wanted the good model, and they have a path to get the data. And I guess they brought in Alexandr Wang, who, you know, has the most knowledge, I suppose, about how to actually deploy a team of a million raters.
    Like, what are they labeling? How do you check that the work is good, etc.? So, like I said, it's very much a sports analogy, where you're like, hey, you know, the 49ers are winning by running this West Coast offense. Okay, can we bring in their GM and their best coaches, and basically sign their best players, and now we're the 49ers, you know.

    Yeah, we just switch the jerseys.

    Exactly. I mean, the big difference between sports and business is that in sports, the limitation is you can't have 10 quarterbacks, right? You've got to have one quarterback. The roster is limited. You have all these artificial constraints. In business, I mean, you can have 1,000 people, you can have 1,000 people playing quarterback, 1,000 people training models. But at the top, you still, generally speaking - people prefer single decision-makers. So you still have, like, a coach, I guess, but it's a coach who can deploy more players on the field, I suppose, if you can afford it.

    Well, okay, so in terms of OpenAI versus Meta: right now, Meta got a lot of the great people involved in building these models and all this stuff. Do you believe that it's gonna have a big impact on OpenAI, even though it has this great brand, in terms of the technology going forward?

    Good question. Probably not. I mean, I think I've been surprised. I wouldn't have bet against them, but I have been impressed how much OpenAI has maintained, you know, not just a high market share and the brand, but really, they continue to have led most, if not just about all, of the innovation in the space. And that is, going back to the previous analogy, the Ship of Theseus. I mean, all of the early people other than Sam are gone, you know. All the people who started it, including people whose names are not as famous - they built GPT-2, they built GPT-3, and they all left. So they've managed to replace them and keep building new stuff. Maybe there's less innovation, who knows? But it's a little bit like the brain drain thing. I mean, as people leave, other people can sort of step up. It's still interesting, you know, it has an impact on some level. I mean, the same thing happened before with Google: all of these great people left from DeepMind to go start OpenAI, and in the end, there was just more innovation.

    Yeah, I mean, Anthropic - you know, Dario and Daniela left OpenAI too.

    Yeah, that's right. That's right. They came from the OpenAI tree, and then, once they raised money from Amazon, they hired a bunch of Google people. So it really is a little bit of a thing where everyone comes from the same tree and is doing the same thing, you know. An interesting idea comes up, and people do it, and sometimes it works, sometimes it doesn't. So I think it's worth it for Zuckerberg. I think they just want to be a player. You know, they were not a player. They were losing their lead, their models were not good. Like, they lost that skill, that ability. Yeah, again, it's like that sports team where, all of a sudden, the Cowboys are finishing 3-and-15, and Jerry Jones can't have that. You can't have the Cowboys be in last place. That's not good.

    Yeah, yeah, for sure,

    Put a winning team on the field, you know. And that's what they're doing here, which is smart.

    So I saw something, by the way, from Kache, or what is it? The guy, the X engineer that got fired recently. He was tweeting, huh -

    Yacine?

    Yacine, whatever, yeah. So he was talking - this is a guy on social media. For those who don't know, he's - I don't know why he's huge, but he's huge on Twitter.

    Roon made him famous.

    Oh, was it Roon? Oh, he's part of the Roon tree. I knew Roon back in Clubhouse four years ago, when he was, yeah, whatever. But so he was saying that he thinks that ultimately Google will win, because they just have the most resources. Do you think that's true?

    Yeah, no, I know what you're talking about. He's got really good tweets about this. So basically, Yacine's whole point is that, at the end of the day, it's all just data. That's basically what he's saying, which I think is to a large extent true. I would agree with that. I mean, look, everyone in AI sort of discovers or rediscovers the bitter lesson. I'll recap it, and you can still go read it, it's great. But basically, the bitter lesson is that people try to do clever things in AI, and then they're beaten by people who just build things with more scale and put the investment into scaling, whether that's hardware or data processing. And that's the thing about this: getting big models to even converge is difficult. I mean, I was a small part of that at Nvidia when they first started really doing model parallelism, where the models got so big you couldn't fit a single layer on a single GPU. So you have to have the communication, and blah blah blah, between multiple GPUs. That was a lot of work, and that was very specialized work. And that's the kind of work that DeepSeek did really well, and that's the kind of work that OpenAI did well. Let's put it this way: the people that Zuckerberg is hiring, they know how to do that. So that's not easy. No, the bitter lesson isn't that it's easy, it's that you do all of that work to scale things. And what are we scaling? Well, at first it was web-scale internet data, right? And then it's this thinking thing: okay, you can have the model try harder and sort of verify its outputs, and then train the model to internalize that, right?
    Like, you can imagine a little bit of a feedback loop, you know, for something like code or math. We have a lot of these - they're called one-way functions - where it's hard to get the answer, but once you get it, you can verify it, right? A lot of these solve similar things, you know. And then the other thing is just scaling the human labelers. Like, we need a bunch of good responses to this, so go write them, or have the AI models try it and choose the best ones. And little by little, you sort of push it in that direction. So I think his point is that, yeah, at the end of the day, it's still data and scale, and Google has all the GPUs and the TPUs and, you know, the money to do it, and the people, and it's hard to beat. I mean, I don't think he's absolutely right. Like I said, I think OpenAI is clearly winning by any metric. You know, they would be a trillion-dollar company if they were free-floated, the fastest ever to get there, obviously. I mean, me and you are old enough to remember when there weren't any trillion-dollar companies.
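    The "one-way function" loop mentioned here, where producing an answer is hard but checking it is cheap, can be sketched with a toy verifier: sample candidate answers, keep only the ones that check out. The quadratic below is just an illustrative stand-in for a real code or math checker, and the hard-coded candidates stand in for samples from a model:

```python
# Verify-and-select: generating a solution is hard, checking it is cheap.
def verifier(x: int) -> bool:
    # Cheap check: is x a root of x^2 - 5x + 6 = 0 (roots are 2 and 3)?
    return x * x - 5 * x + 6 == 0

candidates = [1, 2, 4, 3]  # stand-ins for model samples
verified = [c for c in candidates if verifier(c)]
print(verified)  # [2, 3]
```

    In a real post-training pipeline, only the verified outputs would be fed back into the model, which is the feedback loop being described for code and math.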

    Yeah, yeah. That's weird. Trillion dollar

    I mean, OpenAI has technically been around for 10 years, but not really, not the OpenAI that we know, yeah.

    Yeah, it was, like, it was a nonprofit, you know.

    That's like saying Toyota started as, like, a fishing village or whatever. Like, no, that's not worth talking about, you know, talking about the pre-World War company. Like, OpenAI is a modern company, it's growing really fast, and it's done a really good job. So, you know, I don't think they're going anywhere. And so, in that way, I don't know. Can Google actually build something that's better than OpenAI? I mean, maybe, maybe not. We'll see. I mean, I think the more likely thing is that these things get slightly commoditized, although in the short term we're not seeing that. These models have different personalities, different use cases, you know. Like, something about Gemini just feels different, you know.

    I know people at OpenAI like Roon and Sam Altman have been talking about, oh, they think they have AGI internally. And now in the media, they're talking about this as a race for superintelligence and all this stuff. And it's like, what does that mean? I mean, we've talked about this multiple times. What are they talking about here? What is superintelligence?

    Well, definitions keep changing, and I think part of the problem is that, I mean, now I'm gonna repeat myself, but the issue is you can try to compare humans and machines in terms of intelligence, and it's worthwhile. You know, I'm not against giving it IQ tests or giving it PhD problems or whatever. But the problem is, with people you just have these correlations that really break down with machines. Like, it would be hard to imagine a human who's a chess grandmaster who, like, can't add numbers properly. But with machines, we've had that for 20 years.

    They're an alien intelligence in a way

    Yeah, and also some of the tool usage stuff was weird, right? Like, in many cases (I forgot who said this, but it's been said by many people, maybe many times, it doesn't matter), the AI will be, like, surprisingly good at higher level thinking and associations. Like, we're kind of shocked by how good it is, but then it'll still fail on Excel-type stuff, you know. So they basically trained this machine to actually be worse at computer stuff and, like, better at human things, you know. I mean, in his very interesting rant, Alex Good talked about these machines still being really bad at, specifically, like, hey, extract this specific number from this specific PDF. And it's still, like, randomly bad at it. Like, I ask stock market questions to Grok and o3 and they still get them wrong a lot, you know, like, they confuse time. It's not embedded in a world experience. So yes, to your point, alien, you know. But also look at the Veo 3 videos. I mean, maybe that's another reason to be bullish on Google. Those videos are definitely, like, winning the brain rot Olympics at the moment. Those videos are incredible, and yet, like, they get really basic physics wrong, and people just don't care, you know

    Yeah. No, no, they don't care. So, you know, Alex Good, brain rot. Like, do you want to talk a little bit about that? Like, how you feel about that?

    Yeah. So he put out this rant that I think is somewhat something he's been talking about for a while, which is this sort of hot take that, you know, okay, we were promised this AI is going to be really good for productivity, you know, like, GDP is going to go up by a lot, people are going to be much more efficient. You see all these companies claiming they're laying people off because of AI. And, you know, is that just an excuse? Who knows. But that stuff is coming along somewhat slowly. Like, you and I and everyone we know use AI as a productivity tool, but I can't say it's really changed our lives that much, and it still struggles at very basic things, you know. We get used to it, it's better, you know, but where is AI, like, actual use really going? And his hot take is that, you know, Nvidia is still a gaming company, that what it's basically building is these brain rot videos, chatbots, interaction, like, basically just mindless time online. And that makes sense. I mean, the AI is not going to be very good at writing like Razib, to your point, but I absolutely believe the AI would be quite, quite good at writing, sort of, you know, dinosaur erotic fiction.

    All right, is that a genre? I mean,

    Yes, that's a genre,

    Okay, I thought you were joking, but okay, all right, all right. Well, you know, can I ask you a question? Like, you know, I'm not paying that much right now, you know, like, $20 to $30, I think. I'm not paying for, like, the $200 a month plans. How much money are they losing on compute?

    I mean, it's a good question. Before, I would have said yes, they're losing money. Now I don't even know. It's very complicated, right? Because, you know, what are the prices of these things? What do they cost? What's the depreciation? You know, where are they getting capacity from? Yeah. I mean, Dylan mentioned this in his interview, but like,

    Dylan Patel.

    Yeah, Dylan Patel. Like, for example, the GPU prices that AWS charges you are, like, a joke. It's ridiculous how much they're charging, but then you can negotiate with them. You can pay a lot less. So it's hard to know how much compute is really costing people. I mean, it is weird. I mean, OpenAI dropped the price of o3 by 80% a couple weeks ago, which is great for you and me. They claim it's the same model. I don't know if I 100% believe them, but I think it's, like, mostly the same model.

    Wait, how much does o3 cost now?

    It's basically the same price as - So all their models basically cost the same now: 4.1, o3, 4o, it's all about the same. It's kind of interesting. I mean, it's complicated because it's, you know, this tier or cached or whatever. I mean, they'll never give you a clear price. It's complicated, it's confusing. But they dropped it by 80%. So, you know, was there a lot of profit margin, or did they optimize it a little bit, and they just didn't want, you know - Google has a very generous free tier now, yeah. I mean, yes.

    I mean, I feel like they're subsidizing us to get us hooked onto AI as part of our daily lives, which is fine,

    For sure. I mean, they're competing for market share, right? So, I mean, definitely, it's still true that the only company that's really making money here is Nvidia, you know, and then everyone else is running thin, you know. Like, it's not like CoreWeave is making huge profits either. I mean, they're doing these contracts and they're offering people really competitive deals, so they're just growing. But, yeah,

    Well, I mean, in terms of cost, like, a lot of that's energy input and stuff like that, from a technical perspective, I mean,

    Oracle, their stock went up a lot. They had really good results. I think they're doing well. I mean, they're always taking advantage of the fact that Nvidia doesn't want a single player, like Amazon, to dominate the space. So they make sure that sort of second and third tier players are able to have access to the GPUs, better allocation and better pricing. And, you know, they're 100% rented out. xAI both buys capacity from Oracle and built their own data centers. Oracle is doing really well in this space, supposedly.

    So, I mean, a question that I have from a technical perspective is, can you do the models in such a way that you use a lot less, like, I don't know, say an order of magnitude less energy. I mean, I'm just wondering, because, like, people are saying that, Oh, like, the spread of AI is going to mean that our energy needs are going to increase, but like, what if AI just gets more efficient?

    I don't see how. I mean, I think it's one of those things where they're not being that inefficient to begin with. It's like when someone says, hey, I can take an airplane and make it three times more fuel efficient. I'm like, I'm pretty sure physics isn't gonna allow for that, so I doubt it. But, you know, the thing about energy is that energy is priced very differently in different places in the world, you know. So Iceland or some other places with a lot of hydro have cheap energy. I mean, we know about this because the Bitcoin miners used to go to these places, and a lot of them do the closest thing to Bitcoin, which is aluminum smelting, right? Like, something that uses a huge amount of energy and can sort of be anywhere. So I guess I'm surprised that more data centers aren't going to these places. But, I mean, I think in the past, data centers were usually set up in places with, like, you know, good connectivity, but also good weather. And usually, I mean, even in my day at Google and Facebook, they were building data centers at sort of, like, decommissioned power plant kind of sites, and you're seeing a lot of that now. I guess people are more willing to put them in hot places now. Generally speaking, cooling is a big deal, but you don't want it too cold either. So, you know, we've talked about this before, but an ideal place that Facebook used to love putting their data centers was, like, Eastern Oregon.

    Yes, I remember that,

    And, like, apparently it's great for cooling, and

    There's a lot of energy there - the hydro,

    That's right, yes, and it's good weather. But apparently now they're building giant data centers in the Middle East as well, and in Texas. They seem to be more willing to spend, because that's the thing about this: apparently the vast majority, maybe not the vast majority, but a big percentage of the energy goes to cooling, right? It's not actually, like, running the chip. It's supplying enough, you know, cold air, and for the newer models it's all liquid cooling. So,
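The split being described here has a standard name: PUE, power usage effectiveness, total facility power divided by power actually reaching the IT equipment (modern hyperscale facilities commonly report figures around 1.1 to 1.5). A quick sketch with made-up round numbers:

```python
def pue(total_facility_mw, it_mw):
    # Power Usage Effectiveness: total draw / power delivered to compute.
    return total_facility_mw / it_mw

def overhead_fraction(p):
    # Share of total power going to cooling, conversion losses, etc.
    return 1.0 - 1.0 / p

# Hypothetical facility: 130 MW total draw powering 100 MW of servers.
p = pue(130.0, 100.0)
print(round(p, 2), round(overhead_fraction(p), 3))
```

So at a PUE of 1.3, roughly a quarter of the site's power never runs a chip, which is the overhead the cooling discussion is about.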

    Yeah, that makes sense. So I guess I was gonna ask you about

    To answer your question, like, can you run air conditioning 10 times more efficiently? Probably not.

    Yeah, well, I mean, so basically what you're saying is, these predictions that we are going to need to, you know, do more renewables, do more nuclear - we just need to produce more energy, because -

    Or maybe we do a deal with Canada. I mean, I know in Quebec they have a lot of hydro. I mean, in Toronto, they have a big cold lake, you know, you can use that for cooling. I don't know. Like, people will figure things out, I think. You know, I think maybe you should have an energy person on for that. But, yeah, what are they saying? Someone was saying something like 10% of US energy is being used by data centers right now, something like that.

    I've heard that, yeah, I've heard that,

    Yeah, it's still not that much, right?

    Yeah. But, I mean, some people are saying to me, like, 30% in like 20 years or so, I don't know - of the world, that's a lot,

    I guess so. I mean, like, you know, where I am - I'm in Florida. I mean, I think there's a few nuclear power plants that, like, run the whole thing. And there are still whole Eastern European countries that basically run on one nuclear power plant.

    Yeah, yeah. So, I mean, that's one of the reasons for nuclear and a diversified grid. So you wanted to actually talk about something: decentralized training. Tell us what that is and why it's important.

    Yeah, I wanted to get to that too. So this is, like, the exact opposite. This is not an attempt to save energy, but people are, like, building out - they have these old GPUs that are sort of, like, not cutting edge, or they're disconnected from the grid, and blah, blah, blah, from the network. There are actually even deeper things, where Nvidia has different rules depending on whether you're in a data center or not, different prices. They very much don't want people to use consumer hardware inside the data center. But that's a whole separate story. So anyway, the point is, as people have these assets - and really, like, the Bitcoin mining people - they're trying to figure out, hey, can we basically get this network together to train a model? You know, you need a lot of collective compute, but can you sort of pool that together to train something? It's still early. I mean, people have produced different demos, but it's kind of a fascinating space that I think is just, like, catnip for certain people, you know, because people like the mining kind of concept, people like training something, and it gets into open source, obviously. But also you get into interesting problems of verification: can you be sure that someone actually did their part of the job? What happens if a node is, like, bad or lazy or can't be trusted? So it's making people work on things like that, like, basically verification without redoing the work themselves. I mean, obviously, these models are probabilistic. You can't really reproduce a full training run. There are just too many RNGs; you can't do it.
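The point about not being able to verify a training run by replaying it has a concrete root beyond RNG seeds: floating-point addition isn't associative, so combining the same partial results in a different order, as different nodes inevitably would, gives bitwise-different answers. A minimal demonstration:

```python
# The same three numbers, summed in two different orders:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a == b)      # False: the bitwise results differ
print(abs(a - b))  # ...but only in the last few bits
```

So any scheme for checking an untrusted node's contribution has to be tolerance-based or statistical (or deterministic by careful construction), not an exact replay.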

    Okay, so, you know, I did this one podcast with Nicholas Cassimatis, you know, Dry.ai, and he was a professor at RPI for a while, one of Marvin Minsky's last students, and we talked about how AI has gone through multiple hype cycles. And I don't feel like the hype cycle has really broken if Meta is, like, you know, throwing tens of millions of dollars at, like, single engineers, and, you know, Scale AI is being acquired for that much. It seems like these guys really believe in the promise of AI, and they think it's going to, like, transform things, and whoever is the last company standing is going to make a mint. I mean, is that what you think is going on here?

    I think so. I mean, I think the part that's kind of interesting is on the application side. So, you know, most of the struggle so far has been around this whole system of training giant models, right? Even sort of the companies you've seen do well, like Scale, you know, they were really part of the cog of the system of collecting the data and training giant models, and even the decentralized training thing we talked about does that. So what I think has been kind of slow to take off is actual new businesses built around AI. I mean, you see these things, and I'm certainly not saying they're fake, but - I mean, Perplexity is probably kind of dying, and you would have a hard time listing six, much less ten, companies, you know,

    So was Meta approaching Perplexity? Is that what I heard?

    Yeah, so they were one of the companies Meta talked to, and Apple also reportedly has been talking with them. I mean, I guess they got offers, but it sounds like the offer they got was not big enough, you know. It'd be shocking if there isn't a number. Like, I can absolutely see Ilya being like, hey, we're building AI here, you can't buy us for any number. But Perplexity is a business that loses money and, like, competes with Google and is clearly not winning. So, like, there is a price. Why they couldn't reach that price, I couldn't tell you, but I'm surprised by that. I mean, I've maintained for two years now that they should just get bought by Apple, and that's a good deal for both sides. And maybe they'll come to terms. But again, they're not training their own models - Perplexity is not training their own models, they're not on that tree. And yeah, just, like, standalone AI features like that - I mean, you've just not really seen a lot of great new businesses, honestly. We'll see. You see, you know, a lot of other companies that used to train their own foundational models, especially before OpenAI really, really got their shit together, you know, like Cohere and things like that, that basically just became consultancies deploying AI, you know, they've kind of moved into this. You hear about things like Harvey for, like, law, and things like that, but I don't use it, I don't know. Specialty search engines - like, there are things that sound great, but I'm not using them, so I can't tell you. I mean, what new company or brand are you using that has AI deeply built in? I mean, you're not. I mean, even companies like Notion and Evernote and Asana, like, they went really hard into AI, but it didn't really change the product. Few things have been built with AI, but these things will happen. So yes,

    That makes sense. I feel like AI is a gimmick for a lot of companies.

    It is a gimmick, and it was definitely something you could slap onto your logo and do well in the space. What I mean is, like, okay, like in the previous thing, you talked about the self driving thing, right? And that's taken a long time, and that's barely here. But you had companies that launched, and, you know, Zoox, and, you know, the truck rolling downhill, and things like that - like, you had people trying to build a new car, or a new car company, around this concept. And it certainly worked for EVs, right? Like, EVs are very different from gasoline cars. It is a different kind of thing, still mostly made by the same companies, but not in China. BYD is a completely new company, right? That's taking over Europe, right? So for that kind of technology transformation, you would expect new businesses, and not just new products from the same players. And I don't think we've really seen that yet. So two things, I think. One, it is early. But two, I mean, I think maybe the foundational model race and the Zuckerberg checks are just sucking the air out of the industry. If you are Perplexity, you can raise a lot of money, but you're also competing with these guys, which is bad - like, direct competition - and also being offered billions to sell out, and you should be sold to Apple, and then, you know, so,

    So my understanding, just, like, from reading and hearing from people, is that in terms of AI, it's the US and China - like, everybody else is not really doing anything.

    So, like, you look at a place like Japan, which, like, loves technology and loves AI, but they're not building anything. It's weird.

    Yeah, yeah. I mean, so, I mean, is China comparable to America? Is it close?

    Yes, yeah. And you see it in the US. I mean, even when I was starting in the industry, you know, almost 10 years ago - I guess the question would be: just look around the AI labs, at the people who worked full time in machine learning and neural networks, you know, pre-transformer, right? What percentage of the professionals were Chinese nationals, people who had at least their undergrad education in China? And it was over 50% back then. And I would say it was actually pretty low in NLP, you know, and it was quite low in systems. It was very high in computer vision, very high in some of these

    NLP is natural language processing.

    But also, I mean, the other place that people from Chinese universities went to, of course, was finance. You know, the Chinese quant is sort of, like, a trope, you know? I mean, this is, again, an older thing. You just had a system of people going from top universities in China and Singapore, who've been getting into not just computer science, but data, you know, big data, AI, machine learning, quant. I mean, this has been going on for at least 20 years.

    Well, let me just say, these are Meta's recent AI hires - it's not just OpenAI: Johan Schalkwyk, Sesame AI. Alexander Wang, obviously, Scale. Daniel Gross, Safe Superintelligence. Trapit Bansal, OpenAI. Shuchao Bi, OpenAI. Huiwen Chang, OpenAI. Ji Lin, OpenAI. Hongyu Ren, OpenAI. Jiahui Yu, OpenAI. Shengjia Zhao, OpenAI. Lucas Beyer, OpenAI. Alexander Kolesnikov, OpenAI. Xiaohua Zhai, OpenAI. Jack Rae, Google DeepMind. Pei Sun, Google DeepMind. Nat Friedman, still says GitHub. And then Joel Pobar, Anthropic. So that's a lot of - there are a lot of Chinese names, so

    Yeah, and it's been that way for a while. Again, in NLP and text it was less, because obviously of the language differences; people just gravitated, especially in grad school, towards visual things. But you take something like FAIR - when Meta AI was some of the best, like, the video models, they were the best, by far. They were outstanding. Video understanding, video generation, for obvious reasons, it's Facebook. I mean, probably 90% of the people in that space were born in China. So you're talking about people who've been very good in this space for a very long time. And it's not surprising - that pipeline, you know, Baidu Research had a lab in America that some of my colleagues from Nvidia used to work at. So, you know, you'd have every reason to think that the Chinese sort of educational-industrial system, universities and companies, would be very good at this stuff.

    Yeah, yeah, they were pre trained or pre adopted, I don't know,

    So, I mean, yeah, it's true in America and it's true over there. But no, I think the Chinese labs - Baidu keeps releasing good models in open source. I mean, as everyone points out, the reason they're open sourcing is because they're behind and they have to, but they're good models. You know, DeepSeek is definitely, you know, next level. But,

    Yeah, you're right. Is DeepSeek the OpenAI of China?

    So, it's different, um, you know. Again, sometimes a student can sort of become the master, right? As OpenAI did - you know, they obviously exceeded DeepMind, even though, arguably, that's where they came from. I mean, it's not fair to say that these companies are still just playing catch up, but I think there's some truth to that, right? So DeepSeek did really, really good innovation for making things more efficient, but they're still talking about more efficient training, less communication, blah, blah, blah. They weren't talking about new things or training better models, just, like, copying - in a good way, I don't mean it in a bad way - replicating what other people have done. And there are also, like, very strong suggestions that for the DeepSeek reinforcement learning model, they collected their own data, but they also trained on Gemini. There are just a lot of interesting coincidences there. And why wouldn't you? If you were able to get some examples from a better model to help guide your training, why wouldn't you do that? So I think that's still not the same as OpenAI. OpenAI has no one to copy. They're not copying Gemini, you know,

    Yeah, yeah. They are in a class of their own, basically, is what you're saying, yeah,

    Yeah. It's a very interesting thing. I think about this a lot because it's just so curious, right? But, you know - everyone's sort of doing the same thing. But then in two years, they may be doing this other new thing that's also the same. Like, for example, the Omni stuff didn't really work out. I wouldn't say it's a failure, but, you know, the initial idea of Omni was, hey, if we're going to train on everything - we're going to train on images and videos and sound and music - all of these things will make each other better. And that just didn't work. Instead, you ended up with this 4o model, which wastes a huge percentage of its parameters on modalities you don't use. You know, this is also the issue with multilingual models, why a lot of these models are less multilingual than you would think. It's like, hey, that's a lot of tokens to invest into, you know, learning Amharic or whatever. And that's great if you're training a model in Ethiopia - you're benefiting a lot from the fact that it speaks English and Chinese and French, like, it's starting from a very, very good place to learn, you know, sort of low resource languages. But just from the character set, right, just the byte encoding, it's just so many alphabets, like Georgian and Armenian. Like, how the hell did all these people end up with these alphabets and words and things like that? So, you know, if you're using a model especially efficiently, like, as an English speaker, you don't really want that. You'd actually prefer a model that only speaks English and, like, a little bit of Spanish and French, and you're good to go.
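The raw character-set cost mentioned here is easy to see: in UTF-8, ASCII text is one byte per character, while Armenian letters take two bytes and Georgian and Amharic (Ethiopic script) take three, so a byte-level model pays more per character for those scripts before any learning happens. A small check (the greeting words are just illustrative samples):

```python
samples = {
    "English": "hello",
    "Armenian": "բարեւ",
    "Georgian": "გამარჯობა",
    "Amharic": "ሰላም",
}
for lang, word in samples.items():
    encoded = word.encode("utf-8")
    # Bytes per character shows how much a byte-level tokenizer pays per script.
    print(f"{lang}: {len(word)} chars, {len(encoded)} bytes")
```

Subword tokenizers trained mostly on English make the gap worse, since rare scripts also get split into more tokens per word.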

    So your company DeepNews - you know, you've been in the AI space for a while. Your company DeepNews uses some of these models, right, kind of, like, as an engine? And so, you know, you've been going for a little while, I mean, over a year, I think, now

    Closer to two, yeah

    And so - we were talking about how we use it, like, how it has changed our lives. Have you seen the models getting better allow you to do more things with your product?

    Yeah, for sure, definitely. Like, one thing - I mean, I have an amazing researcher who works with us, and one thing we've noticed is that every time we have, like, an idea for some intermediate thing we want to do, it's gotten easier and easier. It's still not, like, trivial. Obviously, if it was trivial, we wouldn't even think about it, we wouldn't consider it. But, you know, we started with GPT-3.5, and we've tried every model along the way. We didn't use Claude that much, but we use everything, not just the OpenAI ones. And, you know, you'll have some problems, like - right now a big thing is we wanted to really improve our coverage of stocks, of stock news, you know, what's up with the stock market. Basically, look at one of our pages and, you know, we explain what's going on with the stock, how we got here, like, in a snapshot. Within a minute or two, or sometimes faster, you can figure out exactly what's going on. Put the news in a timeline, give the summary, the whole nine yards, right? Related companies and stories. And where did that start? It's like, well, you first need to do ticker tagging - like, tag the top whatever couple thousand companies, and crypto, and a few private companies, right? And it's still kind of a pain in the ass, but with modern models you can do that. It's pretty fast. You can use a cheaper model for that. Like, yes, to your point, you don't want a thinking model doing all that thinking, and you still have to sort of design the prompt and try things and figure it out and add common sense. You know, you have a situation where, like, hey, Nike gets tagged for all these sports stories, you know? Like, yeah, you still have these corner cases you have to sort of think about and deal with. But every time you have a sort of mid-sized problem, you can just try it, you know. And that started a while ago.
I mean, at the beginning we still trained a little bit of our own models, but more and more of these tasks, including pretty complicated tasks, can be offloaded to, like, a prompt chain, you know,
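The ticker-tagging step described above, a cheap model plus hand-added common sense for corner cases like Nike on sports stories, might look something like this in spirit. Everything here is a hypothetical stand-in, not DeepNews's actual pipeline: the lookup table and override rule are invented, and a simple keyword match sits where a real system would call a cheap LLM:

```python
# Hypothetical sketch of a tagging pass with a common-sense override layer.
TICKER_NAMES = {"nvidia": "NVDA", "apple": "AAPL", "nike": "NKE"}
SPORTS_WORDS = {"game", "season", "playoffs", "coach", "team"}

def tag_tickers(headline: str) -> list[str]:
    # Stand-in for the cheap-model call: normalize words, match company names.
    words = {w.strip(".,!?").lower() for w in headline.split()}
    tags = [t for name, t in TICKER_NAMES.items() if name in words]
    # Corner-case rule: "Nike" in a clearly sports story is usually not stock news.
    if "NKE" in tags and words & SPORTS_WORDS:
        tags.remove("NKE")
    return tags

print(tag_tickers("Nvidia beats earnings as Apple dips"))
print(tag_tickers("Nike star carries the team into the playoffs"))
```

The interesting part is the second layer: the cheap classifier does the bulk of the work, and accumulated corner-case rules encode the "common sense" that has to be added by hand.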

    Yeah, one thing that I always think of is, you know, a friend of mine said, well, machine learning - like, to get the most out of machine learning, you need to have humans, like, you know, working with it and optimizing and stuff as well. You know, when I talk to you about DeepNews and, like, what you're doing, yeah, it's artificial intelligence, but it's not just, like, you sit there and you're like, okay, artificial intelligence, like, do your thing. Like, there's a lot of, like, tuning and, you know, checking the parameters and running the prompts and all these things. So it's artificial, but it's not really autonomous - I don't know if that's the right word - but there's a lot of human interaction that's still going on to improve these things.

    Yes, but less and less. But yes. Look, it reminds me of when I was a kid. So, you know, we happened to be early Google users, because our family was friends with the Brins - you know, Sergey's father was a professor at the same university where my father was a professor. So I remember, you know, using Google in the late 90s,

    I do. I mean, I started using Google, like, November '98,

    Yeah, probably around the same time, yeah. And then, you know, my mom tells me that, hey, well, you know, they've got, like, hundreds or maybe even thousands of employees at Google. And my question was, like, why? Like, Google just works. I understand why, like, a coffee shop has employees - you need people to, you know, make the coffee. If you're UPS, it's because they have people driving the trucks around, delivering things. But I'm like, why would Google need employees? Like, you know, they already crawl the thing, you know, I mean, they have the search results, like,

    It's an algorithm - PageRank, or it was.

    Exactly. So I think, I don't know, maybe, again, we're falling into the same, like, personal stories thing, but I think about that a lot. Because, like, the joke now - like, the memes you'll see on Twitter, on X - is that, you know, a billion dollar company can be run by, like, one person. And yet we need to hire, you know, 70 of the best people, and, like, to raise all this money. But I think both can be true at the same time. I think the actual, like, work of Google has been mostly autonomous for a long time, right? Like, the managing of the data centers - yes, there's employees, or contractors or whatever, that have to actually install the air conditioning and unplug the servers, you know, take them in and out of the rack. But I think the nature of work in that way does become a bit more meta: redesigning things, changing them, designing new systems, you know. And I think that's what we're talking about here. Having said that, labor wise, other than the human data labeling, which is obviously very human intensive, the rest of the model training and everything like that isn't that human intensive at all. It's just engineers, and you train the model and you hope it converges.

    So we've been talking a lot about - I mean, when we have conversations on this podcast, we talk a lot about LLMs, obviously, because, wow, you get interaction in text and all that stuff. You know, you mentioned driverless cars and Waymos, and I'm here in Austin, and, you know, Tesla is rolling out its Robotaxi. And actually, I've been seeing a lot more Zoox cars around town,

    Even in Miami, I see the Zoox as well. Yeah,

    Yeah. So I think the Zoox, because it's owned by Amazon - I think they're starting to think, like, they've got to get in on it, or they're gonna end up like Blue Origin. But, you know, so what I read and what I hear is, you know, Elon has this theory that you can just do it with AI and cameras, as opposed to LiDAR - you know, the custom LiDAR Jaguars that Waymo has. I mean, that's asking a lot of AI. Although, you know, again, it's not like an LLM - this is a different type of artificial intelligence when it comes to driving around. And a lot of people, their Teslas are already driving a lot for them, you know. But the issue is, like, these edge cases, and the Robotaxi rollout has been really, really weak, frankly. Like, I mean, almost, like, Potemkin village type weak, in terms of - they're, like, super Tesla lovers that are, like, driving around at 2am in an empty parking lot, I don't know. As an artificial intelligence observer and stuff like that - I mean, Elon, I know he's got Grok, you know, and you were saying that Grok 3 is not doing well. Is his, like, faith in AI for cars - do you think this is just, like, bluffing, because he likes to push his people, and he wants Tesla to have this high valuation as an AI company and all this stuff? Or do you think that the use case of driving, for example - can you see that being a pretty plausible situation where, in the near future, like, near future as in five years, it's just routine that people, like, almost don't drive?

    Yeah. I mean, I think that's - well, first of all, in five years that's completely impossible, because the fleet takes, like, 10 or 20 years to turn over.
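The fleet-turnover point is just arithmetic. Using round, illustrative numbers (roughly 280 million registered US vehicles and about 15 million new-vehicle sales a year), full replacement takes on the order of two decades even if every new car self-drives:

```python
fleet_size = 280_000_000       # approximate US registered vehicles (round number)
annual_new_sales = 15_000_000  # approximate US annual new-vehicle sales (round number)

years_to_turn_over = fleet_size / annual_new_sales
print(round(years_to_turn_over, 1))  # on the order of 18-19 years
```

That floor exists independent of how fast the software improves, which is the hardware constraint being raised here.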

    That's fair. That's fair. That's a hard hardware engineering problem.

    But even putting that aside, look, the short answer is no, I don't think it's going to go away in the short term. I mean, 50 years is different, of course. But I think it's a little bit like, okay, take a medical thing. We're just talking about curves, right? I try to think like a stock market guy or a quant sometimes. It's the same idea, sort of the Lindy principle, curves, right? How do you know how long your injury is going to take to heal? It's very simple: you see how it heals right away. If it gets better faster, it's going to heal well. If it's not, you know what I mean? How much it gets better in the first day or two makes a massive impact on how it's going to go, right? And I think the same is true here. And the reality is, that's why I have a hard time believing in the quantum takeoff, because it's been promised for such a long time. Such a long time, and it hasn't worked. Okay, that doesn't mean it'll never work, but it means that when things are hard, they're going to continue to be hard, you know? So, some of the genetic stuff we've talked about, like, how much progress have we really made in understanding the genetic code? It takes a long-ass time, you know? And there are specific reasons. Some things move fast, but it's not a fast-moving space. That's why a lot of people have been disappointed with it, because they expect it to move at the pace of software, and frankly, it doesn't. And there's no amount of software you can change to change that, because it's a biological system. So the reasons are different, and the self-driving thing is just like, man, it's taking a long time, you know? So do I think that's going to change overnight? No. I mean, are there specialty conditions? Sure, you know, like on a closed track. This. That.
    I mean, you know, you look at airplanes. Airplanes have basically been flying themselves for freaking decades, yeah. They have, but the last part, the landing, they still do that by hand. Why? I don't know, but they do. But, I mean, they obviously have small planes that take off and land by themselves, because of the human constraint. For whatever reason, planes and satellites are good at flying themselves, and cars, for whatever reason, are not. And it's not just an economic issue. Driving is not that expensive; there's always a person there. Why would you drive a car with no one in it? It makes almost no sense, unless it's going to meet someone, sort of in sentry mode.

    Yeah, I have a friend, and she has a Tesla in SF. You know, I go and stay at her place; she's got, like, a big place, right? And she loves to pick me up. But recently she was just like, it's gonna be great when it's self-driving, because you'll just land and I'll just send the Tesla for you. And I'm just like, okay, that would be fun: the Tesla, like, drives to SFO and picks me up. But, you know, we're not there yet, but we're close. And you know, you were saying, oh, quantum hasn't taken off. And that's fair, although in some ways that was true of AI too. And then all of a sudden it just took off, you know?

    Yes, that is true, but AI is a little different. First of all, things have worked in machine learning. And second of all, it's not - you would struggle to find too many things. You'd find a few, but there are not too many things where AI has really replaced human stuff. And even in something like Alex Good's brain rot thing, I mean, a lot of it's AI, but not all of it. A lot of that stuff is still made by humans using AI tools. It's just one of those things where, yeah, I don't know. I mean, you know, just look at the price of Uber. But I think it's probably safe to assume that we're going to be in this world where there's some self-driving, which is expensive and subsidized, and a lot of human driving, and we're going to be in that world as long as you and I are around. That's my guess.

    Okay, well, let me just ask you a last question, because we discussed this offline, and I'm curious about it. Everyone talks about AI slop, and, you know, I don't read them, I try to block them, but X has these, like, here's a thread about the Byzantine Empire and stuff like that. I'm pretty sure it's just AI, you know, repurposing Wikipedia at this point, because there's so much of it. And AI bots are becoming more and more of our web traffic. There's this idea of the dead internet: the internet's just, at some point, gonna become automated, like bots running into each other and reading each other and stuff like that. I mean, how much credence do you put in that? Do you think that's excessively pessimistic?

    Yeah, it's pretty rough. I mean, you know, as someone who runs a website with thousands, and I guess at this point millions, of articles: if you allow yourself to be crawled, which we do (thankfully, you can outsource it mostly to a third party, like Cloudflare or whatever), you're already in that world. You put up a website, and most of it is other bots reading you. That's just the reality, and it's even hard to know who are people and what's human. You know, I guess I just don't really care that much, but, certainly not promotion. But I think something really simplistic, like Worldcoin, is making a lot more sense to me now, and I know a lot of people hate it for various reasons. Partly it's just so stupid: who cares if you're human or if you're a bot? Well, people do care, you know. So I think proof - I think there was a story yesterday, or certainly this week, from Match Group that Tinder, basically, as of this week, is requiring new users, I think in some locations, but California, a major location, to basically prove that they're human: to submit a short video so they can check it against other profiles, to sort of be like, is it really you? Do you have other accounts? And I think that's sort of anathema to the internet, if you think about it. But I think we're gonna see more and more of that. I mean, the Twitter one is a great example. Right now, for now, thankfully, the Twitter replies, the bot replies, are very obviously bots. But think about it for a second: before, you could have very easily been like, of course that's a bot. Now it's not that obvious, and a lot of times the bots are not even trying to hide that they're bots, because they're like, oh, that's a great point, follow this guy. But quite a few of the replies, I have to do a double take. And to your point, then you have people doing AI.
    So for example, if someone asks you to write a thread or a report, like, hey, Razib, write up a story about why healthcare stocks are down today. You would probably plug that into your model of choice, Gemini or whatever, and then you would edit it and add your own flair. So human work is starting to look more like AI slop also,

    Yeah, I will say again, like, you know, personal experience, whatever, I don't use too much AI, but the more outside of my domain something is, the more I use AI.

    And you'd be crazy not to start your research with it, right? If you have to write an article about something, you have to research something. I mean, I do it all the time. I'm very open about it. But if someone asks a question, I'm like, I can just ask Grok, and if the answer is good enough, I just copy-paste from Grok. I'm like, looks good enough for me. You know, like polling results, people will say something like this. Something like polling, I can't be bothered to find out what percentage of, you know, whatever, Americans took the vaccine or something like this. I'll just ask Grok, and I'll believe it, and I'm not going to check it, right?

    I usually check on multiple. I mean, I pay for multiple, so I usually have, like, three tabs. I usually check Gemini, Grok, and ChatGPT,

    So you're the meta model. Now you're the model, deciding between the other models. But, yeah, look, I mean, I think that's inevitable. Back in the day, you know, you'd look up a book and you'd find it in the index, and we're not going to do that anymore. So this is just, like, the next evolution, I think.

    I mean, I don't believe in this idea that it's going to, like, rot our brains. Because they said that about the internet. They also said that about literacy. It's just gonna transform us. And, you know, it's a tool.

    I mean, I don't know. We'll see. I'm definitely sad about the fact that my attention span has gotten shorter, and I don't read as much as I used to. I guess I consume maybe as much text and as much media, but, you know, I'm watching more videos, and I have a hard time reading a book. And I used to read a lot of books. I think a lot of people were like that. I mean, normal people used to read books, you know, and now, would people read the Hardy Boys now? I don't know. So it definitely transforms things. It's not necessarily better or worse. One thing I would say, though, going back to jobs for a second: I do think it's interesting to think about jobs as fundamentally being of a few types. One type is where you have, basically, an input that you're replicating, right? You're driving people, or you're moving something, or you're making coffee. It can be repetitive, but the point is you're delivering some sort of product, right? You have that kind of job. Then you have monitoring: basically your typical SRE, you do nothing until something breaks, and then you fix it, like those guys on the oil rigs in those viral videos who are fundamentally doing nothing, and then they have to do a lot to fix the goddamn machine, right? Then you have research, people like you, who write and summarize things. And then I think you have, sort of, system builders, right? And that's, generally speaking, the part that I find the most interesting, and I guess the part I've tried to be in, right? Like, to me, that's the beauty of being a quant on Wall Street. The quants may not even know what's happening in the market.
    They could potentially sleep in, because you build a system, you back-test it, and then you don't even know which stocks you're trading. Now, you do have to check it, it doesn't mean you don't work, but it's not the same as the guy who has to get up and manually look at the news and react to it and change the positions, you know? So, to maybe answer your previous question, I think that generally I am sort of pro-automation, broadly speaking. I don't agree with Curtis Yarvin on this, for example. I like his idea, but I don't think it's realistic or desirable to have everything be sort of artisanally homemade. I'm glad that works in Japan, but it's not even that common in Japan; it's a small segment, you know. I don't think we should, or will, go back to a place where everything is handcrafted, like a cappuccino. I think it's more like, you'll have machines, you'll have people who build those systems, you'll have people who sort of think about it and comment on it, like you, and you'll have people who fix it when it goes wrong. And I think that's okay. I think that's perfectly fine. And I think some of those jobs are easier to automate than others, but all of them will use AI to some extent, right? The person making the machine, the person doing the research, the person figuring out why the machine broke, right?

    Yeah, yeah. I mean, yeah. We all have a place on God's green earth, I guess. And I don't know, I mean -

    Because we're adaptable. So if you think of yourself like, hey, I'm the guy who's a great driver at stick, then maybe your world does disappear, but then you have to become a different guy, you know.

    Yeah, I do think, you know, just talking to you over almost the last three years: a lot of people, we've been talking about the science, the business and all this stuff, but there's the social aspect, where people are worried about their jobs, right? And I think I'm a lot less stressed out. I mean, I wasn't super stressed out, but I think my stress level has dropped, because you've got to be adaptable, you've got to pivot, you've got to think about it. But, you know, I had an Uber driver the other day, and he was just straight up like, yeah, I think about the Waymos, they're getting better. And a lot of people prefer the Waymos, often, for various reasons. But, you know, they're hustling. These are people who are thinking about it. And there are some low-level office jobs where it's like, no, we don't need you to do this data entry stuff, you do other stuff, you need to be able to use the AI. And so I think people are being pushed. And I do think that we're not seeing the obsolescence of humanity yet. I think most of the issues we're having in the economy right now are, like, Trump's kind of volatile tariff policies and stuff like that; it has nothing to do with AI. AI is still, you know, maybe trillions and trillions of dollars' worth of companies that are going to transform the economy in five to ten years, but not in the next six months, in my opinion, or not from what I've seen.

    I agree. And I think that's where the whole narrative about companies cutting jobs because of AI is a little bit misplaced. I think it's not a bad question to ask: how can we do this system more efficiently? I do think that. But to me, honestly, having spent time in the corporate world, a lot of corporate stuff is, you know, it's like that old blog post that Slava took down, I guess, basically how to get promoted. A lot of it's empire building, right? How do you do well in corporate? Well, you go from individual contributor to manager, and you measure your management by how many people report to you, you know? And you still hear people, including young people, who talk this way. And I think getting out of that mindset is a good thing. Like, I don't know, Toby is obviously very smart, but is someone like Toby from Shopify really anti hiring more people and pro AI, or does he just want his managers to think in terms of, how can we get this job done in a way that's better, as opposed to just growing my empire, you know?

    Yeah, yeah, yeah, that's the whole thing with big companies, yeah. So, I mean, I guess we hit most of the topics I think we were going to talk about

    I mean, going back to your Uber thing, nobody was born an Uber driver, right? And if you talk to Uber drivers, especially in the early days when Uber was expanding, I always asked them what they used to do, and it was always the same thing: they were people who already were driving for a living, and they liked the hours. So there are guys who used to work, you know, UPS or whatever, and they like this gig, and it pays well, and you can hit it when the market is good. This is particularly like in San Francisco. So, yeah, I think it is a good question: what are millions of people who drive for a living going to do? I don't know, but I think it'll be interesting,

    Yeah, that's true. And, you know, just concretely, I do keep up on the Waymo stuff. Part of the issue is, it opened in San Francisco, LA, Phoenix, Austin, probably because these are warm-weather cities that don't have snow. So, yeah, it's going to be a while before Waymo hits Chicago in the winter. It's probably going to be a while before it hits the Pacific Northwest, just because rain is difficult for it.

    I remember when Uber was doing the big self-driving thing, and I was out in Pittsburgh, really mainly to see the computer poker competition. But I also sort of arranged it as an interview with Uber, you know, have them pay for the trip. And as part of the trip, of course, you get randomly selected for their self-driving Uber, with two people in the front. And, yeah, man, I mean, Pittsburgh, with the hills and the ice in the winter, I could have told you then it wasn't gonna make it.

    Yeah, yeah. All right. I think we have hit a bunch of topics here, and I hope the listeners enjoy it. You know, I talk about a variety of things; you guys know about the podcast: we do ancient DNA, sometimes we do politics, whatever. I like to talk about the AI stuff because, even though I'm poo-pooing the AGI and the end of the world a little bit, I think that's a little overdone, it is transforming our lives. And if you're not retired and you're not using AI in, like, a white collar job, just think about it, because you probably are gonna have to at some point, because there are a lot of things, you know. I'll give you guys an example of what I did. So our company has payments, we get revenues, okay? The revenues come out in these little bursts and stuff like that. And I wanted to create a chart that was by month. And, you know, the back end is retarded, frankly, and isn't doing it by month. So I just exported everything, and then I put it into ChatGPT and was like, hey, can you just redo the CSV and output it by month, and add up this row, right? Or just do it by January of 2022, etc. It just did it. It did it in two minutes, right? I know enough programming that I could have written some Python code to do it myself, but I think it saved me at least 30 minutes, right? At least. And most people are not going to be writing Python code to redo that; maybe they might have some macros. There are these little things every day you can do. And so I think I should end with: actually, it is making a big difference, but it's making big little differences that are adding up and that are kind of imperceptible, I feel,
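    [Editor's note: the CSV roll-up Razib describes, collapsing bursty payment rows into monthly totals, is a few lines of pandas. This is a hypothetical sketch, not his actual export: the inline data and the column names `date` and `amount` are made up for illustration; a real workflow would call `pd.read_csv()` on the exported file.]

```python
import io
import pandas as pd

# Stand-in for the exported revenue CSV (hypothetical rows).
csv_export = io.StringIO(
    "date,amount\n"
    "2022-01-03,120.00\n"
    "2022-01-28,80.00\n"
    "2022-02-14,200.00\n"
)

df = pd.read_csv(csv_export, parse_dates=["date"])

# Bucket every payment into its calendar month and sum the amounts.
monthly = (
    df.set_index("date")["amount"]
      .resample("MS")  # "MS" = month-start frequency: one row per month
      .sum()
      .reset_index()
)

# Write the reshaped table back out, ready for charting.
monthly.to_csv("revenues_by_month.csv", index=False)
print(monthly)
```

With the sample rows above, the two January payments collapse into one January total alongside February's, which is exactly the "add up this row by month" request.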

    Yeah. I mean, what you're saying could be a Gemini commercial. That's kind of what they're saying in their TV commercials, you know: it's helping you with a cooking recipe, or to assemble some furniture or something. The examples are always silly and boring. But yes, it's definitely doing that. And look, the question for me is: are we going to see new products really emerge that are interesting, based on the availability of the technology, the way that Uber emerged, right? I mean, Uber still uses humans, but it presupposes everyone having a smartphone, right? And having, like, a universal login, you know. So I guess that's kind of where I would end it: yes, it made search better; yes, it solved a lot of these small tasks. People love talking about how it solved some sort of billing formatting issue for them. I hear those two examples on the Ben Thompson thing. I couldn't possibly be more bored; I want to jump off a bridge listening to these problems. But in that way, I guess I am a promoter. I'm interested in, hey, beyond the big pie-in-the-sky things, oh my god, it's going to drive all the cars and cure cancer, I'm interested in that too, but I'm interested in what other things we haven't even thought of that are smaller but quite impactful, like an Uber, that AI will really make happen. There are new things that sort of couldn't have been done without it. And to tell you the truth, for now, other than making brain rot content, we don't really know. And obviously the AI chat therapists and things like that, which we can get into next time, that I think people like me and you don't use but a lot of people do use, is interesting. I'm interested in that.
    But I do wonder what kind of, you know, Uber or Airbnb level companies, sort of better deploying assets, doing new things that couldn't be done before. And I think there I'm optimistic. I think it's gonna happen; I'd be willing to bet a large amount of money that it will. I don't think the US and the UAE and all the big companies are committing, like, trillions in spending just to make more brain rot content. I love that theory from Alex, but I don't think he's completely correct. But I would just leave that to the audience. That's the part I'm interested in: what new things are going to be better, or, more like, just could not exist without this AI layer, you know? And that's why I bring up Uber again: it's just a communication layer, that's all it was, enough identity, enough information. Now, instead of a computer that's at home, or a phone that's, like, a phone call, you have this sort of, basically, messaging device that can be programmed to arrange your ride to SFO. What things like that will AI make possible? That might take a while, because I don't see anything yet, I'm being honest.

    Yeah. I mean, it's not just you. You know, I talk to people who are AI researchers and work at these companies. They don't know what the killer app is either, you know,

    Maybe it's not a killer app. If it's a killer app, it would be something like self-driving or curing cancer or drug discovery, right? That would be amazing: you press a button, you discover new drugs. Awesome. Probably not going to happen. But what are the small things that are still very valuable, but are just either too hard or not economical enough to do by hand, right? Google search was the first; it was a perfect example, right? You could use a directory, but it was too complicated. A good search engine unlocked a lot of things.

    Yeah, yeah, that's for sure. Okay, we've been going for a while, and I hope you guys enjoyed the conversation. Obviously, check out DeepNews if you want to keep track of the news. It's a great website; I use it all the time. And, you know, we've talked about Nikolai's story: he is an entrepreneur, a researcher, a thinker, a public intellectual, a jiu jitsu master. Wait, are you a jiu jitsu master? Is it "master"?

    Well, I'm not a master, because I'm not a black belt, but, yeah, I do some jiu jitsu competitions. As I mentioned before, my arm's a little messed up, but it's healing, yeah, rapidly, so I think I'll be okay.

    Well, if you see a tall, bearded man in glasses in South Florida, that could be Nikolai.

    Or it could be one of my many, many Cuban neighbors who look very similar.

    Yeah. All right, I'll talk to you later, Nikolai.

    Take care.

    Is this podcast for kids?

    This is my favorite podcast.