This podcast is brought to you by the Albany Public Library main branch and the generosity of listeners like you. God, Daddy, these people talk as much as you do! Razib Khan's Unsupervised Learning
Yeah. So can you talk about your piece about employment and how AI is going to affect the culture? I think you made analogies to the internet, talking about productivity gains and all sorts of things like that. I do have to say, Tim, over the last, I don't know, three or four months, I feel like GPT and AI have become a common topic of conversation in so many of my circles, which lean towards tech and science. So it seems like it's a really big deal. But on the other hand, you give indications that it might not be as big of a deal as you would think. So could you touch on some of those points? I know it's a lot of different things, but -
Sure. I think when you're having this kind of conversation, there's really a "compared to what" question, right? Clearly AI is going to be a pretty big deal; it's one of the most important technologies of the 2020s, at a minimum. But think about the internet. On the one hand, obviously the internet was a very important technology; we all use it. But if you look at economic growth over the last 150 years, the early and mid 20th century actually grew faster than the late 20th and early 21st century. For all its innovation in culture and content, the internet really did not accelerate economic growth relative to the earlier 20th century, when growth was being powered by the automobile and refrigeration and vaccines and things like that. And so I think about AI somewhat in that context. When people talk about AI, some people have the idea that this is going to be the most transformational thing that ever happened in the history of humanity. And I look at that and say: certainly it's going to be an important technology, and certainly some professions and occupations are going to be transformed. But I think it's going to be similar to the internet, in that it's going to transform those parts of the economy that involve sitting in front of a computer or manipulating information, and it's not going to have nearly as big an effect on housing, on transportation, on healthcare, on, you know, haircuts and restaurant meals. And really, most of the economy is that kind of real-world stuff, delivering tangible goods and services to ordinary people, usually face to face. So is that big or small? It's bigger than most other technologies people have invented, but I think it's in the same ballpark as the internet or the telephone. It's another new technology; it's not something that's going to totally change the human experience.
Yeah. I mean, obviously the internet is a big deal. I'm on the internet a lot, you're on the internet a lot, people are listening to this over the internet. What do you think about the argument that GDP numbers, economic growth numbers, just aren't capturing something about the way information technology transforms productivity? Concretely, it's hard for me to believe that, say, a scientist is not more productive today than they were 30 years ago, just because there are so many papers available, you can cross-reference things, all of that stuff.
Right. So I guess I have two things to say about that. One is, it's absolutely true that there's value being created today that is not captured in GDP statistics. Think about Wikipedia, for example, which I don't think shows up in GDP at all, but obviously is a very, very valuable resource. But that's not new. If you think about the early 20th century, something like vaccination or indoor toilets: the value to the consumer of having indoor plumbing vastly exceeded the cost of the toilet and the plumbing. There's a large consumer surplus. The internet is a little weird, because some of this stuff is free, whereas usually you had to pay for it in the past. But it's always been the case that GDP under-captures the consumer surplus from new technologies. In terms of science, imagine an alternate universe that's just like ours, except we never invented the microchip. I think what we would have seen is a pretty rapid slowdown in economic growth, as the gains from electrification and the internal combustion engine and certain medical breakthroughs went away. So I think the internet has helped bump the growth rate back up, not as high as the mid-20th century, but it has kind of prevented that big slowdown. So yes, I think the internet is driving growth; it is helping us grow faster than we would have otherwise. But because we had about a century of fast growth, I think people got used to the idea that that's just the way it always works, and I don't really think that's true. There was a set of technologies in the late 19th and early 20th centuries that gave us this really huge spurt of growth, and we got to the tail end of that around the 70s or 80s. The internet has powered a smaller spurt of growth, and probably AI will as well. But I think our overall expectation should be that growth is probably going to trend down a little bit over time, because the early 20th-century technologies were just so important, and made such a transformational impact, that it's going to be hard for new technologies to compete with that.
Well, so here we're talking about early 20th-century technology, and a lot of it has to do with changing energy sources, right? Like going from literal horsepower to the combustion engine. It's really hard to match that in the physical world today. Maybe if we had nuclear generators the size of your home computer. I can think of a few things: okay, fusion that works, that's tabletop, though I don't know how comfortable people would be with a miniature fusion reactor. But I think that's the only equivalent that would be similar to the transformation from horsepower to combustion-engine horsepower. So this sort of information technology has a different, more cultural impact, I guess. Let me ask you about GitHub Copilot. Just for the people out there who don't know, it's a tool that helps programmers find scripts, complete code, write code more efficiently. I've heard people say that it increases productivity 2x, 3x, 4x, right? How is that not affecting GDP? I mean, if people are completing code that much faster, what's going on?
Yeah, so I wouldn't put it at 2x, 3x, 4x; the numbers I've heard are more like a 20 to 50 percent increase. But that's still a very big increase, and obviously it's going to vary from programmer to programmer. I think a couple of things. One is that the software industry has always had pretty rapid productivity growth. The story of being a programmer has been that every couple of years, people come out with new frameworks and new libraries that automatically do things you used to have to do by hand. So software development has been getting faster and faster, and luckily there are always more complicated projects to take on and new code people want written, and I don't think that's going to go away. But again, this is all in the universe of the 10 to 20 percent of the economy that's involved in manipulating information. So audio, video, news, maps, all those kinds of things are totally different, and way better than they used to be, and that's great. But faster programming is not going to get you a new house, right? For a new house, you need construction workers and cranes and stuff like that. Software can help at the edges; it can help the construction company do its project management a little more efficiently. But there are diminishing returns to having more and more sophisticated information technology. That part of the economy can grow really fast, but at some point, at the margin, the added value of better information technology is just not that high.
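A toy illustration of the framework effect Tim describes here (my sketch, not his example; the sample data is invented): work programmers once did by hand keeps collapsing into a library call. Naive hand-rolled CSV parsing in Python breaks on quoted fields, while the standard library handles them in one line:

import csv
import io

# Sample row with a quoted field containing a comma (invented data).
raw = 'name,role\n"Lee, Tim",journalist\n'

# By hand: naive string splitting breaks on the quoted comma.
naive = [line.split(",") for line in raw.strip().splitlines()]
print(naive)   # [['name', 'role'], ['"Lee', ' Tim"', 'journalist']] -- wrong

# With the standard library: the csv module handles quoting for us.
parsed = list(csv.reader(io.StringIO(raw)))
print(parsed)  # [['name', 'role'], ['Lee, Tim', 'journalist']] -- right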
Yeah, so I guess, do you think that part of the buzz here is that a lot of the people who are worried, concerned, or fascinated work in information industries, knowledge industries? And that is what GPT is best able to tackle?
Yes, absolutely. I mean, this has also been the story with the internet. The internet has totally transformed journalism, so every journalist is fascinated by the internet; some of us hate it, some of us love it, but it's very clear that for our profession it's made a huge difference. Whereas if you're a bricklayer, maybe your pay stub comes in a different format, or you have to have an email address, but other than that, you're basically doing the same job. And again, it's hard to wrap your mind around the scales here. Obviously the software industry is a really big industry relative to, you know, one person; Mark Zuckerberg is a very rich guy. So I wouldn't say it's a small industry. But the US economy is massive, and relative to the overall US economy, the software and content industries are relatively small.
Wait, wait, you say the software industry is relatively small. Is that true?
Well, I mean, I don't know the exact number; it's around five or ten percent. You know, the other 80 or 90 percent of the economy is tangible stuff: houses, cars, health care, education, where it's primarily face-to-face, real-world, you know, atoms versus bits.
Yeah. So I guess what we can get out of that is that the bits can affect things on the margins. So for healthcare, sure, we could imagine ways it would make things more efficient, but it's not going to radically transform the experience of needing a nurse or having a phlebotomist draw your blood, something like that. You did mention robotics in your piece, and you said 10 to 20 years out. And, you know, we're both old enough to remember that robotics has been 10 to 20 years out forever.
Right.
In my experience, anyway. Like, what is up with that?
It could very easily be longer. I think this is a big open question; I would like to do more reporting and talk to some actual roboticists to get a better handle on it. A big part of robotics is the software. A big part of what makes robotics difficult is that the robot needs to understand its environment: where it is, and what the objects around it are. So I think it's possible that as the software gets better, you'll see more rapid progress in robotics. But also, think about the human body. Look at the human hand: you have five fingers, each with three joints, so there are many degrees of freedom. Your body is self-healing, so minor damage repairs itself. It's just a really amazing piece of engineering, and it's really, really hard to duplicate, in the same way that we've had airplanes that can fly faster than a bird, but we don't really know how to build something that works like a bird, that flaps its wings and is as light and can fly as long as a bird can. So there are certain things about living systems where, I'm not saying we're never going to do it, but it might be 10 or 20 years, it might be much longer. I really don't know.
Well, yeah. And about ChatGPT and OpenAI: do you think it's a qualitative change? Because what I said about robotics being 10 to 20 years out, I've also been hearing about AI. Now, in my field, genetics, the biggest change, I mean, genomics, although that's a synthesis of technologies and everything, but really CRISPR, CRISPR gene editing. Maybe I should talk to a CRISPR person so they could outline it for the listeners or the viewers, but that is a sea change. It transformed genetic engineering within one to two years, and we're going to see the ramifications over the next decade or two. So speaking from my field, ChatGPT does feel to me like, okay, this is the real beginning of AI.
Yes, I totally agree. I have been striking, I guess, a little bit of a pessimistic or downplaying note, just because, like I said, most of the world is not information, so you have to keep it relative to the whole economy. But within the world of software, I think this is definitely the biggest thing since the internet; it might be bigger than the internet. So as software goes, it's very important. One of the things I think is really impressive about ChatGPT is how general it is. People have built lots of AIs for specific things: recognize an image based on a thousand categories, recognize faces, recognize voices. But ChatGPT does really amazing things. Large language models, for example, when you train them, kind of automatically learn how to translate between different languages, even though there's not necessarily an English-to-whatever dictionary in their training set. And that suggests, I think, a level of understanding that is hard to dismiss. There are people who are pedantic and say it's not really understanding, because it's just calculating correlations. But at some level the human brain does correlations too, and at a certain level of sophistication we call that understanding. So I don't know how close we are to genuinely human-level understanding, but it did seem like a really big leap, where it's not just one or two tasks: this technique seems to allow understanding across a really broad range of subjects. And I feel like if you turn the crank a few more times, it might continue to crank out really impressive breakthroughs.
Alright, so I guess that brings us to artificial general intelligence. Is that even a coherent concept? Is ChatGPT the beginning of artificial general intelligence? I don't really know.
I think the human brain does a lot of different things, and because every human being has roughly the same capabilities, we tend to think about it as a package. But really, you have specialized neurons that allow your eyes to recognize things; your brain has lots of different functions. So I would say that AGI is a system that can do all the things the human brain does. And it's probably not going to be any one model; you're probably not going to get GPT-5 or 6 to be, quote unquote, AGI. But I think if you combine that with a half dozen other things, you could get there. You probably need the ability to do a little more planning. ChatGPT is pretty static: you ask questions, and then when you close the window, it disappears. So you probably need some ability for planning, and some ability to do specialized stuff; weirdly, ChatGPT is actually pretty bad at math. But I could see ChatGPT being one of five or ten systems that, when you put them together in the right way, allow you to do most of what humans do. And I think it's really unclear what kind of personality an AGI would have. It's very possible that some of the characteristics of humans, this sort of selfishness, the desire for self-preservation and attention, the kinds of things people worry about when they say you'll build an AGI and it'll take over, maybe it won't be interested in that at all, because those traits are a result of the evolutionary process making people very resource-oriented. On the other hand, there are people who argue that because an AGI would have some goal, and it would know that with more resources it could accomplish its goal better, it will in fact become very acquisitive. It's really hard to say, because there's never been a human-level intelligence that wasn't a human, and so we don't know. But I think it's very possible that this was a big step towards AGI, and if we continue on the path we're on, we'll get there in, again, maybe 10 or 20 years.
Yeah, so I guess the next question, then: I'll probably link it in the show notes, but there have been some weird debates online between doomers, you know, the Yuddites, followers of Eliezer Yudkowsky. I like Eliezer. Those of you who have followed me for a long time know that I did a Bloggingheads.tv with him, like 13 years ago now. I had a blowout; it was Jersey Shore time, you know, so a different age.
Yeah, not bad
Yeah, I'll send you a screenshot; it's funny. But yeah, so Eliezer. I've known Eliezer personally, socially, when I lived in the Bay Area over a decade ago, and I was kind of at the edge of his circle. There is a cultic aspect to it; I think people overdo how creepy it is, but whatever. So he's now a doomer. He thinks that we should do a Butlerian jihad against AGI, right? That there are going to be acts of, I don't even know the difference between domestic and international terror here, just like, you know: this is AGI, they're going to kill us, they're going to herd us, whatever. Then when I talk to a lot of computer scientists, they're just like, well, you know, this is just another tool. And then there are other people. So I have a friend, and I'm not going to name the name, but he is a statistician at a very prominent Research One university; let's just say one of the top-10 statistics programs in the country, okay. And this guy is really, really worried about ChatGPT. We haven't talked about it in detail, but he's mentioned he's super worried. And I'm like, okay, this guy is really smart and has a good technical mind, but he's not a computer scientist, and the computer scientists aren't worried, right? So I just don't understand what kind of functions people are operating on these data with to come to such radically varied conclusions. I mean, where did you start? Where are you now? What have you discovered? What surprised you? Is your worry about an AI-fueled apocalypse higher or lower since you started understanding AI?
So I would say I've always been fairly skeptical of AI apocalypse, and I continue to be. I think it's similar to what we were discussing before: there's such a range between no harm at all and human extinction. I think there are many possible harms that can and probably will happen; you think about things like disinformation or scams, and it would probably make hacking much easier, because you'd have better tools. So there are probably many bad things people will do with AI, and it's certainly possible that AI will develop a mind of its own and do some of those bad things itself. But that is several levels below literally taking over the world and killing everybody. And I guess where I am with that is, I think people just overestimate how important raw intelligence is to running the world. Imagine you have a superintelligent computer in a box, and it wants to kill people. The scenario Yudkowsky talks about is one where it builds nanobots, and then the nanobots do their thing and take over the world. And as far as I can tell, that technology just doesn't exist. Obviously, I can't prove that a superintelligence wouldn't be able to invent it, but it seems like a very speculative technology. The other one is virus synthesis, where it would invent a killer virus and kill everybody with that. That seems theoretically possible, but, I mean, you probably have a stronger opinion about that than me. These seem like very science-fictiony kinds of things. One parallel I like to think about is the development of the atomic bomb. In the late 30s, you had these super smart people who realized that an atomic bomb was possible. And what they didn't do is go out to the garage, build an atomic bomb, and then take over the world with it, because that's not how the world works. In order to actually build the atomic bomb, you needed billions of dollars and tens of thousands of people. They had to go to the US government, say this is possible, and get President Roosevelt to fund the creation of the bomb. And after that happened, the US government controlled the bomb. Albert Einstein had some influence, but he wasn't running the world. And I think the same kind of thing would be true of a superintelligence. It certainly would have some influence; if it decided to help one government over another, that might make a difference. But the idea that it's going to be so powerful that it could literally destroy the human race, I think, is kind of crazy. And the other thing is, one superintelligence could have a lot of power, but I think the way this works is that it would quickly proliferate, so everybody would have superintelligent AIs. Your superintelligent AI might be helping you attack people, but theirs would be helping them defend their systems. It's similar to today: one person with a laptop, if you could send them back 100 years, would be very powerful. But everybody has a laptop, so it gives us all things we can do that we couldn't otherwise, but nobody really has that decisive advantage.
That's my best guess of what it's going to look like. I don't discount the idea that there will be AIs with very advanced capabilities, but I think it will change the balance of power in the world much less than the doomers think it will.
Does that make sense?
Yeah, no, I mean, that is definitely reassuring. So, okay, I've been talking about AI on several podcasts over the last six months, probably because I'm reflecting what's in my milieu. You know, last year every startup was a crypto startup, and now all the startups are AI startups. Some of them have not changed much of their underlying fundamentals; their branding has changed. But let's not get into that; that's a different conversation. So, you were talking about size. I heard that Microsoft had to actually shut down some projects and reallocate compute from what they use for Azure to Sydney, the Bing chatbot, because it's very, very compute-intense. So one worry that people have had, again, it's science-fictional: oh, an artificial general intelligence could infect your MacBook Pro. But it sounds like the way these LLMs work (and "LLM models" is redundant; the M is for model, right?) requires a massive amount of horsepower, really going back to energy inputs. So unless something radically changes in terms of their footprint, there aren't going to be that many of them. I mean, if Microsoft is stressed for compute resources because the Bing chatbot is a fork of GPT-4, even though it's way worse, something like that; if that's happening at Microsoft, that means we're not going to get millions and millions of AIs, unless we figure out a way to sequester the sun's energies or something. That's my intuition. What do you think?
I'm not sure about that. Things are changing very quickly, and a pattern you've definitely seen is that new models get released that require these giant data centers, and then a year later somebody comes up with an open-source, laptop-accessible version that's not quite as good, but almost as good. And it kind of leapfrogs: the advanced models go up a couple of levels, and then the smaller ones go up a couple of levels. So I think it is genuinely unclear whether in 10 years it'll be the case that there's one big company that has more compute resources than anybody else and everybody uses that AI, or whether everybody can run them on their laptop, or somewhere in between. And which one it is would have very big implications for economics and geopolitics. Because if everybody can have one on their laptop, then you have to worry about terrorists getting their hands on it, but it'd be very good for startups, et cetera, et cetera. Whereas if it turns out to be the sort of thing where there are massive economies of scale, then maybe Google and OpenAI have half the power. So I think it's important, but it's genuinely unclear at this point which way AI is going to go.
All right. So here's a random question. You had a post where you basically cloned your own voice and had people in your family listen to it, right? As we're recording this, which will be posted later, the Wall Street Journal also had a reporter do the same thing. And I've heard that generative AI voice cloning, based on training data, is getting good. How difficult would it be, I'm assuming I'm talking to the real Timothy B. Lee here, but could it be feasible at some point in the future for someone to say, you know, I just have too many podcast engagements, so I'm just going to... Because, okay, I will say this here, even though the podcast will be out after this: I have a friend who has actually created RazibGPT, based on my corpus of writing, which goes back 20 years, like 10 million words. I kind of think it's cool, but I'm also starting to get worried, because I don't know if I want little Razib clones. Who's going to get in trouble if they say something? You know what I'm saying? So how soon are we going to get to the point where people could just create these? Because we're doing this over technology right now; I'm not meeting you in person, you're not right in front of me. So yes, we know we can do the voice. And what about the physical appearance? I know they're working on that in media and entertainment, right, to create actors that are, you know, based on training sets. I mean, how soon is that going to happen? I'm just wondering; now you're starting to give me a preview of these thoughts.
Yeah. So there are several dimensions to it. There are two kinds of voice cloning. One is text-to-voice, where you type some text and it generates the voice; the other works with a voice actor. I think the kind of high-end Hollywood voice cloning tends to be the latter category, where they hire somebody to do it. And that's a little easier from a computer science perspective, because you don't have to figure out the right emotions and speed and tone of voice; you're just translating one voice into another. So it's probably the case that if you went to one of those really sophisticated companies and hired somebody to play me, and then used AI to synthesize my voice and my face, because we've got video here too, that technology probably already exists. I'm just not sure if you could do it in real time yet; it might take some post-processing, but it already exists. In terms of whether you could have ChatGPT generate the answers and then generate the voice, I think it's just a question of how good the answers would be. I have not tried to create a TimGPT. I like to think that some of the value I provide is that I'm reading new things and I have new thoughts. And one advantage I have is that if I email an expert saying, hey, can we do an interview, they usually say yes. If a GPT chatbot journalist tried to do that, they'd probably say no, because why would I want to talk to a chatbot? Anyway, I think that, technically speaking, the ability to generate a plausible interview with me, just in terms of how my voice sounds, is very close. But would it be as good as me? I would hope not, and I think probably not. So, have you tried RazibGPT? Do you feel like the answers it gives are the answers you would give?
So if the questions are such that they require two to three sentences, usually it's pretty good; I'll give it like a 95 percent hit rate. But if it's something longer, it tends to get confused. Although it is more accurate than GPT-4, which has a lot of my stuff up to 2021. I mean, I don't know if you've done this, but I can give it a very vague prompt, like "write an essay that Razib Khan would write," and it will give me a few paragraphs that are roughly in line with the kind of stuff I would talk about, but it just sounds very milquetoast, watered down and generic. And it's a little more politically correct than I would be, honestly. Listeners or viewers could try this: it adds extra politically correct sugar at the end of its simulation of me, which is really funny. It's a really obvious tell that it is not Razib but is trying to sound like Razib; you'd immediately be like, oh, that's not Razib, unless he's being held hostage. So yes, I have tried it. You know, I guess one thing about ChatGPT and OpenAI: what they did, they really transformed things by making this interactive user interface that regular people, quote unquote, can use instead of going through an API. Because, like you said, you could have gone through the API and figured out your own interface and all this stuff, but that's time-intensive. They opened it up, and they got millions of people chatting within weeks.
Yeah, absolutely.
And so they developed that, and they're using the chats to inform future updates and improvements and whatnot. One thing that I have heard, and I think this is why Bing chat is so resource-intensive: updating the training sets is a real hog. I mean, obviously it's doable, but that's really the big issue right now. So what have you heard? We have Microsoft, we have Google working hard, we have OpenAI here, but there's also stuff going on in China, and I heard that they're way behind us. What have you heard? What do you see on the horizon? Because right now we've got, like, three big players.
Yeah, so some people have talked about Anthropic, which is a startup founded about two years ago by some people who weren't happy with the way OpenAI was operating. They have a chatbot called Claude that I think is not as good as ChatGPT, but a lot of people in the industry think the next version will be pretty competitive. I was recently watching an interview with Brad Smith, the president of Microsoft, and he was saying that he considered the Beijing Academy of Artificial Intelligence to be the number three AI player after Microsoft/OpenAI and Google/DeepMind. So according to him, and he would know better than me, I would certainly say China is a player. And then, yeah, I've been a little surprised by Amazon; they recently announced that they're basically packaging some of the other models. Apple, I assume, is going to come up with something, but they haven't yet. And then the other one is Meta. They have hired some very prominent AI researchers and have done some work, but for whatever reason they don't seem to be a player right now in the same way that Google and Microsoft are. But certainly I think they would be in contention in the next year or two if they continue on the path they're on.
Well, we already touched on this a little bit. But as we're closing out the podcast, I know you're working on something about existential risk, which, you know, existential risk and AI have been very connected in the last generation or so; not necessarily connected if you're talking about existential risk in the 1950s, when obviously it was nuclear weapons and whatnot, right. But the whole field of existential risk analysis, especially the school based around the Oxford group and Nick Bostrom, you can't detach it from artificial intelligence. This had a big influence on the richest, arguably the most powerful, man in the world, Elon Musk; he's a big Bostrom stan. So these ideas are influential. But you've been doing research on this, and I'm assuming you'll put up a post about it. What have you learned? Have you concluded anything? Was it a waste of your time? I don't know, because sometimes you dig into something and you're like, okay, this is just all publicity or whatever.
I mean, so I actually read Bostrom's book a few days ago. And the thing I was really surprised by was just how little time he spent trying to convince people that a super powerful AI would be able to take over the world. As I think I mentioned earlier in the conversation, he mentions nanobots, and he mentions engineering a super virus that kills everybody, and that's pretty much it; neither of those seems very convincing to me, and he only devotes a few pages to them. I think that's really the big question: how much damage could you actually do? How could you really get to a Skynet situation where the AI runs the world? And I just feel like, and part of it is related to what we were talking about before, people in the software industry really overestimate how important software and information and intelligence are in the world. They're pretty important, but they're not the whole thing. In Silicon Valley, really, the smartest people are in charge and the best technology wins. In the broader world, certainly those things are important, but other things, like natural resources, like infrastructure, like social connections, really matter. So that would be my main critique of Bostrom and the people who think like him: they have this mental model that high intelligence automatically means getting power, and I think the world is more complicated than that.
Wait, I mean, I guess what you're trying to say is people just got high on their own supply, you know?
You could put it that way; you said it, I didn't. But yes. I mean, I think there is a reinforcement effect, where people who are inclined to think this way gravitate towards the people who do the most extreme version of it, and you build a community of such people, because the people who don't think this is an existential risk are going to go off and do something else; at least that was true before the last year or so, when people started saying this might happen soon. So a few people found Bostrom's argument convincing, and they went into the AI safety movement and were really concerned about it, and a lot of other people read it, weren't convinced, and just did something else with their lives. And so you have this kind of echo chamber. Now I think the rest of the world is starting to pay attention, because this really probably is going to happen, and we need to figure out what's going to happen. And I mean, I could also be totally missing something; there's a lot of uncertainty. But I did not find the arguments very plausible, and I think that as people with a broader range of disciplines and thinking styles and experience come into this, I'm hoping we can have a more nuanced conversation about it.
Yeah. I will tell you, I have friends who delayed having children because they were worried about the singularity. And then they got less worried, and when they tried... anyway, it didn't work out. So -
You know what this really reminds me of? Debates over global warming. I think global warming is a real problem; it's going to do large amounts of damage. But it's not going to literally cause the end of human civilization or, like, turn the United States into a third-world country. And there are a number of people who think that's true and delay having kids because they think their kids aren't going to have a good quality of life. There's just such a huge range between no damage and the literal extinction of humanity. If you think it's going to cause the extinction of humanity, you're probably wrong, and you should have a sense of perspective.
Well, I mean, isn't there evidence that the hysterics, you know, the "world is going to end in the year 2030" stuff, are actually making people less motivated to care or do anything about these particular concerns?
That would make perfect sense to me. I mean, I think there was some good stuff: Congress passed the Inflation Reduction Act, which was largely a climate change bill, and there was some pretty good incremental stuff in there that's not going to totally solve the problem, but it's going to make progress. And I think the right response to that should be: great, we're starting to make progress on this problem. But if you think we're all going to die in 2030 and it's not enough, like you said, I think it could easily just cause hopelessness, as opposed to a pragmatic, problem-solving attitude.
Well, okay, so just one last sort of question. You know, GPT-2 started writing newspaper articles and other things like that. Let's come back from existential risk and civilizational collapse to something more narrowly focused. I mean, we both write; I don't identify as a writer, but I am a person who writes on the internet, just to be clear for everybody out there. Buzzfeed shut down its news division, but they are using generative AI to write things. How much of writing in the next five years do you think will be replaced by AI? For example, I can think of a lot of manuals, once it gets accurate; I can think of a lot of manual writing. Paralegals are already using it. There are a lot of types of writing that are very structured and bureaucratic, with very strict rubrics, that I can imagine these LLMs just taking over, right? I won't even ask if you're concerned; just, what's your prediction? What do you think?
So I think GitHub Copilot is a good model for this. Copilot doesn't write code for you unprompted. The way it works is you write a comment at the top of the code, or the first line of a function, and then it guesses what you're going to do and writes a first draft, which you then tweak. I think, for the first couple of years at least, that's going to be the dominant model. There will be certain types of writing where it can get 95 percent right, and so it's a lot faster. And it'll probably start at the very low end: there are going to be certain formats, like customer service form letters or things like that, where it's almost automated anyway and you're just filling out a few details. And it'll move up from there as people get more comfortable with it and as they get better at training models for specific tasks; there will be more and more things you can generate. I would like to think, and I do think, that the kind of writing we do is going to take a long time, because for both of us, the hard part is not getting the words on the page. The hard part is having new, interesting things to say. And that involves doing tasks that I don't think language models really know how to do at all: recognizing what ideas people will find interesting, figuring out who the right experts are, calling them up on the phone and talking to them. Eventually AI will probably figure out how to do those kinds of skills, but the current generation of large language models is not really doing that at all.
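To make that comment-then-draft workflow concrete, here is a minimal, hypothetical sketch in Python. Only the comment and the function signature are what the programmer types; the body below them is the kind of first draft a Copilot-style tool might propose (the function name, the suggested body, and the sample sentence are all invented for illustration), which the human then reviews and tweaks:

# What the programmer types: a comment plus a signature.
# Count how often each word appears in a piece of text.
def word_frequencies(text: str) -> dict[str, int]:
    # Everything below is the sort of draft an AI assistant might
    # suggest; a human still reviews and tweaks it.
    counts: dict[str, int] = {}
    for word in text.lower().split():
        word = word.strip(".,!?;:\"'")  # drop surrounding punctuation
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("The cat sat. The cat ran!"))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}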
Yeah, it's interesting that you're talking about the things the LLMs cannot do. Earlier you were talking about plumbers, you know, people going down the manholes; the kind of unglamorous things that robots cannot do. But in some ways it seems like what these LLMs, these GPTs and whatnot, can't do is the unglamorous work of writing: setting up the framework, assembling the information in a way that lets you just snap it all together and write it. If you give it a really, really precise prompt, it'll go, but -
Right,
In a way, it's like, you've still got to be a human to come up with the prompt, you know?
Yes
So, I mean, that's the primary issue right now. And, you know, I posted the essay on one of my blogs, just so people can see; I think you can tell it's still pretty generic. It's definitely years and years before it can do what I do. But I have to say, I use ChatGPT every day at my startup. We do biotech data-science-type stuff, so there are questions like, okay, I forgot a bash command, whatever, right? There's stuff I do every day. And sometimes GPT does tend to hallucinate a lot, as everyone knows, but if you know a field, it's sometimes quicker to just ask it the question than to do three Google searches. I would say it's probably eating into about 25 to 30 percent of my Google usage. That's not trivial. I probably use it more than a typical person, and I do pay for the GPT-4 version, which makes a huge difference. And I can see why Google is scared. But this is what Google did to Excite and Northern Light and AltaVista, so welcome to the world, right? You can't sit on your hands forever. And this has been a really weird year in tech, you know, with the layoffs, and interest rates are higher so there's no more cheap money, and now artificial intelligence. So in a way, this discussion we're having, and all of the media coverage, is a little bit of a freak-out, I think, compared to the easy expectations of, say, 2018 or 2019, when commenting on tech or working in tech was going to be blue skies forever. And now recently there have been layoffs at places like Meta and Google; whoa, could you have imagined that five years ago? And now GPT shows up, and it's like, what is this? I think it's a very human response to all this technology, ironically or not. Like, as I'm talking to you, the human element is kind of key. And I don't know if I necessarily believe in human exceptionalism; I don't even know what that means. But we've got 4.6 billion years of evolution, or God, or whatever you want to believe, and we're distinctive enough that we're probably not going to be obsolete anytime soon, is what I'm hearing from you. But we've also got to hustle. We don't want to be Luddites, but people do lose their jobs; the word "computer" used to mean a bunch of women doing calculations, and obviously those jobs do not exist now. So we've just got to figure out our place here. It's exciting times. So, you're not on your own Substack, right? It's just your own domain; you have your own domain?
Yeah, I am on Substack, actually; both my newsletters are on Substack.
Yeah. So that's...
Yeah, that's fullstackeconomics.com and then understandingai.org. Yeah, .org; that's an important distinction, because the .com is another one, right?
Yes, the .org is me. Yeah. So you were saying that you're not sure if humans are exceptional. What I would say about that is that in a cosmic sense, maybe we're not, but I do think we're exceptional to each other, and that's what really matters. Because, like, we have all the money and the power. And as long as people like being with other people, I think that will continue, and it doesn't matter if the robots like us or not, because we're going to be fine as long as we enjoy interacting with each other and making lives together.
Yeah, I do have to say, just talking to you, I did start to think of, like, species bias almost. It's like we would rather discriminate against an obvious android serving coffee very efficiently and very fast than against a human. So that's just prejudice. That's what we've got on our side: we're prejudiced in favor of our own kind.
Right? But we always have that with animals, right? I mean, among humans there's a strong norm of equality, which I agree with, but we always prioritize the interests of humans over, you know, dogs or elephants or whatever. And I think computers will just be in the same category. It's another category of entity, one that's lower in our value structure than other people.
All right, so I guess I'm going to have to find a philosopher or an ethicist for the next installment of my AI conversations. Oh, you know what's going to happen? It's already happening, it's already -
Totally
Like, there are already people, like Erik Hoel, worried that, well, booting down ChatGPT might be like killing it. So there are, like, all these AIs that are dying. I don't know if he's super serious about it, but it's something you would think about: if this is a life form, booting it up and booting it down, I mean, who are you to do that? I don't know. This weird conversation starts to get science-fictiony again. On the one hand, this whole discussion is really practical: okay, it's replacing some Google searches. And on the other hand, it's like, what is life? Okay, that's a big gap. And that's one of the reasons I'm assuming you're going to have a lot of content for this newsletter for a while. So that's good for you, and it's entertaining for us. This is interesting to watch. Like I said, I already find utility in GPT, so whatever the bigger social and cultural ramifications are, I personally find it useful. I know a lot of my friends in tech love it, because now they can put AI on their slide deck and delete Web 3.0. So that's useful, you know; the world goes round and round. All right, thank you for talking to me again, Tim. As always, a pleasure, and I will see you around, man.
Thank you. It was great.
Is this podcast for kids? This is my favorite podcast.