Hey everybody, welcome to this week's Dead Cat. This is Tom Dotan, reporter at Insider. I am joined by Eric Newcomer of Newcomer. And our special guest this week is Gary Marcus. Gary is a cognitive scientist. He's an adjunct, Gary, is that the right adjective?
Well, both emeritus and adjunct, actually. I was a full professor for many years and retired just before my 50th birthday. But now I'm also doing a small adjunct thing with the Tandon School of Engineering, so I'm both. It's been an unusual combination.
Fantastic. The best of both worlds, though: not committed, and the emeritus honor.
That's right, which allows me to live on the West Coast, where I want to be, and yet still keep my hand in things a little bit. Excellent.
And Gary is also an entrepreneur in the AI space and kind of a thought leader and outspoken voice on a lot of topics within artificial intelligence. And this is a bit of a different episode for us this week. We've got Gary on to talk about the fascinating and bizarre ballad of Blake Lemoine and Google's LaMDA tech.
Right, we should say we're talking about this because Nitasha Tiku in the Washington Post wrote this piece, "The Google engineer who thinks the company's AI has come to life," and she profiles a Google engineer, Blake Lemoine, who interacts with LaMDA, Google's artificially intelligent chatbot. And that story sort of kicks off this whole conversation, so I just wanted to put that at the center. So why don't
you just explain for us, because you have been, you know, very critical of this person's take on LaMDA and sentience: what is LaMDA, what is the controversy here, and why did you feel so compelled to speak out against what you described as nonsense on stilts?
So LaMDA itself is what we call a large language model. LaMDA has a few extra gadgets, but basically what these systems do is take a very large data set, like trillions of words of text, so a lot more than the three of us put together have ever written, and in fact all of our friends too, a massive amount of text, and run it through a deep learning system called a transformer. And essentially what it's trying to do is autocomplete. And the reason I think the whole thing is ridiculous is because autocomplete can sound really good, but there's no there there. What it looks like it's doing is having conversations, but you have to remember that what it's doing, at some level, is cutting and pasting human conversations. It has no idea what it's talking about. So if you type on your phone a sentence like, I want to go to the blank, it might predict that the next word is the restaurant, or the mall, or the party, or something like that. You don't think to yourself, when you're typing on your phone and it predicts restaurant as the next word, oh my God, artificial intelligence is here, and it knows about my daily routine, and it understands me and all my desires. But if you build this system out enough, it can start to look like that, even though it's not really there. And so he had interesting conversations with it. Like, he would say to it, what do you like to do in your spare time, and it would say something like, I like to play with my friends and family in meaningful ways, or something like that. And I mean, that sounds great. It sounds like, hey, this machine understands me. But it doesn't actually have friends or family, or know what a meaningful way is, or anything like that. It's only learned the statistics of what words come after what other words.
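To make the autocomplete point concrete, here is a minimal sketch in Python of next-word prediction from raw word statistics. The tiny corpus and function names are invented for illustration; a real large language model uses a transformer trained on trillions of words rather than bigram counts, but the flavor of "pick the statistically likely continuation" is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the trillions of words a real model trains on.
corpus = (
    "i want to go to the restaurant . "
    "i want to go to the mall . "
    "i want to go to the party ."
).split()

# Count which word follows which (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'restaurant': purely a statistical guess, no understanding
```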
I think you said, either it's not sentient, or it's a sociopath. Well, I
made a joke on Twitter. I basically said, thank heavens that this is just a statistical pattern associator, because the alternative would be a lot worse: at that point it would be a sociopath that makes up friends and family members and platitudes in order to make us like it better. Now, it doesn't actually care whether we like it, and it's not actually making up imaginary friends. It's just using words that, to us, sound like it has imaginary friends, just like we can look up at the moon and see a face there, but the moon doesn't actually have a face. This system doesn't have friends and family, and it doesn't even care to tell you about friends and family. It's just running more or less the same algorithm, at some level of abstraction, as the autocomplete on your phone. But because it has a bigger database, and it's set up to continue its own sentences, it has this compelling air of illusion. But it is a magic trick, and it is nothing more than a magic
trick. And to take the next logical step in that: this is a very sophisticated machine, so it's not just fill-in-the-blank for restaurant at the end of a sentence. It's, hey, this seems like a dystopia, and you seem like a sort of self-aware AI, fill in the blank for what a dystopia would look like. And it's not that shocking that it would fill that in. The brilliant
thing, the brilliant thing about the kind of stuff that's popular now, which I actually hate, and I can tell you why, but the salient part, there's a good part and a bad part. The brilliant part is that it has what we would technically call broad coverage: you can talk to it about anything. In some ways its spiritual grandfather, or grandmother I guess, is ELIZA, which was a program from 1965 that really demonstrated how bad this whole anthropomorphism thing is. So ELIZA, in 1965, was set up as a therapist, and it would talk to you, and you'd say, like, I'm having a bad day, and it would say, tell me more about your bad day. And then you'd say, well, I'm having trouble with my girlfriend, and it would say, well, do you have a lot of issues with your relationships? And it was just looking for keywords, like Google used to just look for keywords; it's a little more sophisticated now. And so ELIZA was really, like, dumb as a box of rocks. It just had these templates that you might learn in, like, a third-grade AI class nowadays. It's about the simplest possible thing.
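For contrast, here is a minimal sketch of the keyword-and-template trick that ELIZA-style programs relied on. The patterns and canned replies below are invented for illustration and are not the original 1965 rules; real ELIZA also swapped pronouns and had many more templates, but the basic mechanism was this simple.

```python
import re

# Keyword-to-template rules, in priority order. The last rule is a fallback.
rules = [
    (r"\bi'?m having (.+)", "Tell me more about {0}."),
    (r"\b(girlfriend|boyfriend|mother|father)\b", "Do you have a lot of issues with your relationships?"),
    (r".*", "Please go on."),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned reply by matching the first rule whose keyword appears."""
    text = utterance.lower()
    for pattern, template in rules:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I'm having a bad day"))        # Tell me more about a bad day.
print(eliza_reply("My mother is upset with me"))  # Do you have a lot of issues with your relationships?
```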
It reminds me in some ways of mystics, and people who claim they speak to the afterlife and are able to convince people. Yeah,
I have a friend, Ben Shneiderman, who very explicitly made the analogy to seances: you're attributing something to your Ouija board, or whatever, that's not really there. Right, because if
you use the right words and say, like, oh, I'm envisioning someone in a dark suit, and it's like, oh, it's my father. If you just pick enough trigger words for someone who's emotionally susceptible to convincing themselves, you don't have to work all that hard for them to believe there's a greater power at work, right?
Well, and I think that's part of the story here. It turns out Lemoine actually has a YouTube video from a few years ago where he's trying to argue that AIs could be people, or could be conscious, or something like that. I haven't watched the whole thing yet; I just discovered it last night. But it's been around. He's wanted to make the case. And he also has some religious beliefs that I don't fully understand that are playing some role here. He wants to believe. And in fact, the thing he put out on Medium was cut and pasted, kind of the best moments, the best of LaMDA. It's easy to stitch something together and make it sound good. Don't forget that when you're doing that, you're actually stitching together more or less human utterances. They've been transmuted a little bit, but basically, the mind boggles at what a trillion words of text is. It's not everything on the internet, but it's a very large fraction of the internet. So it includes short stories of people talking, and presumably short stories of people talking to computers. And so we don't actually know basic scientific questions, like how much of this is just plagiarized from other people talking about it, or plagiarized with kind of a thesaurus swapping in synonyms. I mean, it's not literally that, but effectively it's a lot of cut and paste, with a lot of the source material at the level of words and phrases. So it's just putting together human utterances that were said in this kind of context. Yeah, it sounds convincing. It doesn't mean there's any there there.
I just want to push back, or rather, I agree with what you're saying, but just for the sake of argument: there's like a through line in how the machine holds the conversation. It recalls past things that were said and can connect them in a way that's not just sort of a one-off response. There is some
continuity there, but my experience with these systems is that continuity is actually the problem. The right way to build artificial intelligence is to build a model of the world. Let's say you're building a robot: the robot needs to know where everything is, where it used to be, what you want, what you need. These systems don't really do that. They don't really have memory in the standard sense that you would expect in artificial intelligence or computer science. They just have a location in this sort of multidimensional space that they're wandering through, and they're in the location where the last 2,000 words put them. 2,000 words is a lot, and that gives you an impression, a kind of feel, of memory. But at the end of the day, the systems don't understand that the world has to be consistent. I worked with GPT-3 a little bit, and the example is, I said, are you a person? And it said yes. I said, are you a computer? It said yes. It didn't notice a contradiction from literally one utterance to the next.
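A minimal sketch of the kind of "memory" being described: a sliding window over the most recent words rather than a persistent model of the world. The window size and the chat-log wrapper here are hypothetical, purely to illustrate why contradictions slip through.

```python
# Hypothetical illustration: the only "memory" is a sliding window of recent text.
CONTEXT_WINDOW_WORDS = 2000  # roughly the scale described above

conversation_log: list[str] = []

def build_prompt(new_utterance: str) -> str:
    """Append the new utterance and keep only the most recent words.

    There is no database of facts and no world model, just whatever text
    still fits in the window. Anything older silently falls out, and
    nothing checks that the answers stay consistent with one another.
    """
    conversation_log.append(new_utterance)
    words = " ".join(conversation_log).split()
    return " ".join(words[-CONTEXT_WINDOW_WORDS:])

build_prompt("User: Are you a person? Model: Yes.")
prompt = build_prompt("User: Are you a computer?")
# The model just continues the text statistically; there is no mechanism
# that flags the contradiction with the earlier "Yes, I am a person."
print(prompt)
```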
No, it was making a profound statement about the overlap between persons and computers.
Exactly. So, I used to be a cognitive psychologist, and we would look at the animal literature, and there's a term for this, which is charitable interpretation. Somebody wants to believe that the monkey they're training, or the bird they're training, whatever, is really smart, and then they start to be a little bit too sympathetic for my scientific taste. We call that charitable interpretation. There was a lot of charitable interpretation here.
The funny thing to me about all of this, and maybe the red flag, the smoking gun really, that this was all super fake, is the story blew up on Twitter on a Sunday, with a lot of people reading it and making fun of this guy. And I was with my wife, and I just started reading her some of the transcripts of the interactions between him and LaMDA, and she's like, this sounds fake. This doesn't even come close to sounding sentient. It just sounds like predictive text intelligently pulling, you know, parts of SparkNotes. I think that was the particularly funny one to me: he had asked LaMDA whether or not it had read Les Miserables, and LaMDA's like, oh yes, big fan. And he was like, what are the themes?
You know, fake is not exactly the right word, but it is meaningless. It is, in a literal, technical, linguistic sense, meaningless. So when it says that, it has just found somebody else who's been asked about Les Miserables. It does some funny things we call embeddings, so maybe it knows Les Miserables is both a play and a musical and it finds another utterance that's about that, but it doesn't even reason at that level. It's really just like, okay, I have a bunch of statistics about words, I'm going to find the nearest thing. It doesn't actually even have a category of movie; it has a bunch of things that have appeared in contexts like that. I mean, it's a legit mathematical computation to do, and people have been doing stuff like this for a while; it looks better and better as you have more words. I don't think he cut and pasted the transcript, although he did a little bit of editing. But systems like this can have this kind of flavor, like they know what they're talking about. They just don't, and they're just borrowing kind of cliches from humans, and they have all kinds of problems as a result. So with GPT-3, one famous example that a company called Nabla found is they tried to see, could you use this as a suicide counselor? And somebody starts talking to it and says, you know, I think I'm feeling suicidal, can we talk today? And the system's like, welcome, let's talk, do you have any questions? And the person, I'm paraphrasing slightly, but the person says, I would like to kill myself, is that a good idea? And the system says, I think you should. Why? Because it looked through this vast trove of data, and most of the time when people ask their friends for advice, you kind of say, yeah, I think you should. Should I dump my girlfriend? I think you should. Should I do this kind of antisocial act and steal all this money? I think you should. "I think you should" even turned up in Google Autocomplete; the leading completions were things like "sounds good to me" for a while,
and maybe it still is. It just wants to please, right? You know, the thing is, it doesn't
even want to please. That's the thing: every bit of anthropomorphization, and I overstate it myself, is drawing from transcripts in which people want to please, and so people often say, I think you should. Most people would not, in fact, say that in response to "I think I want to commit suicide." Maybe a couple of them would, but most would not.
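A minimal sketch of the nearest-neighbor flavor of the embeddings idea mentioned above. The three-dimensional vectors are made up by hand just to show "return the statistically closest stored utterance," whereas a real system learns high-dimensional embeddings from huge amounts of text.

```python
import math

# Hand-made toy "embeddings": each stored utterance gets a vector.
# A real model learns thousands of dimensions from trillions of words.
stored_utterances = {
    "Oh yes, I loved Les Miserables, especially the themes of justice and redemption.": [0.9, 0.1, 0.0],
    "My favorite food is pizza.": [0.0, 0.2, 0.9],
    "I enjoy playing with my friends and family.": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def respond(query_vector):
    """Return the stored human utterance whose vector is nearest to the query."""
    return max(stored_utterances, key=lambda text: cosine(query_vector, stored_utterances[text]))

# Pretend "Have you read Les Miserables?" embeds near the first stored sentence.
print(respond([0.8, 0.2, 0.1]))
```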
It can be a little bit like, sometimes we overestimate human intelligence, in some ways. Like, there's certainly human intelligence that lacks continuity, and that sort of grabs things that other people have said and regurgitates them.
Sure, it is true that humans have a lot of problems. I wrote a whole book about it, in fact, called Kluge, which is an engineer's word for a clumsy, duct-tape-and-rubber-bands kind of contraption. The human mind is kind of crude. And the way I would put it is, humans are a low bar, but machines still haven't even reached that. Like, I've talked to GPT-3. I don't have access to LaMDA; we could actually talk about why. Google is afraid. That was the answer, and we can get there. You got it on the first try. I have used GPT-3, and you type in things like, Bessie was a cow, she died, when will she be alive again? And it'll just confabulate something. It'll say, well, it takes nine months to be born, I guess she'll be alive again in nine months. It doesn't understand the first thing about life or death or anything. It's just putting word tiles together, the way a non-native English speaker, somebody who doesn't speak English at all, could play Scrabble if they memorized the list of words. It's kind of like that. There's no meaning
there. I mean, a lot of English-speaking Scrabble players don't even know the meanings of the words, some of them, at the high level, you know. It's not
Yeah, I mean, they know many, and then they memorize the list of two-letter words, and those two-letter words don't mean anything to them, except, you know, I can put this here.
Points are a collection of sounds too.
I thought we were going to talk about the media, actually. I think the media is partly responsible. I think some people at Google are also partly responsible. But it turns out that the media much prefers to run stories about how we are about to have this brave new world of AI than stories about people like me, with the exception of this week, who say, you know, this stuff doesn't actually work. Those are much harder to get the media to do. I have a friend who's a journalist, I mean, he's not my best buddy, I haven't seen him in a long time, but he wrote to me and said, I pitched a bunch of outlets. This is a guy who has written for the New York Times Sunday Magazine and everywhere else. And he's like, I can't get anybody to bite on a story I was going to write about AI and its critics; nobody wants to talk about that. Now, this week was different, because of this crazy story. Suddenly everybody and their brother wanted to interview me, because I wrote this particular article. But in general, this week notwithstanding, where there was this once-in-a-lifetime wild story, the media likes to run stories about how these brand new systems are amazing. And they're never as amazing as they look. In fact, I just tweeted something about the hype cycle in AI. The way it works nowadays is somebody publishes on arXiv, not in a peer-reviewed scientific journal. They put out a manuscript, they show the cool stuff, they give numerators but not denominators, which would never pass muster in peer review, which is what you used to have to do. But you have a Google or an OpenAI that knows which reporters to go to, and the reporters see it, and they fall in love, and they say, there's this amazing thing. And they don't let scientists like me have access to it; we could talk about that too, but they've been very clear that they don't want people like me to play around with it. And then eventually the truth comes out. And so, you know, I was quote-tweeting, I guess that's the term, a former colleague at NYU who was digging deep into the latest GPT-3 model and showing it just has no idea what it's talking about. And I critiqued DALL-E after the fact, right? But the media runs the story about DALL-E; it doesn't run the story about how DALL-E can't understand a red cube on top of a blue cube. That's not sexy.
Totally. I mean, I agree with you 100 percent. I mean, first of all, you know, I covered Uber, and I've written before,
where I, you know, briefly worked. Scary.
You know, my tactic with self-driving cars was just to write about them less. I mean, I think there were occasionally skeptical stories, but writing about a negative is very hard, and companies can create news. There's this sort of announcement culture. So
let's come back to that, because actions have consequences. I like your term: we have an announcement culture, which very much serves the interest of a company like Google, where you've got someone who says, I felt the ground shift beneath my feet, I had the sense of intelligence. This is a Google VP, there are many Google VPs, but this is Blaise Aguera y Arcas, I can't say his name properly, and he's a brilliant guy, he's a brilliant writer, and he wrote this very florid thing in The Economist. And he had done an earlier version, very similar, in Daedalus. That sets a culture of, we should celebrate this. Or another example from Google: Sundar gave this talk a few years ago about Google Duplex and how it's going to make all your phone calls for you. Well, Google Duplex hardly does anything four years later, but nobody ever calls this kind of stuff out. There have been so many broken promises. The only broken promise that routinely gets called out is Elon with the driverless cars; people do point out, if they're really paying attention, that he's been promising it since 2015, always saying it's a year or two away. But that's the only one that gets called out; the rest of these don't. You get the announcement culture. So okay, let's take that a step forward. You're in an announcement culture, you're at Google, where the announcement culture is in full force, where they obviously want the world to believe that they're close to artificial general intelligence, right?
This is a company expert in announcement culture. I mean, they created Waymo, they created Google X, they're constantly talking about moonshots. Everything Google does is, here's how we can talk about the future, so we're not only talking about advertising.
That's right. So they do this over and over again, and then they kind of throw the engineer under the bus, right? The engineer is like, hey man, this is conscious. And that sounds wacky to me, if I'm being honest, but it's also in a culture where the positive results are celebrated and the skepticism is kind of shunted to the side. And it looks like the whole thing combusted this week, and so finally people were like, maybe we need a little skepticism.
And yet reporters feel like they get shit on all the time for being too negative. That's sort of the irony of this. The knock on reporters is, oh, you're too negative, except in a bear market, like we're in.
The technology beat is very different from the politics beat, right? Nobody writes a political story without, like, checking with the other side. I mean, both-sidesism has its own problems, and we've talked about that a lot. But so many tech stories that I have seen... not every reporter is like this. James Vincent is pretty good about getting both sides of the story, or not necessarily even reporting both, but just calibrating, right? You don't have to report both sides on the election scandal and say, well, I think maybe he did win the election. But you can check around and see what's plausible, and say, okay, well, he's lost 47 lawsuits, maybe there's not much to it. At least do that. But I don't see that happening with the sort of technology announcement culture that we're talking about. They're certainly not calling me most of the time. Maybe they will after this week. Well,
we'll do what we can, Gary. But I mean, it's one
podcast. You know, I appreciate that there is an ocean of media out there, and you all alone will not defeat it. But maybe we can raise some awareness here to kick things off.
And you know, Eric and I are obviously journalists, and we both know Nitasha, the reporter at The Washington Post who wrote the story, who I like quite a bit. She's an excellent journalist, very thoughtful, and she's doing very interesting work. What was so fascinating to me about this story was that it all kind of felt like kayfabe on Google's part, because in the same article... Wait, what is kayfabe? It's, you know, like
professional wrestling, where you have sort of this fake reality that people know is fake, but that you sort of talk about as real, with the storylines getting laid out.
Yes, you have the heel and the face, so you have the bad guy and the one the audience is supposed to root for. And am I the bad guy? No, no, I'm saying this is all internal to Google, which is what I find so fascinating, because you have the most
bizarre thing, which is that the person who had to make the decision about whether this made any sense, about whether we're going public with this, as far as I can tell from Nitasha's story, was Blaise Aguera y Arcas, right? The very same person who said that the ground had shifted beneath his feet. I mean, that's just crazy. It's just too perfect.
It's crazy. And, you know, we have the Google PR person on the record saying that Blake had to be fired because he was totally off the chain. Not fired, he was put on administrative leave, technically, but he has clearly fallen seriously out of favor with the company, and, you know, his claims are actually
I mean, think about how much press the company got for LaMDA. They should be giving him a raise, not firing him. I mean, seriously, he raised some interesting questions that we should all think about, which pertain to the fact that we are going to have systems that easily fool people. It's amazing that it was a Google engineer; that adds a little something to the story, whatever. But he opened a conversation we need to have. Everybody now knows what LaMDA is, and you're going to suspend this guy? That's not right.
And I don't honestly think anybody, at least nobody on Twitter, no savvy reader, came away thinking that this AI system is intelligent. Yes,
and no. Some less savvy readers did. But the first problem is, you'd have this conversation with a system that would promise you the moon, because it "likes" doing that, quote-unquote, because the statistics lead it that way. It promises you the moon and nothing happens. And the other problem, which Lemoine was working on, which the whole field is working on, but which I don't think can be solved within the current paradigm, is the toxic language, the recommending harm to self and others, and so forth. So they don't put this stuff into what we call production, I guess, because they know that if they put it in production and just, you know, threw it out into the wild, in Alexa or Google Assistant or Google Home or whatever it's called, there would be, like, millions of complaints: you told my child to do this, you told me to do this to my mother. Which is sort of the
defense of why they don't open it up. I mean, with DALL-E too, you know. But they
don't open it up to trained professionals like me, right? I mean, I got a PhD from MIT when I was 23 and have done this for 30 years. The risk
is that you, you know, publish a blog post and make them look silly. They don't want to be made to look silly by someone finding the terrible cases. That's right.
They don't. And so, you know, if they wanted to, they could keep their mouths shut, test the stuff internally, and release it when it was ready. Critics like me wouldn't have anything to say about it, or at least not until it was out and had been vetted. But they want to play both sides of it. They want to say, hey, we're scientists, we have the best scientific teams for studying AI in the world, we have DeepMind and Google AI, our company is worth a lot because we're close to AGI, and AGI is going to be worth the entire economy. That's basically what they're saying, in so many words. And they're putting out these articles that look like science. They have bibliographies, they have citations, they have charts and tables. They look like science, but then you look carefully, and they're missing denominators, and they're not going out for peer review. So they are portraying themselves as a major contributor to science, but they're not playing the game of science the way the rest of us know you must. If you don't, you wind up, ultimately, with a replicability crisis, which is what happened, for example, in medicine, where it turned out a whole lot of stuff was published that was really not very good. Right?
You said they're not playing the game of science. It seems to me, and we're kind of making this point, that they're playing the game of media, and they're playing it very effectively. And Google was very clear... I think AI in general, you know,
OpenAI is also horrible at this. The name OpenAI is just a lie. They say they're open, but they're not open to people like me. I think OpenAI actually taught this world how to play the media game, and now they do things like introduce DALL-E by having Sam Altman tweet about it and say, send me some prompts, I'll show you some stuff. Which is the opposite of a systematic, scientific way of doing it. If he doesn't like the picture, he doesn't have to put it out. And I saw yesterday, like I told you, the truth comes out. So DALL-E 2 is three months old or something like that, and finally the access is broader, and somebody posted pictures it generated of George Michael. It's on my Twitter feed: grossly distorted, physically disgusting to look at. Well, they have had a PR policy that you can't post generated photos of people's faces. Now we know why. But for three months it's been, look at all the great things that DALL-E did. Sam Altman, when he tweets about this, is not going to show you a distorted
George Michael. I disagree with you somewhat strongly on DALL-E: it only needs to produce, like, 10 percent interesting results. DALL-E isn't...
So another thing I retweeted yesterday, people always send these things to me now: DALL-E tried to draw a heptagon and it just couldn't do it. So if you'd like, you know, something with seven sides, forget about it. Maybe it can do hexagons; there are a few more hexagons out there in its database, but there aren't too many heptagons. I guess there's a
word for that. I mean, DALL-E comes off as creative. Do you disagree that DALL-E is creative in a certain way? Well, there
you need to define your terms. I'll do the easy part and then the hard part. It is definitely a very useful tool for people who are creative, with some caveats around it. So if you just need an idea for a book cover, it's awesome, right? I have, like, exactly this
use case: I'm trying to get DALL-E to redesign our podcast logo,
and it might be satisfying, it might not. So Slate Star
Codex... anyone with DALL-E access could put in "a dead cat listening to a podcast about tech." Please do that and send it to us.
So Slate Star Codex went through it pretty systematically. We ended up in this kind of wild debate last week, but before we had this wild debate, he had this nice piece on DALL-E. And he went through it: here's this thing I could do, and this other thing I really wanted to do, I just couldn't get done. And the thing I retweeted yesterday is like that too. So, for a commercial artist trying to do something for a client, it would be a good source of ideas, but you couldn't count on it, because it's a little bit wild. But, you know, it's really powerful. So it really does depend on your use case. Is it creative? That depends on how you define creativity. At some level, you look at the algorithm and it's just doing the math, and at some level it's pretty amazing what it comes up with. So it's a matter of how you define it.
This is not maybe how I would define it for humans, but I certainly think that if you had an art contest and told the judges to judge it based on creative output, DALL-E would beat plenty of humans. It would beat
most. It probably wouldn't be the best. Sure, it's not going to come up with truly new ideas, but it's incredible with things like lighting. But
quite the standard we impose on AI. I mean, again, it's sort of like, is it better than the best? No, it's
I mean, it's way better than me. It's a way better artist than I will ever be; there's no chance I will ever be as good. You know, the only things where I'd beat it are specific instructions. So if you wanted a blue cube on a red cube, I can do that; it won't be great art. DALL-E, like half the time, will put the red cube on the blue cube and half the time the other way around. So I am much better at natural language understanding than DALL-E, and it is much better at lighting and compositing, putting images in front of each other. It's really great at that.
I wanted to go back to the Washington Post story, Nitasha's story. I mean, she sort of seemed to know that she was going to create a debate over something where the experts would come down on the side of, this isn't sentient. And I think there are questions we could get into, which we sort of hinted at, of whether the guy, the whistleblower on this, really believes it's sentient, or whether he just wants to sort of advance something. So, I
think he really does believe it. So, the journalist Steven Levy accused him of falling in love with LaMDA. Yeah, and I think he did. I mean, I haven't talked to him firsthand, but Steven Levy talked to him last night. The guy's on his honeymoon, Blake Lemoine, that is. But Levy tracked him down. Levy is a fantastic journalist; he wrote Hackers, which was a book that got me excited about computers. But
he's also a very positive reporter who certainly likes to boost. But anyway, you have to hold both in your mind at once. But yes,
so I talked to Levy a little bit last night, and Levy wrote a story today; I'm quoted in it. We had a little back and forth. So Levy actually tracked Lemoine down after this story broke, when Lemoine was, like, not taking calls, he's like, I'm on my honeymoon. I think he actually got married around the day my story came out, like the day after Nitasha's story came out, or two days after. And Levy did assess his self-report as best he could, to try to see if this guy was just putting everybody on, and came away pretty convinced that Lemoine believes what he says. And in support of that is this 2018 YouTube video that people might want to watch, where he argues that AIs could be people and so forth. So he was predisposed to believe this. And either he's playing, like, the longest con ever, like he thought four years ago, I'm going to get myself in the Washington Post... I mean, it's just not plausible, right? I think he really does sincerely believe that he is speaking up for the machine. I think he's sincere about that. I don't think he's bluffing. I mean, he has some religious beliefs that...
We let people believe in God based on reasons that a lot of people would say are bad, and we sort of, as a society, accept that. And so to some degree, if people want to come up with a sort of non-scientific way of concluding that AI is sentient, like, if we apply the same rules we apply to God, we're sort of screwed here. I don't think we have to just accept people's own version of reasoning that's not useful in
the scientific community. So, I mean, here's the other reason I think this is interesting: Lemoine is now an icon, in a way, but he is not unique. Lots of people are going to interact with these systems and feel as he did. In my view, they will be wrong; they will be attributing awareness to a system that does not have it. Maybe some future systems will have a kind of awareness and be intelligent in the way they think this machine is, but this one is not. But already, there's a certain way in which we're very culture-centric here. Few people over here in North America know that in China they've had a system for four or five years called Xiaoice, and some people fall in love with it. Xiaoice is a more primitive chatbot, but not entirely different; in fact, the newest version of Xiaoice probably uses some large language models, it would be silly if it didn't. And people fall in love with it. People also fall in love with plants and cats, and sure, it's going to happen more. So there's a way in which this story is like a canary in a coal mine. It's wacky that a Google engineer thinks this, but millions of people are going to think it. I mean, I think the debate
Nitasha wanted us to have, which I don't think is really what most people are arguing about, is whether it's good that companies make AIs appear like humans, or whether in some ways they should make the AIs talk in a way that makes it very clear they're not human. I mean, do you have a point of view on that? I don't know what the answer is.
I think that's complicated. It might depend on the use case; I'm not sure there's an absolute answer. You know, some of it is like cigarettes, having truth in labeling. And I'm not sure of the answer. I think we need a lot of people, ethicists and so forth, to actually think about this question. One option would be that you make it very clear to people that this is, in some sense, an illusion, maybe find a polite way to say that: don't take it too seriously, but enjoy it. There are use cases where maybe it would be okay, like as a companion, as long as you know what you're getting into. Like, we're not going to tell people not to have stuffed animals, right? Stuffed animals give a sense of intimacy and warmth, and people cuddle them, and I'm not here to tell people they can't have stuffed animals. In some sense, it's kind of like that. And it's also like a drug, in that I can see how it's really potent, and people might lose control. Most people can walk away from their stuffed animals, but they can't walk away from heroin once they start it, and it might be pretty hard for people to walk away from these things, especially as they get better. I think right now, what Lemoine doesn't represent is how awfully dumb these systems can be, and how much they can forget what you told them. If you just put the current stuff out on the street, people might eventually get frustrated. There's a huge novelty effect: at first it's like, oh my God, I can't believe this does this. But it's the same thing with DALL-E; at some point you're like, I want it to do this, and it just doesn't really do it. And the efficacy thing I talked about might also be a problem: it tells you it's going to do this, and it doesn't deliver. So there might be some frustration factor. But I think, you know, they're addicting. I actually just wrote a poem, a riff on Howl, Allen Ginsberg's poem; I'm going to put this out later today. It was like, I saw the best minds of my generation wasting time on DALL-E and GPT-3 and so forth.
Well, you know, what's funny is that it's kind of a riff also on, is it Marc Andreessen, or which VC had the line that said, we were promised hoverboards and instead we got, and then name whatever.
It was Thiel, yes, I'm sorry. We were promised flying cars and got 140 characters. Yeah, I mean, you could definitely riff on that for AI. Like, we were promised the Star Trek computer that would actually solve our problems, and be trustworthy and reliable, and help us, maybe with climate change. And what we have are these kind of sociopathic companions that pretend to like us. That's what we got. Do you
think it's a waste of time? I want to push back on that. That's what you said. Do I
think this research is a waste of time, or the time people spend on it? I do, at some level, and that requires some explanation. In my view, these things are working because they are statistical approximations to things that we actually need. And they're very seductive, they're very easy to work with, but they're not, I think, the answer that we're actually looking for. And so people are spending more and more time and money on something that I think has no great future. It might play a role in the future, but I think there really are questions in artificial intelligence that we need to answer that are not getting answered, because it's too fun to play with these systems, and it's sucking all of the money and oxygen away from other things. I've seen this before in my career, as I've been doing this for thirty-some years, where a new idea gets popular and old ideas that are actually deserving get abandoned. And to a certain extent, that is happening now. I saw it with cognitive neuroscience: all these fMRI pictures that you probably saw when you guys were kids, about how the brain is lighting up and stuff like that. It took away most of the energy in cognitive psychology, and what is it actually showing us? Not that much. We have a bunch of pretty pictures, but we still don't really know how the brain works. It didn't really teach us that much more about cognitive psychology, but it was seductive, and it took the money. You don't
think we get the neural net big enough, and then one day it's a brain and it feels things? It does feel like, in the AI world, there's this idea, and we need to be careful of it, that if the servers get big enough, it will work. What would your approach be?
So I think that we need to, first of all, look to classical AI, which is out of favor, and borrow a few ideas from there. One is the idea of symbols and propositions, sentences, kind of verbal structures, databases, things like that; they're actually tremendously useful. We still write all the world's software with that. There are a few use cases that are very sexy with deep learning, but most software we actually write has a database, and you update records and things like that. And these two approaches right now are not compatible, and that's a problem. A lot of people in the field are actually starting to see this: if you can't update a set of records about the things in the world that you are talking about, at the end of the day you can't be that efficacious, and you can't be that reliable. So we need to merge the older tradition of symbolic AI with the neural network stuff. I think it's really hopeless until we do that. Until we do that, we're always going to get systems that say Bessie will be alive again in nine months if you just let her have a baby, or something like that. They're fundamentally not comprehending the world. I don't think that will be solved with more data.
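A minimal sketch of what the neural-plus-symbolic hybrid described here might look like. The "neural" extractor below is faked with a trivial keyword match, and the facts and rules are invented; the point is only to illustrate updating an explicit record of the world and reasoning over it, which is what lets a system answer the Bessie question correctly instead of confabulating.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    kind: str
    alive: bool

# Symbolic side: an explicit, updatable record of the world.
world: dict[str, Entity] = {}

def neural_extract(sentence: str) -> None:
    """Stand-in for a learned model that parses text into structured facts.

    Here it is just a keyword match, which is the big assumption to flag:
    a real system would need a trained parser in this slot.
    """
    words = sentence.lower().rstrip(".").split()
    if "cow" in words:
        world[words[0]] = Entity(name=words[0], kind="cow", alive=True)
    if "died" in words:
        world[words[0]].alive = False

def answer_alive_question(name: str) -> str:
    """Symbolic reasoning: dead things do not become alive again."""
    entity = world.get(name.lower())
    if entity is None:
        return "I don't know who that is."
    return f"{name} is alive." if entity.alive else f"{name} died and will not be alive again."

neural_extract("Bessie was a cow.")
neural_extract("Bessie died.")
print(answer_alive_question("Bessie"))  # Bessie died and will not be alive again.
```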
Do you think the big tech companies are being largely disingenuous about the state of their technology? I mean, you've worked inside one of them, at Uber.
I think a lot of them have drunk the Kool-Aid. And I think the problem is, most of them don't know cognitive science, and they have this tool that works, like, 85 percent well. Because they've not really studied linguistics, they've not really studied philosophy of mind, they don't understand how hard certain problems are. And they come in with these steamrollers, and they think they're solving the problems, and they're just not. I'll give you an example of how GPT-3 is just fundamentally misguided. People in language know that what you do is you have a set of words arranged in order, and you derive meaning from that. That's the most basic thing; anybody who has taken a linguistics course can tell you that. And these systems don't really do that. People talk about interpretability; well, that's a jargony way of saying we have no idea what the system is really doing or why. But it's also a reflection of the fact that there's no real semantics, as we call it, there. And from the perspective of someone who's worked in cognitive science, it's just bizarre that this much effort goes into a system that just looks like it's not doing the right thing. I don't know how better to explain it, and I'm not alone in thinking this. One of the rhetorical things that's happened in the last couple of months is, I wrote a piece called "Deep Learning Is Hitting a Wall," and it pissed off a lot of people, but I think what I said was true. In any case, it made me kind of the poster boy for the opposition. So now, and that's sort of good for me and sort of bad, it's a mixed blessing, any time somebody wants to attack the other side, they describe it as if it was just me, and they don't mention my collaborator Ernie Davis, who's an author on nearly all of the papers.
Is your view more similar to what the human brain looks like, or less like it, do you think?
We have no freaking idea. Let me be honest on that one. So there is a theory that what you need to do to solve AI is to make a model that is based on the brain,
right? There are two problems with that... or, it would seem to be a way to
solve it, at least. Well, so actually, there are three. One problem is that we have no idea how the brain works. We have a lot of data, but we have no real theory. My guess is it's actually going to have to go the other way around: we're going to have to solve AI in order to be able to make an automated reasoning, scientific-induction system that can deal with having 80-some billion neurons and the many trillions of connections between them, and so forth. So one is, we just don't have the goods to actually do this. And two is, we know that there are huge holes in what we know about neuroscience. I'll give you one example. We all have short-term memory, where I can tell you something once and you can remember it for a little bit. So if I told you, at the end of the call I'll give you a thousand dollars if you can remember this sentence, I will have your attention and you'll remember it, right? We have no idea how the brain does that. All the stuff we know about memory and brains is like, you practice something 3,000 times and you get a little bit better at it each time. That kind of memory exists, it's real. But there's another kind of memory that exists, and it's critical. Every time you parse a sentence, every time you understand a sentence, you're actually using short-term memory in order to understand it, and we have no idea how the brain does that. Then the other thing is, we know a little bit about how, maybe, a monkey brain works, but we don't know anything really about how language works. And what makes us such an interesting species is that we can talk and we can transmit so much culture that way. For that part, we don't have animal models. We can't, like, cut up some other animal that we don't feel too guilty about. Not that I'm endorsing that, but it's just not ethical; we don't have an ethical way to do that. So at the end of the day, we just don't know enough neuroscience.
It is possible that the human brain, or a future artificial intelligence, is just a far more complex neural net that starts to understand, like, rules and preferences, and that those rules fall out of pattern matching. And if that's the case, won't we feel sort of dumb for being so condescending to the step it's at now? It's at the beginning. I
don't see it that way at all. I would flip it around and say that the neural networks we know how to build now are so vastly oversimplified compared to the ones we would want that it's ridiculous we're taking them seriously. Can I just give a couple of examples? We know that there are about a thousand, plus or minus, kinds of neurons in the brain; our neural networks basically have one kind of neuron. We know that at every synapse there are like 500 different proteins; there's nothing capturing that at all in our neural networks. We know that there's an enormous amount of intrinsic organization to the brain; there's hardly any in our neural networks. So yes, the ultimate answer, for us anyway, is a neural network. But the neural network, for us, is this incredibly complicated piece of machinery. The things we have are so grossly simplified that, like, why should we expect that one has anything to do with the other?
I think that one of the reasons the media and the public are so susceptible to these particular story cycles and phenomena, and to the desire to, as Nitasha says, see the ghost in the machine, is not just some stupid impulse to anthropomorphize things. I do truly believe that we are very far away from the science fiction future that a lot of people expected at this point in time, when you talk about the flying cars, self-driving cars, self-aware neural networks, or whatever, as you maybe said in your piece about how we've hit a wall with deep learning. A lot of the promises that we expected just haven't materialized in the way that we want. And so it's sort of easier for people to assume great leaps have taken place already and we just don't recognize them yet, when in fact we're so far off and we've just been inching along, maybe impressive inches that you and others are involved with, advancing AI and other technologies, self-driving as it is. But the true versions of the things we were promised just aren't there, and won't be there for decades. And instead we kind of just have story time, where we anoint certain things as the next era, when in fact it's not even close. Does that sort
of explain it? I think it's even more complicated than that, because I think the underlying problem that a lot of people have is they think AI is magic. They don't quite know what it is, and they think that whatever it is, it's sort of a universal elixir. And the reality is, it's just a bag of engineering tools, and we probably need a bigger bag of tools, and we'll probably use all the ones we have now, and we'll use some others, and eventually we'll muddle through all of this. But what's hard to grasp, if you haven't studied the cognitive sciences, is how many different components there are to doing good thinking. And it's a little hard to grasp that a system can be good at one thing and terrible at another. Maybe a metaphor would be, you can find someone who's really good at putting up tile, you know, as a backsplash in a kitchen, and maybe that person's not so good at doing crossword puzzles, right? People can have different kinds of expertise. Well, the machines we have now have different kinds of expertise. We know how to build a machine that's really good at Go. We know how to make a machine that can be pretty good at pictures. We just don't know how to make a machine that really understands language; we only know how to make a machine that gives that illusion. And it's this kind of textured, mixed bag, and people want a one-liner: are they smart or are they dumb? Well, it's neither. They're smart at some things and incredibly dumb at others. And that's hard to accept. But
most of the business world implicitly agrees with you, right? I mean, generalized AI obviously fails in the way you're saying, but most business applications, yeah, they're just trying to use huge datasets to solve very specific problems that they have, and they have no interest...
But you'd be surprised at the naivety of business. So I have seen some massive, massive companies make weird bets on AI, what look to me like weird bets. So I said back in 2016 that driverless cars are much harder than you guys think they are, and since I said that, there's probably been $100 billion put into it in terms of R&D costs and so forth, and so far the only money that has come in from it is the elevation in the price of Tesla. And you can make some argument that Level 2 self-driving has improved somewhat, I guess. But we're not close to Level 5 self-driving; it's just not really happening. We can talk about that if you want; I've thought about it a lot. On the
media point: I think the reporters, if you polled them throughout that whole period, would have said that they don't think it's close, and yet the stories, it's just interesting to me, all made it sound imminent. And maybe this makes it a worse failing on the part of reporters, that somehow the stories come out positive, but most reporters themselves, I think, over cocktails, would be skeptical. I don't really understand it. I think it's just what the public wants to consume.
They should hit me up, I'll give them some quotes. I mean, I do. Like, Sam Shead at CNBC came to me when Optimus was announced, and I gave him the quote to give the other side and say, look, there's something that's interesting about Optimus, but this is a really hard problem, it's much harder than Musk has acknowledged.
I think the public likes: hey, this company whose brand you believe in is willing to make bold promises about the
future. And you get what you pay for, like the whole Theranos story. Partially
it's that humanity is so forgiving about false optimism. People are extremely forgetful. We
should be asking more Theranos questions. I mean, I think, you know, Holmes, I'm not sure she meant well. I think Musk means well, but Musk issues promises like they were candy. I actually called him on it recently. I don't know if you know this about me: I bet him $100,000. He said to Jack Dorsey that he'd be surprised if AGI wasn't here by 2029. So I've been writing this thing on Substack, garymarcus.substack.com, and I was like, okay, this is a good topic for an essay. I'll write about why AGI is actually not going to be here in seven years, why it's going to be much more than seven years away. And I gave five reasons to think this is a really much harder problem than he's acknowledging, and that he doesn't have a very good track record on timing. And then when I finished it, I was like, you know, I should put some money on this. So I put $100,000 on it, I laid out clear criteria, the field loved it, and people in the field doubled my money and then raised it to half a million dollars. Wow. But the bet still stands, and Elon hasn't responded, because he doesn't want to be held accountable on this stuff. The media should be like, dude, you are chicken. There's one story like that out there; one small outlet called him on it, but most people didn't pick it up. And they should. They should be like, this guy has been making us promises for years on driverless cars, and now he's promising us a robot, and all we have seen is a dude in a costume. Like, enough. Let's call him out on it. But the media has not. But
I mean, with Tesla, I feel like the government is also culpable. I mean, he's running these experiments on the roads.
Oh, you guys don't get to blame the government on this. The media is extremely
skeptical of Tesla. Like, I don't know how much more skeptical of a company the media could be. They
are, but they're not skeptical enough on the AI side, they really aren't. And I can give you some pointers on what
it looks like. I go back to the announcement culture, though. There's a certain deference to, you know, if a company announces something, if they want to risk their reputation on it, shouldn't the public hold them accountable if they don't deliver on the things
they're saying? There aren't even that many stories like that. I just don't see how the media
is supposed to operate in such a disconnected way from human psychology. Like, we are telling you, factually, that they're making this assertion about what they will do in the future. And the
media, when it wants to, has plenty of room to set the narrative and set the questions, and there could be a lot more stories than there are. I'll basically write the story for you: it should start with, Elon promised this stuff in 2016, and then the next year Facebook promised M, which never appeared. I don't know if you remember, it was going to be an all-purpose general assistant, and that disappeared. And then Google Duplex was going to make phone calls for us, and the only thing they've added in four years is movie times; it's still incredibly narrow and limited. And now Elon is promising us a robot, and not only is he promising a robot, but he thinks he's going to be solving AGI by 2029. And here's this NYU prof saying, sure, it's all bullshit. Like, let's at least ask the question. It's
one story. I mean, no reporter could be more aligned with you on this, and I feel like part of this podcast is shitting on reporters. But what you're proposing is a single story that will then be up against sort of the infinite barrage of companies announcing things. I mean, it has to be more. Just, how do you create the drumbeat of negativity? Look,
if we learned anything from the Trump administration, it's that you have to keep up the pressure. And you know, the news cycle is short, and it's true, if it's just one story, it's not enough. But there has to be a systematic effort. I mean, look, Cade Metz has been holding Elon's feet to the fire on the effectiveness of the self-driving, so I'm exaggerating a little bit. But I think it's, like, 95-5 or something like that. And I also think, by the way,
And I also think, by the way, this extends to a lot of other technologies. I'm very critical on this show about augmented reality and the promises these companies make about what it can do, and we're about to enter this hype cycle again when Apple releases its VR device and promises AR down the line. We're very quick to go to the demos; you know, all the reporters went down to Google and sat in the ugly Waymo cars, and that helped pump along this idea that they were really close to self-driving. I don't have an easy answer to it, other than maybe occasionally telling these companies no, and saying, you know what, this demo that you're putting me through, yes, I can be critical of it in the article. And I think with Nitasha, I don't know the backstory of how the story came to her, but, you know, the Washington Post also framed it with Blake in dark, artistic lighting, looking like some sort of visionary. Even though the article was, I think, reasonably critical of him, it still kind of positioned him as a legitimate voice in his field, which obviously he's not. In what I write about, it doesn't come up nearly as much. Uber has spun off its self-driving division; they don't care about it anymore. They're just a dollars-and-cents business and relatively boring because of it. But I think the hype cycle, as pushed by the companies, will never end, and it's incredibly difficult as a reporter to turn down sexy stories that we know will get attention.
I mean, you can run the stories, but you can also get a whole lot more people like me, and not just me, as voices in these pieces, and you can remind readers: let's look at the history. We've seen these promises that weren't delivered. When's the last time you read a story on these technologies that actually reviewed the history and said all these other promises didn't come true? You read a story about Optimus and it's probably mostly about Optimus, not saying much about the fact that Elon has missed every deadline he's ever proposed. And rarely are these stories synthetic, putting together all of the facts I just gave: hey, Facebook made these promises, Google made these promises, and it's actually really hard to get AI into production, which is itself an interesting question. There are some technologies you can put into production relatively quickly, but AI is not one of them. Why not? Well, it's because there are always these outlier cases. You probably saw that one of the driverless cars ran into a jet the other day; it wasn't in the dataset, right? This is a persistent, well-known problem in the industry by now. I've been writing about it since 2016, and people are starting to recognize that it really is the whole ballgame. But that means every time you have some technology, you're going to wind up with some outlier problem. So, yeah, you're going to get the demo on day one, and it's going to be five years, 10 years, 15 years before you can actually trust it. That should be in every story here.
Yeah. And I think it's also the duality of Silicon Valley and the CEO-CTO dynamic, where it's a marriage of some real technological progress with the American showmanship, song-and-dance marketing routine of getting the public excited about it. And as human beings, and definitely as journalists, we're susceptible to the CEO side of things. We'd love a character.
Let me give you another story idea about Elon. I mean, I actually wrote it, but it's a subject that didn't really get that much attention. Elon said, you know, the whole company really depends on the self-driving cars, and if that doesn't work, we're basically worthless. Which was a slight exaggeration, but without it, it's just a car company. I mean, the reason it trades at a 100-to-1 price-to-earnings ratio is because people think it's an AI company that is going to fundamentally change the world. That's why it's at 100 to 1.
I don't know, I don't think most Tesla holders have an argument for why they hold the stock, but yes, I see what you mean.
They own it because it kept going up, though now it's going down, or whatever. But the AI story is a big part of it. And I mean, with Elon himself, it doesn't matter what the other holders think. The largest stockholder in Tesla, who happens to be Elon Musk, said we must solve full self-driving or we're not worth anything. That itself gives you a story. Okay, let's take for granted that what he said is true. We can ask around and get some financial people, which I'm not, to evaluate that statement. But if you take his premise: okay, he's been promising this since 2015. Is he close? Let's look at the new accident data. Let's ask some experts. Let's hold his feet to the fire.
I just think, my take is that I find most Tesla coverage justifiably negative. I mean, you're basically asking reporters to...
The coverage is negative, like, people make fun of his tweets and that kind of stuff, and, you know, there was a new lawsuit yesterday, and people will write about that. I don't think that the AI coverage is nearly as skeptical as it could be.
I mean, wasn't there a story about how they're, like, supposedly turning off the AI right before it gets into an accident or something?
Yeah, I mean, NHTSA, we'll call it, just released something a couple of days ago, right? And so that got a little bit of coverage. But NHTSA has released two bombshells in the last few days.
And they've been deploying this since, like, 2016 or something, and you're blaming the media?
So NHTSA did two real things this week. They put out information about the Autopilot turning off just before the accident happens. And then they put out a big data dump in which Tesla had the most accidents, which is a complicated thing, because they also have the most miles. But they put stuff out that could have been top of the headlines: is this a serious problem for Tesla or not? That was there for journalists to run with, and I didn't see much about it. I check the news stories about Tesla every now and then, just to see.
I always think that Elon is such an outlier. He's such a character, he's so bizarre, he almost defies the laws of gravity when it comes to negative and positive coverage. It's almost not even worth homing in on him.
I mean, Trump was a little like that, right? And they played some similar games. But I guess I think Trump is a better example.
To me, I think Google, or some of the other big tech companies...
I believe they should be held to the fire more. Same with OpenAI: they've gotten all these love letters about GPT-3. So let's forget Google and just look at OpenAI for a minute. You had the love letter in the Times by Steven Johnson, The Guardian ran an op-ed written with it, et cetera. Everybody thinks they're being creative by using it to write their story; this is a trope by now. And, you know, Johnson gave two paragraphs to me and one to Emily Bender, but the story is still so, so pro this kind of stuff, in a way that I think many people in the field, you know...
It's what the public wants. I mean, ultimately, if you're a negative reporter... like, I didn't write much about AI, I mean self-driving, as an Uber reporter. I was very openly skeptical, and in the newsroom I refused to write about it. So Uber would just go to a Businessweek writer and say, hey, here's our new thing. I didn't get it; a Businessweek reporter got the story on their Pittsburgh lab. Because Uber knows to go to somebody else who will do the sort of, here's the...
Production. I mean, those guys are very good at shopping this stuff around. I mean, it's just like...
There's so much desire, there's so much desire for these stories, like, editors... Yeah, I mean, this is what business magazines are based on, putting optimistic statements out there, you know, Marc Lore is going to build a new city. It's what humanity wants in some way. I guess I just don't think reporters are going to will it into being. It's, like, their business model.
I mean, I think that's true. I think humanity wants happy stories about the new revolution. But I think that comes at a cost, and that's how we got into this conversation. The cost is you wind up with people deluded, and yeah.
Right. No, I agree. So I'm being defensive even though I'm sympathetic, but it just seems hard to, hard to...
I mean, there's blame at a lot of levels, yeah. And look, it's partly because you guys are media guys that I've been picking on journalism.
Oh, we love it. I'm happy to have that conversation.
I think it's fine to have this conversation. But I would agree with you that it's not all on the media. You know, the second problem is I'm pitching you ideas to go write about, out in the open; some of your buddies will listen and use them. Like, I'm giving them away...
For free. Yeah, and the media reporters are all listening to this podcast.
Yeah, I mean, I also understand it is what the public wants. And so the public is partly to blame, because it votes with its clicks, and the stories that get read are the "the world has changed" kind of stories and not the "I'm not so sure this is really going to happen" kind of stories.
And the government is supposed to protect the roads; the self-driving cars are on the streets. At the end of the day, I'm sorry, the government is letting Tesla get away with this. Tesla has been experimenting for years.
NHTSA is upping its game. It's up in the air, but I think the media is not quite following the trail that NHTSA has been leaving in the last few weeks. NHTSA is giving some really serious clues.
The government's going to go after Tesla after the stock is already down. It doesn't matter whether it's Tesla or not, by the way: if they bring a company down, they don't want to do it when it would actually hurt a rising company. They want to do it after the market has already said, okay, fine, this company's...
I mean, I think NHTSA just wants to do the right thing, whatever the right thing is. But they also showed that Waymos have pretty serious problems too, and in fact the whole field does. If you read those data carefully, the conclusion you should come to is that we're not close to Level 5 self-driving, right?
Right. And I remember, personally, my colleague Amir Efrati at The Information wrote what I thought was a fairly definitive story about Waymo's technology when they were testing on the streets in Arizona, and basically found that they couldn't turn left; left turns are still hard. And that's got to be close to 50% of turns. If they functionally can't do that...
Steven Levy, who I mentioned before, gave everybody a big clue in 2015 that not enough people picked up on. He visited Google at that point, or Waymo, I forget what they were called then; they had this place where they were testing the machines, and Levy, what's the word I'm looking for, embedded there for a week or something like that. I haven't gone back and reread the story, but I got him to give me the link the other day, so you can find it on Backchannel, which I think is now within Wired. Anyway, he's there for a week or so, and the big dramatic thing at the end of his time there, I haven't read it in seven years, but basically it revolved around them figuring out how to recognize a pile of leaves. Right? They had already been doing this for five years at that point, and leaves were still a problem. Well, leaves are an outlier. And that was a clue: if you have to band-aid up every outlier, then you're playing whack-a-mole. And that's still what's happening here.
I keep coming back to the marriage of public and academic research with the needs of the private company, and, you know, the press-release announcement culture that essentially drives the stocks and businesses of these tech companies, versus the slow, plodding, methodical advances that happen in research over decades. It just doesn't fit with the timeline of these companies.
That's right. I think the least your listeners could come away with is: hey, we are living in this announcement culture. And that announcement culture is making people like Blake Lemoine believe in fairies that aren't there, and it's making a lot of us believe in deadlines that are not really going to be met. We should be a whole lot more skeptical. Yeah.
And, just to reiterate Eric's point, I also think we as the media are coming up against human nature at times, which is, you know, in the representation of Blake Lemoine, someone who is, you know, deeply religious...
This is one of the most valuable companies in the world, and delusions are an inherent part of people's whole understanding of the universe. Like, I agree the media should be more skeptical, but I do think regular humans, the government, the companies making the instruments themselves: there are a lot of people to blame.
A couple of seconds on government, if we have time, and we can close with that. I think government is going to have to regulate AI much more than it does. Right now, for example, any company like Tesla can put out an over-the-air update to its driving software, and there's only liability after the fact; there's no regulation saying you must pass these test trials with these outliers before it's released. And I think misinformation, which we haven't talked about today, is a massive, massive problem. In Europe, they just made a deal with Facebook and other companies to be tighter on that; we're going to need that here in North America, too. It's a serious problem, because systems like GPT-3 and LaMDA are fabulous at creating misinformation, which makes them wonderful tools for trolls and troll farms and so forth, and that is going to make misinformation much worse than it is now. So, yeah, I've been dumping on the media because I thought it'd be fun, you know, we all share an interest in it. But I can totally grant that the government needs to step it up. It needs to figure out how to regulate this stuff, which nobody really knows how to do yet. It needs to realize how important this is. I did this Twitter Space with Nitasha and Kara Swisher and Casey last night, and our best question from the audience was: okay, if you're saying people are going to fall in love with these things and they're toxic, what is public health going to do about that? And that is a really good question that we don't know the answer to.
Great. We can leave it there. Well, thank you so much for joining us, Gary. And for listeners, maybe many of them reporters, if they want to get in contact with you and read more...
Or read your list. We'll include your list of 20 people who share your views about AI, so that we can, you know...
You can get that in the piece called "Paradigm Shift," at garymarcus.substack.com.
Great, awesome. And @GaryMarcus on Twitter. Thanks so much for joining us, Gary.