work. So that was JFLO. And now we're going to hear our fourth musician. These musicians are going to play at the transitions between our speakers before we get into the conversation among them. Last of our four is Cory, the Michael Jordan of accordion. He also holds the world record for playing the accordion for the longest amount of time. Bring it.
I won't play for 32 hours right now, John, don't worry. I'll do something Brazilian.
Great. Thank you, Cory. And I see another one of our speakers. Cheryl, my favorite immigration lawyer, is also in the room, and Lynn Dawei from South Africa, an amazing tech entrepreneur. We have a lot of great people in the room, in addition to those future speakers. So welcome to Imagination in Action. It's two minutes until we officially start. We have a great show today. I'm really excited for you to meet Tom Gruber and Rebecca Kleinberger and Alex Wissner-Gross and Esther Wojcicki. Not to be mistaken with Esther Dyson, who is a regular on our show. Esther, great to see you. She was on the show just a few weeks back. So I hope people are doing well. The big theme of tonight is going to be AI, and we have some real experts on the subject, so I'm excited for people to participate. Last show, we had a total of 8,870 people come through the room over the two hours and 45 minutes, and 7,579 people had asked questions by the end of the night. It was a great dialogue, and we're editing that event and we'll be getting it on our website. To follow our shows, Imagination in Action Club is the website, where you can go in and see all the speakers and see the past shows; we're going to be putting them all up. So one minute until we officially start. It's great to see you. Thank you, Emi, for playing first, and then Haley, and then JFLO and Cory, we'll be having you play in the transitions. Okay, it's officially six. All right. Welcome to Imagination in Action. We aim to curate a powerful weekly dialogue with pioneers in innovation who are changing our world. We believe that we need people who are using their imagination more than ever to help guide us and inspire us and show us how they have had impact, how they're putting things in motion, how they are having an effect on the world. Tonight we have four extraordinary people: Tom, Rebecca, Alex and Esther.
What we're going to do is introduce each of the speakers and ask them five questions. Alison, my partner in interviewing, and I will go back and forth on those questions. After we interview each speaker with those five questions, we're going to open it up to the other three speakers to see if they want to build on something that was said or ask a follow-up question. Once we get through all four speakers, we'll open up the dialogue among the four, and then we'll open it up to all of you who are in the room. We're really excited about this. Tonight is building on a tradition of some great rooms. We had Neil, the 18-and-a-half-year Chief Product Officer of Netflix, for our first show. Then we had NASA and Tesla: the head of operations for the Mars rover came off a shift, and she and the former president of Tesla did a show. The week after that we had Esther Dyson, who's here in the room now, and James Currier from NFX. Last week, we had Mike Federle, the CEO of Forbes; they have 15 million followers on their Twitter account, @Forbes, and once we get him the audio, he's going to amplify it. And we had Tria, who works for Bob Langer, an amazing up-and-coming engineer, scientist, and innovator. Next week, we're going to have Robert Waldinger and Keyun, who's going to be coming in from London. She works on privacy at Google but is creating the happiness foundation. And we have Wade Davis. If you don't know who Wade Davis is, he was one of the inspirations for the character Indiana Jones; he's a cultural anthropologist. And Robert Waldinger is running the longitudinal study on happiness. So that should be a great show. The week after that, we have Cady Coleman, who is a two-time astronaut on the International Space Station, and Ben Schwegler, who was the head scientist for Disney.
We also have Juan Enriquez and Jane Metcalfe; Jane is the co-founder of Wired Magazine, and the two of them are going to talk about bio. Cady and Ben are going to talk about infrastructure in space. Then after that, we have Deblina and Ed: Ed Boyden had Deblina in his lab, and now Deblina has her own lab. That's going to be a great one on neuroscience. Then we have Elizabeth Rowe from the BSO; she's going to be doing a show, and we're still figuring out who to pair with her. The week after that we have Nir, who's one of the world's experts on longevity; he did a seminal research study on centenarians. We have Satchin and Emily from the Salk Institute the week after that; they're going to talk about circadian rhythm. Then we have Naomi and Sarah, who are going to talk about Hollywood, the #MeToo movement, and Black Lives Matter. They're both amazing movie producers, and I'm excited to have that one. And we have some other great speakers; check out our website. We're up to, I think, 135 speakers on the website, 60 of whom are female. We're aiming to be gender equal; we're getting there. Okay, so let me introduce Tom Gruber. He's the co-founder and CTO of the company that created Siri. It came out, and a few weeks later it caught Apple's attention. In 2010, just as mobile was becoming a thing, when we were all on 2G (you know, there's a lot of talk about 5G now, but this was 2G), Siri was bought by Apple and brought in, and it suddenly became an integral part of all the Apple products. Siri ushered in a new paradigm in human interaction with devices. And Siri was Steve Jobs's last deal before he passed away. Steve saw Siri launch, and then the day after, he passed away. So Siri was close to Steve Jobs. Tom stayed with Apple for eight years after Siri was brought in-house, and he had a corner office in that circular building; there's a funny story behind that. He led the advanced development group.
Tom is an important pioneer in AI and UX and UI. Tom is all about augmenting human intelligence, has done it in many different ways, and is doing some extraordinary stuff. He thinks about how AI can be used, and how it is being used, in the attention economy. He worries about Frankenstein tech scenarios, but he sees all the possibilities and he's trying to help a number of people lead in this space. And he thinks about how to have that impact every day. So I'd like to introduce you to Tom Gruber. Our first question to Tom I will be asking, then Alison will do the next one. So Tom, in 2007 you co-founded Siri, which created the Siri intelligent personal assistant, building off of the Knowledge Navigator that John Sculley talked about, I think, a decade or two earlier. Can you share with us a bit more about what the process was like of creating Siri? Was it easy? Was it hard? Was creating a voice assistant your original intent? Tom, welcome to Imagination in Action.
Thanks, John. It's a pleasure to be on your show. It's a pretty cool bunch of people. Was it our original idea to do a Siri? Well, the original concept of Siri was, yeah, to do that age-old vision of a personal assistant that AI people have been thinking about forever. As you mentioned, the Knowledge Navigator, the 1987 concept video from Apple; Alan Kay and other people were behind that. It showed a vision of a talking guy with a bow tie and a face, an assistant in the then-uninvented iPad, using the then-uninvented internet. A pretty visionary thing. So Adam Cheyer and I were both very motivated and inspired by that. We were also both inspired by Doug Engelbart, a big hero of ours, who was the guy who invented collaborative computing on the internet and the mouse and some other things. So here's why we did it: not because it was a cool new idea. It was an old idea, an old vision. What happened is, in 2007, we just saw that the clouds were parting and the timing was right; we could do it. I'd already been doing it for 20 years; I'd been in intelligent interfaces. This is like the holy grail of intelligent interfaces: you just talk to it. But we couldn't do it before. So that was the thing about it: it was hard until then, and then it was about to get easier. So it was the time to do it, the time to do a company like that. We had three founders, a great team. Dag Kittlaus came from the mobile industry. He walked in holding this iPhone 1 in his hand and going, this is the future of computing. And I say, well, how do you know this? Because I've been, you know, doing mobile forever, and no one else can do it like Apple.
And then Adam had spent his entire career building demos of personal assistants and architectures for what we call orchestration of services. In other words, if you say a thing you want to do, it figures out how to go get it done by hiring various subcontractors, like Yelp and OpenTable and so on. And I had been doing these intelligent UIs and AI things. So it was a pretty good team, and we figured we could probably do it with a startup. My previous company had raised six rounds of funding and had 200 people, blah, blah, blah; it was really hard to build tech then. Ten years later, we could do it with a team of 24. Specifically, also, you mentioned 2G. For those of you thinking about startups, timing is really important. Timing isn't luck; timing is catching the right train at the right time. The mobile industry was just beginning, so Dag brought that inspiration to us. The ability for us to connect to services was just beginning; all these open APIs were just happening, because the Web 2.0 era had left this wonderful peace dividend of open APIs to all these services. And then, of course, the idea that you could do natural language understanding was coming online. So we just said, this is sort of a perfect storm; it's time to do it. So we got together, raised the money, and got to work. One more subplot here: we built the whole thing as what people call a minimum viable product, a term I always find annoying, because why would I do something that's minimal and barely viable? But anyway, we had one ready in a year. And then we didn't ship it. We went to the board and said, well, there's one other feature we really want to add, and if we added that to the personal assistant, it would be even cooler, but it will take us another year.
And guess what that feature was: speech. So speech recognition was an add-on feature at the end. But because the board was smart, and they understood this was a big play, they said, that's fine. Let's go raise another round, let's wait another year, let's get the speech right, and then we'll ship this thing. And it was a good play, because we stayed in stealth throughout the year, and then when it launched, it had everything all at once.
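The "orchestration of services" idea Tom describes, where the assistant figures out which subcontractor service can get a request done, can be sketched roughly like this. This is an illustrative toy in Python, not Siri's actual architecture; the service names, keyword matching, and return strings are all invented for the example:

```python
# Toy sketch of "orchestration of services": parse an intent from a
# request, then delegate to the subcontractor service that handles it.
# Illustrative only; not Siri's real design.

def restaurant_service(query):
    # stand-in for something like a Yelp lookup
    return f"Found 3 restaurants matching '{query}'"

def reservation_service(query):
    # stand-in for something like an OpenTable booking
    return f"Booked a table for: {query}"

# each intent keyword maps to the service that fulfills it
SERVICES = {
    "find": restaurant_service,
    "book": reservation_service,
}

def orchestrate(request):
    """Dispatch to the first service whose intent keyword appears."""
    lowered = request.lower()
    for keyword, service in SERVICES.items():
        if keyword in lowered:
            return service(request)
    # graceful fallback when no subcontractor matches
    return "Sorry, I don't know how to do that yet."
```

The point of the pattern is that adding a new capability means registering one more service, not rewriting the dispatcher.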
Tom, so great to have you with us. I really loved that idea that speech recognition was an add-on feature at the end; I'm going to take that into my planning in the future. I'm curious: when Siri was acquired by Apple in 2010, how did that change Siri's identity, and yours? And particularly, did Apple and Steve take Siri in a different direction than you would have if it had stayed just the 24 of you?
Well, it was an inflection point in the company. Who knows what the other path would have led to? We certainly had a tiger by the tail in terms of e-commerce; we were ahead of the curve on being able to make money by having people get things done, like ordering stuff online, and so on. So we had that line of business ahead of us. But Steve said, basically, you know, you can make a business or you can make a product, and you guys like to make products. So come work with me, we'll make a product, and I'll take care of the business; Apple will take care of the business. And what that means isn't just that Apple acquires the company and you don't have to think about revenue. It's much bigger than that, because really, it's instant impact, right? Essentially, this is the payoff for founders like us: we can have users at scale using the product without having to get lucky or crawl our way up the hill. We can focus all of our attention on the product. And that's what Steve and Apple allowed us to do. Now, what it did differently was it wasn't really an e-commerce product anymore. It focused basically on how to make things easy to do on the phone, and so we doubled down on that aspect of the assistant. Of course, that meant you could operate it hands-free in a car, you could operate it hands-free if you don't have hands that work so well, or you could use it just to do dictation, because who likes to type in that little thing? All the things it can do, when you think of them, they're not amazing, deep, Terminator-style AI, but billions of people use it. I mean, it's used billions of times a week; it's used all the time. It's just terribly useful to have that level of interface improvement, and that's basically what I think we're most proud of.
Tom, how did you decide on what voice to use for Siri? Did you build in a sense of humor? How did you anthropomorphize it? What were some of the choice points? What got left on the cutting room floor?
That's a good question, John, thanks for asking, because a lot of people misunderstand that. They think Siri has a female voice, and that isn't so. It always had two genders. From the beginning, it was actually male, I think, in England; we launched in England and several English-speaking markets then, and also in Japan and Germany and France, all at once. So it wasn't always female; it was a choice. And by the way, that's just the default; I can switch back and forth. The reason it is that way is that it's a design choice. The voice is a designed aspect of the character, just like the name; the name was carefully designed, run through lots and lots of filters. So let's remember that we have lots of design choices. We see now that we have virtual assistants that look like Deepak Chopra, right? You can put any kind of body or face or voice around it, and that will shape your interpretation of what it means, what kind of an agent it is. But it's still basically a designed artifact.
Does that make sense? Great. Thanks. Alison, you're up, but just before that I want to remind people: you're in Imagination in Action, and we have four great guest speakers. We have Tom, Rebecca, Alex and Esther; follow them. I can't wait for you to hear them all talk on the subject we're talking about tonight. I also see some of our future guests in the audience. Cady Coleman, the two-time astronaut, is next week, and I see Elizabeth Rowe, the principal flutist for the BSO. Thank you for coming tonight. Alison.
Let's see. Tom, I just wanted to follow up on one of John's questions, because I love the early stories that you tell about how Siri had humor, or how you had to add humor. Can you just share a few stories about the early days and what was intended humor and what was unintended humor?
Sure. I mean, essentially, we had a problem. Remember, the idea was to make it so that people could talk to the device and ask for what they want, rather than having to type or tap or swipe or figure out an interface. So it was going to be a paradigm shift; people had to get used to something new. And we also knew that by making it much easier to ask whatever you want, people were going to ask whatever they want, and then we were going to be in hot water. Because of course, sure, we had some good programming and a good answering system and all that kind of stuff, but we weren't going to boil the ocean on version one; there were going to be a lot of things that Siri simply could not know or answer. So this is kind of a bug, right? How do you turn a bug into a feature? Well, what programmers call graceful degradation. That's the tricky term for: how do you deal with this mess where people are going to have their expectations shattered by the actual reality? Well, the answer, of course, is give it a personality and make it fun. Also, I'll do a shout-out to someone who was responsible for most of the humor and personality. His name is Harry Sadler. He was with us; he's basically the genius behind the personality intelligence, and he's had several people working for him and with him since then. But in the early days, Harry was our guy on point for the consumer experience and everything. He's a technical guy and a writer; he was the perfect person. The other piece of that that's interesting is that the whole way Siri operates in terms of humor, and the way it comes back with answers, is also influenced by the practice of magic. I mean normal magic, like you'd go to a show and see. Because Adam Cheyer, our co-founder, was actually also a trained magician and a big fan of the techniques of magicians.
So we would do things like anticipate what kinds of things people might ask, just like when a magician takes a volunteer from the audience and always magically seems to know what they're going to ask for. We used the same technique for the humor. And so it was a lot of fun solving that problem.
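The graceful-degradation trick Tom describes, anticipating questions the assistant can't really answer and responding with personality instead of a bare failure, can be sketched like this. The canned lines and the exact-match lookup here are invented for illustration; they are not Siri's real responses or matching logic:

```python
# Toy sketch of graceful degradation with personality: anticipate
# out-of-scope questions and answer them with canned, characterful
# replies, falling back to an honest generic line otherwise.
# The responses below are invented examples, not Siri's.

CANNED_RESPONSES = {
    "what is the meaning of life": "I can't answer that, but 42 has a good reputation.",
    "do you love me": "I'm flattered, but I'm just software.",
}

def answer(question):
    # normalize so "Do you love me?!" matches the canned key
    normalized = question.lower().strip(" ?!.")
    if normalized in CANNED_RESPONSES:
        return CANNED_RESPONSES[normalized]  # bug turned into a feature
    return "I don't know that yet, but I'm always learning."
```

A real system would match anticipated questions fuzzily rather than by exact string, but the shape is the same: a curated layer of personality sitting in front of the fallback.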
Can you share with us a little bit more about how, in your view, AI and machine learning have evolved in the last decade? I realize this is a pretty big question, but what do you track as the major steps in terms of how the science has evolved, and the players and companies that you see as leading? And what would you say are some of the most exciting milestones since 2010?
Absolutely. As you said, we watched it happen: the deep neural net revolution. While I was at Apple, Alex Acero, who was a deep learning guy in speech recognition at Microsoft, joined the team early on, and he brought the deep learning speech technology. I think he should be more well known for that accomplishment. He also brought Hinton to Microsoft originally, it turns out. So that all happened in the middle of the decade, and it was really amazing to see what happened. We had speech recognition before that, and it was okay, good enough, kind of. But once we started doing the deep neural nets, the speech recognition got much better very quickly. And that is essentially the triumph of brute-force computing and clever modeling over expertise in understanding language, at least at the level of speech recognition. Deep learning has completely changed the way we think about pattern recognition. It happened in vision too, right at the same time. What's amazing now is, essentially, because we could harness Moore's law in the form of GPUs and DSPs, you could take what used to be complicated programming and turn it into matrix algebra, which these hardware racks can do really well. So what does that mean? Well, certain classes of problems began to fall, like these pattern recognition problems. And at the same time, it whetted people's appetite for what AI could do. So what's really happening now (and we're going to have a great conversation, I think, with Alex later about this, maybe even an argument or a debate) is the stuff you can do by throwing really big iron at the problem of imitating humans and seeing what happens.
And that's what's been happening with machine learning in the last decade.
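Tom's point about turning "complicated programming into matrix algebra" is quite literal: a neural network layer reduces to a matrix multiply plus a simple nonlinearity, which is exactly the operation GPUs are built to do fast. A minimal sketch with NumPy, using toy dimensions and random weights rather than any real speech model:

```python
import numpy as np

# One neural-network layer is just matrix algebra: y = relu(W x + b).
# Toy sizes and random weights for illustration only.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights: 3 inputs -> 4 units
b = np.zeros(4)                   # biases

def layer(x):
    # affine transform followed by ReLU; on a GPU, the W @ x part is
    # the workload that runs as massively parallel matrix hardware
    return np.maximum(W @ x + b, 0.0)

y = layer(np.array([0.5, -1.0, 2.0]))
```

Stacking many such layers, and fitting W and b to data, is the whole "deep" part; the per-layer computation never stops being matrix algebra.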
Great, thanks, Tom. You've spoken on humanistic AI a lot. Can you paint a picture of what that means to you and why you think people should be excited about it? Like, AI is artificial intelligence, but you like augmenting intelligence, and you see humanistic AI as a way to augment human capabilities. And at the same time, we're in a renaissance around AI. You know, it was at Dartmouth in 1956 that people first started really talking about this, and it's taken a while, but things are moving really fast. A lot of people are worried, and there's some controversy around AI. Can you touch on what you think some of the controversies are, and how you feel about them?
Well, sure. In fact, I'll just seed the conversation, because I think we'll have a lot more as a group to talk about in the next couple hours. First, what's humanistic AI? Humanistic AI is essentially a philosophy. There have basically been two camps in AI for 30 to 40 years. There's the camp which today I would call machine intelligence, which sounds like a perfectly reasonable thing, right? Let's make machine intelligence. That is, the goal is to make machines as smart as possible, and the way we measure and celebrate success is that a machine can now beat the human at chess, and then at Go, and that it can do a reasonable imitation of humans at things like speech recognition, and so on. In that camp, the goal is just, by whatever means, making machines intelligent. And it essentially follows from that that the applications are automation applications. You use that automated intelligence to do things that humans used to have to do by hand, and then you can win at certain games: efficiency, optimization. The other camp is what I call humanistic AI. That's where, instead of having machine intelligence for its own sake, and therefore things like machines competing against humans, you from the beginning orient your AI as augmentation, either directly, like eyeglasses are an augmentation of our vision, or indirectly, in collaboration, like the assistant as a collaborative AI. That philosophy, it turns out, is important, and it has been throughout my career. It's also very similar to the philosophy Doug Engelbart had when he used the word augmentation. He basically said, when he was coming up 50 years ago, that the reason we want to do this internet thing, this collaboration thing, this interactive computing, is that it's a powerful technology.
He wanted to use it to help solve collective intelligence problems for the world. And so what I do with humanistic AI is use AI to solve human problems, to be on the human side of things. Now, this sounds like happy talk, but it's not equivalent to just saying let's do good things with AI; it's strategically important. For example, just to give you a hint: when you use AI as an optimizer and as a competitor to humans, you end up with things like the adversarial nature of the Facebook and YouTube algorithms that basically trick people into doing things against their will, at scale. What's happened now is that the optimization AI has become an idiot savant: it's extremely smart at getting people to stay on site, and it's pretty much ignorant of everything else that could matter. If, on the other hand, you start out with "the AI is supposed to augment people," then you basically start with the evaluation function for the AI being: how is it doing for humanity? Is it improving health? Is it improving our ability to communicate and get along? Is it improving our ability to design things? So that's why I run around under that banner of humanistic AI. It actually helps me make my choices about what to do.
Tom, that is such a great way to start off the session. Thank you so much. Just to reset for all of you who recently joined: this is Imagination in Action. John has an amazing stage on Clubhouse every Tuesday from six to eight. I think what we'll do next is invite Alex, Rebecca and Esther to see if they have questions for Tom. Tom, your concept of AI as idiot savant is like throwing down the gauntlet, so I think there will be some good ones. And I notice a number of you have your hands raised. We promise, after hearing from all four speakers (and Rebecca is next), to get to questions in the room. We took some last session, so we promise not to shut down the Q&A, but because we have four speakers this time, we're going to hear from them first so they each get a chance to share their richness. So Alex, Rebecca, Esther, do you have questions for Tom?
I'm happy to open up. So as Tom knows, one of my favorite grenades to throw here is to ask about intelligence amplification. So, Tom, going back to one of your earlier points: if we augment every human with a personal assistant that learns, that effectively makes the person plus the machine together smarter than the person alone. And if we do this for all of humanity, going back to your comment about conflict and amplification, if we basically simultaneously amplify the intelligence of every human on Earth, is it reasonable to expect that conflicts between humans might also be proportionately amplified? Or would you expect that in a post-amplification world, conflicts would actually be reduced?
That's a great framing, Alex. You did a great job of sliding from augmentation to amplification, and they're not quite the same thing. This is the crux of my answer: augmentation of humans with a personal assistant would be about helping the human do what he or she already wants to do, like be healthy, or be social, or be involved or engaged, or be successful at something. Now, it could be successful at being a dictator, that's true. But it's also helping everyone else who wants to be successful at living in a democracy that doesn't have dictators. So if you start from the augmentation philosophy, I think you don't end up immediately with bad consequences like you're talking about. Now, what's interesting here is, if you augment the IQ of everyone, what you're really doing is not so much augmenting IQ as shoring up things that we're not so good at. We're not so good at memory, for instance. So shore up the memory: give us a much better, more perfect and exhaustive memory. We're not so good at long-term planning and prediction. So help us do those kinds of cognitive tasks that we're not so good at. And if we start to do that, we'll start to realize that maybe the AI that makes the 4 billion people who are online a little bit smarter will help them understand, for instance, why climate change really matters. Because climate change is not something you can just go down to the local pub and understand; it's a long-range prediction problem, right? Or why the ocean actually matters to save, or things like that. Or why, in fact, it is easy to delude yourself if someone says "do your own research" about QAnon, or about vaccines.
If you had an augmented intelligence, you'd realize that doing your own research doesn't mean going to your neighborhood filter bubble and asking them. It means doing research, and the assistant could help you actually do research, state-of-the-art research, say, and really get a good answer. So yeah, I think making everybody smarter, in that sense of augmentation, is actually probably going to lead to better outcomes. Now, you can go ahead and come back with the "what about the bad actors?" question; we can talk about that too. But that's my first volley. Thank you.
So you guys, Emi, if you could play for like 90 seconds. This is Emi Ferguson, a Juilliard-trained musician. She's going to be affiliated with Tom Gruber, who was the co-founder and CTO of Siri and spent eight years at Apple as head of research, doing some extraordinary research. And then I'm going to introduce Rebecca.
Great, thank you, Emi, and JFLO will be affiliated with Rebecca. So Rebecca, thank you for being awesome, and that was a great musical interlude, Emi. Rebecca is a creative mix of science, engineering, design and art. She explores ways to craft experiences for vocal connection. She's pointed out in her PhD thesis, and many times when I've spoken with her, that your voice is something we all know, and yet there's so little we know about it. Air from your lungs, 100 muscles involved in your body, emotions, millions of years of evolution, 6,500 different languages spoken on the planet. And yet there's so much more to learn and know about voice and language, and Rebecca is at the forefront of this. She also raises hedgehogs, and she knows how to cheer. Whoa, she is a very talented person. Rebecca, thank you for being here. First question to you: you're an engineer by training, scientist by heart, artist by nature, and designer by necessity. What is it that first led you to study the human voice, and why have you made that your calling?
Thank you, John. Yeah, well, before coming to MIT, I really liked to explore many different fields; I was interested in a lot of things. I grew up on an old farm in the countryside in France, where my parents were literature professors, and I went on to study math, then switched to physics and to mechanical engineering, and then to computer graphics. When I arrived at the Media Lab, after about one year of exploration, I realized that a lot of my projects happened to revolve around the voice. There were the physical aspects of the voice: I looked into modeling the voice with springs and masses, and I realized that the equations I needed for that were pretty similar to the work I'd done on turbines before, just at a smaller scale. But also the social aspects of the voice: how voice and breathing actually synchronize when you talk to people, and how maybe we can influence that with technology. As well as the musical aspects of the voice: I've worked on building tools for professional singers, and realized that some of them, tenors, have kind of an unfair advantage with the use of their voice, because some of their resonances can be picked up not only by their ears but by their skin receptors, which could make them independent from room acoustics. I really realized that there was a lot to look at there, and the voice became a way for me to connect a lot of the things I was interested in. Because there are a lot of things going on in the voice. Of course, there is the sound aspect and the math and the processing, but it really opens up when you look at the voice as an experience. The voice is studied in a lot of different fields, but those fields often don't talk to each other. And for the last 10 years studying the voice at the Media Lab, I've really had the luxury of not being constrained by field boundaries. So I would say that my first motivation in looking at the voice was how little I knew about my own voice.
I used it every day, but I had no idea how it works, and I think that's kind of the case for a lot of us. And then, after about three to five years of looking into all the details of how the voice works, I became even more fascinated by the voice beyond words: vocal interaction tells way, way more than the words. And it was kind of interesting, because both my parents, as I said, are French literature professors, and I was a little rebel there, saying, okay, but maybe what's beyond language, you know, vocal interaction, is even more powerful and meaningful than the words themselves.
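The spring-and-mass modeling Rebecca mentions is often done with lumped-element vocal fold models. As a minimal sketch of the idea (a single damped mass-spring oscillator; all parameter values here are illustrative assumptions, not taken from her thesis):

```python
import math

# Toy one-mass model of a vibrating vocal fold: a damped spring-mass
# oscillator integrated with explicit Euler. Parameters are illustrative.
m = 0.001   # mass (kg)
k = 100.0   # spring stiffness (N/m)
c = 0.02    # damping (N*s/m)
dt = 1e-5   # time step (s)

x, v = 0.001, 0.0  # initial displacement (m) and velocity (m/s)
trajectory = []
for _ in range(2000):
    a = (-k * x - c * v) / m   # Newton's second law with F = -kx - cv
    v += a * dt
    x += v * dt
    trajectory.append(x)

# Natural frequency f = sqrt(k/m) / (2*pi); roughly 50 Hz for these values.
f = math.sqrt(k / m) / (2 * math.pi)
print(f"natural frequency is about {f:.0f} Hz")
```

Changing `m` and `k` retunes the oscillation frequency, which is the sense in which the same equations describe both turbine blades and vocal folds, just at different scales.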
Well, I was just going to ask you, Rebecca, I love the idea of the musical aspects of voice, that's so beautiful. And I think you're in a perfect position to understand what it is about the complexity of the human voice that we may not have captured yet in our technology. For example, you've written about the importance of watching what you call our voice bias. Can you say a little bit more about this concept? How does the white voice predominate in our thinking and our approach to studying and replicating voice?
Yeah, sure. So, well, we all have voice biases, very often against women's voices, minorities, certain types of accents. But it's not only accents, right? There are certain vocal styles, like vocal fry, voice disorders, disfluency, lisps, or nasality, and those biases are often created by societal trends. But they're really subconsciously intensified by our own minds. The brain is very finely tuned to listen to voices; we have special circuitry for that that's different from many other sounds, and we analyze voices very quickly based on our prior experiences. To do this, this circuitry uses a lot of special shortcuts that can really be detrimental to true listening and human connection, so this optimization in our brain really leads to very ingrained biases. And for technologists, it's kind of easy to follow those trends and use those biases to achieve specific ends. For example, psychologists have demonstrated that we enjoy hearing women's voices more when they're delivering basic information, but we are more receptive to orders when they come from male voices; there was some research about that. And then, kind of in consequence, many product designers decided to use attractive female voices when they want to please or serve, but to use male voices for more complex information loads, or to sound more assertive and impose respect. That was the case for many, many years in several metro systems all over the US and Europe: they used recordings of female voices to announce when the train is arriving, but as soon as there was a delay or an issue on the tracks, then they would use male voices. And it's kind of chilling to think of how those technologies really reinforce those biases. Of course, some people might argue that it's more efficient, but it's also kind of a vicious cycle. So that just brings up the importance of becoming more aware of how we use voice technology and how that influences us at different levels.
And, of course, people have tried to use tech in ways to fight those biases. People are now working on what we call voice skins; it's like avatars for your voice, and it changes your voice in real time. These are used in the gaming industry, where it's really booming, or for call centers or remote learning or interviews. And if it's used well, it could really empower some populations. But it may also be covering some issues rather than really tackling the problem. And we might currently be going backward on some of those issues. For example, there are so many training programs for people to learn to sound more assertive, or more trustworthy, more professional, more white, but not enough education for people in power to become aware of their biases and really fight those biases.
Rebecca, thank you. Next question. You've written about inner voices, and how we superimpose a voice when we read tweets, say from a boss or someone well known. You have talked about the critical role of voice in creating relationships, where we alter our voice for each person in our life. Have you studied how this works when the voice comes from a computer or machine, which is partly the theme of tonight, the human in the machine?
Yeah, there are kind of a lot of points around that we could talk about, because there are voices we hear and there are voices we imagine, and imagined voices can be as powerful as the real ones. For the past five years, I've done a lot of work looking at the experience of the inner voice. It's been studied a lot in terms of schizophrenia and auditory hallucination, but those are just one type of manifestation of those inner voices. Most of us have some of those experiences, from silent reading or mind wandering or earworms. And some people are hooked into what we tell ourselves: what are those voices saying? I look into how we talk to ourselves, the acoustic properties of the inner voice, and a lot of different applications of that. Specifically in social media, it's interesting, because when you read a tweet, it's different when it's from someone you've heard before or someone you haven't. So, like the example you gave: if you get an email from your boss, you're going to read it silently, but in your head you will certainly read it with the voice of your boss, with their tonality and with their specific vocal posture. And it's much easier to wrongly interpret a message when you have it written than when you talk to someone, because it's much more dependent on your own state of mind: you're going to imagine the voice of your boss, but the tonalities are going to depend a lot on how you feel at the moment. And it's kind of the same with tweets, right? You create a mental model of who might be writing this, especially if you've never heard them. And that can really lead to more social polarization: you're going to imagine a certain type of voice or a certain type of tone.
And I'm kind of interested in how Clubhouse is going to play into different biases: it's not going to be based on imagined biases, it's going to be biases based on actual acoustic information, which is still bias, but it's also kind of different. And the way we talk to computers or machines, I mean, we could talk about that for a while; I'd love to come back to that later. But for me, I'm kind of taking a detour when looking at this question, especially looking at how we talk to animals, or how we talk to babies. Because we do have, in subtle ways, very different voices for every person we talk to. But when we talk to Siri, or when we talk to Alexa, we also have a special voice for that. And it's kind of interesting to know about that, because it's not going to be exactly the same thing as talking to your friend or to your boss. And I like to think about the different degrees of variation in how we talk to animals as a model to better understand how we talk to machines.
So Rebecca, you've thought so much about the diversity of our voices in a pluralistic world. I love your point. And what questions do you have for Tom about how Siri's voice was configured?
Yeah, you know, I'm so glad to be able to ask a question or have a conversation with Tom about some of those questions. I mean, I've been wondering: what would it mean if Siri were to age? Right, our voices change throughout our lives, from the voices of babies to the voices of, you know, older adults, and that has a big impact on how we perceive ourselves. I'm kind of wondering if you folks have thought about some of those questions. But I also want to jump on one point you mentioned earlier: you said the holy grail was this question of imitating humans. And I think it's interesting the way you phrased it, because often when we imitate, what we imitate is a manifestation, right? You can imitate the manifestation of the human through their voice, or through other elements. But what I'm kind of looking at is: can you imitate the experience of a human? Can you think about not only the symptom of the voice, but also where the voice comes from, and how the voice is not only something we use to project ourselves into the world, but really a cycle: the voice shapes us. Because it's the sound we hear the most in our lives, our own voice shapes the biology of our ears and shapes our brains, and the voices of people around us also shape us. And I think it's going to be very interesting to look, in a few generations, at how synthesized voices might shape us. So I'm kind of curious to hear if you have any thoughts or reactions to this slightly different way of looking at the problem.
Oh, yeah, absolutely. Rebecca, thank you. I mean, first of all, your overarching point, even underneath those questions, is that the voice carries a lot of information about the speaker, about the agency of the thing behind the voice. And it also is a very culturally and socially designed thing, I mean, a contextual thing. In other words, like you say, your voice could age as you age. You know, my first AI project, in 1981 to '83, was to augment people with cerebral palsy with the ability to speak by hooking them up to Apple IIs and Votrax voice synthesizers. And you would use the AI as a language model to predict, kind of like you do when you're typing on the keyboard on your phone: it predicts so you don't have to type every word every time. This would do it at the word level and sentence level, but the same idea. So what's interesting there is, when we gave this voice to people who previously just couldn't speak, they all had the same voice. And then we found out, oh yeah, everyone in the accessibility world knows this: these folks go to conferences and they talk to each other with exactly the same voice, and it's kind of weird. So since then, people like Rupal Patel have worked on how you customize and personalize a voice just to make it more unique. You can actually donate your voice to be the samples that are used to train the model to make a unique-sounding voice. But it's a lot more than the sound, and that gets to your deeper question at the end there. Because we hear, as you know, and as you've taught, we hear so much in the voice. Our brains are tuned to hear: are they telling us the truth? You can tell when someone's smiling: I'm smiling right now while I talk to you, and now I'm not smiling. And you can almost see the face in the voice.
And then, of course, the reason why voice synthesizers today don't fool anyone is that it's not just the fact that it sounds like a human, you know, with words articulated okay. It's because there's an enormous amount of information about intent in the way the voice is rendered. So what's going to be fun to see is the art form of voice design, right? We're seeing now people like you, professionals on the science end of it, and also on the performing, creative side of it as well. People are trying to make synthetic new voices that have new effects on people, which is just really exciting, scary too if it's used to deceive people, but there's a whole new world. We're just beginning in that space.
Great. Thanks, Tom. So my last question to you, Rebecca, and then JFLO is going to do a musical interlude, and then I can't wait for you guys to meet Alex Wissner-Gross, just a giant among giants, who has a lot of really interesting things to say. And then Esther is our closer, and then we're going to get into a dialogue. My last question to you, Rebecca, is, I want to stretch things a little bit. I know when I heard your thesis you talked about not just human voices but animal voices, that you were studying zoos and convening an interspecies internet conference, or something like that. I'm just curious: who would you bring to the table? I know in your thesis you mentioned that humans aren't the only ones with a descended larynx, that there's the wolf, I think the deer, the hammerhead bat, and some others. I'm just curious: as you think of not just the humans on the planet, what comes to mind? What should we be studying? What should we be aware of? I know Star Trek IV, with the whales, played a big role.
Thank you for that. And thanks, Tom, earlier, for mentioning Professor Rupal Patel; I'm a huge fan of her work, and she was actually on my thesis committee. You know, it's a small world. And yeah, animal voices. You know, I feel like for the past decade or couple of decades, people talked a lot about language, and the concept of language, I believe, was often used in terms of who's in the club and who is not. And even before that, it was used to define who is human and who is not: you know, click languages were at first not considered to be languages, and that was an efficient way for people to define who is human and who's not. And I feel like there's still a lot of that in terms of the animal world. What really guides my research practice is to frame the voice, including the human voice, as a modern manifestation of social grooming. Social grooming really is the action of, like, cleaning each other. The idea is that how we use the voice, the timing, the choice of whom we talk to, all the subtleties, and, Tom, as you said, all those intangible elements, like whether you're smiling or not: one of the reasons those are present in the voice is that they're there to be markers of social dynamics. So I'm really looking at vocal interactions as the new social grooming, because primates and a lot of other species use grooming to reinforce social structures, family links, maternal behavior, to build relationships, resolve conflicts, etc. And I think those are both the direct origin of all human vocal messages, and also still one of the most important parts of them. And I'm doing a lot of research right now on all the connections between the voice and the hands. It's kind of incredible: in the brain, in our interactions, in how our body works, there are a lot of very direct connections between hands and voice.
And there's really this idea that, functionally speaking, we could argue that virtually none of the conversations taking place during a regular day contain any information that's really important for immediate survival, right? So what really matters in those interactions might have to be found somewhere else. And I like to think that the content of most of our discussions is really just an excuse for vocal interaction, to touch each other through the voice.
Great. All right, thank you. Next up, we're going to interview Alex, but JFLO, can you do a musical interlude? Ladies and gentlemen, the international award-winning beatboxer. He's extraordinarily talented. Take it away.
Thank you, John. So yes, I'm actually a beatboxer. If you guys don't know, that's just making music with the mouth. So everything you're about to hear is coming from my mouth. I hope you guys enjoy this.
All right, thank you. Alex Wissner-Gross is a fellow at Harvard's Institute for Applied Computational Science. He's also the last person to triple-major at MIT; he was a triple major in physics, electrical engineering, and math. I've heard him talk about the power of algorithms and datasets and their role in breakthrough AI. Alex, you're an award-winning computer scientist, entrepreneur, and investor; you've authored 23 publications and been granted more than 24 patents. Can you tell us a bit more about the future of conversational user interfaces, or CUIs, and where you see this going post-Siri?
Absolutely. And thanks for the wonderful intro, John. And Jonatan, I thought that was an absolutely incredible performance. So, going to the question: to understand the future of conversational user interfaces, it's useful to look at the recent past of AI revolutions in general. And my go-to rule of thumb, ever since I first started thinking about where these AI revolutions come from and where they're going, about a half decade ago, is that datasets tend to be the limiting factor, more than compute, and more than algorithms, which is completely counterintuitive. Algorithms are what get all of the attention whenever there's a major breakthrough in AI. You get tenure off of algorithms; you get awards off of algorithmic developments. But ultimately, after doing a survey five years ago of a number of key breakthroughs in AI over the past 30 years or so, it appears that datasets really are the limiting factor. So to understand the future of conversational user interfaces, like the one, Tom, that you famously developed, I think the future is probably going to be defined by the availability of datasets. One of the key datasets that I would argue played a foundational role in the development of the modern speech-based conversational user interfaces that many of us now enjoy was a dataset prepared by ARPA, the Advanced Research Projects Agency, called the Air Travel Information System dataset, or ATIS. That was really one of the key datasets, along with one or two others, including the Wall Street Journal dataset of spoken Wall Street Journal articles, for enabling benchmarks to be created that allowed folks in the AI community to benchmark the performance of all of their different models for speech recognition.
So we wouldn't have spontaneous speech recognition at the fidelity we have it today if we didn't have datasets that enabled us to objectively measure the performance of all of our models. So I think the future, going back to your question, John, of what the future of CUIs looks like, is going to be largely driven by the datasets that we have available. And I think one of the most intriguing datasets that we've been, as an AI community, getting a lot of mileage out of over the past few years has simply been scraping the entire web. There are many different public, open-source projects to scrape the web, to build a large natural language corpus based on the web, and then use all of that natural language, basically the collective knowledge and wisdom of humanity, to train very large machine learning models that are able to predict next words. I think Tom earlier used the term language model: a language model is a neural model, typically, or machine learning model, that, based on the past few words, or past many words, in written language, tries to predict what the next word is going to be. That very simple task of next-word prediction, I would argue, is going to be the key to the future of CUIs, because it has to understand human general knowledge in order to succeed.
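The next-word prediction task Alex describes can be illustrated with even the simplest kind of language model. A minimal bigram-count sketch over a toy corpus (the modern systems he is talking about use large neural networks over web-scale corpora, but the prediction task itself is the same):

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a web-scale scrape.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count bigrams: for each word, how often each next word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

A neural language model replaces the raw counts with a learned probability distribution over the whole vocabulary, conditioned on a much longer context, but it is answering the same question: given what came before, what word comes next?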
Alex, that may be the perfect place to ask you to tell us a little bit more about large language models like GPT-3, and what you think these large neural networks may make possible in the next decade. You told John and me the other day that you spend time many days conversing with this large neural model, GPT-3, and we'd love to hear a little bit more about some of the things you learn in your conversations.
Yeah. So I'm happy to promote this model, GPT-3, the generative pre-trained transformer, that was developed by a number of my friends, and large language models in general that have been trained off of the knowledge of humanity. To your question, I think one of the things that's been missing in the academic discourse on these large neural models has been really taking them seriously for challenging problems. There is a growing body of literature on GPT-3 and other large language models, using them for classification challenges, using them for a variety of prosaic tasks. What I haven't seen really thoroughly explored, and what's become almost a hobby or free-time passion of mine, to the extent that I have the time, has been really trying to treat the most advanced language models that we have almost as if they're people: to treat them in a collegial way and to engage them the way one would with a colleague, to pose challenging questions to them, not just menial labor, but actually see what they're capable of in terms of solving hard problems, grand challenge problems, the future of humanity, the future of economics, the future of various technological fields. One of the hobby projects I have is trying to systematically catalog all of the different answers that these large language models can produce when handed a challenging problem. And some of the answers really are remarkable. I think one of my favorite interactions, perhaps one of the spookiest, was a conversation, if you will, with GPT-3 on the future of capital and the future of economics. The model went on this long discourse. I was asking, you know, one of my other interests is post-scarcity and abundance.
And I posed a very, I think, simple question to the language model, which is: if humanity is moving towards an era of abundance, of post-scarcity, then is there really a purpose in accumulating capital now? Or should we just wait for this era of abundance to wash over us, and really not worry about capital accumulation in the short term? And it gave me this really thoughtful answer about how it is still important to accumulate capital, how capital will not simply be washed away if and when abundance comes, but rather will be converted into a new form of currency that will be managed through a descendant of, and of course this is Clubhouse, so I have to bring up the blockchain, through a descendant of the blockchain that will be incorporated into the minds of superhumans. And it went on and on, laying out this really detailed scenario for what the future of capitalism would look like in an era of abundance. And that's one of many similar interactions I've had with some of these large language models, really trying to treat them not just as pedestrian servants that carry out small, narrow tasks, but actually trying to treat them as proto-artificial general intelligences. And to the point that Rebecca was raising earlier about human and non-human animal interaction and non-human animal intelligence: I think from an ethical point of view, we have a moral obligation, even though arguably AI is not at the general human intelligence level yet, to start practicing, in certain respects and in certain directions, for the day, which may not be in the very distant future, when these AIs will be our intellectual superiors. And so we'd better create the world that we want to be living in at that time.
Alex, thank you. And that leads nicely to my next question: the Turing test. Where are we on the Turing test timeline? How do chatbots fit in? Does the Turing test provide a high enough bar? And if you were to create a new test, the Alex Wissner-Gross test, what would the milestones be?
Yeah, it's a great question. And I think, if we could go back in time and re-pose the Turing test, not as an imitation game but as a mathematical theory, one that can operate independently of human judges, the recent history of language models suggests there's actually a much more objective test that we could pose, and that is what one might call a perplexity test. Basically, how well can an agent, whether it's a human or a machine, predict masked elements of human knowledge? How well can it predict the next word in a novel? How well can it predict the missing piece of a picture? Self-supervised prediction in language modeling over general human knowledge, and how well that can be performed, has the beauty of being a much more objective challenge that's not subject to the vagaries of human judges. And so if the question is, based on recent trends in the field, when it seems likely that we'll have human- or superhuman-level reasoning, there are any number of benchmarks that we can look at. A number of friends at the Allen Institute maintain this absolutely wonderful leaderboard, which folks can find if they search for the AI2 leaderboard, of all sorts of natural language understanding and reasoning benchmarks. And I think history shows, and certainly my survey from five years ago that I mentioned earlier suggests, that typically within three to five years of a dataset, and a community, and perhaps an annual competition being built around the dataset, whatever problem, whatever grand challenge that dataset was oriented to solve, that problem gets solved. So for human-level reasoning about physics, human-level reasoning about social relationships, all of these benchmarks suggest that we may be within a year or so of human-level reasoning in those narrow fields.
And then, zooming out more broadly, if we consider perplexity an appropriate substitute for the Turing test, again, given this three-to-five-year rule of thumb, we could be maybe two to four years away, perhaps 2025 if not sooner, from human-level perplexity. And some in the field would probably argue that we're nearly there already. So probably pretty soon.
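Perplexity, the measure Alex proposes as an objective stand-in for the Turing test, is just the exponentiated average negative log-probability a model assigns to held-out text. A minimal sketch, assuming the model's per-token probabilities for the true next tokens are already given:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability that the
    model assigned to each actual next token in the held-out text."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that always puts probability 0.25 on the true next word is
# exactly as "perplexed" as a fair choice among 4 options.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # roughly 4.0

# Sharper predictions give lower perplexity; 1.0 is a perfect predictor.
print(perplexity([0.9, 0.8, 0.95]))
```

The appeal, as the transcript notes, is that this number can be computed mechanically on any text, with no human judge in the loop.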
Wow. A sort of newsflash on a Tuesday evening. Alex, given that AI is gaining pretty quickly, and as Tom started us in this direction, it would be great to hear a little bit more about your views on the various efforts that are out there to create what I believe is called AI value alignment. What do you see as the role of this field? And do you think in the coming decade we'll see more AI values integrated into some sort of formal regulation? And if so, what are examples of regulations you can imagine could pass in the next decade, round about the time that AI is passing this perplexity test?
It's another great question. So here's how I think about AI alignment regulation. I would say that, as a civilization, we have a fair amount of experience with regulating entities that are faster than we are (vehicles, for example), that are more energetic than we are (various sorts of explosives, for example), and also entities that are able to think more quickly than we can; an example of that would be quantitative trading firms. One might suggest that quantitative finance, and various other fields, perhaps quantitative advertising as well, represent blueprints for how, if we go down the road of regulation, which arguably we are already well on our way towards, we could use the regulation of existing entities, for example corporations, as one form of superintelligent organism, as blueprints for helping to regulate the friendliness or the alignment of human-level and superhuman artificial intelligence. So some of my favorite examples: if we pretend that the future AI will be corporation-like, let's think through some of our existing corporate regulations that might be applicable. Antitrust regulations become applicable, in order to try to avoid singletons, basically extreme consolidations of power, or monopolies. I think source code audits have some relevance, as well as model audits; these have been suggested by the CFTC for various trading algorithms. There are a variety of other measures that we see on Wall Street, like circuit breakers: if AIs start to make extreme moves, or take actions suggesting that either they're modeling the world incorrectly or they're doing something dangerous, the ability to have centralized cut-off switches for AIs is, I think, analogous to the circuit breakers that we see on Wall Street and in the financial world.
There are other, more sophisticated regulatory mechanisms. A short-term capital gains tax is something that has perennially been proposed and that might offer, in an appropriately generalized capacity, a way to tax or throttle AI bandwidth to the outside world. And of course, each of these individual regulatory strategies has its faults; none of them is a silver bullet. And anyone schooled in AI alignment as it pertains to reinforcement learning agents will be quick to object that a superior intelligence will rapidly find a way to overcome each of these measures, which is why I think regulatory frameworks in general will need to adopt a defense-in-depth strategy and not just lean on any single solution, which is, again, what the history of regulating superhuman organisms suggests is probably our best approach. That, and, this is my final point, making sure that humanity remains economically coupled to AIs. If ever we become decoupled, if ever the AIs are trading amongst themselves and not with humans, I think that's the really dire scenario that we want to avoid. We want to make sure that we're all providing value to each other, and in many cases probably merging in order to provide that value.
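The market-style circuit breaker Alex draws the analogy to can be sketched very simply: halt the agent when an action deviates too far from its recent baseline. A toy sketch, where the deviation measure, the thresholds, and the `CircuitBreaker` class itself are all illustrative assumptions, not any real regulatory mechanism:

```python
# Toy "circuit breaker" for an AI agent, by analogy to stock-market circuit
# breakers: if an action deviates too far from the recent baseline, trip
# the breaker and refuse all further actions until a human intervenes.

class CircuitBreaker:
    def __init__(self, max_deviation=3.0, window=20):
        self.max_deviation = max_deviation  # allowed deviation from baseline
        self.window = window                # how much history to keep
        self.history = []
        self.tripped = False

    def allow(self, action_magnitude):
        """Permit the action unless it is an extreme move vs. recent history."""
        if self.tripped:
            return False
        if len(self.history) >= 5:  # need some baseline before judging
            mean = sum(self.history) / len(self.history)
            if abs(action_magnitude - mean) > self.max_deviation:
                self.tripped = True  # centralized cut-off: halt everything
                return False
        self.history.append(action_magnitude)
        self.history = self.history[-self.window:]
        return True

breaker = CircuitBreaker()
for a in [1.0, 1.2, 0.9, 1.1, 1.0, 9.0, 1.0]:
    print(a, breaker.allow(a))  # the 9.0 outlier trips the breaker
```

As the transcript itself concedes, a genuinely superior intelligence could learn to stay just under any fixed threshold, which is why this would only ever be one layer in a defense-in-depth strategy.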
Great. Alex, we know a lot about maximizing for efficiency, FedEx being an example; we know a lot about maximizing for low cost, Walmart; we know a lot about maximizing for attention, pick any of the social media companies, like Facebook, that are doing that; and we know how to maximize for profits, much of the corporate world. But it seems that we know less about how to maximize for happiness, resilience, and joy. Can you imagine a time when our algorithms start optimizing for some of these more challenging objectives? Or are we just heading into a more and more efficient, low-cost world? And what are some of the more challenging problems and datasets that you have seen, something that AI can solve for?
Yeah, that's a really interesting question. It's one that I've studied tangentially in the context of trying to formalize definitions of intelligence in terms of information theory. And I think the closest answer I have is to recognize that all of the objectives you named, cost or profits or human attention, are narrow objectives that arguably aren't instrumental. There's a popular concept in the AI alignment literature, well, there are a few concepts, of instrumental goals and instrumental convergence, and this idea that certain goals are requirements in order to accomplish essentially any other goal, and some are not, is, I think, going to be core to answering that question. So an AI that tries to maximize human attention is perhaps going to achieve that goal on some timescale. But unless maximizing or capturing human attention is a milestone on the way to many other longer-term goals, it's not a terribly useful objective to maximize for. And as a result, that AI, to the extent that it lives in an uncertain environment and has to maximize its optionality or freedom of action and cover lots of potential contingencies, will ultimately perhaps be out-competed by AIs that are instead optimizing for more generic, more universal, more instrumental goals. And so I would argue that as long as we have a competitive economy of lots of AIs that are competing, and not just a singleton, a single AI that has the opportunity (the term would be to "wirehead" humanity) to make us all happy, or make us all sad, or make us all pay attention, and to sort of rewire our own utility functions, then fundamentally, in a competitive economy, the AIs that will compete successfully are those that are maximizing their own future freedom of action.
And this goes back to my earlier point about maintaining economic coupling between humans and AIs, and ideally, in many cases, merging. An AI that is trying to maximize its own future freedom of action will necessarily also be trying to maximize humanity's future freedom of action. So, in summary, I think the right algorithmic objective to strive for is algorithms that try to maximize humanity's future freedom of action and not some narrower objective, because ultimately those are the objectives that will be milestones on the way to essentially any other objective.
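To make the "future freedom of action" idea concrete, here is a minimal sketch in Python. Every detail (the one-dimensional world, the three actions, the three-step horizon) is my own assumption for illustration, not anything from the talk: the agent scores each candidate move by how many distinct states would remain reachable afterward, and picks the move that keeps the most options open.

```python
# Toy sketch of an optionality-maximizing agent (illustrative only).
# Instead of chasing a narrow objective, the agent picks the move that
# leaves the largest number of distinct future states reachable.

TRACK_LEN = 11          # positions 0..10 on a 1-D track with walls at both ends
ACTIONS = (-1, 0, +1)   # step left, stay, step right

def step(pos, action):
    """Apply an action, clamping at the walls."""
    return min(TRACK_LEN - 1, max(0, pos + action))

def reachable_states(pos, horizon):
    """All distinct positions reachable within `horizon` steps."""
    frontier = {pos}
    for _ in range(horizon):
        frontier |= {step(p, a) for p in frontier for a in ACTIONS}
    return frontier

def best_action(pos, horizon=3):
    """Choose the action whose successor state keeps the most options open."""
    return max(ACTIONS, key=lambda a: len(reachable_states(step(pos, a), horizon)))

# Pressed against the left wall, the option count is smaller on the wall
# side, so the agent drifts back toward open space.
print(best_action(0))   # -> 1 (step right, away from the wall)
```

A narrow objective such as "stay put" would never exhibit that drift toward open space; here the behavior falls out of the option-counting score alone.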
Great. Thank you, Alex. For those in the room, this is Imagination in Action. We're recording this, and it will be a podcast on our website, imagination in action club. The last question I asked had to do with happiness. Next week we have Robert Waldinger, who gave what I think is a top-10 most-watched TED talk on happiness, about a longitudinal study on happiness, and we have two other guests next week with him. It'll be fascinating to continue on this topic. I'm so excited. We're going to have Haley play a musical interlude. She was Bostonian of the Year a few years back; as a teen she traveled around the world. She's an amazing folk singer. And then I get to introduce Esther Wojcicki, who is an international figure in education. She's done so much for education, she's such a leader, I can't wait for people to meet her. When you think of technology, understanding education, and how we prepare the next generation for the future of work and the future of being good citizens, Esther is propelling a lot of this. Haley, bring it.
This is a little snippet of an original song called "Would You Wait."
I decided I don't need to be different, I just need to be okay. I can tell you everywhere I'm running as I sit here in my place. I could come along fine given that you're lenient when I change my mind. Oh, and it's the time, time only the time that I'm not counting that counts as mine. Would you wait. Would you wait, wait, wait for me. Would you wait 'Cause I'm gonna be everything I never seem to find the time to be. If you wait for me.
Great. Thank you, Haley. Our next speaker is Esther Wojcicki, known as "Woj," an international figure in education, a leader in blended learning and the integration of technology into the classroom. She built the journalism program at Palo Alto High School from 20 students back in '84 to close to 1,000 now, with nine award-winning publications coming out of that group. It's the gold standard for high school journalism. She's a practitioner who has gone around and inspired legions of people to imitate her and follow her lead. She was California Teacher of the Year, she was chair of Creative Commons, and she's the author of a few books: Moonshots in Education and another book, How to Raise Successful People. If you don't know, two of her daughters are prominent CEOs and another is a doctor: Susan runs YouTube and Anne runs 23andMe. She's an inspiration. She says her secret has to do with TRICK, and she can tell you what that's all about. Google was founded in her garage; there's an interesting story behind that. And she's been talking about 20 percent time, and I think she's onto something. Esther is the Pied Piper for education. And as we think about AI, you know, when you can type a sentence and suddenly a paragraph can be written, what's the role of the teacher? I think there's going to be a lot of insecurity as AI comes on the scene, and I'm very curious to hear what Esther has to say. Ladies and gentlemen, Esther Wojcicki. Esther, you've had a front-row seat on the birth of the internet revolution, living in Silicon Valley, with Google being founded in your garage, and I know you were one of the first with Google education. What has surprised you about the impact technology has had? What has disappointed you? And how much promise do you think technology has that we haven't yet realized? Did you hear that question, Esther?
You know what, I had forgotten to turn on my microphone, but I just did. I just want to thank you so much for that great introduction, and also thank everyone who's here today. I like the description of the Pied Piper in education, because I'm hoping lots of people are going to follow this, and I've been working on it actively by giving talks everywhere, literally all over the world. And I'm, in some ways, happy that I've had to stay home for a year. Okay, maybe I'm not so happy that I can't go out for dinner, that I can't go to the movies, and that I can't see my friends, but I don't have to travel the way I was traveling before, when I was literally in another country every week, if not two countries a week. My body is saying to me, thank you very much, we really appreciate you not taking all those trips anymore. So that's what's been going on. In terms of education, and what has surprised me or not surprised me: I have been there experiencing education and the technology revolution from the very beginning. As you say, I was there at the beginning, when Google was born, and I watched what was going on in the schools. I first started teaching in 1984 and started to use technology in 1987 in my classroom, so I was probably the first teacher in California to do it, and maybe the first teacher anywhere, I don't know. What I saw was technology as a support for me, the teacher. I saw that I could do a lot more, that my students were much more empowered, that they loved to come to class. It was really interesting. And I can tell you one thing that's important: I remember when Google was first born, around 1999 or 2000, and at that time my program had grown to, I think, about 100 students. And that was with me, just one teacher.
And it was at that point I said, hey, it's time to start another publication, and maybe we should start a magazine. I remember the administration was, let's put it this way, less than enthusiastic about my idea. They said, high schools don't have magazines, they've got newspapers; we already have a newspaper, we don't really need a magazine. Anyway, to make a very long story short, I decided to go ahead with it anyway, and we published this first magazine, which was called Verde. We did four issues the first year it was published, sort of in the back of my class, with kids working on it independently. And thank you to the Columbia Scholastic Press Association, because that first year, that magazine won a Gold Crown, first place. After that, the administration was thrilled with the public recognition, so of course they agreed to hire a teacher, who took over the Verde program. And over the next, I'd say, 14 years, I started a publication every two years, because the program just kept growing and growing, and we had to have more publications to accommodate all the students who were taking the class. What I realized early on is the power of giving students control of their learning. And the only way that I could do that was with technology, because before that, they had to listen to me lecture all the time, or tell them where to go, or go to the library, or whatever. But now, with technology, they were able to find the information themselves, go online and do it. And then I could handle these huge classes, because I trusted them, and they rose to the expectations. They were able to publish in a way that was never seen before. So I would say, for me, that was probably the most exciting thing, and what I thought was the best aspect of technology: being able to empower the learner.
So what surprised me, and perhaps disappointed me, was how long it took for other teachers to adopt this in the classroom. In 2005, I started working at Google, and I started the Google Teacher Academy together with Cristin Frodella, who still works at Google. We set up the Google Teacher Academy and invited the first 50 people to come. It was an experiment: is this going to work? Well, let me tell you, it was a two-day event, and we were all really excited about it. The teachers were thrilled. The main thing that they got was a T-shirt and a lot of other swag, and they thought it was the greatest. What we did in that academy was teach them how to use the Google tools: how to use Docs, how to use Sheets, how to use presentations, well, everything; back then the tools weren't called that, and they weren't the same tools, but they were similar. To make another long story short, in time we stopped running the Google Teacher Academy ourselves and hired a group to run the Google Teacher Academies, and that grew and grew and kept growing. Then there were outside groups teaching teachers how to use the Google tools. And so today, it's the largest teacher professional development program out there, obviously, in the world; there are probably 70 million teachers using the Google teacher tools, which, of course, makes me very happy. And while they're using the tools, they're learning how to collaborate, because all the tools are collaborative: Docs is collaborative, Sheets is collaborative, presentations, everything's collaborative. But still, it's very interesting now, with this interruption in the education space; the pandemic is a giant interruption. And one of my philosophies has always been, let's look for the silver lining.
So the silver lining here has been that the education space has been completely disrupted, and so schools have to start using technology. While they may have wanted to, or thought about it before, they had no choice now. The first thing that surprised me in this last year was how many teachers still needed more training with all the technology tools. I guess they were sort of doing it a little bit, but didn't really know how to do it completely, and now they had to really do it. So they needed more training, and they all rose to the occasion. So the benefit of the pandemic: more technical education for teachers, and perhaps now there will be more trust and respect for students who are educating themselves online. And now we're talking about AI, which I've just heard everybody talking about, which is great. So the question would also be: are there ways to use AI to teach students the skills that they do need to memorize? Because, quite frankly, you don't need to memorize as much as you did prior to the tech revolution, since you can always look it up on your phone. Can we use this interruption to make teachers more attracted to, and students more capable of, using AI to educate? My answer is: let's see. I would say yes, with a little bit of hesitation, because I think one of the problems in the pandemic is that they've tried to recreate the classroom with Zoom calls; they mute all the kids and keep them quiet for an hour, and then that teacher's done, and they do the same thing the next hour. So there are still some cultural issues that need to change. But I'm hopeful. I'm hopeful that people will see that there are these skills, a lot of this memorization, skills that people really do have to know.
Like, I think, the multiplication tables; there are certain basic things in math, and a lot of chemistry, things that need to be memorized in spite of the fact that you can look them up. Maybe AI can help with that. So that's a long answer to your question, John.
Great. Well, let me ask one more. And then I just want to assure our amazing audience, which has so patiently waited with us for about 90 minutes, that if you haven't been with John for other Imagination in Action sessions, they tend to run past 8 p.m., and we take as many questions from the audience as the speakers are willing to answer. So don't despair if we haven't gotten to your questions yet. I'm going to ask one more question of Esther, and then we'd love to open it up to your questions. So, Esther, I think you started to answer this, but I'd love to understand a little more clearly the link between your deep teaching experience and the role of AI. Where do you think AI is being used well in teaching? And when you meet technologists and AI wizards like Tom and Alex and Rebecca, what do you wish they could most develop to reform education?
Well, I can tell you that most of the schools that I deal with and have interacted with, and this is worldwide, do not use AI to the degree that you perhaps wish they would. They just don't. There's another company that I've worked with called Area9; they're experts in AI, and a lot of the AI that they're using goes into the corporate world, into the Army and the Navy and the Marines. But here, and this is Palo Alto High School, the Palo Alto School District, also the Los Altos School District, this is Northern California, I don't see a lot of AI being used in the classroom. So that's the short answer right now. I think there needs to be more teacher training with AI that's easy to use, and I also think it needs to be made more readily available. Maybe the programs are too expensive; maybe the technology is threatening to teachers; I just don't know. But I think there needs to be more PR for AI and how it can help teachers, and that's something we can work on.
I think you're onto something. Let's open this up to the audience and get a few questions. Esther Dyson, do you have a question?
Yeah. I loved Esther Wojcicki's comments, but I actually had a question for Rebecca. And it's maybe not a question but an observation that I'd love you to expand on. I have this kind of normal American guy friend who runs an organization, and he also speaks perfect German. And his character changes dramatically: he stands taller, he's much more officious when he speaks German. It's just a striking change. I wonder how common that is? What can you say about it?
Sure. Thank you, Esther, for the question. It's very interesting. There's still so much we don't know about this. For example, I've been working a lot with people who stutter for the past few years, creating a whole model to understand stuttering as a discrepancy between the inner voice and the outer voice. And I've met people who stutter who are bilingual and stutter in one language and don't stutter in another. So that's kind of interesting. But absolutely, even the muscles we use more in some languages than in others are one example of how the voice, and how we use it, really shapes the body, and the body shapes the mind. I have anecdotal knowledge of that too, and there's some good research on bilingual individuals and what happens in their brains, but it goes really deep. And thank you, because the question of language is one nice paradigm to look at this through. Absolutely.
What do you see?
So we're going to take questions two by two, and we have quite a few questioners already up here. For those of you who are new to Clubhouse, and I notice quite a few of you have those wonderful party trumpets for your first week, we're glad to take all comers. For those of you who are really new, you just raise your hand at the bottom and we'll call you up. We'll take questions two by two; if you don't mind, keep the questions a little short, because we have quite a set of people who want to ask them. And if you want to direct your question to all the speakers, let us know, but if you have a specific speaker in mind, please tell us. So, Omid, I think you have the first question; you need to unmute yourself.
All right, thank you. Yes, I have a question for Esther, or whoever wants to answer it. As a person who has studied in other countries and also in the US, I see a huge gap between education in high schools and universities in the US. So my question is, what do you think would be the role of AI in filling this gap? For example, one thing that comes to my mind is that, to this day, many students still don't know their interests or their future career, and AI could help make that path easier for people who are still confused and don't know what their interests are or what career would be the best fit for them. Thank you.
Super, Omid. So that's a question for you, Esther, and we're going to take one more question. Joe, do you have a question for all the speakers or one of the speakers?
Oh, yeah, mine is actually for Tom. I'm curious what he, looking back, thinks about Siri and Steve Jobs's perception of it, and how it could have developed had Steve Jobs lived, or if, say, Tom were somehow in charge of Apple and Siri's development, how he might see it evolve today.
Well, I love that question. All of us friends of Tom would love to see what Apple would look like under his leadership. But let's start with Esther. Do you want to answer Omid's question? I think you have to unmute yourself.
Using AI to help you find your dream job is something that I think is a realistic expectation, and something a lot of people are doing already. There are multiple companies offering students, teachers, and counselors opportunities for kids to use AI to try to figure out what they like or not. And I support that; I think that's great, and I think it will grow, because there's a big need for it. Kids graduate from high school, and the only thing they knew they wanted to do in high school was get out of high school and go to college. That's their goal. And once they're there, they actually need something like an AI program to help them figure out: what do you want to take? What are your career goals? In college, I think it would be a really great tool for all colleges to offer students. So yes, I think that's a great use of AI. Thanks for asking that question.
And, Tom, I think it's over to you on what Siri would look like if Steve Jobs were still alive, and what Apple would be like under your helmsmanship as CEO.
Well, I can't speak for Steve; everyone knows what he did in building a great company. He bought Siri because he wanted to make his products great, and that's all I can say about his intent. What I wanted to do, and what I think we still want to do, is finish the agenda that was already clearly articulated, which is: an assistant is supposed to understand what you want and get it done for you, to do it right. And my particular pet peeve is this: it doesn't matter whether it's Apple or Google or anybody, Alexa, all of these have the potential to take the friction out of the interface between humans and machines. I mean, how many times do we struggle every day with web interfaces and just silly things, every day, like getting a vaccine? These things are completely unnecessary in the modern day, and yet we haven't optimized for that. So if I had the reins of power in the big tech companies, I would say: optimize for a human's ability to get what they want. You don't have to invent new features; there's low-hanging fruit sitting right there. And the way I would do it, and forgive me, I'm not a CEO, but it doesn't matter, if I had control or could influence the right people, I would say do it with universal design. That is, design something that everybody can use, including people whose hands don't work so well, whose mouths don't work so well, whose eyes don't work so well; design for everybody, including people whose brains don't work so well, who are older or younger. If you do it that way, you'll get a better product.
Great. Thank you, Tom. So my question to all of you, whoever wants to answer: have you used voice personal assistants, and which applications do you use? A friend of mine was at Pixar for years; he was in charge of the graphics, and he had his kids going to, I think, Montessori or Rudolf Steiner schools, where they didn't use technology. And Alex, given that you're at the forefront of AI, I'm also curious to have you pose a question to this distinguished group of speakers in a bit. So who wants to take the first one: how do you use voice assistants? Which apps are you using?
Maybe we can start with Esther, since you're not muted. Do you have a Google Home at home? Or an Echo, or Cortana, or Siri? What do you find you use at home?
So I have Siri, and I have an Echo. I would say that the number one use of Siri was with one of my grandchildren, who thought that Siri was a person and loved talking to Siri all the time. And Siri was very responsive, so this was a way to keep her totally occupied, because she carried Siri around. That was probably the number one thing. In terms of Google Home, I kind of like to turn my own lights on and off, and so that has been somewhat of a conflict with my daughters, who have the whole Google Home thing, and Google Home is, you know, controlled by your voice. So that's my personal interaction. I like using the Google Assistant on my phone while I'm driving. That's basically my use of AI in my life. And of course, I like the music, the music and the news.
Well, so that's pretty prolific. Alex, maybe without naming any specific products, since you're able to hang out with GPT-3: do you actually have any kind of personal assistant at home? Do you actually use something like this, or do you build your own systems to turn off the lights?
Yeah, no, I think that the personal assistant that I want doesn't exist yet. So I'm stuck interacting with language models in text mode. I've interacted with all of the speech-based assistants, but I think we're a few years away from the assistant that I really want, which is one that is able to help me solve hard problems and not just turn lights on and off or start a clock.
Yeah, you know, Alex, let me put the ball in your hands. You're somebody who thinks about AI very seriously, and you've contributed a lot to the dialogue. Why don't you pose a question, or say something that you want a reaction to from the other three speakers?
Well, okay. So I see a through line to a number of the points that have been raised, and I think the through line is this: we think about language as fundamentally the medium by which intelligence can arise, whether that's human intelligence, machine intelligence, or non-human animal intelligence. The question I pose is, I think, pretty topical. If we want to construct an egalitarian society where multiple forms of intelligence can flourish, machine, human, and non-human animal intelligence, how should we go about constructing language and other social mechanisms to encourage the flourishing of multiple forms of intelligence, not just human, and not just superhuman AI, but including non-human animal intelligence? What would said future look like? What would educational mechanisms look like? What does constructing a society as heterogeneous as that look like, in your view?
So Tom, or Rebecca or Esther, do you want to comment on that?
I'm happy to start. Of course, I don't have an answer for that; I can only ask more questions around this question. I would say one important point for me is time, because I feel like a lot of our technologies are there to allow us to do things, but very often just to save time. And of course time is limited, and life is limited. But I believe that there are people who have spent the time to learn other intelligences, to learn to communicate with animals. And for me, there's a mix between accepting that some things that are fundamentally important to shape who we are take time, and building better understanding between those different forms of intelligence. Trying to use technology to save time in reaching them is a little bit of a conundrum, because time sometimes is what brings you to those different intelligences. We've seen that a lot in science, for example the experiments with Koko, where we tried to teach other animals our language, but not enough scientists have looked at how we can learn animal languages, even without technology, just by taking the time to do it. So sometimes, you know, I like to go low-tech on some of those elements. It might not really answer the question, but I do feel like some low-tech approaches can teach us a lot about how to design new technologies around different paradigms. That is, don't use technology to save time: I will take the human time, and I will dedicate part of my life to answering such a question, and think about technology going alongside that.
So I would like to comment on AI in a few areas. First of all, AI in business: I actually cannot stand it. There's nothing like calling Comcast and getting all these automatic bots talking to you. Really, it drives me crazy. I'll tell you another area: have you ever checked into a hotel via AI? I mean, I did that, and actually it's more popular now, because then they don't have to worry about any COVID. Okay, it works, but it's certainly not very welcoming. And then retail: I can check out almost everywhere by myself, even at Costco, and that actually works pretty well for me, for the most part, though there are some stores where they still don't have it together, and so you're in a line and somebody doesn't know how to use it, and it takes even longer. So those are some of the uses of AI that I could probably do without. Especially, I don't know, have you ever tried to contact Comcast? Do they have a single person there who's a human being? I don't think they do. That's my answer on a few areas. Oh, and what about healthcare? If you call somebody up and you get AI in healthcare, you want to talk to the doctor; you don't want to talk to a robot. But maybe I'm just being too picky.
Well, so to pull on that thread a bit: Esther and Rebecca, it almost sounds like, Rebecca, in the case of what you describe, AI could serve as a valuable intermediary between, for example, humans and non-human animals. Whereas, Esther, if I understand your comment, you're arguing that AI is precisely the intermediary we do not want for human-to-human interaction. And then for Tom, with your work on Siri: conversational user interfaces that are speech-based have perhaps found their killer app largely, I think, for humans in isolation; there's almost, in many cases, a sort of stigma against using speech-based agents for certain types of work interactions. They may be especially effective for humans in isolation, but perhaps not for human-to-human interaction. So I wonder, collectively, if the through line here among the three of you is: when is AI an appropriate intermediary for social interactions, and when is it not?
Yeah, Alex, that's where I was going to go, actually. I mean, to imagine a multi-species egalitarian society, where some of the species are artificial, is really lovely and elegant, and I love science fiction writers who play with that model. It's a real thing that's going to happen. I'll call out a project called the Earth Species Project, which is using techniques of modern AI to build a language bridge to other animals, so we can talk to them, whales and so on. Go check it out. But that's just the first step: you throw the rope over the canyon, then you build a bridge, and then you build the freeways and everything. In terms of the vision of an egalitarian society, here's what I think would have to happen. Remember, you mentioned the alignment problem. The value alignment problem is that if you just let the AIs run amok, they'll optimize for whatever is put into their little heads, what we'd call the objective function. And of course, if the objective function is nothing but attention optimization, you get, you know, addiction and all that stuff. So instead, what if you optimized for collective intelligence and collective welfare? If you could program that into the objective function of all the AIs, then you would also have the AIs optimizing to help humans participate in the collective intelligence and welfare, and all of a sudden we'd have value alignment at the collective, or societal, level. And I think that's the kind of world I want to help build. Because if you think about most of the problems we're facing, where we just tear our hair out, going, "Oh my God, can you believe what 2021 is like?", a lot of those you can think of as collective intelligence problems, failures of collective intelligence.
And AI actually did a little bit of damage to collective intelligence, among other things. But if you imagine that you have people like Alex, who's studying the value alignment problem, then if you can program it in, and I think you can, you can program in human welfare and collective intelligence goals, but only if you take on the problem of understanding what human welfare is. In other words, you can't bury your head in the sand and say it's too hard, that we don't know what it is, that AIs can't understand what's good for people, that we can't do it. I'd say that's not true. You can; it's just hard. And if we take it on that way, I think we're going to get the kind of outcomes we all want.
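As a hypothetical illustration of the point about objective functions (the catalog, the numbers, and the welfare weighting below are all invented for the sketch, not anything from the talk), the same greedy recommender behaves very differently depending only on the objective it is handed:

```python
# Toy sketch: swapping the objective function changes what an otherwise
# identical recommender promotes. All items and scores are made up.

# Each candidate item: minutes of attention captured, and effect on user welfare.
CATALOG = {
    "outrage_clip":  {"attention": 9.0, "welfare": -2.0},
    "cat_video":     {"attention": 5.0, "welfare":  0.5},
    "how_to_course": {"attention": 3.0, "welfare":  3.0},
}

def attention_objective(item):
    """Score by raw attention captured, nothing else."""
    return item["attention"]

def welfare_objective(item, welfare_weight=2.0):
    """Attention still counts (the service must be used at all),
    but welfare is weighted into the score."""
    return item["attention"] + welfare_weight * item["welfare"]

def recommend(objective):
    """Greedily pick the highest-scoring item under the given objective."""
    return max(CATALOG, key=lambda name: objective(CATALOG[name]))

print(recommend(attention_objective))  # -> outrage_clip
print(recommend(welfare_objective))    # -> how_to_course
```

Swapping the objective is the entire change; nothing about the recommender itself had to be rewritten, which is why what gets "put into their little heads" matters so much.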
I love that. And speaking of optimization problems, we have 22 people on the stage with questions for four speakers; I'm sure there's a perfect AI algorithm for how to do this justice. But John and I pioneered something last week that we think works well; we'll have to see if it works with this group. What we're going to do is take the questions in groups of five, which requires those of you asking questions to really shorten them; it's not quite the time to share all your thoughts. When you ask a question, name who it's for. Then, speakers, you can group the questions if you get several, and we'll circulate to have you answer them. We'll do several rounds of that, I think at least four. So by my count, the next speaker in our row is Kartik. Do you have a question? You're a newcomer to Clubhouse; we'd love to have a question if you have one.
Yeah, thank you, Alison. My question is for Tom. Tom, thank you, and thank you to all the speakers; it's fascinating. I was very taken with what you said about graceful degradation, and this ties back also to what Esther was mentioning about the contexts in which she would prefer not to interact with AI right now. One of the problems we are seeing, as we try to apply AI techniques to more complicated business-oriented or enterprise-oriented domains, is that the whole world, it seems, has been trained to expect that they can speak in full natural language to any AI assistant, thanks to the success of Siri and Cortana and Google Home and Alexa and so on. Tom, do you have any thoughts on what it might take to adapt the concept of graceful degradation, and some of the magic and misdirection that you were talking about earlier, to more complicated business and enterprise domains? Thank you so much.
Cool. And Tom, we're going to take four more questions and then come back to you, so hold on to that one. Jean-Pierre, do you have a question? You'll have to unmute yourself.
Hi, thank you for having me. Yes, I was fascinated by everything Rebecca said, but I was really interested in this idea of vocal skins as a way to address biases people might have in, say, an interview process or something of that nature. Obviously vocal skins, or voice modulation and manipulation, are not something new. I'm in the music business, right? So you go into the studio, you're not a great singer, Auto-Tune, you know, you come out sounding like Whitney Houston. But I'm interested in this idea of proactively accepting its use to alleviate certain biases. How can we try to normalize that? What can we do to implement something like that? I'm done speaking.
Cool, that's so exciting. So Rebecca, that question was for you. Evan, you're our third questioner. Who would you like to address your question to?
To Dr. Alex, please, for $100. No, just kidding. I'd like to understand your thoughts, Doctor, on the AI arms race analogy. Increasingly it's described as a race between the US and China for AI supremacy, and even more specifically there's talk of integrating AI into autonomous weapon systems; we've seen early demonstrations of that in Russia and here in the US, in aircraft and other systems. What are your thoughts on this AI arms race, and how could we stem the worst-case scenario here?
Perfect. So Alex, we'll get to you for that question. And we'll take two more. Do you have a question?
Yes, thank you for having me here. It's a real pleasure to be here with all of you. I'm calling from Italy, and it's a great pleasure to be here. My question is about legal systems. Legal systems are really strongly related to language. So how, from your point of view, can AI help professionals work better at the judicial level, and how can we work with AI, knowing that legal systems are so tied to language?
Who would you like to answer that?
Well, it's for all the panelists, actually, if that's possible; maybe they can all relate to that question. Thank you.
Okay, we'll see who signs up for that. And Johan, you're the last of the first five. Do you have a question?
So my question was regarding what will be the future of the transformer modules behind GPT-3. Will it be self-supervised learning or something else? Thank you.
Great. So maybe, Alex, a couple of these questions went to you, so why don't we start with you, and then we'll do the circuit of the speakers. Please feel free to address whichever questions you'd like.
Sure thing. So the first question for me, I think, was about arms races; the second was the future of transformers. For the arms race question: there has been a historic narrative, going back about five years, that in great power competition the winner would be determined by availability of datasets. That would certainly be in line with the thesis I articulated earlier, that datasets ultimately drive AI revolutions. However, with the largely transformer-architecture-led revolution in unsupervised and semi-supervised learning over the past three or so years, that whole paradigm has been turned on its head. The datasets now being used are publicly available: scrapings of web text, video, audio, and images. There's no shortage of data at this point. That post-scarcity of training datasets for bleeding-edge models has pushed the scarcity, I think increasingly, away from datasets that might benefit from nation-state-level resources to accumulate, over to algorithms and compute. With algorithms, new approaches come out every couple of years, with transformers now taking a pretty broad lead over convolutional networks, bubbling up from the entire international community and not being an obvious differentiator. That leaves compute, the third pillar of AI success after datasets and algorithms, as the point of competition in great power competition.
So to the extent that there will be an arms race, if we can rule out datasets as the terrain for competition, and we can rule out algorithms, given that algorithms fundamentally seem to want to be international in nature, that just leaves compute. I think it's an interesting intellectual exercise to speculate whether we'll see nation states start to compete on training compute and inference compute for ML. Thus far, we really haven't seen it, but these are early days, and that might yet happen. For the second question, about the future of transformers: going back to what I was saying a moment ago, transformers are simply the latest in a sequence of architectures. Before transformers, you could argue that bleeding-edge applications for vision and a number of other domains looked a little more like ResNets; before ResNets, ConvNets; and before ConvNets, sort of pre-modern ML-type architectures. It seems likely that transformers will be obsoleted in the next year or two by whatever the next architecture of the year is that learns from everything that's good about transformers. For those who are interested in the math of why machine learning works, I would definitely suggest you go back and read "Attention Is All You Need," the seminal paper behind the transformer revolution. I think it's reasonable to expect, based on history, that we'll see another great architecture arise in the next two years. And then, two or three years from now, we'll all be looking back and saying: why didn't we think of that? Why were we so obsessed with transformers?
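For readers who want a peek at the math behind "Attention Is All You Need" without opening the paper, here is a toy sketch of its core operation, scaled dot-product attention, in plain NumPy. The shapes and random inputs are invented for illustration; a real transformer adds learned projections, multiple heads, masking, and much more.

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key positions
    return weights @ V                               # weighted average of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attended vector per query position
```

Each output row is a convex combination of the value rows, which is why the mechanism scales so naturally with data: everything is differentiable matrix arithmetic.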
Super. So Tom, do you want to answer Kartik's question and a couple of the others? Then we'll get to Rebecca and Esther.
Sure. Yeah, thank you. First of all, I would second Alex's point that we're watching unbelievable acceleration in AI progress right now. I used to be a bit of a curmudgeon in the space, and I still believe we're underserving huge parts of the AI agenda, like understanding. Well, anyway. What's happening in front of us with these datasets, as he's saying, is that we can now basically imitate human language behavior very well. So what can you do with that? Let me take that as a starter for the answer to Kartik's question. Kartik asked: what do we do with this failure of expectation, like the experience with the robots at Comcast and so on? Those are really bad ELIZAs that have some AI stuck into them; they're extremely bad user experiences. I wish Comcast weren't a monopoly; then they would have normal pressures to stop doing that. It's basically a very poor use of AI, and a use of very badly implemented AI. Right now, the language and mimicry ability is so good that you could make a really fluent and pleasurable interaction via speech or typed language happen at these customer-interface levels and other places. The thing to change is that we haven't really nailed the dialogue dimension. The thing about language that makes it better than a touch interface is that the interface can ask questions back, and it can say what it doesn't know. When the interface pretends to be smart and isn't, it's a loser; it's bad. But imagine an interface that goes: you know, I don't really know much about what you're asking. I know what word sequences make sense there, but I'm only buzzword-compliant with that concept. Can you help me out? And then the user can say some more things.
If we actually build AIs that do dialogue like that, and we know how to, by the way (there's a literature on it, it just takes a bunch of work), then we'll all be able to teach our not-so-bright current generation of AIs a little bit about human dialogue, and we'll be able to have a to-and-fro. Where I would take this going forward: if you draw a line from those points and keep going into the future, what you end up with is AIs that learn from individuals at scale. That is, John Werner talks to his AI, and it's really stupid today, but over time it gets better at understanding how to talk with John, what John is interested in, when John doesn't know something, what John knows, and so on. And it knows how to be transparent about what it knows with John, so John can teach it. I think that's how we're going to get the revolution in the conversational interface.
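The dialogue pattern Tom describes, an interface that admits what it doesn't know and asks back instead of pretending, can be sketched in a few lines. Everything here is an invented illustration of the idea: the toy intents, the keyword-overlap scorer, and the 0.5 confidence threshold are assumptions, not how Siri or any shipping assistant actually works.

```python
# Sketch: a confidence-gated dialogue turn. Below the threshold, the
# assistant degrades gracefully by asking a clarifying question.

def score_intents(utterance, intents):
    """Toy scorer: fraction of an intent's keywords present in the utterance."""
    words = set(utterance.lower().split())
    return {intent: len(words & keywords) / len(keywords)
            for intent, keywords in intents.items()}

def respond(utterance, intents, threshold=0.5):
    scores = score_intents(utterance, intents)
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return f"Handling '{best}' for you."
    # Graceful degradation: admit uncertainty and hand control back to the user.
    return (f"I'm only partly sure you mean '{best}'. "
            "Can you tell me more about what you're after?")

intents = {
    "play_music": {"play", "music", "song"},
    "set_alarm": {"set", "alarm", "wake"},
}
print(respond("play that song again", intents))  # confident, so it acts
print(respond("do the thing", intents))          # unsure, so it asks back
```

The point of the design is the second branch: an interface that exposes its own uncertainty gives the user something to teach.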
Sorry, guys. Hi. I have a request. Please join shinai Jaffe.
Esther, did you have anything to add? And did you want to finish facilitating those last five questions?
Yeah, I think there were questions for both Rebecca and Esther. I don't know if you had thoughts you wanted to add, Esther. But Rebecca, it's hard to remember, but Jean-Pierre asked you about vocal skins.
I'm happy to try to answer that question quickly. Yeah, voice skins. It's an interesting phenomenon, and I'm interested to see where it goes. I'm working with a local company around Boston called Modulate, which works on creating voice skins for the gaming industry, and looking into how those voice skins can also influence other things. For example, what if, during a Zoom meeting, everyone has the same voice? Or what if, during the meeting, you switch your voice with the other person you're talking to? So far, a lot of those technologies, at least the ones working in near real time with only milliseconds of delay, really only target the timbre part of the voice. It's interesting to think about what that means, and it helps us at least dissociate the effects that come from timbre from other parts of the vocal posture. But again, I think it's important for those to come in tandem with more thinking about why those biases exist, what they mean, et cetera. I would actually recommend the pretty amazing work of Dr. Nina Sun Eidsheim at UCLA. She recently wrote a great book called The Race of Sound. It's specifically about the music domain, but also more generally about the voice: the very strange and complex connection between voice and race. When we have those biases, it's about understanding which acoustic elements in the voice trigger a reaction in people who feel the voice is not from their own community or group, and then asking where those acoustic elements come from on the speaker's side, understanding the whole phenomenology of what's going on there, asking more questions, and bringing more awareness to those phenomena. Great.
Esther, why don't you answer the last question, and then I'm going to ask Tom to pose a question to the speakers.
Well, can I just read the question so people know, or do you want to?
Sure, answer it however you want. If you want to read it, that's fine.
Oh, well, so the question is: education is so essential to what makes us human, to developing our potential and to learning to be a functioning democracy. What is my vision for the right combination of human and machine potential that we need to take our approach to the next level? What does the classroom feel like when this is done really well? So, you know, as I listened to the speakers and to all the possibilities for AI, the fact that you can modulate, you can change your voice, you can do a lot of different things with your voice: people read a lot of meaning into other people's voices, and an artificial voice somehow confuses those signals. You know, if a dog doesn't bark, you don't call it a dog, or you're like, oh my god, there's something the matter with that dog. And I see a lot of this artificial changing of people's voices. I'm just wondering, in a classroom setting, how would that help education? I'm not sure that it would. In a classroom, the number one thing in education is relationships: relationships peer to peer, the relationship between the teacher and the student, and the relationship between the student and the community. If these relationships are distorted, or if they're relationships with something that's not a human being, then the question is: does that help, or does that confuse the education? I would say it confuses it. So, as I said, I think the main advantage of using AI in the classroom would be to teach things people need to understand. Let's just talk about geography in America. Most Americans, including some of our presidents, could not figure out where one country or another was in the world.
I mean, we can talk specifically about Bush, but I'm sure Trump had the same problems, although perhaps his other problems were more overwhelming, so we didn't pay attention. But all Americans could benefit from studying more about geography and the world, and that would also help them understand more about climate. That's something that could easily be taught with AI. But I think whether it would really be helpful in the classroom would be subject-dependent.
Great, thank you, Esther. Hey, Tom, let me just ask you to frame a question for the speakers.
Sure, there is a common thread here that we could tie together, and that is: if we get a really good conversational interface, that is, nuanced speech with all the meaning dimensions it carries, plus fluent, articulate synthetic language generation, maybe even in the context of our literacy. If we had that, what does it mean for how we protect ourselves against fraud, universal deep-fake fraud, that would allow demagogues or whatever bad actors to essentially take over our systems of collective sensemaking? How do we stop that outcome?
I think that's a very important question to ask, before we continue.
I'm happy to take that one on. I would say, Tom, that every generative model is also intrinsically a classifier, so the same models that give us deep fakes can also take them away. Or maybe it's the other way around: the models that take away our confidence in the accuracy of data can also give us the ability to classify fakes versus non-fakes. And we see this with some of the bleeding-edge transformers and other architectures: simply by looking at the probability with which they would generate a given sequence of tokens or a given image, that also tells us how likely those data are to have been generated by said model.
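Alex's observation, that a generative model's own probabilities can flag whether data looks model-generated or model-typical, can be illustrated with even a toy bigram language model. The tiny corpus, smoothing constant, and vocabulary size below are all invented for the sketch; real detectors apply the same likelihood idea with large transformer models.

```python
# A bigram language model assigns a probability to any token sequence;
# comparing average log-probabilities separates in-distribution text
# from scrambled (out-of-distribution) text.
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of context words

def avg_log_prob(tokens, alpha=0.1, vocab_size=50):
    """Mean log-probability per bigram, with add-alpha smoothing."""
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        total += math.log(p)
    return total / (len(tokens) - 1)

in_model = "the cat sat on the mat".split()
off_model = "mat the on sat cat the".split()
print(avg_log_prob(in_model) > avg_log_prob(off_model))  # True
```

The same comparison, run with a model on the scale of GPT-3 instead of a bigram table, is the essence of likelihood-based detection of machine-generated text.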
Great, thank you for that. My next question is: what do you see as some of the most interesting investments being made in AI today? What do you think will be some of the most profitable ventures? And what do you think are the ventures with the greatest potential for planetary transformation? You can define planetary transformation however you want.
Happy to comment on that briefly. I would say the most promising ventures are those human activities that right now represent scarce choke points: things and services that are scarce. All one needs to do is look at the service economy, which dominates Western economies across a variety of verticals, to see areas where labor is effectively scarce. And John, an analogy I like to use is: imagine that human skilled labor suddenly became 10,000 times cheaper. What would our economy be able to do, and what would be the right startups to invest in? If suddenly you had access either to labor that's 10,000 times cheaper, or to a human population 10,000 times the number of people actually living on Earth, answer that question and it will guide you to the right investments.
So a startup could compete with an S&P 500 company just by harnessing that ratio? Yes. Right.
Yeah, I'd throw in that, of all the superhuman powers we're giving AIs, and all the ways you can monetize them: if you look at the really giant preventable problems on Earth right now, almost all of them are due to human cognitive failure. They're not due to mass, energy, or lack of capital. So if anyone can get a handle on how to make a business out of helping us know ourselves better, how not to get chronic diseases, how not to poison ourselves or our planet, and so on, a company that addresses fundamental human problems could not only make money, it might also make the kind of world we want to live in. And crickets, too, are good business.
So let's see, I think we're going to do one final wrap-up round, if that's okay. We have a number of people who've been so patiently waiting in line, including three Clubhouse newbies; we always feel particularly honored when people come and join us at Imagination in Action in their first week on Clubhouse. So we're going to make the rounds, starting with Adela. If you can keep your questions short and name who they're for, this then becomes its own AI challenge for our speakers, because you'll have to listen for all the questions and take your own notes, and then we'll go around and have each of you answer whichever questions most interest you. So Adela, would you like to kick us off? Yes,
thank you so much. This is a question for Tom. And I'm just going to add: crickets, and also jellyfish. I am an immigrant artist living in the United States, and my grandmother lives in Europe, in the mountains. She recently became blind; she's visually impaired. For a couple of years I've been racking my brain thinking about ways to connect AI with social impact and how to make it more inclusive. I think something we forgot to say is that a lot of these tools, like Siri and others, are only available in English-speaking countries, and for certain people with certain abilities. My grandmother, who is visually impaired, is really knowledge-hungry, and was trying to learn a lot of different languages when she could still see. But now, when she wants to listen to music, she has to put a CD in the player, and then she knows the band. I wonder, yeah, if you have any projects or think tanks or anything in your
Adela, I think we've got the gist. That's a great question, and that one's for you, Tom. But we're going to cover everybody. Liz Heller has been with us, I think, every single week. Liz, we'd love to hear
An instrumental team member, too.
Thank you. Thank you. Well, I think you've actually answered most of my questions, related to a number of different things you were talking about, even connecting to things like linguistics, which I was going to ask you about. I want to thank everybody, of course, and give a shout-out: please follow the speakers and the mods, because that's important in helping them build their presence here on Clubhouse. And I would say there's so much excitement about the opportunities and possibilities of what already exists in AI, and there's also confusion and fear. Tapping into a little of that in a low-tech way: could anybody answer how they would help somebody who has a sort of confused approach to what they should do, what they should embrace? I mean, Esther, you said a lot of wonderful things, very specifically, about how it could help, say, education, and I would love to hear from any of the speakers anything else you want to add to that point. And also, if you have anything you're working on today that's important to you, whether it's research, new work, things you'd like everybody to check out, could you share that, and where could we find it? And this is
The only risk, Liz, is that I can tell you every one of our speakers has an amazing amount going on. Exactly.
Maybe something they want to highlight.
Yeah, you might find that in their bios. But thank you so much, Liz, for that. And Adriana, do you have a question? You're new to Clubhouse; I'd love to get your question in.
Yes, thank you so much for your time. I think Dr. Alex and Tom touched a bit on this, but my question is: what trends do you see from startups in terms of machine learning, in an effort to keep customer centricity as a North Star? And also for John: I know from his bio that he's worked with a Y Combinator startup, so I'm just curious about the trends. Machines and customer interest, I think, are key in doing business, and I think a lot of great entrepreneurs keep that in mind, right? So I'm just curious what the moderators have to say on that.
Super. And just a note to our speakers, I'm sure you've figured this out: we have eight more questions to come, so please take notes, because I've taken some, and we'll come back to you for the answers you want to give to each of these. This is the biggest clump we've ever put together, so it's our own AI test of your human cognition. Let's see, Toshka, you're also new to Clubhouse? We'd love your question.
Hi, my question is along the lines of Adela's question. I'm also currently having problems using Siri with my accent, and my grandma as well. So I was wondering what kind of work needs to be done, and what kind of data needs to be collected, for
Apple's Siri to work better with other languages. And another question: I believe AI could be used in dispatching services like 911, because, as Esther and Rebecca mentioned, all these AIs and other services are not working together, when they could be used in many more useful ways. For instance, on a 911 call, if you have an accent and the dispatcher speaks only English, there's a psychological effect where they just block out and cannot understand you, even though you're speaking pretty good English.
Super, Toshka, I think we got that one, and we'll go on to some of the others. For anyone who hasn't seen it (I'm sure, Tom, you have), Saturday Night Live has a really funny takeoff on what they call Midwestern Siri, and that's about the kind of accent Siri needs to anticipate. So, Jamie, I may be mispronouncing your name, but do you have a question?
Hi there, it's Danny. So my question is for Tom. You talked about the hacks you used early on in Siri to deal with the large problem space. In terms of combating that problem space while launching new features or functionality over time, how do you think about presenting the new features and functionality of a voice assistant to users? How do you conquer the problem of not having a direct user interface where users can easily discover new features and functionality?
Great, and Shree, I may be mispronouncing your name. Am I close?
Yes, you're correct. Thank you for the opportunity. My question is a bit broad; hopefully it's not off topic. So AI, with all its wonderful potential use cases and potential to improve the world, also has potential for use in warfare, with nation states gaining more control, and, you know, also big companies and corporations. Because AI requires a lot of access to data and computation, there's an argument out there that it makes the powerful more powerful: centralization, in essence. Whereas another technological trend that's captured our society's mind space recently, blockchain and crypto and Bitcoin, is, in essence, decentralization: disempowering nation states and corporations. So do you agree with this way of looking at things, and what are your thoughts? Do you see any parallels or contradictions? Thank you.
That's a great one. Let's see, I'm going to keep going, and I hope our speakers can come back to these, but we've got a few more. So Dexter, do you have a question?
I do. Perhaps mine will be easy, since it's mainly just an endorsement of Tom's point, with which I wholeheartedly agree, and I kind of give up my question to your point about human self-understanding. It seems that, especially given the velocity of progress in AI technology, the delta between our mastery of the natural world and the failure of human self-understanding to keep pace with that power, and to know how to marshal it responsibly, is the progenitor of so many societal ills, and potentially the greatest progenitor of the existential threats we face. So in some sense there's the project of human self-understanding under a simple rubric. I'll end very quickly: what we have as a species, if we have anything in particular, is that we are the only species, of all we know ever to have inhabited the planet, to have acceded to rapid coevolution with culture. We're the only ones, and that's why we're the only ones here on Clubhouse talking about AI. One of the great mediators in developing that coevolution with culture was language. So part of the project is understanding ourselves and how we built culture via language, and also what violence we may be doing as we merrily mint new cultures and spread them via technology. Yeah, it's essential to know what we're doing. Humans are famous, really only to ourselves, for basically doing things and then realizing the effects after the fact. So as part of lensing where we apply those efforts, I just want to endorse Tom's point again: human self-understanding, before we set out to marshal such destructively powerful tools.
Super. I have a feeling, Dexter, that's almost a whole separate Imagination in Action session, but I'm sure John's already got it somewhere on the master plan. We have four more questions, if our speakers can manage that, and then we'll do the full circuit of the four speakers; please speak to whichever of these questions was most powerful for you and most directed to you. Do you have a question for our speakers?
Sure, I'll keep it short. I think what everybody's pointed out is that AI has incredible potential to solve so many different tasks, and I think AI will progress very quickly, as Alex mentioned. So my question is really about cognitive surplus: do we agree that one of the main things AI could help us achieve is a space where humans have cognitive surplus? So we're not just focusing on paying the bills every month, but are actually in a position where we have the time, the space, and the mental capacity to focus on issues like the one another speaker just mentioned, self-discovery, or creating more, or seeing what else humans can become and what potential we have, if the collective or the majority of humans are not just stuck paying the bills, which means we have more time to be creative and really outshine where we are currently. And then, on education, for Esther: I think she's just fantastic, and I've had great conversations with her. One of the things I'd love to know is, I know AI is currently being used for tracking and assessments, but does she feel it could really help in a classroom by bringing personalized questioning and plans for students, helping them with their math or English or science? I know Singapore has done a lot of work in that area. Thank you. I'm done speaking.
Perfect. Steven, do you have a question?
Yeah. So as you can see from my profile, voice is very important to me. I teach a voice mastery class here on Clubhouse, and I'm a trained opera singer; I've sung in about twelve or so languages. I use something called the International Phonetic Alphabet, the IPA, and I'm sure that some or all of you know about it; I'm guessing all of you probably do. What fascinates me is that very few people, in America at least, know what the International Phonetic Alphabet actually is, and it's literally a catalog of all the sounds of the world. I'm curious where AI is going from the standpoint of understanding that we want to use AI to communicate better, and that's great; I only care about solving the world's problems, which comes down to communication, how we communicate as the average Joes and Josephines of the world. Now, here on Clubhouse, we have basically a network around the world, and it's a network of sounds, the voice, which would include the International Phonetic Alphabet. Going back to your point earlier, Esther, when you were talking about how AI can be used in education: I'm curious how the International Phonetic Alphabet is going to be used in AI. And if that's the case, why don't we just start teaching everybody the International Phonetic Alphabet? Somebody even pointed out earlier that the average Joes and Josephines of the world are going to get left behind because of communication, and so on and so forth. But we all have access to the IPA; we just don't know about it. And I'd be interested in having a conversation with any of you about that at a later time, to understand how we can use it to educate the world as we move forward into AI and communicate with each other. My name is Steven Harms, and I'm done speaking.
Great, Steven; that was very fitting, to end with someone so connected to some of the themes here. So our musician is going to play us out in a minute, but let's go speaker by speaker. You don't have to address each point, but I know you took notes, so why don't you just respond to what you heard. We had a bunch of great questions, some overlapping, some standalone. Let's go through the room. Tom, any thoughts you want to jump in on?
Sure, I'll try to synthesize several of the questions. Essentially: how do we get to this place where we have human self-understanding and collective intelligence at scale, using AI? How do we get there if all this power is concentrated in compute, and compute is the bottleneck to getting better AI? Well, there are a couple of things that give me hope. One is that it takes a lot of compute to make the model, but it takes almost nothing to distribute it. Alex, didn't it take, I mean, ten or twenty million dollars of compute time to build GPT-3? Maybe more. A lot of money, and a lot of very expensive hardware. But now Alex can just play around with it on his PC, and in fact it works on phones in rural India just as well. So that's the thing to keep in mind as we talk about the power of AI development: we just need to figure out how to make society so that the wealth and benefit created from a concentrated resource like compute get distributed. And then the question is: what is going to be distributed? You could imagine a model where we train, as a sort of preventative medicine: collectively we pay for the datasets to be created to help us learn all the tricks that would be necessary for humans to stop harming ourselves with self-destructive behaviors, and whatever kind of armor we could give people. The cost of that development can be aggregated into, say, a government or a large company, and the benefits can be distributed absolutely for free to everyone. So I'll put that out there; anyone is welcome to shoot it down, but I take optimism from it.
Great. Esther, did you want to comment on any of the questions? And then I'll go to Rebecca, and then Alex will bring us home. Esther, you may be muted there.
Sorry. So I'm very practical, because I'm in a classroom, and I'm usually faced with about 50 kids. And I can tell you, you don't start floating imaginary, unusual, or challenging ideas when you're faced with a lot of kids at the same time. That's what a typical teacher faces: every day you're in front of a group of kids, so you want to use something that you're pretty sure is going to work. And that's one reason why, you know, while it might be great to teach kids the International Phonetic Alphabet, before you teach it to kids, you've got to teach it to teachers. And I can tell you, the main thing teachers don't have is time; you cannot buy time. So what I personally would do in the classroom, and what I've done, is use adaptive learning, something like what comes from Area9. It's great because it modifies the questions the student is asked based on their responses. That is AI used in the best way it can be used. So, you know, teachers can't experiment. I did a lot of experimentation, but it wasn't in every single class, it was in just a couple of classes, and I ran into a lot of trouble for doing that experimentation. If you read my book, you'll see that I was not the most popular teacher with the administration, because they're like, what is that teacher doing down there in that classroom? Most teachers don't want to do that. Most teachers, you know, you have a life, you want to go home and eat with your family. And so I think when we're trying to implement this in education, we need to implement products that have already been tested and work. Otherwise, teachers are not going to take the time to use them. And one thing I would like to mention, if you check my bio, is the latest company I'm working on, this startup called Tract.app. If you check the bio, there's a code, a link there.
So you can have access for free for a period of time. And this is not AI. This is peer-to-peer learning, kid-to-kid learning. And so far it seems to be really exciting for the kids, because there's no one more exciting to a 10-year-old or a 12-year-old than a kid who is just a couple of years older. So that's the draw, and I think it's very exciting. Thanks to everybody. And I still have my concerns, as I said, about AI being used in the commercial arena, where, you know, it isn't up to par yet. So thank you. And this is Esther.
Great. Rebecca, do you want to answer anything? And then Alex will bring us home, and then Emi will play a musical piece to celebrate today's room. So, Rebecca?
Sure, um, well, maybe I can just start with a comment on the International Phonetic Alphabet. I think, Stephen, you said it's all the sounds in the world. I would respond that, for me, it's a mapping. It's a mapping that's very clever, a mapping that's interesting, because it really takes into consideration how the voice box works, how the vocal cavities work, but it's just one way. In a lot of the work I do on the voice, I ask: what are all the other facets, what are all the other ways to look at the voice? Because ultimately, like Tom and Alex and Esther said, I also see the voice as a way toward self-reflection, toward self-understanding. The reason why I dive into the inner voice is that there's a very tight and interesting connection between the outer and inner voice. The voice can really be seen as a membrane between the outside environment, our society, how we interact with others, and the deep corners of our mind. And that, for me, is one of many, but I think one pretty nice, way to gain some self-understanding, and at least to find an entry door to looking at something deeper: who we are as individuals and how we connect with others. So, yes, finishing on that.
So Alex, anything you want to comment on from the series of questions?
Yes, I want to highlight the questions that weren't asked, the questions I didn't hear, which I would like to leave dangling, perhaps, for future conversations. I didn't hear anyone ask about skeuomorphism, and whether it's a good idea for AIs to resemble humans in their language, or to not resemble historic patterns. I didn't hear very much questioning about epistemology, which I know is a topic of interest to some of us up here on the stage: whether AI can help us know what's true and false. Tom, you alluded to that a bit, but I didn't hear very many questions about it. And I didn't hear very many questions about collective memory or collective sense-making, and how AI can empower the collective to discover truths. So maybe rather than responding to questions I heard, I will choose to not respond to questions I didn't hear, and leave those for future conversations.
Well put. And if you ever want to know what a child prodigy looks like grown up, trying to solve the world's problems: Alex Wissner-Gross. So I want to thank everyone for helping with this. David Chang serves as, like, the COO. We just put 135 speakers on our homepage, so you can see all the speakers for the next year. David, thank you. And Alison, it's been great moderating this room with you. And, you know, there are a bunch of other people here on the team; thank you to all of you. And then, Emi, can you bring us home?
Absolutely. Thank you all for such an interesting discussion. As a flute player, our goal is to imitate the voice as much as possible, and so, you know, we take elements of speech to create the sounds. And so the piece that I'll end with is one for old friends.
Emi Ferguson, whom you'll see on our website. She'll be someone we interview, and you'll understand why she travels the world playing this beautiful music. Again, everyone, this is Imagination in Action. Thank you all for coming, and thank you to our speakers for an amazing conversation. Alison, anything you want to say in closing?
Well, I just want to add that it's such a joy to be here on Tuesday nights from six to eight-plus, and we hope you join us every week. It's Imagination in Action. Even if you track something like AI and machine learning, I just found it breathtaking tonight to realize how much faster it is going than I thought. And thank you to our amazing speakers, thank you to our incredible musicians, and of course, with Clubhouse, the magic is also in the audience. So thank you to all of you who stuck around, for your great questions, and to the speakers.
Actually, just before you go: to our four speakers, we will mail you a Green Lantern ring as a symbol of how we think you're awesome, and it'll remind you of this night. So be ready to receive it, and use those powers wisely. Sorry to cut you off, Esther.
Thank you so much. Well, thank you, John, and thank you for all the time and energy you put into creating this; we all appreciate it. But I wanted to leave everybody with a book they might want to read. It's called Klara and the Sun, by Kazuo Ishiguro, and it's a novel about Klara, an artificial friend with outstanding observational qualities. So this is something that maybe people would find interesting, because it isn't about AI per se, but it talks about AI quite a bit. So I just wanted to recommend it.
We'll put it on our website, Esther, and we'll put a link to it. So thank you for suggesting that: imaginationinaction.club. All right, well, I'm going to officially call this evening a wrap. And thank you, everybody.