power as well. I mean, the artificial intelligence applications of today don't always work perfectly. Almost nothing ever works perfectly, but it's getting better. I often tell an anecdote about a weakness called hallucination; some of you have heard this story. I decided to test one of the chatbots by asking it to write an obituary for me. Some people think that's kind of sad, but I figured the system would have seen the format of an obituary, because people die every day and their obituaries appear on the Internet. If you're trained on the content of the World Wide Web, you see an obituary and you know what the format is. And I figured there's at least some stuff on Wikipedia about me, so asking it to write an obituary seemed like a reasonable thing. And indeed, it started out the way you'd expect. It said, "We're sorry to report Dr. Cerf passed away," and it gave a date, which was much too soon. I didn't like that at all. Then it got to my career, and it conflated things I did with things other people did. I got credit for stuff I didn't do; other people got credit for stuff I did. Then it got to the family members part, and it made up family members that, as far as I know, I don't have. So I remember scratching my head thinking, how the hell did that happen? If this thing had been trained on factual information, how did it produce counterfactual output? I'm not an expert in the implementation of AI, but I imagine, in my little cartoon way, that one reason for this kind of failure is that when the system is ingesting content, it may not have enough context to understand that a paragraph like Steve's bio and one like mine could easily show up on the same page. We've both been chairmen of ICANN, we've both been involved in ISOC, we've both worked on the Internet. Steve's bio and my bio could be adjacent to each other on a web page.
And while this thing is ingesting World Wide Web content, it may be mingling words from his bio and words from my bio, and then generating output by asking: what's the probability that this word would follow that word, within a certain number of other words? That's, in a cartoony way, how some of the AI systems generate output. Here's a string; what's the most likely next word to put out in that string? I could see how the lack of context separating those two paragraphs could cause it to conflate our bios and produce hallucinated output. I will say, though, that as I continue doing these tests, the systems are increasingly able to avoid those problems. Some of it comes from pre-training; some of it comes from having a much larger context window in which to generate the output. In my experiments with the most recent of Google's language models, I haven't been able to induce the same kind of hallucination that I was able to a year ago. So I'm becoming increasingly persuaded that these are going to be powerful tools, enabling people to use online resources.

Our CEO Sundar estimated that about 30% of all the code being generated at Google is being generated by AI. My first reaction was, I sure as hell hope there was a human being that looked at that software before it was introduced into operation. And indeed, there is a whole process for doing that. But nonetheless, I think we are just beginning to see an era where AI tools of various kinds, with various specialties, will make people more productive. To give you one example that I read about recently: you know that we have this Google DeepMind team. They were the ones that invented AlphaGo, which played the game of Go some years ago and won four games out of five against an international grandmaster. AlphaGo was succeeded by AlphaFold. AlphaFold figured out how to predict the shapes of 200 million proteins that could be generated by interpreting human DNA.
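The "most likely next word" cartoon described above, and the way two adjacent bios can get conflated, can be sketched with a toy bigram model. The bios and names below are made up for illustration; real language models are vastly more sophisticated, but the basic failure mode is the same:

```python
import random

# Two adjacent "bios" on the same imaginary web page.
# These sentences are invented for illustration, not real training data.
steve_bio = "Steve chaired ICANN and Steve worked on the Internet"
vint_bio = "Vint chaired ICANN and Vint worked on the Internet"

# Ingest both paragraphs as one undifferentiated stream of words,
# the way a trainer with no notion of paragraph boundaries might.
words = (steve_bio + " " + vint_bio).split()

# Count which word follows which: a toy bigram model.
follows = {}
for prev, nxt in zip(words, words[1:]):
    follows.setdefault(prev, []).append(nxt)

# Generate text by repeatedly picking a plausible next word.
random.seed(0)
out = ["Steve"]
for _ in range(8):
    candidates = follows.get(out[-1])
    if not candidates:
        break
    out.append(random.choice(candidates))

print(" ".join(out))
# Because words like "and" were followed by material from BOTH bios,
# the generated sentence can mix facts about Steve and Vint --
# a miniature version of the conflation described above.
```

The point of the sketch: once the two paragraphs are flattened into one word stream, the model has no signal telling it which facts belong to which person, so sampling "likely next words" can stitch the bios together.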
So we have a big catalog now of what the proteins look like. You could use the catalog to figure out how to interfere with a disease process, COVID being a good example: a small molecule that attaches to the prong of the COVID virus interferes with its ability to dock with the ACE2 receptor, and thereby inhibits its propagation.

Most recently, literally on the 14th of May, just a few days ago, another report was released. In the artificial intelligence world it's called AlphaEvolve. This is a really interesting piece of software, because what it does is generate programs to solve problems. It's not solving the problem; it's writing programs that can solve the problem. So it's a step away, like the difference in mathematics between functions and functionals. So they took this thing and posed a problem, which was to make matrix multiplies more efficient, because that's how these large language models work: they do matrix multiplication. Fifty-six years ago, somebody figured out that the most efficient known way to do a four-by-four matrix multiply involved 49 multiplications, and that was it. For 56 years, nobody could find anything better. AlphaEvolve is set the task: go and find a program that will help discover whether there is a smaller number of multiplies that does the same thing. And it comes up with an answer: it gets 48 multiplies instead of 49. Now, I'm sure you're not levitating out of your seat because we got one less multiply, but you have to remember that when we're running these models, we're doing billions and billions and billions of those multiplications to exercise the model, either to train it or to do inference. One less multiply is significant in the context of scale, and this program figured it out. So I'm now super excited about the capabilities of some of these systems, operating at a scale no human being would ever be able to match. I'm much more enthusiastic about this stuff than I ever was before.
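AlphaEvolve's 48-multiplication scheme for four-by-four matrices isn't reproduced here, but Strassen's classic 1969 trick for two-by-two matrices, which uses 7 scalar multiplications instead of the schoolbook 8, illustrates the same kind of saving; applied recursively to a four-by-four matrix it gives 7 × 7 = 49 multiplications, the record that stood for those 56 years. A minimal sketch:

```python
def naive_2x2(A, B):
    # Schoolbook 2x2 multiply: 8 scalar multiplications.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    # Strassen (1969): the same product with only 7 scalar multiplications.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(naive_2x2(A, B))     # [[19, 22], [43, 50]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]] -- same answer, one fewer multiply
```

One saved multiplication per level compounds under recursion, which is why shaving 49 down to 48 at the four-by-four level matters when billions of these multiplies run during training and inference.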