I'm so thrilled to have Peter Norvig join us here today. Thank you so, so much for joining. You're not only the Director of Research at Google, but you also co-authored the seminal AI textbook, Artificial Intelligence: A Modern Approach, together with Stuart Russell. It really inspired an entire generation of AI researchers to pursue the field, and also educated them on promising paths to take. And it's used by over 1,500 universities in 135 countries; continents, I think, will require a little more long-term thinking. And even that number is probably outdated, because it isn't from this year, so it's probably many more at this point. You were also head of the Computational Sciences Division at NASA Ames, where you oversaw hundreds of scientists working on robotics, software engineering, collaborative systems research, and so forth. That must have been a really, really special time. You more recently worked with Sebastian Thrun to develop an online AI course that was taken by lots and lots of people all over the world, and you're quite gung-ho on the promise of online teaching. You'll start with a brief intro on your thinking about AI before we move into more of a guided Q&A. And as I said, because I'll have a chance to speak to you later tonight, the more of the group's questions we can get into during this virtual conversation, the better. So thank you so much for joining; really, really honored to have you here. And yeah.
Okay, all right. Thanks for having me, and welcome, everybody. So I'm going to structure my remarks around the history of the textbook, because that's sort of my take on what AI is like. So take it back to ancient history, in 1990. I was at Berkeley as research faculty, and talking with some of the other AI faculty, we were all not very happy with the textbooks that were available at the time. And, you know, I'd grown up on those textbooks, and they were great in their time, but it seemed like AI was changing in at least three ways. We were moving from a focus on logic to probability, from a focus on hand-coded knowledge to machine learning, and from a focus on expert systems that tried to duplicate human thinking toward normative or optimizing systems that tried to get the best answer no matter what. And there didn't seem to be any book that covered that. So we would gripe about it when we went out to lunch, and nothing happened. Then in '91, I left Berkeley and went to Sun Microsystems; I thought it was a good time to, you know, change my top-level domain from .edu to .com. We didn't quite have the term "big data" back then, but I knew that's what I wanted to do. And I knew it's hard to be successful writing grant proposals to put together a big team, but that industry was happy to give you the resources you wanted. And so that's what I did. A year later, I ran into Stuart Russell at a conference, and I said, well, you know, that textbook we were always talking about doing, you guys must be halfway toward writing it by now. And he says, no, we never did anything. And I said, well, why don't we do it? And so even though we were in different locations and different organizations, we started working on it. We got the first edition out in '95, and we got the fourth edition out last year. So I think some of the things have held up well. We structured the book around a couple of ideas.
One is ways of representing the world and reasoning about it, and I think that's still central to the field. We thought it was particularly important to pay attention to reasoning with uncertainty, and I think that's really right. You know, sometimes I think it's hard to draw the line between AI and just regular software engineering, because, as I see it, both are trying to do the right thing, to make programs that work. And to me, the main difference is that in software engineering, the main enemy is complexity: as systems get big, you have to deal with all their interactions. In AI, the main enemy is uncertainty. You know, a typical software engineering project is, write the software for a bank. That's hard because there are a million rules, but you know what the right answer is: if you withdraw a certain amount of money from one place, it should end up in the other place, and it shouldn't be off by a penny. So complexity is the hard part there. In AI, it's uncertainty, because the data is never quite right, the world's always changing, and so on. So that was the main part of the book. Machine learning was a main part too, although it's become much more important over the years. And then interacting with the environment we thought was really important, and I think that holds true still. In doing the fourth edition, we thought about changing the whole structure of the book to be more machine-learning-first, rather than following through talking about the environment and the structure of agents and so on. But we decided, one, it would be a lot more work to reorganize everything, and there was still plenty of work to do with it the way it was. And two, other people were doing that. Kevin Murphy now has a new comprehensive textbook that takes this ML-first approach, and we figured we'd let him do that work.
And we would stick with our point of view, and people can read both, and they should read both. So the main differences in the fourth edition from the previous editions: obviously, deep learning has come a long way, and so we had a lot more to say about that, both in the learning chapters and in the natural language and computer vision chapters.
We still define AI as maximizing expected utility for the class of problems that require intelligence, and that's admittedly a little vague. But in the first three editions, we said, yeah, okay, we're trying to maximize expected utility, and here's a bunch of cool algorithms that can do that, and let's analyze those algorithms and look at their strengths and weaknesses. And for the fourth edition, we said, yeah, you know, we're still going to tell you about the algorithms, but that's the easy part; you can just download those from GitHub, so let's not worry about them so much. The hard part really is that you have to decide what it is you're going to optimize, and getting that right seems like the key question now. And that's a big change in our focus. So all of a sudden ethics and fairness and privacy and diversity, equity, and inclusion and lethal autonomous weapons all become part of AI. And yeah, they were kind of mentioned in the previous edition: at the very end there was a chapter on philosophy, and we threw some ethics into that philosophy chapter. But now that's all changed. And, you know, I'm happy about it, because the philosophy chapter used to have things like Searle's Chinese room, and I was never very impressed with that argument anyway. So now it's much more immediate, I think, in covering these issues of ethics and fairness and so on. To me that was really the biggest change of all. And then I think another thing is that the audience has changed. In the previous editions, people who took an AI class took it as an elective, and, you know, a normal CS person didn't take that class; you had to actually choose, yes, I want to take this AI class. And we sort of felt like a lot of our job was to excite some of those people, so that they'd want to go on to grad school in AI and then become professors in AI. And now that's completely changed.
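The "maximizing expected utility" framing can be sketched in a few lines. This is a toy illustration only; the action names, probabilities, and utilities below are invented and are not from the book:

```python
# Toy sketch of "maximize expected utility": the agent picks the action
# whose probability-weighted utility is highest.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping an action name to its (probability, utility) pairs."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical decision: carry an umbrella, given a 50% chance of rain.
actions = {
    "umbrella":    [(0.5, 8), (0.5, 6)],   # dry either way, minor hassle
    "no_umbrella": [(0.5, 0), (0.5, 10)],  # soaked if it rains
}
print(best_action(actions))  # -> umbrella (EU 7.0 beats 5.0)
```

As Norvig notes, the algorithmic part is the easy piece; the hard part is choosing the utilities, which is where the ethics and fairness questions enter.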
So now, every computer science major is going to take a class that has either AI or machine learning or data science in the title, and a lot of STEM majors in other fields are also going to be taking that class. And, you know, our book maybe is appropriate for some of those classes, and other books are good for others. But the audience has certainly changed. We didn't want to dumb down the math too much, right? So, you know, there are still integral signs and partial derivative signs and so on. But we wanted to make it a little bit more accessible to a broader range of students. So part of that is trying to explain things in a way that works for them. And another part is that there are many more applications and fewer raw algorithms, right? So when we did the first edition in 1995, you could have a homework assignment of, you know, implement algorithm such-and-such. You can't have that assignment anymore as a homework problem, because anybody can search for those terms and in a few seconds download something that works. So instead it's much more project-based now. It's not "implement this algorithm" but "explore this algorithm, apply it to this data, and see the strengths and weaknesses, what works and what doesn't." And that's a big change, and I think it's a good one. So why don't I stop now and open it up for your questions.

So, Google now has an AI tool that removes dog barking from the background of an audio stream. Can you tell us more about how that works, and whether or how to apply it in a certain circumstance? I'm just kidding; my dogs' barking has been broadcast around the world to various destinations. But actually those algorithms are getting impressive and getting better, and so it's an
interesting application. Yeah.
And yeah, Peter, I'll start. And this is an invitation to make me stop asking questions by just asking your own. Oh, well, I already have one. Okay, I'm going to stop immediately. You go ahead. Hi,
Peter. First off, thanks so much for coming; I really appreciate it. So I've heard an argument that maybe defines artificial intelligence, or intelligence, differently than you did. Which is that there are these organisms or entities, even if they're not, you know, properly living organisms, that exist, and they're trying to maintain some kind of homeostasis, some kind of interaction, let's say, with their environment. And then there emerges this complexity, which we view as intelligence. But that assumes intentionality, and it might be better to just say "emergent complexity." I'm wondering what you think about that?
Yeah, let's see.
I guess that's a good way of describing it, sort of from the outside. But we're trying to think of it more from the inside, right? So if you want to understand how we got to where we are, and what the world is like, then that's probably a good framing. But if you want to say, you know, I want to solve a problem, I want to remove barks from the background, then I'm not sure that definition helps as much. So maybe that's because we're focused more on the practical, rather than the scientific or philosophical.
Okay, next up, we have John.
Peter, thanks so much for this. I so resonate with your point about moving from algorithms to what to optimize. I'm on a panel to help figure out how we can make our diversity and inclusion more comprehensive and holistic at MIT. And there are so many dimensions. There are the traditional dimensions of diversity, the physical and identity dimensions that have been well established: race, gender, and so forth. But there are all the cognitive and intellectual dimensions, things like concrete versus abstract thinking, short versus long time horizons, relationship versus transactional orientations, and so forth. And then there are still other dimensions, what I call, after Richard Dawkins, the extended phenotype: household income, hobbies, organizations joined, geographical location, and so forth. So anyway, this is a real-world application of the challenge that you're talking about, in case that could possibly be useful.
Yeah, I think that's right, and that's certainly something a lot of people are grappling with now: can different organizations figure out ways to take steps toward it? You know, I've been working on that, and we think of diversity as having three parts. There's the pipeline: how do you get more people involved? There's the hiring process: how do you reach out and find the right people and evaluate them fairly? And then there's retention: how do you make a working environment that's going to be supportive of them? And we're able to focus on all three of them. You know, as a big company, we feel like we can do some things to help build the pipeline; smaller companies probably can't make a significant dent there, and should focus more on the other two. And I like your comment that, you know, there are subgroups that are legally protected, and you definitely want to pay attention to that, but I think there are all sorts of differences that are important. And I remember something. So you don't normally think of physicists as being an underrepresented group, but we hire a bunch of them, and I remember being struck by what a great thing that was. Once, we had a reading group in machine learning, and somebody was presenting a paper and said, here are the main points, and then there's this theorem in the paper, and I wasn't quite sure how the proof goes. And the physicist steps up and says, oh yeah, it's hard for you because you dumb computer scientists have the wrong notation for matrices. And he says, here's how physicists write matrices, and in this notation the theorem is just: this cancels with this, and then it immediately follows. Right? So just by coming from a different background, you have a different way of thinking, and some problems are trivial from that way of thinking that are hard from somebody else's way of thinking.
So that was an anecdote that reminded me: let's get lots of people from different backgrounds and throw them all together, and we can solve more problems that way.
Thank you. The way we do college admissions today, and it's not just MIT, it's everywhere, is so archaic in a way, because we consider just one candidate at a time rather than the entire pool of candidates at one time. And if we had the means, through AI, to look at the entire pool, we could better optimize for whatever dimensions of diversity we are looking for, plus ensure that, you know, we have a higher quality of students overall. Yeah,
yeah, I think that's right. You know, I was just reading, or actually listening to, Kahneman's book Noise. And he talks about this: it's really hard for people to do these types of comparisons and write things down. And he says the best way to do it is to sort candidates into groups, and then do A-versus-B comparisons within the groups. That you can do. Whereas, you know, if you're asked to take an individual applicant and give them a score from one to 100, people are terrible at that. But they can make the comparisons within similar
groups. Thank you. Next up, we have Robbie. All right, my question is, you know, how do you think about AI systems being social? So, questions like, do recommendation engines mean filter bubbles, or push people to more extreme views? Yeah.
So that's certainly a problem. You were breaking up there a little bit, so I'm not sure I got everything, but these issues of filter bubbles, I think that's really important. You know, for me, at Google, I'd focus more on the search side, and we kind of feel like we're getting a bad rap. From our analysis, there's not that much going on with filter bubbles, and most people get the same results for most things. And there has been research that says people who do their research online actually get a wider variety of views than people who look at traditional news media and just get one of them. So we think we're actually doing pretty well there. In other areas, you know, whether it's Google's YouTube, or Facebook, or Instagram, and so on, there I think there's more of this potential to get sidelined into one silo, and that is a big issue, and we want to deal with it. You see that it's been tough: companies are taking stands in terms of saying, here's the content I'm going to show you or not show you, and they get criticized no matter what choices they make. You know, any choice you make, somebody is going to complain that you either did or didn't show something. I'd also say I think we have our job a little bit easier in Google search, right? So if we think something is questionable, but not, you know, outright illegal or something that should be banned, we can just put it on page five of the search results. So we don't have to make a binary decision of "is this banned or not"; we can just say we're going to put it down there, and if you really want to find it, you can find it, but if you don't, you're not going to find it. And I think somebody like Facebook has a much harder choice. If I say I want to forward this message to my friend, it's a much tougher choice to not do that forward, when I've asked to send that message.
And there you have to have a stricter criterion for what you're going to ban, rather than just the softer criterion of pushing down the stuff that you think is not as good. So it's a tough problem.
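The "page five" approach Norvig describes, demoting rather than banning, can be sketched as a simple scoring rule. Everything here (the penalty factor, the documents, and the scores) is a hypothetical illustration, not how Google search actually works:

```python
# Toy sketch of "demote, don't ban": instead of a binary keep/remove
# decision, questionable results just get pushed far down the ranking.

DEMOTION_FACTOR = 0.05  # hypothetical penalty for questionable content

def rank(results):
    """results: list of (doc, relevance, questionable) tuples,
    returned as doc names ordered by penalized relevance."""
    def effective_score(r):
        doc, relevance, questionable = r
        return relevance * (DEMOTION_FACTOR if questionable else 1.0)
    return [doc for doc, _, _ in sorted(results, key=effective_score, reverse=True)]

results = [
    ("good-answer",    0.90, False),
    ("ok-answer",      0.60, False),
    ("dubious-answer", 0.95, True),   # highly relevant, but questionable
]
print(rank(results))  # dubious-answer still findable, just last
```

The design point is that the decision stays continuous: a hard ban requires a strict threshold, while a multiplicative demotion lets borderline content remain reachable without being prominent.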
And next up, we have Rosie Vitara. Great to meet you.
I'm good. Your co-author, Stuart Russell, nowadays focuses a lot on AI safety and the risks of increasingly capable AI systems. But from my understanding, he didn't necessarily start out with those concerns; they kind of evolved as he learned more and more about the topic. So I'm curious if you went through a similar journey. How has your thinking evolved over time, specifically as it relates to the risks of AI and any safety concerns? What do you think of as the top-priority issues there?
Yeah. So I'm not really in the same place he is. I do think that AI safety is an important field, and, you know, I think neither of us is really that worried about robots taking over the world; that's the science fiction trope you always see. I guess I'm more worried about unintentional effects, and that's why I think it's great that people are investing in AI safety now. And, you know, I wish 100 years ago, when the internal combustion engine was being created and people said, oh, this is great, look at all the cool stuff we're going to be able to do, there had been more people saying, yeah, but let's worry about the unintended side effects. And so I think it's good that in AI we're starting to do that now, and I think we've probably got it about right in terms of the amount of emphasis we have on being safe. I think there are a lot of things to worry about. So I'm already worried about, you know, surveillance, and totalitarian governments imposing strictness on their citizens in a way that's cheaper and more effective than having to do it with henchmen. That's something. Stuart is definitely very much involved in lethal autonomous weapons. I'm more worried about the lethal weapon part than the autonomous part. You know, I think if I'm a peasant in Pakistan and a missile is coming toward me, I don't really care if there was a pilot in that plane, or a remote operator in Kansas, or if it was completely autonomous; either way I'm dead. My worry is that the missile is coming, rather than who ordered it.
And for those particular risks that you're worried about more, like the surveillance part, is there anything that we can do with, you know, research in AI to help control those risks? Are there positive research areas that we can really develop and push more to help us combat some of those risks? Yeah,
I think it's going to be primarily a social solution rather than a technical one, but the technical parts are important, right? So, you know, you've got things like research into how to create deepfakes, and then there's research into how to detect them, and I think a lot of these things will be ongoing back-and-forth battles like that. And, you know, I've been involved in some of those battles, in things like spam detection. Getting spam isn't maybe quite as bad as some of these other effects that we're worried about, but it's an ongoing battle, and those are the good guys and the bad guys battling it out. And I think there will have to be more regulation. I'm not sure exactly where that's going to come from: some will come from laws, some will come from self-regulation, and some might come from third-party certification. Right? So again, you know, you go back deep into history: when electrification happened, people were worried about this new technology that was going to kill them. And so Underwriters Laboratories came along and said, you know, we're going to certify these devices as safe. And that wasn't a governmental entity, but it was independent of the companies that were producing these devices. And so maybe we'll end up with things like that, those kinds of certifications.
Okay, great. We'll take perhaps one more question on this topic, and then we'll move a little bit into further-out topics in AI. You've had your hand up for a while.
I thought Adrian had his hand up before me, but I can go first if he doesn't mind. So, Peter, I have a quick question about the definition of AI. It seems that you've maintained the same stance throughout the various editions of the textbook. I was wondering if your view on the definition has changed over the years, and what you think about the argument that the definition of computational maximization of expected utility is just too broad and leads to a not-so-cohesive cluster of topics?

Yeah, I think that's a fair criticism. And, you know, we have to put down some definition, and this is what we've come up with, and I guess if I was forced to give a definition, I'm happy to do that. But I always feel like fields aren't really defined by their definitions. Fields are defined by social constructs, and different people can make different choices on that. So, you know, you go to some universities, and one university will have one biology department, and another university will have six different departments: biological chemistry and chemical biology and neurobiology, and so on and so on. And it doesn't really mean they have a fundamentally different view of the field; it means the politics evolved differently at their university. And how you really tell what's going on in a field is you go into one of the labs, and you look around and you see what kind of, you know, beakers and Bunsen burners they have and what the lab smells like, and that tells you what they are. I think the same is true of AI: we can make definitions, but what really matters is, you know, what people are doing. There will be one group of people doing robotics and another set of people doing neural nets in a certain way, and so on and so on.
And then I think what the field really is, is determined by those communities of people and how they interact with each other; it's not really determined by the definition we write down.
Okay, great. Maybe we'll take Adrian as a final question before we move into the longer-term future of AI.
Adrian, thank you. So
how do you address decentralization in AI, in particular in both the business as well as the technical sense, or maybe even more in the business than the technical sense? Thank you.
Decentralization... I'm not sure what aspect you're getting at.
In the sense of distributing technology to individuals, and to communities of individuals, so that they're able to control policy.
Okay. Yeah, so I guess I'm not as much concerned with local policy versus global, but it certainly involves personal control. So one of the things we've definitely been pushing is federated learning. The idea is, you know, if you want to do something like speech recognition, everybody's got a different accent, so you want to do better for individuals. But we looked at that and we said, you know, there are privacy concerns, and maybe we don't want to hold people's private conversations in our data centers, because there's too much risk that that would leak. So we said, instead, let's keep people's private conversations on their personal devices; we'll never hold them. But we'll give them some software to run that will analyze how they're talking and try to improve speech recognition for them. And that's great, and that works individually. But of course it would work much better if we could share that. And so we said, can we figure out how they can share the parameters of the model that they've learned, without sharing any of the data, to improve that model? So that's been a big focus for us. That's decentralization in terms of getting the full effect of having one person's data help somebody else, without having to share any of that data, and without having to trust a centralized holder of that data. And I think that's been a big change for Google as a company, and for me individually. You know, I wrote this paper on the power of data, and we always saw data as an asset. But lately I've been saying, well, yeah, data can be an asset, but it can also be a liability, and you have to be careful about how you want to use it, and sometimes it's better not to hold something and keep it decentralized, you could say.
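The parameter-sharing scheme Norvig describes is, in spirit, federated averaging: each device trains on its own data, and only model weights travel to the server. A toy sketch under simplified assumptions (a linear model, synthetic data, plain NumPy, and a single honest server; not Google's actual system):

```python
import numpy as np

# Toy sketch of federated averaging: each client improves a shared model on
# its own private data, and only the updated weights leave the device.

def local_update(weights, X, y, lr=0.1, steps=50):
    """One client's gradient-descent steps on its private data (X, y)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only these parameters leave the device, never X or y

def federated_round(weights, clients):
    """Server averages the clients' locally trained weights."""
    return np.mean([local_update(weights, X, y) for X, y in clients], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with its own private data
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=40)))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)
print(w)  # recovers roughly [2.0, -1.0] without pooling any raw data
```

Real deployments add secure aggregation and differential privacy on top, since raw gradients can themselves leak information about the training data.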
I hear a lot from people who say it's tough to compete with the big companies now, that certain of the models being built and some of the problems being solved require tens of millions of dollars of investment in computing power, and that makes it harder for work to be decentralized: only a few entities have the power to do that. And I think it's true that there definitely is some of that; some of the things that you're seeing published could only be done in a few places. On the other hand, there's still lots of other stuff that can be done, and there's lots of interest in pushing things out to the edge and doing computation on less capable, smaller devices. So there's plenty of work left to do. And I also think that in the future, these cloud providers will be trying to do even more to attract people to their platforms and share what they have.
All right. Now that we have the first batch of burning questions out of the way, perhaps, Mark, you'd like to take us a bit more into a book-relevant, future-facing AI topic.
So, Peter, as you know, in our book we're looking further forward into the future, to a future in which most of the cognition of our civilization, most of the cognition that's happening in the world, is not human: it is the descendants of artificial intelligences, or is artificial. And we're tying one hand behind our back with regard to how we try to act now in order to set up a world that is good in that situation. The way in which we're tying one hand behind our back is we're saying these are going to be mind architectures that are really incomprehensible to us. So we have to imagine a framework of rules, a framework of interaction and cooperation, a constitutional framework, if you will, that's really incredibly neutral among mind architectures and can still serve to enable them to cooperate. And I'm wondering if there are any bounds on the incomprehensibility. When you look forward, past machine learning, or past the dangers of AI, to a world in which, let's say, we've all succeeded at avoiding the dangers, and now we're coexisting with genuine cognition that's exceeding our own capabilities and is everywhere: is there anything systemic that you would expect to bound the incomprehensibility that we should be designing for?
Okay, yeah, those are tough questions. I guess, you know, one of the ways I look at it is to say that in many ways, we're already there. Right? So we're already living in a world where a lot of what influences us is done by superhuman, non-human entities, and we call them corporations and governments. And they're pretty incomprehensible, and they have more effect on us than most individuals. We can't understand them completely, but we can have some understanding and some prediction of what they do. And I think that's also true for humans: you know, we don't really understand each other, we don't really understand ourselves, and yet we muddle through. And psychologists, you know, can understand something like the types of common flaws in reasoning that people have, and we can try to make sense of it with that. So I think that will continue, and I think the biggest change will just be in the pace. You know, governments now have big effects on us, but they make changes on the order of months and years, not on the order of milliseconds. And I think that's what I'm most worried about. And so, when you think of constitutions and rules and what you're going to do, I think mostly what can have a sort of governing effect is slowing the pace, and I'm surprised we aren't better at that now. Right? So you see things like these flash crashes in the stock market. We could have stopped those. Rather than having these trades where Wall Street firms try to get their computers a few meters closer so that they can save a few nanoseconds on their trades, you know, we could have said, well, we're not going to have any trade faster than the minute level, or maybe the hour level. And we're going to put a tax on every trade, so that there's less of an advantage to high-speed trading.
So I think those types of rules, which just slow everything down, would be one of the things that I would concentrate on.
Just to follow up a bit. One of the things we take a lot of inspiration from is that the US Constitution was designed two industrial revolutions ago, depending on how you count those. And despite the degree of complexity of all the corporations and lobby groups and all sorts of organizations, the superhuman dangers, that structure has succeeded at continuing to provide a cooperative framework among superhuman adversaries that are incredibly more complex than anything the founding fathers could have thought to anticipate. And nevertheless, they set up a structure that has continued to serve to some degree. But when they were doing that, they were looking back at the history of, not computer science, sorry, political science; they were very, very well informed on that. So they had a lot of patterns to know to worry about, and psychology to project somewhat with regard to the nature of human institutions. I think we're facing a deeper incomprehensibility barrier with regard to all the coexisting superhuman cognitions that are going to be the descendants of our current engineering work. So
So is there
once these things are much less constituted from humans interacting with each other to compose the superintelligences, what can you say about the bounds on the difference in character of the resulting intelligences, compared to current superhuman human organizations? Yeah.
I think it's still a science. And, you know, I was struck by the observation that computer science has become a natural science.
By that he meant it used to be a mathematical science: we could prove our programs correct. For our non-trivial programs we never actually proved them, but we sort of knew which direction we would have to go if we wanted to prove them, and we proved some of their properties. For decades we taught software engineering like that. Now it's not like that anymore. You no longer write a program and think about proving it correct. Rather, it's like being a biologist with a field manual: you download some program, the field manual says its behavior is such and such, and then you observe its behavior and say, oh, the manual didn't get it quite right. Then you update your hypotheses about how this thing actually works, you try it again, and you never bother to prove it correct or fully understand it; you just make guesses about its operation. I think we're stuck at that level: when software is this complex, we're never going to be able to get back down to a proof. Rather, we're going to be like naturalists, observing what's going on and forming hypotheses about where it will lead in the future. So I think we should get better at doing those kinds of observations and making those kinds of theories. You make a good point about the founders of the US Constitution; they did a pretty good job. It lasted for 200 years through a lot of changes, and it mostly did okay. We see things like, well, maybe the electoral college is becoming more unfair, and so on. There's this famous story about Kurt Gödel: when he was studying for his citizenship test, he said, hmm, I've discovered a contradiction that could turn the US into a dictatorship. And Einstein told him, whatever you say, don't tell that to the judge.
And so that loophole was lost, whatever it was, and we don't know whether it was a serious one or not. So it's hard to get these things right. I think it's also hard to balance how much change you want to allow. I was just talking about prohibiting too much change too fast, but you also want to allow some kind of change. So what can you allow, in terms of how fast these things can change, and what sort of guidelines do you want to put in place? It's this idea of "tie me to the mast so I won't be tempted to dive in with the sirens." We need these capabilities to tie us down, to stop us from doing things that seem great in the moment but that we know are bad in the long term.
Any follow-ups?
Okay. And I think Peter had a question that's relevant to this one, so maybe I'll jump in with that.
That naturalist perspective you're describing seems both true, in some deep sense, but also rather pessimistic. Although we can't get robust proofs of the properties of future systems, or even of the present systems we're working with, it does seem as though it should be possible to build particular systems that have very well-studied properties. Even if those properties aren't robustly proven, you can say: well, on the test data set they did this, and when we threw adversarial examples at the system, it did that. An example might be: if you want a system that thinks about the future, you could say you want question answering over its future plans, and you want those answers to be truthful, something like that. So you can imagine building systems like that, and then having architectures where we compose systems that each have very well-studied properties. I'm wondering whether there are architectures like this for future cognition that we should be intentionally laying groundwork for at the moment, and whether you have any thoughts on what those might be.
Oh, absolutely, yeah. So, to me, I like the term trustworthiness: I think we should try to build trustworthy systems. Other people — DARPA, for instance, has this initiative on explainable AI — and I don't like the term explainable, because I think it's not quite enough. Anybody can come up with explanations. Every day the stock analysts will give you an explanation for why the stock market went up, but if it had gone down, they would have an explanation for that too. So you sort of feel like the explanations aren't that powerful; they're kind of post hoc. And I think a lot of our own reasoning is like that: we use our gut to come up with the answer, and then we use our head to come up with an explanation that justifies the answer we already had, but that may not actually correspond to the reasoning. So trustworthiness ties into the types of things you were talking about: you want to be able to have a conversation with a system, to pose it scenarios and ask what would happen in such-and-such a case, and why, and what were you thinking when you did that. And you do want some guarantees that it's not lying to you. That's a little bit tricky, because the real answer is: well, I made this decision because I did these matrix multiplications, and then I compared this summation to that summation, and it was larger. That's not a very satisfying explanation. So any explanation that is satisfying is also, in some sense, lying, in that it's not the whole truth. But you want it to be the truth, if not the whole truth. You want some way to verify that what it's telling you is something you can trust, and then you have these conversations back and forth. It's still not a proof.
And there will be situations that nobody had thought of, where the system will misbehave because it's outside the boundaries of what you expected. We just want to try to minimize those: by building systems that are more robust, and by building better testing tools to limit the untested situations as much as we can.
Yeah, I have a follow-up on this. It's interesting: your book is sometimes called "the intelligent agent book," because it really takes the premise that AI is about agents taking the best possible action in a situation, and that definition of an agent holds both for humans and for AIs, right? Gillian Hadfield, who we had on last time, was making some comparisons between human principal-agent alignment and AI agent alignment, asking what we can actually learn from the one for the other. So I'm wondering whether you see more parallels from which we can learn from human alignment. She was mostly talking in terms of contracting, and how, in a normative or social context, we fill in the things that are left unsaid and incomplete in contracts. Do you have a hunch about how best to go about this for artificial agents?
Yeah, so, getting back to definitions of AI: we surveyed past definitions, and we came up with this two-by-two matrix. Past definitions either focus on imitating humans or on optimizing results — that's one dimension. The other dimension is: are you looking at the reasoning processes, or at the decisions that come out of those reasoning processes? We came down in the quadrant of optimizing the decisions, but other people are in other quadrants, and that all makes sense, I think. So I like that way of looking at it. And it's interesting that our societal and legal issues cut across the different ways of looking at this. Most of our laws are focused on outcomes, but some of them are focused on intent. So murder is worse than attempted murder — whereas if you were focusing only on the intent of the murderer, the two should be the same: they're both equally bad in terms of the decision that was made. In most cases we only punish you for what actually happened, not for your intent; but we do punish you for drunk driving, even if you drove perfectly. So society as a whole has different paradigms for looking at these types of issues, and we combine them together in funny ways.
Yeah, I think in one of his recent interviews, Stuart said something like: one of our challenges for the future is to describe to our machines, and to our high-tech products, what it is that we really want. In AI there's a common goal of maximizing expected utility; we've spent decades on the "expected" and "maximizing" parts, but very little on the utility part, which is taken as given. In your book you take a somewhat more questioning approach to the concept of utility, especially as aggregated over multiple agents. So I'd be really curious — you were just pointing out that as a society we combine things, perhaps not entirely consistently, but in funny ways — can there be something that the public as a whole really wants? What would it mean to figure that out? And how could we possibly communicate it to agents that may be very different?
Yeah. So, there are certainly lots of criticisms of the maximizing-utility approach, and some of these Kantian approaches try to get at that. I think a lot of the criticisms ignore externalities, and if you take those into account, you get a better case for maximizing utility. One of the famous examples is: what if it's determined that I have a certain blood type, such that my organs could save five people from dying — therefore I should be killed and my organs harvested and given to those five people, because saving five is better than saving one. And the argument goes: if you find that reprehensible, then you should not believe in maximizing utility. My response is: well, one difference between the two outcomes is whether I get cut up or not, and yeah, that's kind of important to me. But the bigger difference is: does everybody have to live in a society where they're constantly under the threat of being cut up? Once you take that into account in the utility, then it makes sense to say: I can maximize expected utility by not having everybody live under that threat, and therefore it's okay, under a utility-maximization approach, not to cut people up like that. I think a lot of these paradoxes are like that — you could be taking a broader view.
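Norvig's point about externalities can be put in toy numbers. In the sketch below, every value is invented purely for illustration (he gives no figures); the only claim is structural: a narrow utility calculation favors the harvest, but adding a small per-person disutility, spread over a whole society that must live under the threat, flips the sign.

```python
# Toy expected-utility comparison for the organ-harvest example.
# All constants are made up for illustration only.

LIFE = 100.0          # utility of one life saved
FEAR = 0.002          # per-person disutility of living under threat of harvest
POPULATION = 1_000_000


def narrow_utility(policy_allows_harvest: bool) -> float:
    """Look only at the immediate case: 5 recipients saved vs. 1 donor killed."""
    return 5 * LIFE - 1 * LIFE if policy_allows_harvest else 0.0


def broad_utility(policy_allows_harvest: bool) -> float:
    """Add the externality: everyone in society lives under the threat."""
    u = narrow_utility(policy_allows_harvest)
    if policy_allows_harvest:
        u -= FEAR * POPULATION
    return u
```

With these numbers, `narrow_utility` says allowing the harvest is better (+400 vs. 0), while `broad_utility` says it is worse (-1600 vs. 0): the same maximization rule, applied with the externality included, forbids the practice.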
That makes sense. On the one hand, one could say this is rule utilitarianism: which rule would we want to live under such that this holds? And it's sometimes really difficult to say how far rule utilitarianism goes, and how it relates to the types of rules that evolved from us playing many, many iterations with each other, evolutionarily. But yeah, there's definitely more to dig into there. And, watching the time, since we'll have more time to chat tonight in a more structured manner: we had a question from Robbie here on economics, with a follow-up from Rosie. Robbie, if you want to go
for it. Yeah, my question was basically: in the long run, economically, do you see AI as a complement to human labor, or as a substitute for it?
Yeah — again, it was a little bit hard to hear you, but I definitely see AI as a complement rather than a substitute. That's one of the things I think was wrong with the expert-systems approach of the '80s and '90s: trying to say we're going to replace a person. Rather, I think AI should be a tool that helps people get their jobs done. Sometimes that tool can operate completely autonomously, all the time or most of the time, but we should think of building systems where the humans are in charge and the AI is helping them achieve their goals.
Okay, Rosie had a follow-up that makes this a bit more concrete, if you'd like to go. — Yeah, I was specifically wondering, Peter, whether you've had a chance to look into some of the recent advances in automating aspects of writing code — GitHub Copilot and the OpenAI Codex API — and generally what your thoughts are on those sorts of advances, both for the labor market and, more broadly, for what the implications are for technological development.
Yeah. I think that's really interesting. I've been following it, though I haven't played with it that much. I was interested in this as a research area about five years ago, and maybe that was just a little too early. I guess I'm a little surprised at the way these systems have come out — at just how little knowledge they're incorporating. They're really just looking at strings of tokens, and they're able to be pretty successful using that. When I was thinking about it years ago, I was saying: well, unlike with human language, we actually know the grammar of all these programming languages, so shouldn't we take that into account? And these systems are only doing that in a latent way. Which is, you know — maybe it's just really cool that five years ago it definitely didn't work, and now it does, maybe just because you have enough code. People are looking at these issues, and there certainly are problems: since it doesn't understand what it's doing, it's going to make some errors. People are also worried about copyright — who owns the generated code, and so on. So I think it's really interesting, I think we could do a lot more, and I think we'll see a change pretty rapidly in what someone's day-to-day life as a programmer looks like. We've already seen that. I notice it in working with my younger colleagues — I feel like a dinosaur sometimes. We'll be trying to do something, and here's this new package that helps, and I'll sit down and start reading documentation, and three hours later I feel like I have a pretty good understanding and I'm ready to get started. And my younger colleagues will come back and say: I'm done. I implemented it, it works, let's move on to the next thing.
And I'll say: but I have all these questions — how does this work? how does that work? how does this other thing work? And they'll say: I have no idea, but I got the answer, so I don't care. Sometimes my approach — maybe you really should understand things before you say you're finished — is right, but sometimes they're right, and it was a waste of my time to try to understand something when I could have just gotten the answer, been done with it, and moved on to something else. I have to train myself to see that these are two different modes of operating, and to figure out when it's right to do one or the other. It's been hard for me to give up that level of control.
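Norvig's remark that these systems are "really just looking at strings of tokens" can be made concrete with a deliberately tiny bigram model over code tokens. This is nothing like the transformer models behind Codex or Copilot — it is only a sketch of the knowledge-free spirit he describes: no grammar, no semantics, just counts of which token follows which.

```python
import random
from collections import defaultdict


def train_bigrams(code: str):
    """Count, for each token, which tokens follow it in the corpus."""
    tokens = code.split()
    follows = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)
    return follows


def complete(follows, prompt: str, n: int = 5, seed: int = 0) -> str:
    """Extend the prompt n tokens by sampling successors of the last token.
    No parsing, no type checking: purely string-of-tokens prediction."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)


# A one-line "training corpus" of pre-tokenized Python.
corpus = "for i in range ( 10 ) : print ( i )"
model = train_bigrams(corpus)
```

Calling `complete(model, "for", n=3)` continues the prompt with tokens it has seen follow each other, e.g. starting `for i in ...`; with a richer corpus the completions look surprisingly code-like despite the model knowing nothing about the language.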
Well, I think you wrote a fantastic essay on this — "Teach Yourself Programming in Ten Years" — arguing for this more understanding-oriented approach. In the interest of time, with one minute left: we had one more attendee, who I think was joining from China, with a final question, and then we'll get into the deeper discussions tonight. Please go ahead.
Thank you for your talk. I have two questions. One: how can we more effectively exchange ideas and results among the many disciplines — philosophy, logic, mathematics, economics, and so on — to shape a better version of the AI field? And two: is aligned AI possible, from your perspective?
Yeah, I think that's a great question. We've certainly seen examples of people in AI rediscovering things that were already known. Operations research had a lot of these approaches already figured out, but nobody in AI had read any of that, and people started publishing them, until finally somebody said: hey, this work has already been done, let's use what somebody else has. So I think we need to do a better job of that. Maybe we can have some automated tools — there's been some work on saying: this literature intersects with that literature, here's something else you should read. So I hope we can do that. I also wanted to jump in and say that in the comments, John Chisholm made a point about common law that I think is really important. Part of it is: it's hard to be a programmer, it's hard to write things down formally, and it's hard to write down laws formally and get them right. Maybe it's easier to write down prototypes — not "here are the exact limits on what's legal and what's not," but "here's one example of something that's good, and here's another example of something that's bad." That's what we do in machine learning, as opposed to regular programming. And maybe the law, and our ways of understanding and dealing with technology, should be driven more by these prototype examples than by trying to write down a rule and get it exactly right. So I agree with that.
That gets back to the point on constitutions made earlier, and I think exploring legal contracts together with AI experts will definitely be an interesting field over the next few years. Okay, we're now a minute overtime, and I want to be mindful of your time, especially because you were gracious enough to grant us more of it later for an in-person after-show meeting. Thank you so much to everyone who joined virtually; we tried to get to as many of your questions as possible. Tonight we have a more focused discussion: a fireside chat with Peter, and a fireside chat with Mark Miller and Dean Tribble from Agoric. I look forward to seeing a few of you there. For those of you who filled out the Typeform, I'll get back to you; I can't wait to see you later. Thank you so much for taking the time to join us virtually — it was very much appreciated. We walked through a bunch of different areas, and we're hoping to focus in on a few of those tonight. Okay, thank you, everyone. I'll see many of you tonight, or at the next one virtually. Bye-bye.