Will AI Diversify Human Thinking or Replace It? with Chris Ategeka (UCOT), Timnit Gebru (Google AI) and Ken Goldberg (UC Berkeley) | Disrupt SF (Day 3)
9:24PM Sep 10, 2018
We have one more fascinating panel, and I'm going to do this one jauntily: will AI diversify human thinking or replace it? To discuss this, we're joined by Chris Ategeka from UCOT, Timnit Gebru from Google AI and Ken Goldberg from UC Berkeley. They're going to be in conversation with Devin Coldewey from TechCrunch, the one and only. I think we just need one more chair. Do we need one more chair? Let's get one more chair up.
Did everybody go to the party last night, guys? Did you go? Did you enjoy it? So fun. I was too tired to talk.
One of the issues behind this conversation is that we've actually been discussing killer robots and AI driving cars and so on over the last couple of days. But the particular angle we're going to take here is the question of how to be responsible stewards of advanced technology and tackle these problems before they change, or possibly take, lives. So without further ado, let's get our panel on stage, the last one of the day. Come on up, guys.
Well, thank you for joining us. Normally I'd do a little intro here, but I'd like to shake things up just a little bit by asking a simple question: is the singularity bullshit?
I think we may have some surprising answers.
Feel free to jump on this grenade.

Well, okay, I'll say one thing. I think it's important to separate the Singularity from Singularity University. I like Singularity University; I think it's a great organization doing a lot of interesting things. But I do think the Singularity is overrated. It's something that sounds great as science fiction, but it's causing a huge amount of anxiety out there and freaking people out, and it's counterproductive.
I also think that if you feel overwhelmed by the problems that are more imminent, then maybe you retreat into thinking about things that might potentially happen thousands of years from now.

Right, it's not only causing anxiety; I think it's distracting us from some of the more serious and more imminent things we should be thinking about.

We should actually be thinking about it, though, about the imminent singularity. Why do we focus so much on the ascendance of humanity when so few of us actually even have access to the resources of today? There are billions of people around the world who have no internet access or clean water, and meanwhile we have people arguing about whether we will upload our minds to the cloud in 10, 20, 50 or 100 years. Why does that arrest our attention? Why are we even talking about it, rather than the problems that are facing billions right now?
Yeah, I can go first. I think most of our world today is driven by the bottom line. If someone can make a quick buck, they just go for that. The whole mindset of keep creating, go fast, and fix it when it breaks is part of what's getting us into this position in the first place.
Well, I think one thing is also that it's a very old fear that goes back to the ancient Greeks. There's been this fear, from Pygmalion and Daedalus onward, right? Whenever you mess with technology, something goes wrong and somebody gets hurt. We've seen that up through the Golem, Frankenstein, Ex Machina, the Terminator. It's very old and deeply rooted; we just have this fear in our DNA, and I think this is just the latest form of it. But we can't shake it, we're fascinated by it. And it's really something that psychologists and humanists should, you know, study and understand.
Yeah, so in my research we wrote a paper called Datasheets for Datasets, and one of the things we did is look at other industries that are more mature, like automobiles. What was the discourse like when cars first came on the road, when we had no regulation, no driver's licenses, things like that? There was a court opinion on whether or not the automobile was inherently evil. I think that's very interesting, because that's the kind of conversation
we're having right now. That's true. There's sort of a techno-utopian view, and then there's a sort of apocalyptic view, but you three have all started to push back in your own ways with a realistic view, and to say, well, look, it's good, it's bad, it's all of these things, here's how we can make it better. When and how did you decide to take that stance, push back, and take this middle road?
I mean, innovation in general has been polarized: the idea is that some people are for it and some people are against it. But you and I, and the members of the audience, know that any technology is just a tool, and it's the values of the people who use the technology that turn it into a good or a bad thing.

So for me personally, there's this whole hysteria of a lot of people who are paralyzed: they see the problem, but you don't see the action side. So I decided to start a company to actually create solutions for these unintended consequences.

"Unintended consequences of technology," which has become a go-to phrase of mine now. It's just a great way of encapsulating the problem, or the approach that you have.

Absolutely, yeah.

And when did you decide to point out, well, you know what, there's stuff we can do, we can make this better?
Yes, so I was studying computer vision. I was always an activist, but I wanted to separate those two things: my activist life was very separate from my computer vision life. I always wanted to make sure that I was primarily known for my technical work, and that I wasn't put in a box. But I read this 2016 ProPublica article about crime recidivism rates and how machine learning algorithms are being used all over the country to determine someone's likelihood of committing a crime again, and how judges would use that as one of the inputs to determine how many years someone should go to prison for. That really scared me, because I didn't know how widespread the use of these algorithms was, and being a Black woman, I could see exactly the ways in which it could perpetuate bias. So that's when I started thinking about working on that side of the problem as well. And then, again, I would go to these conferences in machine learning or computer vision, and I wouldn't see any Africans and I wouldn't see any women. Because of that I wanted to start Black in AI, and kind of
you know, make sure that, A, there's a diverse group of people creating the technology, and B, the research questions that we ask are actually things that many people in the world care about. Of course everybody cares about the singularity, but are we actually creating AI that helps farmers in developing countries? Are we addressing their concerns? So that's how I got into it.
Well, for me, I was feeling anxiety about the singularity talk, I think around the time that Kurzweil's book came out, and my colleagues agreed that it was really exaggerated. But I was reading Deleuze and Guattari, and they had this term, multiplicity. This is from decades ago, but it struck me that it would be an interesting contrast with singularity, because it's really about the idea of multiple viewpoints being simultaneously legitimate. I think that's something that's very interesting and also already happening. It's not science fiction, it's not something we have to posit in the future. And it's actually much more inclusive than these very exclusionary visions.
Can I add one example there? You know, people who have means will always get what they want, and poor people will always get what algorithms think they want. When you're on a budget and you purchase things in your daily activities, the algorithms are learning and believing that those are the kinds of things you like to do. You could be an addict, a video game addict, or a shopaholic, but the machine learns that you love doing these things and keeps recommending the same. And if you have means and you're buying good food and great things, it keeps recommending the same as well. So people of lower income status don't get what they aspire to get; they get what they can afford. That creates a bias that no one is introducing in a malicious way, but it's happening.

Yeah, exactly.
This kind of bias in AI is something your research has focused on. Is it something new, qualitatively new? Or is it just a new face on an old problem of income disparity and racism? Is this fundamentally new?

I don't think it's a new problem; it's the same problem in a different form. To add to what Chris was just saying,
Cathy O'Neil, in Weapons of Math Destruction, and Virginia Eubanks, in Automating Inequality, wrote about exactly this phenomenon: people in lower socioeconomic statuses are under more surveillance and go through algorithms more. If they apply for a lower-status job, they're more likely to be put through automated tools; if you're looking for senior leadership positions, you're less likely to be screened by automated tools. So I think this is just the same problem in a different form. It's just that, like when cars were first on the road, we don't have any regulation; everybody can drive anything anywhere. We're at a stage right now where these algorithms are being used in all kinds of places, and we're not even checking whether they're breaking existing laws, like the Equal Employment Opportunity Act and things like that. And the other danger, I think, is that while humans can have many different biases, if you have just one algorithm that's being used by everybody, you're creating a single point of failure. So this idea of multiplicity that you're talking about would be a good way to start counteracting that.
And speaking of something that's old: in machine learning, the idea of having multiple algorithms working together, ensemble learning, is very well known. Random forests, a technique of using many different classifiers and combining their results, have been shown over and over again to work better than a single classifier. That's been known for 20 years, and it's very relevant today, both to thinking about machine intelligence and about human intelligence. Having people with different viewpoints is extremely valuable, and I think that's starting to be recognized in business. It's starting to disrupt the conventional wisdom, which is: okay, we have to have diversity on this board just for PR reasons or something. I think that's exactly wrong. It's not because of PR; it's because it will actually give you better decisions if you get people with cognitively diverse viewpoints. I want to
give you another simple, benign example: soap dispensers in the bathroom. If I go into the bathroom and do like this, I'll get soap. If I do like that, I won't, because those machines have been calibrated with data that doesn't recognize this, but recognizes that. It can dispense soap onto my feet, but not onto the top of my hands. So those are examples of how old this problem is.

Yeah, it's been around a long time. We've had a couple of really interesting panels along those lines. We had the head of Kairos up here talking about how he's going to make synthetic datasets to make their datasets more inclusive without violating privacy, and this has come up a lot; we had one panel specifically on AI and bias earlier today.
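The ensemble point raised above, that combining many diverse, individually weak classifiers beats any single one as long as their errors aren't perfectly correlated, can be illustrated with a small pure-Python simulation. This is a toy sketch, not a real random forest: each "classifier" is just a coin that's right 70% of the time, with errors independent of the others.

```python
import random

def noisy_classifier(true_label, accuracy, rng):
    # A weak classifier: returns the true label with probability `accuracy`.
    return true_label if rng.random() < accuracy else 1 - true_label

def majority_vote(votes):
    # Binary majority vote over a list of 0/1 predictions.
    return 1 if sum(votes) > len(votes) / 2 else 0

rng = random.Random(42)
n_samples, n_classifiers = 10_000, 11
single_correct = ensemble_correct = 0

for _ in range(n_samples):
    label = rng.randint(0, 1)
    votes = [noisy_classifier(label, 0.7, rng) for _ in range(n_classifiers)]
    single_correct += (votes[0] == label)          # one classifier alone
    ensemble_correct += (majority_vote(votes) == label)  # the ensemble

print(f"single:   {single_correct / n_samples:.3f}")
print(f"ensemble: {ensemble_correct / n_samples:.3f}")
```

With 11 independent 70%-accurate voters, the majority vote lands above 90%. The independence assumption is the catch, and it's also the panel's argument: an ensemble of classifiers (or people) all trained on the same biased data has correlated errors, and the gain evaporates.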
But I feel like a lot of times when somebody points out a problem with new technology, like, in this case, machine learning, the response is: oh, this specific algorithm is the problem, so the solution is to make a better algorithm. How do we know when the solution to bad tech is more tech? We're just throwing tech at tech. Is it possible that that's not the solution? How do we make that determination?
So I recommend, for those of you who like to read papers, a paper called The Moral Character of Cryptographic Work. Have you read it? I just recently read it.

I have not yet, but it looks really—

I feel like I'm repeating myself, okay. But I love this paper. It's about 30 pages, and the author is basically discussing how people should think about the context in which technology exists. He talks about techno-optimism versus pessimism, and how there are many ways that we remove ourselves from the context. So to answer your question, we have to work in an interdisciplinary manner, right, with social scientists, with lawyers, with doctors, with people who understand what we're trying to do as researchers or practitioners. We need to know what the characteristics of our technology and our algorithms are, have really good documentation, intended use cases, etc., and work with them to see if a use case is actually appropriate for a particular technology, or if it's not ready yet, or if it just should never be used in a particular context.
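The documentation idea here comes from the Datasheets for Datasets paper mentioned earlier, which proposes answering structured questions about a dataset's motivation, composition, collection process, and recommended uses. A very loose sketch of that documentation as a data structure follows; the field names and all example values are illustrative stand-ins, not the paper's actual schema or any real dataset.

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    """Lightweight stand-in for the documentation that 'Datasheets for
    Datasets' recommends shipping with a dataset (fields are illustrative)."""
    motivation: str                  # why was the dataset created?
    composition: str                 # what is in it, and who is represented?
    collection_process: str          # how was the data gathered and labeled?
    intended_uses: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)

    def appropriate_for(self, use_case):
        # A downstream team can at least check a proposed use against
        # the documented intent before deploying.
        return (use_case in self.intended_uses
                and use_case not in self.out_of_scope_uses)

sheet = Datasheet(
    motivation="Audit face-analysis error rates across skin tones (hypothetical)",
    composition="Portraits balanced across skin type and gender",
    collection_process="Publicly available images, hand-labeled by two annotators",
    intended_uses=["auditing error disparities"],
    out_of_scope_uses=["identification", "surveillance"],
)

print(sheet.appropriate_for("auditing error disparities"))  # True
print(sheet.appropriate_for("surveillance"))                # False
```

The point isn't the class itself but the workflow it enables: an explicit, checkable statement of intended use that interdisciplinary reviewers can argue with.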
Oh, sorry. Go ahead.

I just wanted to add on to what she said: I think the problem goes deeper than that. The bigger issue we need to look at is the capitalistic framework, the capitalism that drives technology in the first place. Everything in the tech world today is driven by money, power, politics, right? And as long as those things exist and there's money to be made, it doesn't really matter to the forces benefiting from these technologies. So it takes a few individuals making conscious decisions and being intentional about the kind of technology they're going to create.
And are you trying to collect those individuals with your organization, your community? You have an event coming up, you're doing an incubator?
Yes. The incubator is looking at companies that are more focused on the shared economy, shared prosperity, thinking about "us" as opposed to "I." This unicorn mindset is about a zero-sum game: it's all about me, crush the competition. But we're looking past that, at the humanity element, and no one can do this alone. So I've decided to create a community of individuals who are looking at the unintended consequences of technology from different angles, different perspectives, and we have an amazing conference coming up at the end of October. The whole goal is to really look at the dystopian side of tech. You can look around and you will not find one conference that's literally focused on that major issue, and yet it's affecting all of us.
Well, I think this point Chris is making, what could possibly go wrong, is important. Especially in the Bay Area, we're very optimistic, right? We like to try and explore things. Here's one example we were just talking about recently: when the spreadsheet came along, it was a really great piece of technology. You could play around with scenarios, change numbers and see what happens, and that's still how it's used for the most part today. But imagine a version of it that would constantly probe each of those cells and try to find combinations where things would go very wrong. It might drive you crazy, some sort of paperclip gone wild, but it would essentially be a devil's advocate: as you go through your scenario, it would pop up and say, but what if this happened? Then things could go really wrong. And in the survey that we just completed, we found that many times executives are very open to that; they want that kind of pushback. Sometimes their employees are a little hesitant to give it, for reasons like we're seeing in the White House right now, but great leaders like to get pushback.
So Joy and I wrote this paper about how face recognition has error disparities among different skin tones. And now it's good: there are a lot of people talking about it, there are policy changes, different companies are responding. But it can be hard, if you're inside a particular company, to figure out what to do, and I think the external pressure helps. You have people internally working to change things, and then that external pressure, that publicity, gives the people working internally more visibility. I can see how, when you're employed by a company, it can be a little difficult, which is why I think the industry as a whole should move towards more transparency, kind of like security: when people had security breaches, they wanted to hide them, but then it became the norm to make a breach known as soon as it happens. If the industry as a whole moves towards that, it will make it easier for the people working in individual companies to bring this pushback. Sorry—
Go ahead.

No, no. In the past, innovation was mostly driven by policy: you go to the House, you raise an issue, you come up with a budget, people vote on it, and then, okay, let's go to the moon, right? But today most innovation is driven by the giant companies, who have most of the data. If you go to IKEA to buy a simple chair or a simple desk, there's a manual that comes with it. But when it comes to developing AI, everyone is on their own. And that is something that needs to be looked at. And she has worked a
lot in that space. That's your paper, Datasheets for Datasets, right?
Yeah. That kind of thing, I think, like Ken said, people are very open to, because people want guidelines. The worst thing you can do to an engineer is just go and say, look, you've got to make this system fair, you can't ship it otherwise. What does that mean? So have a process, some sort of guideline; that kind of thing, I found, people are open to. It's just when something looks like a PR disaster that it might be a little more difficult, which is why I think if the industry as a whole moves towards notifying everybody when they find this kind of bias, it will be a little easier, I think.
I'll just add on to that: many engineers, in my experience, have a little bit of a tendency toward Asperger's, which means that, you know—

You're not the first one to say that just today, by the way.

Really? Okay. I'm guilty of it, too. It's an occupational hazard. It just seems to be the case that a lot of times I'll have students describing something, standing in front of the board, and they don't realize that people can't see what they see, because they lack a little bit of this empathy instinct. So I think part of it is that when engineers are building these tools and software and products, they're not always capable of stepping back and seeing them from these different perspectives. So exactly what Timnit and Chris are talking about helps: having events like these discussions, but also tools that can help remind them. As you said, they're very open to that, but it just doesn't occur to them naturally.
Oh, sorry, I was going to say: you mentioned when we talked earlier the Google Maven revolt as an example, right? A lot of times in a company you think, this is not my pay grade, I do this, they decide that. But then somehow there's this kind of internal uprising going on. So that's a good example of how people can take ownership.
Mm-hmm. And in this study, you surveyed a lot of tech executives all over the world, in various industries. From what I saw, they were very bullish: oh, I want to integrate AI, I'm happy to do it in this way and this way. But it's one thing to say it and another thing to do it. How do we hold them accountable to those statements? It's very easy to say, yeah, I'd love to integrate something like what you just talked about, finding those worst-case scenarios, finding those biases and making them public. But are they actually going to do it? Who's taking those first steps? Because it seems like the employees are having to ring the alarm bell when the companies refuse to.
Well, you know, what's driving a lot of it is self-interest, because companies are going to be driven by self-interest, right? But what was very surprising about this study with Tata Communications, a very forward-thinking global company that organized it, is that managers are already starting to realize there's a problem with employee morale right now. When their people started hearing about AI, they got worried about their jobs, and they're thinking, I've got to leave, I've got to find a new job. So managers are worried about retention. We were surprised, but they're increasingly sensitive to this already, and they're countering it by thinking about how to make their employees more empowered, and by talking about AI in a very positive, inclusive way: it's not going to disrupt them out of their jobs, but actually disrupt them in a positive way and make them better at what they do.
So to take a concrete example of where AI and automation will be disrupting jobs, and by disrupting I mean that people will lose those jobs, they're not disrupted, they're out of a job: let's just say, as a hypothetical, that we know that by 2020, 100,000 jobs in, you know, whatever state, Illinois, are going to be gone because of automation. If we just know that, and I'm not saying it's true or anything like that, what are some actual concrete steps that we can take? Because we can all read about it. What can people do? What can companies do? What can the government do, if we know that that's happening? Because it may be happening, and once we find out, it's probably too late. So how can we prepare?
I would say: 50 years from now, 100 years from now, we may have a world that's very well optimized, and we're living happily ever after and everything's great. I hope so. Or we may not have any world left to optimize. There are two or three things that drive most innovation. There's the consumer: you vote with your money by buying the product, so you're kind of voting and saying it's great. And there's voting people into power. So the people who are getting affected by these things need more education, to know how they can spend their dollars and how they can vote the right people in, people who can put the right policy in place so that the mess doesn't happen.
So education, primarily. We only have about half a minute left, but because this is the last panel, we can go over with no consequences. They're very angry at me back here, but just for a minute, and then we have the Battlefield stuff upstairs, which I hope you'll all join us for. But to continue.
Yeah, just to add to what you said: I don't think it's a technical solution, really. It's more about integrating our technical abilities with the context, political and otherwise. What are we thinking about when we're talking about capitalism? Are we only thinking about where your next billion customers are going to come from? When we're thinking about what kind of AI we're building and who we're building it for, we should try to make sure that we build it for a large group of people, and think about problems that affect a large group of people, so that they can benefit in the ways that Ken was talking about. If you think about automation just in the US, take truck drivers, for example, which some people bring up: what are their demographics, where do they live? It's usually people from a certain socioeconomic status. So these are things we have to think about: how can they get retrained into a different career, or how can their kids go into a different career? These are things all of us have to think about. And there's actually
an amazing precedent, which I recently learned about, called the high school movement. A hundred years ago, around 1900, automation, which at that time meant steam engines, was coming to the farms. They started to have these combines, and people started realizing that there weren't going to be as many farm jobs as they'd thought. And then a small group of people, who by the way are somewhat unsung (I think somebody's going to have to figure this out and write a book or make a movie about them), came up with this idea, the high school movement. They said, we need to encourage everybody to go to high school and graduate. So they started developing curricula, organizing in multiple states, often farm states, and building high schools literally all over America. Interestingly, when they started, only 10% of Americans graduated from high school; that was just the norm. By the time they finished, by 1950, 80% of Americans graduated. It's one of the most amazing social movements of all time, and nobody's heard about it.

That's fabulous.

Yeah, so I'm thinking, could we do something analogous? In other words, maybe a small group of people can change the world, people who come up and say, hey, we have to start changing our education again with the future in mind. Can we maybe come up with a new movement that will really change the way we learn, one that will have a huge effect for the next
century.

Something really quick?

Really quick, ten seconds.

We definitely need to move away from this squeezing economy, where someone creates a company and all they want is to work people to the bone and squeeze out every last bit of profit, toward more of a shared economy, because everyone—

So from the squeezing economy to the shared economy. I hope you'll all join us in it. All right, well, thank you very much for joining us, and thank you for your thoughtful contributions.
Thank you. Thank you.