"What is Machine Learning and Why is it Important to Philosophy?" Why? Radio episode with Guest Emily Sullivan
9:20 PM, Sep 12 (UTC)
Speakers:
Jack Weinstein
Announcer
Emily Sullivan
Keywords:
computer
machine learning
explanation
algorithm
machine
models
learning
people
question
philosophy
understand
human
programmed
problem
philosophers
machine learning model
emily
philosophical discussions
life
result
DISCLAIMER: This transcript has been autogenerated and may contain errors; do not cite without verifying accuracy. To do so, click on the first word of the section you wish to cite and listen to the audio while reading the text. If you find errors, please email us at whyradioshow@und.edu. Please include the episode name and time stamp where the error is found. Thank you.
The original episode can be found here: https://wp.me/p8pYQY-jg8
Why? Philosophical Discussions About Everyday Life is produced by the Institute for Philosophy and Public Life, a division of the University of North Dakota's College of Arts and Sciences. Visit us online at whyradioshow.org.
Hi, I'm Jack Russell Weinstein, host of Why? Philosophical Discussions About Everyday Life. On today's episode, we will be exploring the philosophical issues around machine learning with Emily Sullivan.
I'm pretty confident that my dog Oscar and I understand each other. It's perfectly clear when he wants to play, eat, or sleep, when he's scared, and when he wants affection. His motivations are very much on the surface. Oscar also knows what he's supposed to do and what's forbidden. He may not always care; like all scavengers, he's an opportunist. But not caring and not knowing are two very different things. My relationship with Oscar has two advantages: millions of years of coevolution and body movements. Humans made dogs partners to hunt, farm, and socialize, and we did it while looking at each other. The position of an arm, the tilt of the head, the sticking out of a butt: these are all clear signs that illustrate both intention and motivation. We coexist because we are transparent to one another. The same cannot be said for human beings and computers. Machines of all kinds are fairly recent inventions, and none of them have faces. George Lucas's genius may be best illustrated by the fact that we develop affection for our R2-D2 and C-3PO even though they have no expressions or micro-movements. We project onto them without really understanding. We know their programmed goals and we're familiar with their patterns, so we extrapolate. But does Artoo believe in the Force? We don't know. And what's really going on when Threepio calculates the odds of a mission's success? What logical inferences is he making? Does he revise his calculations when a tactic with poor odds turns out to work? Oscar the dog is transparent, but the droids are opaque. All computers are, and herein lies the problem. Our newest and most intimate companions in the age of microchips are black boxes, uninterpretable mysteries we build to achieve results. For thousands of years, philosophers have struggled with the problem of other minds: the fact that, since we don't have direct access to other people's thoughts, we can never really be sure how, what, or even whether they think. People have made do and gone on with life anyway, because our advanced communication skills create bridges between our self-contained mental lives. But computers can't do this. This might have been okay if we were limiting our use to calculators and thermometers, but we're not. We're programming computers to learn. And when they do, we want to know not just what they've concluded, but how and why they got there. If they can't tell us, are they really learning at all? The ultimate test of whether someone knows something is whether they can communicate the why behind it. Plato taught us that. On today's episode, we're going to dive into machine learning, the process by which computers are able to adapt without following explicit instructions. I got that definition from the internet, and we'll ask our guest what it means shortly. But on its face, the phrase is ambiguous. Who is doing the learning? Is it the machine, or is it us? Is there a teacher in this scenario, or is it just a group project with someone who doesn't need to sleep? Let me put this another way: it's one thing to know that Bismarck is the capital of North Dakota. It's another thing to know who Bismarck was, why the people of North Dakota felt loyalty toward him, and to grasp the role of symbols in the human experience. A fact without context is just trivia. Doubt this? Just recall the time The Simpsons tried to answer the Jeopardy! question, "The capital of North Dakota is named after what German ruler?" Homer shouts out "Hitler!" and Marge, baffled by his answer, mutters, "Hitler, North Dakota?"
The amount of background knowledge required to get this joke is incalculable, including understanding the nuances of long-term marital life. Laughter is one of the most sophisticated things we do. In short, machine learning is plagued by the fact that coming to conclusions is not enough on its own, and it thereby carries with it numerous philosophical difficulties. Today's guest is trying to sort this all out. First, she articulates and explains the problems machine learning introduces, and then she tries to solve them. It's no easy task. Our job at the moment is to recognize just how essential these conundrums are. When we complain about algorithms impairing our Facebook feed, when we lament inaccurate weather reports, when we distrust the results of computer models, we are acknowledging the deep unknowns in the machine learning process. Contrary to how it appears, though, this is not a reason to despair. It's a cause for celebration. Few technologies have advanced as quickly as computers, and our philosophical burden comes from success, not from failure. Wouldn't it be more surprising if we did all this without encountering philosophy? Wouldn't it be odd if suddenly we had complete clarity in all of our understanding? With computers, we're trying to create a new kind of intelligence. If this doesn't involve the great questions, could we call it learning at all?
And now our guest. Emily Sullivan is Assistant Professor of Philosophy at Eindhoven University of Technology in the Netherlands. Her research explores the intersection between philosophy, data, and computer science. Emily, welcome to Why? Yeah, thanks for having me. If you'd like to participate, share your favorite moments from the show and tag us on Twitter, Instagram, and Facebook; our handle is @whyradioshow. You can always email us at askwhy@und.edu and listen to our previous episodes for free at whyradioshow.org. Alright, so Emily, let me begin with a really basic question. What's the difference between artificial intelligence and machine learning?
Yeah, I mean, this is a good question, but I think first it's useful to take a step back even further than that. So first, consider: what is an algorithm? A lot of people throw around the word algorithm in these discussions as well. An algorithm is just a series of steps to execute some task, so it doesn't even need to be computerized. Most people follow an algorithm every morning to make a cup of coffee, or in my case, a cup of tea: first you fill a pot with water, you boil the water, place a tea bag in the mug, etc. Computers and programs follow algorithms to execute various tasks. AI uses certain types of algorithms where the system does something that seems intelligent. Yes, a very vague definition, I know. One of the first programs I wrote was playing the computer in tic-tac-toe. I had to make an AI where the computer would follow various rules, like searching for an empty space and looking at what was in the space next to it, to determine what it would do. So that's just a really simple AI, and it's easy for us to write because we know what the computer should do to seem intelligent. A more sophisticated example of an AI is IBM's Deep Blue, which famously beat the world champion in chess in the late 90s. Basically, that AI just searched through all the possible moves available and chose the one that had the highest utility.
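To make that concrete, here is a minimal sketch of the kind of hand-written, rule-based game AI Emily describes. The board encoding, rule ordering, and function names are illustrative assumptions, not her actual program.

```python
# A rule-based tic-tac-toe player: every "intelligent" move is a rule a
# human wrote down in advance. The board is a list of 9 cells, each
# "X", "O", or None. All names here are hypothetical.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def choose_move(board, me="O", opponent="X"):
    empty = [i for i, cell in enumerate(board) if cell is None]

    def completes(owner, i):
        # Would playing at i complete a line already held by `owner`?
        return any(i in line and
                   all(board[j] == owner for j in line if j != i)
                   for line in WIN_LINES)

    for i in empty:              # Rule 1: win if you can.
        if completes(me, i):
            return i
    for i in empty:              # Rule 2: block the opponent's win.
        if completes(opponent, i):
            return i
    for i in (4, 0, 2, 6, 8):    # Rule 3: prefer center, then corners.
        if i in empty:
            return i
    return empty[0]              # Rule 4: take whatever is left.

board = ["X", None, None, None, "O", None, None, None, "X"]
print(choose_move(board))  # follows hand-written rules; nothing is learned
```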
The word algorithm is bandied about all the time now, especially when it comes to Facebook and things like that, and it's the new evil. It's the new spectre; it's the new thing that's denying us access to the truth in politics and to true friendship. I'm assuming all of that is overblown, but do algorithms have that kind of power? Is everything established in advance by how you create the algorithm?
Yeah, so basically the algorithm is just a series of steps. So in some ways, yes, it is predetermined, in the sense that the computer follows an algorithm. The interesting thing about machine learning is that the machine then learns the algorithm. So if you have some machine learning system working in the background on Facebook, or something like this, which is most likely the case, then the system is kind of learning an algorithm, perhaps in real time; or they might have already trained a machine learning system, and then the algorithm is preset, in some sense.
Can you change an algorithm? You know, every six months, when you learn something, can you adjust the parameters? I assume we'll eventually get to the possibility of the computer adjusting the algorithm itself. Or is this so hard-wired, I know that's the wrong word in this context, I guess software is so essential to the running of a computer, that once you create an algorithm, it can't be changed without redoing everything?
Yeah, you can definitely make changes, and how easily those changes are made depends on how complicated the overarching system is. So take the way Facebook works in recommending what you see on your newsfeed, or how it determines what you see in your newsfeed: they're changing that algorithm in small ways constantly, right? Trying to get better and better engagement from the people who are on the platform. And you do this too, right? If you want to make coffee in the morning, you might change the algorithm to be more efficient. So maybe you place the coffee pot on the stove at different stages of making breakfast, and you pick the order that's the most efficient. So you can definitely adapt these algorithms, for sure.
Okay, so before I interrupted you, we went from tic-tac-toe to Deep Blue and beating human beings at chess. What's the next step?
Well, yeah, so as I was saying, with tic-tac-toe it's pretty clear what to do to make the computer seem intelligent, to have that kind of AI. But sometimes things are just so complex that we have no idea how the computer should go about seeming intelligent. In those cases, we want the program to learn for us, to find what the best algorithm is to follow. And the example there is AlphaGo, right? Another game example. AlphaGo was able to beat the world champion at Go pretty recently, in 2017, and it was because it was trained using machine learning. Go is really visual and intuitive compared to chess, so we can't write down a series of rules, and the computer can't just look through the possible moves, because it's just so complicated. We needed the computer to find what the best series of steps is, the best algorithm it should follow, to play Go.
And for those of you who've never played Go: I've tried a couple of times, and I find it completely indecipherable. It's a board game that comes out of China, and there are black and white pebbles, and the goal is, I forget whether it's to get all the same color pebbles or the most of the same color pebbles, and you have to understand what the edges are, because when you do one flip, it does five flips, and all this kind of stuff. I have tried to figure it out, and I can't fathom it. So does that make it harder to teach, because it's not an intuitive game? Or is it really just reducing it to that same kind of "first you do this, then you do this" algorithmic approach?
Right. So yeah, my husband tried to teach me Go one time, so I'm with you there. But the thing about Go is that humans are really good at pattern recognition; they're really good at intuiting patterns and finding patterns in things. Computers aren't as good at doing that, and Go really relies on pattern recognition. Chess does to some extent, but it's also a solved game, right? But Go, I don't think it is. So before 2017, people were saying, oh, computers could never beat someone at Go, because Go is just so complicated, and so intuitive. But they were able to do it through machine learning, through the computer finding these patterns by looking at previous examples, and just finding what would work best, in a way that we don't really understand how it did it.
Okay, so in a second I'm going to go back to the original question, which is to ask for a clear definition of machine learning and how it fits in AI. But the other vision, other than Facebook and arguments about those algorithms, the other vision that we have right now in pop culture, is the Marvel character from the Avengers, Vision, who started out as a simple artificial intelligence computer, goes into human form, and then develops not only a life and perspective of his own, but the ability to love and the ability to make moral choices. Is the idea of a true artificially intelligent being really just an algorithm so complex that we don't understand how it works, but it works? Or does something have to liberate it from the algorithm, so that it's independent in a way that programming can't communicate?
Yeah, I mean, this is the big question of what they call a general artificial intelligence: having some computer or robot or machine that's able to do all the sorts of intelligent things that humans can do, including feeling emotions, or something like this. I mean, one way you can think about the brain is that the brain executes an algorithm by firing certain neurons in certain sequences, at certain times, in a certain type of pattern. So if that's it, if we can reduce mind to brain, then you could conceive of it as a kind of algorithm. But that gets into questions of philosophy of mind, which, yeah, isn't really my area.
Well, last season, I believe, we had Patricia Churchland on the show, and we had this very specific discussion. Philosophers distinguish between brain and mind, where brain is the physical material and the electrical impulses that the body creates, and mind is this abstract thing that comes out of it but is somehow qualitatively different. And when you hear Patricia and me talk about it, you can hear two worlds colliding: I'm trying to get her to say things in terms of mind, and she's answering things in terms of brain, right? But machine learning isn't this, right? Machine learning isn't the attempt to create a sentient being that we envisioned in science fiction; machine learning is something else. So what is it?
Yeah, so in most cases that's right: machine learning is doing something completely different. Machine learning is just trying to mimic seemingly intelligent behavior, in a broad sense, without being explicitly programmed to do it. In regular AI cases, you can explicitly write in what the computer can do, and it can seem intelligent; but with machine learning, it's not explicitly programmed. The machine kind of learns what to do. One example is with chatbots, right? On all these websites now, this little pop-up box appears, and you can talk to a chatbot if you need help. Oftentimes those chatbots are just pre-programmed by a person: if the person writes hello, they write hello back. But you could make the chatbot using machine learning, where you didn't explicitly say what the chatbot should say back to you when you said hello.
So how does that work? Do you give it, like, a universe of possible things to say: hi, howdy, you know, go away, I'm tired, right? And then the machine chooses among those? Or does machine learning involve creating its own language and sort of interpreting what the words are? How much preparation is there for something like that?
Right, yeah, I think this is the clear difference between a simple AI and machine learning. With a simple AI, you would just have it choose from, say, these five ways to say hello. But with machine learning, what you would do instead is give it a whole bunch of text of conversations, right? So that would have the hello part and the goodbye part, whole conversations. And then the computer would kind of figure out what to say at certain steps in a conversation once it was started.
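Here is a toy sketch of that contrast. The corpus, tokenization, and similarity measure are invented for illustration; a real learned chatbot would be far more sophisticated, but the shape is the same: replies come from data, not from hand-written rules.

```python
from collections import Counter
import math

# Hand-coded AI: a fixed lookup table a person wrote in advance.
scripted = {"hello": "Hello back!"}

# Machine learning flavor: a corpus of (utterance, reply) pairs to learn from.
corpus = [
    ("hello", "hi there, how can I help?"),
    ("hi", "hello, what do you need?"),
    ("my order is late", "sorry about that, let me check the status"),
    ("goodbye", "bye, have a good day"),
]

def vectorize(text):
    return Counter(text.lower().split())   # crude bag-of-words features

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def learned_reply(message):
    # Answer with the reply whose prompt best matches the incoming message:
    # a pattern picked up from conversations, not a written rule.
    vec = vectorize(message)
    best = max(corpus, key=lambda pair: cosine(vec, vectorize(pair[0])))
    return best[1]

print(scripted.get("hello"))                  # pre-programmed path
print(learned_reply("my order is so late"))   # data-driven path
```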
Okay, so this reminds me: a couple of years ago, there was a series of memes all over the internet, you know, "I plugged all of American sitcoms into the computer and it wrote this script," or "I had the computer watch all of the romantic comedies and it came up with this," and they were always absurd and funny and cliché and unsatisfying. That's what you're talking about, right? Basically, yeah. So, I mean, this is a stupid question, and I know it's hard to answer it simply, but how does that work? What does a computer have to look like to engage in learning?
Um, well, there are different types of models, different types of model architectures, that follow some kind of statistical ideas: the system tries a whole bunch of different instances and lands on the one that seems to work. One way of doing it is through what they call supervised learning, where you train it by labeling certain data. So if you want a machine learning model that can predict whether something is a dog, or a husky, or a wolf rather, then you give it a bunch of images of dogs and you say, this is a dog; then you give it a bunch of images of wolves and you say, this is a wolf. And then the computer just tries to minimize errors as much as possible, right? It tries to get everything right, trying a whole bunch of different iterations, combinations, different aspects of the image. And once it settles on a certain algorithm that gets most of them right, you have a machine learning model that, in theory, could then predict with high accuracy whether a new image is a dog or a wolf.
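That error-minimizing loop can be shown in a few lines. The following is a toy logistic-regression trainer; the two numeric "image features" stand in for real pixels and are invented purely for illustration.

```python
import math
import random

# (feature vector, label): 1 = wolf, 0 = dog. The two features might be,
# say, coat darkness and snout length: purely hypothetical stand-ins.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.9], 1),
        ([0.2, 0.3], 0), ([0.1, 0.2], 0), ([0.3, 0.1], 0)]

random.seed(0)
w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
b = 0.0
lr = 0.5  # how big a correction to make after each mistake

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))      # probability the image is a wolf

for step in range(2000):               # "a whole bunch of iterations"
    for x, y in data:
        err = predict(x) - y           # how wrong we were on this example
        w[0] -= lr * err * x[0]        # nudge the weights to shrink the error
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(predict([0.85, 0.9])))     # 1: classified as wolf
print(round(predict([0.15, 0.2])))     # 0: classified as dog
```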
So the machine is using a basic algorithm to test different variations of the algorithm, so that it's rewriting the algorithm itself. And then, when it's able to identify these things, it calls that algorithm, quote unquote, finished, and it's learned. Is that basically it? Yeah, yeah. And this is notoriously difficult, clearly, because we have CAPTCHAs on the internet, you know, click all of the buses in the picture, click all the trains, the lampposts, right? Yeah, exploitation at its finest. Right, exactly. So if this is so, why can't computers do these CAPTCHAs? Is it just that we're not there yet? Or is there something about this pattern recognition that is really beyond the capacities of the computer?
Right, so the computer can't learn from nothing. It needs some kind of input and then some kind of feedback. You've probably noticed, over the course of several years, how much more difficult those tasks have become, right? The pictures have become more and more grainy.
Yeah, I have a really hard time with it, and I yell at the computer. I think "Oh, come on" is probably my most frequent thing on the internet at this point.
Yeah, well, it's because they're crowdsourcing the labor from all of us, for free, to train these classifiers. It used to be that the images were simple, because they didn't have sophisticated computer vision available, so it was just a really cheap and easy way to get input data and feedback about whether or not an image was a car, which you could then train a classifier on. But as the classifiers get better and better, you need the humans to do the things that we're better at, which is to identify, from a really grainy picture, whether or not that's a crosswalk or the sidewalk. And then that gets fed back in as more input for the computers to learn from.
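A sketch of that harvesting loop, with everything in it (the tile names, the stub for the human's clicks, the retraining note) invented for illustration:

```python
import random

training_set = []   # grows every time someone solves a CAPTCHA

def ask_human(tiles):
    # Stand-in for the real UI where you click the crosswalk tiles.
    return [random.random() < 0.5 for _ in tiles]

def serve_captcha(tiles):
    answers = ask_human(tiles)
    for tile, is_crosswalk in zip(tiles, answers):
        training_set.append((tile, is_crosswalk))  # harvest the free label
    return answers

for _ in range(3):
    serve_captcha(["tile_1.png", "tile_2.png", "tile_3.png"])

# Periodically the harvested labels feed a new round of training, which is
# why the tiles keep getting grainier: humans are kept doing whatever the
# current classifier still finds hard.
print(len(training_set), "new labeled examples ready for retraining")
```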
Okay, so when we get back, we're going to make the transition. This was an introduction to machine learning, and it was, for lack of a more sophisticated way of saying it, fairly computer-sciency. But when we get back, we're going to dive into the philosophy. We're going to ask about what learning means, the role of conclusions and explanations, and all of this, and we're going to go into the central difficulties that mark your career as a researcher. Until then, you're listening to Emily Sullivan and Jack Russell Weinstein on Why? Philosophical Discussions About Everyday Life. We'll be back right after this.
The Institute for Philosophy and Public Life bridges the gap between academic philosophy and the general public. Its mission is to cultivate discussion between philosophy professionals and others who have an interest in the subject, regardless of experience or credentials. Visit us on the web at philosophyandpubliclife.org. The Institute for Philosophy and Public Life: because there is no ivory tower.
You're back with Why? Philosophical Discussions About Everyday Life. I'm your host, Jack Russell Weinstein, and we're talking with Emily Sullivan about machine learning and what that means. For context, I'm thinking about last week, when I was at a Green Day concert. I was vaccinated, I was wearing a mask, my whole family was; we were like the only ones wearing masks, and I was horrified, but that's another issue. Just before Green Day got on stage, they played Queen's Bohemian Rhapsody, and the whole stadium started singing. It was actually quite moving. And my 15-year-old daughter, who you've all heard me talk about a thousand times, is singing along, and she knows all of the words to Bohemian Rhapsody. And I have this moment where I think, where the heck did she learn that? How does she know all of the words? Maybe it's something that is programmed into Americans at birth. But nevertheless: I taught her how to blow her nose, I taught her how to sit up, I taught her all these things, and as she gets older, she has all these capabilities that I have no idea where they came from. I compare this to my first experience with those early responsive computer games, the original Hitchhiker's Guide to the Galaxy adventure game, where you would answer certain things to get to the next level, and if you didn't get it exactly right, it wouldn't work. If you didn't get the words and the phrasing right, you got stuck. And now, of course, as Emily has been telling us, computers have a much wider range of what they can recognize, and they can harvest our answers to get better at recognizing them. So, Emily, I want to start off by asking what's a vague question, and I can rephrase it if it's too vague for our listeners: what philosophical jumps do you have to make in order to go from that limited algorithm to the computer that can teach itself? Is there something philosophically that you have to do to think about computers differently, to have a different way of programming and anticipating and describing their actions? Does that question make sense? Yeah,
I think so. I mean, I think some philosophers would disagree with what I'm about to say, but with the more common machine learning models that we're seeing deployed, I don't think there is any novel philosophical issue of learning, because I don't think we should be thinking of them as learning in the way that we learn. They're really just trying a whole bunch of stuff out, a whole bunch of different iterations (a really simplified view, of course) until they've minimized their loss function, which just means they got the most answers right that they could find.
So what does learning mean in that context? If learning isn't... I guess you just answered that: minimizing the loss function. But how would you define the kind of learning that you are aiming for, if it's not modeled on our colloquial, human understanding of learning? What does learning mean, philosophically? Or is it just meeting the prompt, the immediate prompt, at any given time?
Yeah, so I think that most of the examples of machine learning, if not all of them, are just about succeeding at a very specific task that we give it. And so it meets the prompt of "learning" once it reaches the benchmark rate that we set for it.
Does this mean that machine learning is always practical, that it's always just creating a result, and however the machine ends up getting to that result is irrelevant? Because what we want is, you know, true propositions, factual statements about the world, accurate predictions. Is it only the end that's a concern for us?
So I don't think that's how it should be; I think we should know what the machine is doing. There might be a number of computer scientists, or people who are working in industry, like at Facebook or Google, who really do just care about the result.
What's the alternative?
So I think that we need to be careful, especially if we are using these systems in medical diagnosis or in other kinds of social situations, like determining whether or not someone should be up for parole. We should know how, and why, the machine gave the answer that it did.
Okay, so you just, in passing, used a compelling, and part of me thinks horrifying, example: determining whether someone is eligible for parole. Could you talk a little bit about that? Talk about how that's a machine learning process, and why we would want to rely on a machine for that, as opposed to human intuition? Yeah, so
people have this idea, right, that you can have a machine learning model, or any kind of mathematical model, and in some way it's going to be more objective than a human, because humans have these types of biases. And maybe in some ways that might be true; that's why people have moved to these models. Maybe also it's faster, or makes things more efficient. But yeah, the COMPAS algorithm in particular has been used for sentencing. It calculates a recidivism rate, so, how likely you are to reoffend, and it uses a whole bunch of information, including the zip code in which you lived before you were in prison, or where you live when you're in sentencing. And then it gives some kind of risk score, and judges have actually been using those scores in these types of decisions.
And I know what I'm about to ask isn't your field, but I'm curious: in terms of machine learning, isn't giving someone's zip code in the United States basically using their race? Aren't you basically saying that if you are Black, you're more likely to be a recidivist than if you are white?
Yeah, this is one of the big criticisms of the COMPAS model in particular. There was a big exposé that I think ProPublica did, claiming that, yeah, it was just using these racial proxies, and that it had different accuracy rates for whites and people of color.
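The kind of audit behind that claim can be sketched in a few lines: compare the model's error rates across groups. The scores, labels, and threshold below are invented toy numbers, not COMPAS data.

```python
# (risk score from the model, actually reoffended?, group)
records = [
    (0.8, False, "A"), (0.7, False, "A"), (0.9, True, "A"), (0.6, False, "A"),
    (0.3, False, "B"), (0.4, True, "B"), (0.8, True, "B"), (0.2, False, "B"),
]
THRESHOLD = 0.5   # scores at or above this get flagged "high risk"

def false_positive_rate(group):
    # Among people in this group who did NOT reoffend, how many were flagged?
    innocent = [r for r in records if r[2] == group and not r[1]]
    flagged = [r for r in innocent if r[0] >= THRESHOLD]
    return len(flagged) / len(innocent)

for g in ("A", "B"):
    print(g, false_positive_rate(g))
# Sharply different rates mean one group absorbs far more wrongful
# "high risk" labels: the core of the ProPublica-style critique, even
# when overall accuracy looks fine.
```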
Can you imagine a machine learning process where a machine can learn not to be racist? Is that possible?
So this is a pretty loaded question. I think in these types of cases the answer is no, because you're getting the data from the real world, and since the real world struggles with racism and institutional racism, the data is going to have that imprinted on it. So no matter what you try to do, it's going to have those underpinnings. And if you do try to remove it, then it's going to be so far removed from the phenomena that I don't even know what it is you'd be modeling.
So since the model is built on the real world, the only way to make a non-racist COMPAS algorithm is to have a non-racist world.
Yeah, or to just use synthetic data.
Okay, so now this goes, I think, to the heart of the material of yours that I've read, which is about how people understand and learn the processes that the computer is engaged in when modeling. When I started out the show, I talked about the black-box model and the computers without faces; the way that I initiated the conversation was to say, look, we know the result, but we can't understand the reasoning. I wonder if you'd talk a little bit about how this has set up your research agenda: what you're interested in as a philosopher, and how you establish both criteria and methods for determining the understanding of machine learning beyond simply the result. Can you give people just a sense of what your projects are and how they fit into this discussion?
Yeah, so first, there was just all this hype around these models being black boxes, and I was trying to think about, well, is there something to this hype? What does it mean for this to be a black box? In what ways is this problematic, and in what ways is it not? I have training in epistemology and philosophy of science, which means I'm interested in how we can gain knowledge about the world, and how we can get scientific knowledge and scientific understanding in particular. So: how much do we really need to know about how these models work in order for us to get knowledge about the world, to get understanding of various phenomena? And how should we explain what they're doing, to satisfy various norms or interests that we have? When these models are being used to determine sentencing or parole, or, another case, when they're being used to assign grades to students during corona, how should we explain that answer to the people they're being used on? And how much do we need to know about how the model is making a decision, to make sure that explanation upholds their rights as knowers, or just as citizens?
As an epistemologist, the way that you're describing things, you're human-centered rather than computer-centered: you're interested in what humans need to know and how to explain it to them. But because the object, the thing you're starting from, is non-human, does that require that human beings change their definition of, or their expectations for, explanations? Do we have to cater to the computers, or do the computers have to develop to cater to the humans?
Yeah, I think that we should have the computers cater to us. I don't think we need to put up with computers that are uncooperative.
Alright, so when we explain things to other people, there are different kinds of explanation, different levels of explanation, that count as successful or not, and this is essential both in day-to-day life and in understanding machine learning. So what counts as a successful explanation? And what are the different levels of success that you think would be satisfying to the people who are trying to understand computers and machine learning?
Yeah, so this is a big question. My view is that, in order to answer it, we need to know what purpose or function the explanation has in its context; in order to decide whether explanations are even possible in the first place from these opaque models, we need to know what the explanation is for. The explanation you receive from your parole board about why you didn't get parole is governed by quite different norms than an explanation of how climate systems work that uses a machine learning model. So that's the first thing. One way to look at it is that, at least in Europe, they passed the GDPR, the General Data Protection Regulation, and some people argue it gives you a right to explanation from these systems: anytime a machine learning system, or any kind of algorithm, makes a significant decision about you, you have a right to an explanation of why the decision was made. Since companies are using opaque models, they can go two ways with this. They could say, well, we don't have to provide an explanation, we don't have to follow this legislation, because it's technically impossible to do so. And I think that's the part we really need to push back on. Yes, these models are opaque, but we can still provide explanations, and companies still should provide certain types of explanations. And if it's not possible to provide the type of explanation needed, then yeah, they shouldn't be using that model at all. They should be using one that's more interpretable.
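One concrete way an opaque model can still yield a usable explanation is a counterfactual: what minimal change to the input would have flipped the decision? The model, features, and search strategy below are invented toy stand-ins, not anything prescribed by the GDPR itself.

```python
def model(applicant):
    # Stand-in for an opaque decision system (say, a loan or parole model).
    score = 0.4 * applicant["income"] + 0.6 * applicant["years_employed"]
    return score >= 3.0    # True = approved

def counterfactual(applicant):
    """Search nearby inputs for a small change that flips the outcome."""
    original = model(applicant)
    for feature in applicant:
        for bump in (0.5, 1.0, 2.0, 4.0):   # try increasingly large changes
            changed = dict(applicant)
            changed[feature] = applicant[feature] + bump
            if model(changed) != original:
                return (f"if {feature} had been higher by {bump}, "
                        f"the decision would have flipped")
    return "no small change flips the decision"

alice = {"income": 2.0, "years_employed": 1.0}
print(model(alice))            # False: denied
print(counterfactual(alice))   # answers "why not?" without opening the box
```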
How deep do the explanations go? I'm thinking of, um, let's say someone's spouse passes away and they get $100,000 from the insurance. Or you're applying for insurance, and the company is willing to insure you for $100,000, and you ask why. They look at the actuarial chart and they say, well, you're a white male, 43 years old, you smoke, you don't exercise regularly; this is what your life is worth, so we're giving you this. Now, that's one mode of explanation. But a deeper mode of explanation would be: you know, as an insurance company, we are only concerned with the financial value of your life; we're not concerned with your moral impact; we're not concerned with how your teaching your children will eventually lead to their income. That's a deeper, more complete explanation. Do we need those kinds of explanations from computers too, and are they capable of giving them? Or is the kind of explanation that you're offering simply: these are the data that the programmers put in, and therefore these are the answers to these questions?
Yeah, so I think it goes back to what we want the function of these explanations to be. In the case of the insurance payout, or a bank loan, or something like that, what's the purpose of getting the explanation? Is it for you to contest the decision, because one of the criteria they used was biased or inaccurate? Is the explanation for you to be able to improve your application for next time? Or maybe the purpose of the explanation is to reveal the economic and social realities of the society that you live in. What the explanation should look like depends on what its function is. So I think there's that more meta, normative question: what should the functions be? And that's perhaps an ethical, social, political question of what we want explanations to do.
It seems as if the more limited explanation, especially, creates a culture of distrust of computers, a culture of distrust of models. Has this been your experience? And what's the role of machine learning explanations in terms of either fostering or healing trust and distrust?
Yeah, so I think that if explanations follow certain norms, if they're good explanations, they have a real role to play in making sure that we're trusting the right systems. One thing these kinds of explainability methods for machine learning models can do is point out where things go wrong. A really recent example: a paper just came out analyzing some of the machine learning systems that have been used in coronavirus diagnosis, looking at X-rays of lungs. These systems claim to be able to accurately diagnose coronavirus. But by looking closely at these models, using really simple methods of trying to figure out what the models are doing, these researchers found that the models were relying on factors that have nothing to do with the lungs. They were highlighting areas of the X-rays, specifically areas that don't have the lungs in them; they were keying on completely confounding factors. And once you realize that, you can realize, okay, we shouldn't trust these models at all, and we shouldn't be using them. Or, in a good case, you can use that type of explainability method to show people: no, look, this is using the kinds of features, the kinds of things, that we would expect it to use, and that's why it's a good model to use in this case.
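One of the simplest such methods is occlusion sensitivity: mask each region of the image and see how much the model's output moves. The toy "classifier" and 3x3 region grid below are invented stand-ins for a real X-ray model.

```python
def model_confidence(image):
    # A deliberately bad hypothetical classifier: it keys on the top-left
    # corner (where, say, a hospital marker sits), not on the lung region.
    return image[0][0]

def occlusion_map(image):
    base = model_confidence(image)
    importance = [[0.0] * 3 for _ in range(3)]
    for r in range(3):
        for c in range(3):
            masked = [row[:] for row in image]
            masked[r][c] = 0.0                    # blank out one region
            importance[r][c] = abs(base - model_confidence(masked))
    return importance

xray = [[0.9, 0.1, 0.1],
        [0.1, 0.5, 0.1],    # pretend the center cell is the lungs
        [0.1, 0.1, 0.1]]
for row in occlusion_map(xray):
    print(row)
# All the importance lands on the corner, none on the "lungs": the red flag
# the researchers found, a model leaning on confounds rather than pathology.
```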
Okay, so in that scenario, how do you know the difference? Let's say they're supposed to look at the lungs, but they're looking at the clavicle; my knowledge of anatomy is very minimal, right? So, looking at the clavicle: how do you know the difference between the computer making a mistake, thinking the clavicle is relevant when it's not, and the computer having looked at all of these images and realized that coronavirus impacts the bone density of the clavicle? So it's noticing, and I'm making all this up, of course, it's noticing that the clavicle is 7% less dense, and therefore those patients are more likely to have coronavirus. How do you tell the difference between a computer mistake and a computer trying to tell us something that we don't yet know?
Yeah, so sometimes you can't, without further research. In my view, in order to get understanding from these models, you really need to have a significant amount of background knowledge connecting the model to the target phenomenon. In this kind of case, the reason the researchers thought, okay, these are just confounding factors, is that we have independent scientific evidence pointing to the fact that these regions shouldn't be affected at all by corona; therefore, it's not a good model. It's through that independent knowledge that the importance or the usefulness of these models gets bolstered. Of course, you could use models in a kind of exploratory way: you feed in a bunch of data and you see, oh wow, look, the clavicle, something's going on there. Then what you would need to do, before you could gain understanding from that model, I would say, is traditional medical research, to see if it was actually picking up a true effect or just an artifact.
And that's a really interesting segue, because I guess the question is: do we think of traditional research as coming before machine learning? Do we think of traditional research as coming as a result of machine learning? Or is it such a partnership now that traditional research is indistinguishable, or I should say inseparable, from machine learning? How widespread is this in the day-to-day research that the scientific community is engaged in?
So I think that, ideally, there's a nice give-and-take there; it's kind of a rotating part of the pipeline. One thing is that, in the medical case, it's being used a lot, because it's really hard to get medical knowledge; it's just hard to do. And we have all this data, and it's maybe a nice way to improve the efficiency of certain diagnoses, or of hospitals, or something like this. So a lot of money has been put into building models to diagnose cancer, like breast cancer or skin cancer, or to diagnose certain cardiovascular problems just by looking at the eye, things like this. And it could have breakthroughs, right? It could do better than current doctors, in which case more people could be screened; it could be deployed in areas where doctors are hard to find. Or it could just generate hypotheses, or what I call "how-possibly" explanations, and then fuel further traditional research. So maybe there is a connection between the eye and cardiovascular health, right, and we should do research about it.
So I want to ask a question that's filtered through a question that was actually sent in advance by one of our listeners, Dave Keefer. He's an analytical chemist at Salisbury University, and I mention him because he's actually the one who suggested this topic and gave your name as one of the possible guests, so you guys should hang out. He asked how machine language, sorry, machine learning, might change science and the goals of science. And the way I want to ask this is: do the practical consequences that you're talking about diminish the importance of pure science, of pure research, where science is done solely for the sake of understanding, for exploring the, you know, beauty and intricacy and interestingness of the creation that we live in? Does machine learning nudge science ever more toward the practical and reject that sort of pure research model?
Yeah, I think that it probably does. I mean, it's a joke in academic circles, especially in Europe, where academia is largely built on external grant funding, that you just have to put AI in the grant proposal somewhere and then it'll get funding. So in essence, it is pushing in that direction, because the types of machine learning models we have right now are best at classifying and predicting things, and so there you really have a practical task in mind. And then there's this deeper question of, well, since it seems to do well at predicting something, maybe we can get at mechanisms. But that comes secondary.
I'm trying to figure out how to ask this question. Is science built on machine learning always backtracking? And what I mean by that is: you get a result, and therefore your time is spent reconstructing the process and the information that got you to the result. Or is there a way to use machine language, I keep saying machine language, I don't know why, I mean machine learning, so that you are cognizant of what's happening every step of the way, and the researcher and the machine can be sort of true, contiguous partners?
Yeah, so I think people have the vision of it being that way. If you keep with the medical cases, there's this vision of having these systems working in real time, with a kind of interplay between the doctor and the machine and the patient. But I think we're still further away from that, because what tends to happen is that you have a model that's been trained with a subset of data, and then maybe it's deployed in some instances, and there's no kind of going back and retooling. It's just static; it's set in place. So in that sense, it might be just kind of backward-looking, like you're suggesting.
Is there any reality to the science fiction trope of the computers becoming more powerful than human beings, you know, Skynet and all that kind of stuff? Is the structure of machine learning such that it really precludes that, because the results are set, because the process is so result-oriented? Or is that really something that people are worried about? Yeah, so
I mean, I'm worried about it, but not because I think machines are likely to surpass us in the kind of way that we think about in science fiction. I'm worried about it because we, as a society, will place them in that role, whether they deserve it or not. If you think of the social credit system in China: they're collecting so much data on everything that you do, and they're giving out, I don't know, credit in various ways, and making decisions that really affect people's lives in many senses. And so the system is kind of given this dominion over your life.
I think about a story that was in the news probably five or seven years ago, where a young woman was pregnant and didn't tell her father. She was shopping at Target, I think she was like 16, and she was buying vitamins, she was buying this, that, and the other thing, and the Target marketing computer figured out what was going on and started sending her pregnancy and, you know, diaper coupons and all this kind of stuff. And her father figured out that she was pregnant from the coupons that Target sent.
Is that the kind of thing that a philosopher can help with and stop? Or is that just the kind of collateral damage that happens when you engage in data mining and machine learning in a marketing context?
Yeah, so I do think that philosophers have a role to play. Ethicists, of course, have a role to play, for sure; but I think that epistemologists have a role to play as well, by thinking about what makes a system trustworthy, and what kind of knowledge we can really get from the system. So one way to avoid society putting some machine that makes all these decisions in charge of our lives is to say that the type of knowledge this machine gets is no better than the knowledge that an individual might have, and perhaps in some cases it's even worse.
Is this something that is solved by, um, you know, a checklist on the computer that says: send coupons about these categories of things, but don't send coupons related to intimate body issues or controversial political issues? Is it solved in that sort of checklist way? Or is this an example of something that a machine can learn, by looking at the newspaper coverage and the complaints, so that eventually it develops an algorithm that says: I am not going to send coupons to people under 21 for controversial social issues, because I have all this data that there's more negativity than positivity in the press? Can it work that way as well?
Right. So as a kind of stopgap, and this is the way people have done it in some cases, you have these checkboxes to say you don't want, like, your cookies to be analyzed, or something. I think Facebook had something where you could check that you don't want your things to be analyzed, but they're still going to analyze them; you just won't bear the fruit of it, or something. You won't see the algorithm that prioritizes things for you in a certain way, but they'll still analyze it. So that doesn't really work. It's just a way to make people feel better, perhaps, by clicking those checkboxes, because the data is still available. So I think the way to solve the problem of these machines extrapolating obviously personal things about you is to restrict the data that they get. That's not necessarily a practical solution at this point, but I think that's the way to do it.
I want to ask a question that's going to start out a little inside baseball, but I'll explain to the listeners why I'm asking it. Historically, philosophy tends to be single-author: a single person writes a book, a single person writes a paper, and sometimes a couple of people write it together. In science, you have what are called multi-authored papers, with a lead author who's responsible for the study, who gets the most credit, who's done the most work; but when you read science papers, there are like ten authors. When I looked at your papers, and some of this is just being in Europe, these were multi-authored: you were the lead author in a serious, complex study with ten authors, or five authors, or whatever. And that surprised me, because I know you're a philosopher. So the question I want to ask is: to what extent is the epistemology, the theory-of-knowledge stuff that you're doing, the philosophy stuff that you're doing, welcomed as an equal partner in the scientific exploration of machine learning? How much do the scientists listen to you and to other philosophers working on this? Or are you really siloed, where this is one aspect and the science is the other, and the scientists are super practical and you're super philosophical, and they're really two separate conversations, and you rely on each other for, you know, information about the technology and mutual understanding, as opposed to joint projects?
So I think that there is a great opportunity for philosophers to engage with computer scientists on a lot of these problems. Part of it, though, is that they don't really understand philosophy, so part of it is to sell them on what the method is. But in the one paper that you're referring to, we were building an explanation interface for a company in the Netherlands, and the first thing I said was the most basic thing that you get in Philosophy 101 on explanation, which is that explanations are answers to why-questions. When I said that, their heads just, like, exploded, because it seemed like such an obvious thing that they would never have thought about. And just that simple idea really shaped the way they were thinking about it, and they got really excited about how you could build these kinds of explanation interfaces around being an answer to a why-question. So that's something that philosophers can definitely bring to the table, because we've looked at these concepts for a long time. We've written a lot about what an explanation is, what certain duties are, what responsibility is, what knowledge is, right? We know a lot about this stuff.
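To illustrate the idea (and only the idea: the function, field names, and sample output below are invented, not the company's actual interface), an explanation interface organized around why-questions frames every output as an answer to "why this outcome rather than that one?":

```python
def explain(decision, contrast, reasons):
    """Frame an explanation as an answer to 'why P rather than Q?'."""
    lines = [f"Why was the outcome '{decision}' rather than '{contrast}'?"]
    for feature, detail in reasons:
        lines.append(f"- Because of {feature}: {detail}")
    return "\n".join(lines)

print(explain(
    decision="loan denied",
    contrast="loan approved",
    reasons=[
        ("income", "below the approval threshold"),
        ("credit history", "two missed payments in the last year"),
    ],
))
# The interface's job is then to surface the why-question the user actually
# has, rather than dumping model internals on them.
```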
So if the philosophical problem, or let's say the practical problem, is that in order to foster trust, and in order to learn from the results, the machine has to be able to provide some sort of explanatory function as to why, then the philosopher has to be there to establish what an explanation means, why an explanation answers a why-question as opposed to a what-question, and what it even means to offer a satisfying answer to "why." So it sounds like philosophy is not just in the foundation of the process, but that the very architecture of machine learning is built on a foundation that has to have some philosophical commitments at its base. Am I right in interpreting it that way?
Yeah, yeah, I think so. I mean, another example: a lot of these computer scientists who work on explanations, and on whether an explanation is satisfactory, will do these user studies where they'll ask people, did you find this explanation trustworthy? Do you trust it? They'll ask a series of questions that make it seem like, okay, since people trust the system now that they've gotten this explanation, we're good to go. But what people fail to realize is that that kind of questionnaire doesn't get at the heart of the matter, which is: is the system trustworthy? Is the system actually giving us reliable knowledge or not? And that's a philosophical question.
And, as sort of a wrapping-up question: are you confident about the future of explanation in machine learning? Or do you feel that it's really not being nudged in the direction that you think is necessary?
So I think there's good progress being made, but it's not at the place that people assume it already is. A lot of these methods are very flawed, or can be error-prone, so I think we still need to treat them with a grain of salt, and make sure that the external problem with the model that I talked about, connecting the model to the actual real-world phenomenon, is something we're still working on. So there's not just this kind of internal problem of opacity, but a kind of external problem of opacity. So yeah, I think nudging things in the direction of recognizing that explanations have certain goals and functions, and that they're situated in a larger context, is really important, and it's something that's missing a little bit at the moment.
Is this a problem for computer scientists, or is this a public policy problem? Is this something where the researchers can lead, or is this something where there need to be established legal, ethical, and moral standards that are really the product of public policy decisions and leadership, rather than of the participants in the research?
So, unfortunately, I do think that public policy needs to play a really large role. And I think the reason is that, especially in computer science, a lot of the way the field is driven is through these big tech companies, because they sponsor a lot of events, and the research agenda is pretty much driven by their agenda, to a large degree. And these companies aren't necessarily interested in solving the real problems, except maybe in some kind of shallow sense, for PR or something like this. So if there were public policy changes, then these companies would be forced to have certain types of models, and then researchers would all of a sudden get money to research those aspects. And I think this is what happened a little bit with the GDPR: once the GDPR was passed, explainability really became a hot topic. Not that people weren't talking about it beforehand, but there really became this kind of buzz around it, and people were motivating a lot of research papers by starting out with the GDPR and the right to explanation as the reason the research was being done.
And so, I'm sure everyone can tell by your heavy Dutch accent that you are a native of the Netherlands; you're originally from New Jersey. Do you find that the stronger governmental regulatory role in Europe is further advanced and more responsive to this than in the American context? Are they equal? Is this kind of conversation more vibrant in the States? Or is the whole thing so global that national identity doesn't matter? With the stuff that you do on explanation and global privacy and trust and facilitating scientific understanding, all that kind of stuff, is there a difference between a more regulative public policy and a less regulative public policy?
Yeah, there's a big difference, for sure, in Europe compared to the States, with the GDPR. One thing is that you need to get consent from users to, say, share their email address with other companies, or something like this. I think there was a lawsuit in Europe because some European company was using an American company that did some kind of email service, and since the servers were located in the States, the court ruled that it couldn't be compliant with the GDPR, because there wasn't any privacy to it. I mean, they'll probably appeal this decision. But yeah, they're taking these ideas of data privacy and consent really seriously. Does it solve all the problems? No, of course not; there are so many more problems. But I think it does take a step in the right direction.
So, sort of in conclusion, the thing that I'm taking the most from your work, and from this discussion, is that we think of machine learning as a computer problem, but it's really a human problem. The epistemologist, the person who works on theory of knowledge, is there to recognize that what counts as a result, what counts as an explanation, what counts as progress, is defined by human experience and human norms and human expectations, not the computer's. Therefore it's the humans that need to be the focus of all of this, and the machine is still the tool. Am I getting that right? And if so, is there anything that you would like to add? And if I'm not getting it right, what am I getting wrong?
No, I think that's right. I think that we need to make sure the computer is a tool, right? That's what it's there for. So we really need to think about what purposes we want to use these things for, not just epistemic purposes but also ethical purposes. And maybe there are some purposes that the model shouldn't be used for at all. And we need to make sure that we think about these systems in the kind of complex social environment that they're located in. It's not just a problem about the model itself, but about how it connects up to the phenomena that we're trying to understand.
Well, Emily, I want to thank you so much for this conversation. As I told you in our correspondence, this is way out of my field of experience, so I learned a tremendous amount, and I really feel like I have a much clearer understanding not only of what machine learning means, but also of how it connects to the great philosophical questions. So thank you so much for joining us on Why? It has been a pleasure. Yeah, it's been really fun, thanks. You've been listening to Jack Russell Weinstein and Emily Sullivan on Why? Philosophical Discussions About Everyday Life. I will be back with a few more thoughts right after this.
Visit IPPL's blog, PQED: Philosophical Questions Every Day, for more philosophical discussions of everyday life. Comment on the entries and share your points of view with an ever-growing community of professional and amateur philosophers. You can access the blog and view more information on our schedule, our broadcasts, and the Why? Radio store at www.philosophyandpubliclife.org.
You're back with Why? Philosophical Discussions About Everyday Life. I'm your host, Jack Russell Weinstein, and we were talking with Emily Sullivan about machine learning and the philosophical issues implicit in it. I don't know about you, but one of the hardest things for me to understand in the contemporary world is how a bunch of zeros and ones becomes Facebook, becomes Instagram, becomes artificial intelligence and games. I have a lot of trouble understanding that. And from the conversation with Emily, I am beginning to get a better sense of how this works on the learning level. You have an algorithm, and a machine tests different algorithms that lead to more effective results; and as the results become more effective, the algorithm changes until, in theory anyway, it's perfect and they get all the results they need. This is fine if what you want is a consequence. But if what you want is to learn, if what you want is to understand, if what you want is to communicate the why, then that's not enough. And that's where the philosophy comes in, as opposed to the science. What does it mean to explain something? What does it mean to understand something? What does it mean to teach and learn something? These are some of the deepest philosophical questions, questions that have permeated the tradition from its earliest days. What's interesting is how the machine learning model is so similar to the human model, and how each one learns from the other. The machine learning model is going to be adjusted the more we understand about what humans need, and humans are going to have to adjust their expectations the more we have intimate relationships with computers, the more we realize what we want computers to do, and what spheres of our lives we want to keep computers away from. Computers have advanced so much, and yet, at the same time, they're in their infancy. We have learned so much from their programming and their calculating, and yet, at the same time, we're at the same philosophical level, perhaps, as we were two thousand years ago. We are a generation that has grown up with computers; they're not new to us. They're our siblings, in a certain sense, and like all siblings, we have to figure out a language to communicate as best as possible. And that's what the epistemologist does when it comes to computers. That's what a philosopher does who's interested in how knowledge manifests itself in the interaction between human beings and computers. That's what Emily is trying to do. It's a fascinating discussion, and I can think of few issues that are more essential to the new world that we are building together, through trial and error, as we face the great problems before us. Machine learning is a human problem, and human problems are problems for machine learning; there's no separating the two. You've been listening to Jack Russell Weinstein on Why? Philosophical Discussions About Everyday Life. Thank you for listening. As always, it's an honor to be with you.
Why? is funded by the Institute for Philosophy and Public Life, Prairie Public Broadcasting, and the University of North Dakota's College of Arts and Sciences and Division of Research and Economic Development. Skip Wood is our studio engineer. The music is written and performed by Mark Weinstein and can be found on his album Lua e Sol. For more of his music, visit jazzfluteweinstein.com or myspace.com/markweinstein. Philosophy is everywhere you make it, and we hope we've inspired you with our discussion today. Remember, as we say at the Institute: there is no ivory tower.