"How Does Misinformation Spread?" Why? Radio Episode with Guests Cailin O’Connor and James Owen Weatherall
12:07AM Apr 16, 2020
James Owen Weatherall
Jack Russell Weinstein
Disclaimer: This transcript has been autogenerated and may contain errors; do not cite without verifying accuracy. To do so, click on the first word of the section you wish to cite and listen to the audio while reading the text. If you find errors, please email us at email@example.com. Please include the episode name and the time stamp where the error is found. Thank you.
Why? Philosophical Discussions About Everyday Life is produced by the Institute for Philosophy and Public Life, a division of the University of North Dakota's College of Arts and Sciences. Visit us online at whyradioshow.org.
Hi, I'm Jack Russell Weinstein, host of Why? Philosophical Discussions About Everyday Life. On today's episode we'll be talking with Cailin O'Connor and James Owen Weatherall about the spread of misinformation. So yesterday in class I mentioned in passing that some evangelical Christians support Israel to help usher in the End Times. These Christian Zionists are motivated by the prophecy that Israel's establishment is a precondition for the second coming of Christ. I didn't think much of it. It was just some background to another conversation, but one of my students did. After class he approached me to complain. He said that he himself was evangelical and that he had never heard such a thing. He seemed genuinely offended by my comments. When I suggested that he was the one who was misinformed, he doubled down. He had been to many evangelical conferences, he insisted, and no one had ever argued for Israel in the way that I reported. I had to be wrong, he said. So naturally, we googled it. The evidence was overwhelming. The first four results were articles from the Washington Post, CNN, Vox, and Wikipedia, all backing up my claim. The Post actually reported on a poll documenting that half of evangelicals support Israel to speed up the Second Coming. This is not a marginal doctrine, it's mainstream, and my student was deflated. At one point in conversation he even said to me, "I should know better than to defend anyone." I felt for him. It's hard to advocate for something so strongly and find out you're wrong. It's even harder when that argument is with your teacher. The student made a series of interesting mistakes. First, he assumed that his experience was exhaustive: since he had never encountered this belief, he thought it couldn't possibly be true. Second, he misunderstood the nature of expertise.
Surely, he thought, as an evangelical Christian, he knew more about his own religion than an outsider, even if that someone was a professor in a department of philosophy and religious studies. Finally, he acted as if his belief was normative. Because his own commitment was so all-encompassing, his knee-jerk reaction was that his faith was the default for all others. The student is not alone. These are natural mistakes, and they're common ones. We all have a tendency to think that our personal histories and attitudes are the standard and everyone else's are variations. We are, of course, the centers of our own universes.
The issue is that knowledge isn't just individual belief. It's social in nature. What we know is as much a product of luck as it is of our own effort. The movie Slumdog Millionaire illustrates this well: sometimes we can only answer questions because we happened upon the information by chance. In fact, if I hadn't been lucky enough to have the student challenge me yesterday, this monologue would be about something completely different. His frustration turned out to be my good fortune. This leads to a problem. How are we to be held accountable for knowledge we couldn't possibly have? How are we supposed to doubt pernicious ideas when no one around us suggests we undermine them? We are constantly accusing people of being misinformed, stupid, or even immoral because they hold steadfast ideas that we are certain are wrong, but this isn't fair. We act too often as though being mistaken is a character flaw and that ignorance is a moral problem, not an epistemological one. But there is something fundamentally absurd about holding individuals accountable for knowledge that the community chose not to give them. This tension is best captured by an example from the Christian tradition. In Dante's Inferno, Aristotle is condemned to the first circle of hell. He has to go to the underworld because he hasn't accepted Christ; Christian theology demands it. But punishing him for not accepting the divinity of a person who wouldn't be born until some 322 years after he died is also absurd. So the philosopher is doomed to an eternity in limbo. There's no torment there, but no hope either. Dante understood all too well the paradox of social knowledge: we must be held accountable even for those things that we have no control over. The human condition is absurd, yes, but it's also inescapable. On today's episode, we're going to look at social knowledge from the perspective of the misinformed. We're going to examine the sociological forces that prepare us to believe that which we should reject.
And we'll explore the complexity that deceives us about the deception itself. We're going to question the popular notion that our political opponents espouse beliefs other than ours because they know less than they should, or are irrational, or just plain stupid. We're going to have to come face to face with the idea that sometimes we know the truth simply because we are lucky, and that the very networks we rely on to help arbitrate our beliefs are themselves the vehicles for misinformation. And as we do this, I hope we can continue to recall Dante's Aristotle. If we don't hold ourselves accountable for the falsities we hold to be true, we too will live our days in darkness and without hope. My student did the responsible thing. He believed he was right, and he challenged me. I returned the favor by treating him with enough respect to offer evidence for my own conviction. Whatever the outcome, our knowledge became a partnership. The two of us may also be in limbo, but at least we're there together. And now our guests. Cailin O'Connor and James Owen Weatherall are both faculty in the Department of Logic and Philosophy of Science at the University of California, Irvine. They're the co-authors of the book The Misinformation Age: How False Beliefs Spread. Cailin, Jim, thanks for joining us on Why.
Thanks, Jack.
Yeah, thanks for having us.
If you'd like to comment on the show, please tweet us at Why Radio Show, post on our firstname.lastname@example.org slash why radio show, or email us at ask email@example.com. Our contact information and our complete archives can be found at www.whyradioshow.org. So Cailin, Jim, the very topic of the book surprises me, not because it isn't topical, but because philosophers spend most of their time discussing knowledge, not mistakes. What made you turn this upside down? Why explore the philosophy of errors?
Well, so we first got interested in writing this book after the 2016 election and the Brexit vote. I mean, we were interested in the fact that there seemed to be a lot of public false belief, maybe even more than in recent memory. And I had been doing a lot of work using models of social networks to try to understand how scientific beliefs spread and how false scientific beliefs spread. We thought, well, maybe we can apply these models, and all this work that I was doing and other philosophers were doing, to understand false beliefs more generally.
Yeah, I think, you know, as you say, what philosophers have focused on, because philosophy is traditionally a normative discipline, is the question of how you should form beliefs, what it means to have knowledge. And we're working in a tradition that has tried to take those questions and recognize that they need to be completely reconceived because of, as you said in the monologue, the social dimension to knowledge and belief. And what we realized was that if you look at the particular cultural and political moment that we're living in right now, the social nature of belief and knowledge seems very much on display. And it seemed like no one was talking about that, no one was making this connection between this movement in recent philosophy to think about belief as a social phenomenon and, you know, current events. And so that's what we wanted to do.
Do you think that the tradition is misleading? And what I mean by that is, Descartes, Locke, Hume, all these folks are focused on knowledge and even, and I'll ask about this in a minute, certainty, and they make it seem like knowledge is a lot easier to acquire. Locke, as an empiricist, talks about knowledge coming through the five senses, and you get a sense that if you don't know things, you're abnormal, you're somehow marginalized. But I think what we're discovering is that knowledge is a lot harder than that, and knowing what's true is a lot more complicated. So do you think that the history of philosophy is leading us astray?
Well, there's maybe something about how the history of philosophy has focused on the individual as the seat of knowledge. So as you point out, there's been a lot of focus on, well, how can we know anything given the fact that we take things in via our senses, and a lot of focus on reason and rationality, you know, how is man special as a reasoning creature and able to gain knowledge this way? But recent work in social epistemology and in the other social sciences shows that most of the things that we believe or think we know are not coming directly from our senses or from our ability to reason, but are just passed to us from other people. We tend to learn things, especially scientific beliefs, from the people around us, things people tell us. And so maybe philosophy is just not focusing on something that's really important in understanding human belief.
Does this make the biological focus of the empiricists, or the cognitive, rational focus of the rationalists, the idea that knowledge is a product of the body, does this make that obsolete? Is the body no longer relevant?
Well, I mean, look, I think that there are different ways of thinking about what this tradition is trying to do and how more recent work fits in with it. I think that there's another fairly radical move that has happened in the philosophy of mind and in the philosophy of perception, and in theories of knowledge, that has drawn on contemporary science to understand the cognitive science of belief formation and the perceptual science of vision and audition, and the ways in which the senses are reliable and unreliable. That, I think, is ultimately continuous with work in the history of philosophy by famous philosophers like David Hume, and somewhat less famous philosophers like Thomas Reid, who were very interested in the details of our perceptual systems and the role that they played in our ability to gain knowledge. But part of what's happened is that our science has gotten better. Part of what's happened is that we've realized that the individual body is only part of the story. You can think of this as philosophy borrowing from recent advances in social science as opposed to just recent advances in biology or cognitive science.
And one more question before we sort of attack the sociological aspects of it. You made a move which of course makes perfect sense: I've been talking about knowledge, and you use the word belief. Your book is filled with the language of belief. Plato talked about true belief as opposed to knowledge. Why make that shift from talking about knowledge, you know, knowledge with a capital K, to belief? What is it that thinking about belief gives us that thinking about knowledge doesn't?
Well, there are a couple of reasons that we really focus on belief rather than knowledge in the book. So we take this kind of pragmatic approach to beliefs and why we should want to have certain beliefs, which is that ultimately what we care about is action. And what it means to have a true belief, well, there are a lot of things you could mean by that. But what we should really care about is having beliefs that successfully guide our actions in the end. So when it comes to climate change, what we want is a good enough belief to help us act to protect ourselves, our countries, our children, from the ways in which we're changing the climate. So that's one reason we really focus on belief. There's another very practical reason, which is that if you look at some of the history of industrial propaganda and political attempts to sway legislation and action, sometimes people use knowledge as the standard that we should be going for when, in fact, that's not really what we want. So people will say things like, well, we're not 100% sure that man is causing climate change, or we don't know enough yet about CFCs and whether they're really causing the ozone hole, and so they hold up knowledge as something that we need before we act. But we don't think that actually makes sense. I mean, one thing philosophy has shown us is that when it comes to scientific matters of fact, we're never 100% certain about anything; this is the problem of induction. And so that's not what we should be looking for. We should just be looking for good enough belief to guide our action.
Yeah, I mean, just as a terminological matter, something worth emphasizing: we aren't using the word belief in the sense of religious belief, nor are we using the word belief in the sense of mere opinion, something that you think without necessarily having any special expertise or evidence. We're using belief in the sense of something that you accept, perhaps with good reason, perhaps with very strong evidence, the sort of thing that is a candidate for being knowledge. And we're trying to address under what circumstances people come to form beliefs, and how those sorts of circumstances can go wrong. And we're, of course, interested in the beliefs that are both well justified and true, classically the things that someone like Plato would think of as knowledge. But we're less focused on, you know,
just that, those success conditions and much more focused on the dynamics of actually forming the beliefs and sharing the beliefs and having those beliefs change over time and having those beliefs be influenced by other sorts of factors.
Is the intensity of belief relevant? I always tell my students that if faith is measured by how much someone believes, then the terrorists always win, because they're the ones who are willing to sacrifice everything. And I'm a Jew who eats bacon, right? So when you're measuring beliefs, do you concern yourself with how deeply someone believes? Or is it really just a matter of which beliefs you're willing to act on, or which beliefs you're willing to sort of pursue in a different way? Where is intensity relevant here?
Yeah. So, you know, we're going to use the expression degrees of belief a lot, so let's just be completely clear right now: beliefs come in degrees. What we mean by degrees of belief is not intensity of belief; it's confidence, right? So a high degree of belief means you're very confident in something, perhaps because you have lots of evidence for it. But there are things that you can believe with a great deal of intensity because of the emotional valence of those beliefs, and that has nothing to do with your level of evidence. Now, of course, these things can shade into one another. So there can be conditions under which you are completely convinced of something, you believe it very fervently, and so you have a lot of intensity to your belief, and that shapes your actions in all sorts of ways because of the strength of your conviction. But I think in principle you can pull these different notions of degree apart, right? And so we're going to focus on just how confident you are, and in many cases how confident you should be given the evidence that's available, as opposed to the question of how emotionally involved in the belief you are, which is, I think, what the intensity of belief conjures up, at least for me.
So when we talk about confidence, are we replacing the Cartesian idea of certainty, Descartes's notion of certainty, with probability? Are we focused more on how likely something is to be true? Or does confidence mean something else?
Yeah, so sometimes philosophers think of knowledge or belief as something that you either have or you don't; you believe a proposition or you don't believe it. Here, we're focused on this other notion where you can think of yourself as having these different degrees, different probabilities that you assign to some belief being true. You know, it could be: what is your probability that it's raining outside right now, given the things you know about the climate you're in, what you saw this morning, whether you checked the weather? And you'd have some number between zero and one that tracks that probability.
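One standard way to make this idea of a degree of belief concrete is Bayesian updating. The guests don't spell out the mathematics here, so the sketch below is only an illustration of the general idea, with all the numbers invented for the rain example just mentioned.

```python
# A degree of belief is a probability between 0 and 1, revised as
# evidence comes in. Bayes' rule updates a prior degree of belief that
# it will rain, given one piece of evidence (dark clouds outside).
# Every number here is an illustrative assumption, not from the episode.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior degree of belief after seeing the evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

prior_rain = 0.3  # initial confidence that it will rain
posterior = bayes_update(prior_rain,
                         p_evidence_if_true=0.9,   # clouds likely if rain is coming
                         p_evidence_if_false=0.2)  # clouds less likely otherwise
print(round(posterior, 3))  # prints 0.659: confidence rises after seeing clouds
```

The point of the example is only that confidence moves with evidence: the same agent with the same prior but weaker evidence would end up with a posterior closer to where she started.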
Okay, so for our listeners who haven't read the book, here's what you're going to do. You're going to look at the fact that belief is social, that it comes out of networks, and from news magazines and from friends and from scientific studies. And you're going to map out the patterns of not just belief but misinformation, how people get things wrong, or how false ideas spread. But I think that a lot of our listeners probably don't know what it means to map out a sociological activity. So can you give us, I guess, a little primer on knowledge mapping, on what it means to create a model to track and explore how information or misinformation spreads?
Right. So first it might just be good to say what it means to create a model of a social phenomenon. A lot of what we're doing in this work is using computer simulations and mathematics to represent something happening in the social world. And in order to do that, of course, we have to make a simplified representation, because human social life is extremely complex. So if we think about something like the spread of belief in the social world, there are a lot of things that matter to it. What we're doing is building models that pull out some of the most important things. So our models will represent things like a network between people, and this tracks who communicates with who, who interacts with who, who bumps into who at the grocery store. Then we also want our model to track what the individuals believe, and so we have different agents who have these degrees of belief in different propositions. So a degree of belief in how likely it is that smoking causes cancer, for example, could be something that we include in the model. And then, besides that, we want to have certain rules about how these individuals change their beliefs in light of evidence they get. So when we're modeling a community of scientists, they might have a belief about whether they think smoking causes cancer. And then they might be able to test that belief in the world; we have ways of representing that test in the model. And then, based on what they see, they can change their beliefs in light of that. And they can share the evidence that they've gathered with those in their social network, who can change their beliefs as well.
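The ingredients just listed, a network, agents with degrees of belief, a way to test the world, and evidence sharing, can be sketched in a few dozen lines. This is a toy in the spirit of such models, not the book's actual code; every parameter (two actions, a 60% versus 50% success rate, ten trials per round) is an assumption made up for the example.

```python
import random
from math import comb

# Toy network-epistemology model: agents hold a credence that action B
# (success rate 0.6) beats the default action A (success rate 0.5).
# Agents confident in B try it, observe noisy results, and share that
# evidence with their network neighbors, who update by Bayes' rule.

TRUE_RATE_B = 0.6   # B really is the better action
TRIALS = 10         # experiments per believer per round

def update(credence, successes, trials=TRIALS):
    """Bayes' rule comparing 'B succeeds 60% of the time' vs 'only 50%'."""
    like_true = comb(trials, successes) * 0.6**successes * 0.4**(trials - successes)
    like_false = comb(trials, successes) * 0.5**trials
    return like_true * credence / (like_true * credence + like_false * (1 - credence))

def simulate(network, rounds=200, rng=random.Random(0)):
    """network maps each agent to the neighbors it shares evidence with."""
    beliefs = {agent: rng.random() for agent in network}  # initial credences
    for _ in range(rounds):
        # gather this round's evidence first, then let everyone update
        results = [(agent, sum(rng.random() < TRUE_RATE_B for _ in range(TRIALS)))
                   for agent, credence in beliefs.items() if credence > 0.5]
        for agent, successes in results:
            for observer in [agent] + network[agent]:
                beliefs[observer] = update(beliefs[observer], successes)
    return beliefs

# a four-person cycle: each agent hears evidence from two neighbors
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
final = simulate(cycle)
```

In runs like this, evidence gathered by the confident agents tends to pull the whole community's credences toward the truth, though with sparse networks and unlucky strings of results a community can settle on the worse action, which is the kind of outcome the models are used to study.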
So there are different parts of the methodology in the book, and I would think of it as combining them. There's a certain amount of work in the book that just looks at particular historical cases where there was a community of people, usually a community of scientists, who were trying to learn about some aspect of the world, who were sharing information with one another, who were gathering information themselves. And we tell the story of how that worked, what went wrong, what went right. We use the models that Cailin was just describing to try to isolate particular features of those stories, particular ways in which things might have gone wrong, and to get some mathematical control over that; we want to have a way of understanding these complicated social situations using these models. And then the third part of the methodology is more traditional philosophy, so some conceptual analysis: what do we mean by belief? What do we mean by evidence? How are these things related to the sorts of things we see in the world, the decisions we need to make in the world? And we try to pull all these pieces together in the book.
So we're going to take a break in a minute or two, and I think what might be helpful, and you can tell me if I'm misrepresenting this, is to sort of draw a picture in people's minds of what the model looks like in your book. Because in the discussions you have these pictures, and the pictures, I think they were called Tinkertoys when I was younger, but they look almost like DNA molecules. They're these circles connected by lines with numbers on them, and they're in different shapes. So imagine, and you can tell me if this is accurate, imagine I want to describe my path to going to the bathroom on a work day. And I work on a floor with other offices, and there are people I want to see and people I don't want to see, people I don't want to talk to and people I do want to talk to. So maybe you would draw a thick line between me and the dot representing the person I don't want to talk to, and a thin line between me and the person I do want to talk to, and then I can talk about the likelihood of running into them. But because it's a model, there's also a line indicating whether the two people want to talk to each other, and all of this stuff. And so what it does is it creates a visual map of people's relationships in order to document a certain feeling, like wanting to talk to them, or, in the instance of the book, a piece of true information or untrue information, or things like that. Is that a fair description?
Yeah, though for the models in the book, the real thing you want those lines, those connections between people, to be tracking is who shares information and evidence with who, because these are models about the spread of belief. The network connections have to be about belief spreading, not just who you're friends with or who you want to chat with.
So the benefit of a model is that it can draw the connections between whatever your subject matter is, and in your case it's the information, and in my model it was my attitude about human beings, but the subject of the model is a variable. And that allows you to focus on whatever subject you want it to be.
That's right. But, you know, I think the real value of the model is the ability to manipulate the model. So here's just a general problem that you run into when you're trying to study historical cases or social cases: it's very difficult to ask what history would have been like had so-and-so done this other thing, or if these people had been talking to these people instead of the people that they were talking to. And so what the models allow you to do is to say, well, here's a particular network of social interaction, here are these people connected to these people, and here are different ways of understanding the strengths of those connections and what sorts of information are being shared, and here's what tends to happen. But what happens now if I change that, if these people are connected to these other people, or the strengths of the connections are different in some way? Does that change things? And sometimes it does, and sometimes it doesn't. But it gives you a way of asking questions, how would this have changed had things been different, that allows you to give more compelling explanations and provide a different kind of analysis of real-world cases that you can't really intervene on or manipulate.
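That kind of counterfactual question can be posed even with a much cruder model than the ones in the book. The sketch below, an invented example rather than anything from the text, spreads a single belief over two different wirings of the same six people and asks how long full spread takes; rerunning it with a different network is exactly the "what if they had been connected differently" manipulation being described.

```python
import random

# A deliberately crude spread process: a belief starts at person 0 and
# passes along each network link with a fixed probability per round.
# Comparing a chain to a fully connected group shows how rewiring the
# same people changes the outcome. All parameters are illustrative.

random.seed(1)

def rounds_until_spread(network, p_transmit=0.3, max_rounds=1000):
    """Rounds until the belief starting at person 0 reaches everyone."""
    believers = {0}
    for r in range(1, max_rounds + 1):
        new = {n for b in believers for n in network[b]
               if random.random() < p_transmit}
        believers |= new
        if len(believers) == len(network):
            return r
    return max_rounds

# the same six people, two wiring diagrams
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
complete = {i: [j for j in range(6) if j != i] for i in range(6)}

fast = rounds_until_spread(complete)
slow = rounds_until_spread(chain)
```

With guaranteed transmission (`p_transmit=1.0`) the contrast is exact: the complete network saturates in one round while the chain needs five, and with noisy transmission the complete network is still typically much faster, which is the sort of structural effect the guests' models make precise.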
So I want us to get a more concrete example of that when we come back. We'll look at the language you use in the book and examples of what exactly you're trying to articulate. But for the moment, we have to take a break. You're listening to Jack Russell Weinstein on Why? Philosophical Discussions About Everyday Life. My guests are Cailin O'Connor and James Owen Weatherall, and we're talking about the spread of misinformation. We'll be back right after this.
The Institute for Philosophy and Public Life bridges the gap between academic philosophy and the general public. Its mission is to cultivate discussion between philosophy professionals and others who have an interest in the subject, regardless of experience or credentials. Visit us on the web at philosophyandpubliclife.org. The Institute for Philosophy and Public Life: because there is no ivory tower.
You're back with Jack Russell Weinstein on Why? Philosophical Discussions About Everyday Life. We're talking about misinformation with Cailin O'Connor and James Owen Weatherall. As I was reading the book, and thinking about the experiences of sharing information and believing or not believing various things, I was reminded of an event in my life when I was very young that I actually think about all too often. I was probably in about the fourth grade. And for years and years I had a crush on a girl in my school, Ellen Davis. We're still friends on Facebook, Ellen. And I had been told at some point in my life that scientists didn't know why cats purr, because every time they cut a cat open, the cat stopped purring. And I believed this. So I remember having a conversation with Ellen about cats, and I mentioned this to her. And she looked at me with such sweetness and condescension, and she said, "Oh, that's so cute." And it was incredibly humiliating, and I never believed that again about cats. And that memory is intertwined with all of these feelings about wanting approval from the person I had a crush on, and the embarrassment of getting things wrong. And so, Jim, Cailin, I guess the question I have for you is: is this an uncommon experience? And I don't mean having a crush on Ellen, I'm sure everyone did. But I mean this idea of wanting to believe or not believe the people who are important to you, or wanting to appear smart, or to have similar beliefs to your friends. How important and how common is the network, our feelings about the people we're with, in what we believe?
Well, so, everything about that experience is pretty common, actually. First of all, I'd point out that one thing that's extremely common is just having a false belief like that and then finding out you were wrong. I mean, we've all experienced that so many times, and it's because of this fact that we learn most things from other people, and we trust other people. And this kind of trust is really deeply important in determining what we believe and why. If you think about it, this is a really successful thing for people to do, because when we trust what other people tell us, this allows us to learn about all these things that we could never go find out about ourselves. So in your case, you presumably are never going to be doing the science that would allow you to find out for yourself why cats purr, I hope.
So it makes a lot of sense, when someone tells you something like that, to trust them, because often they're telling you something true. Of course, as soon as you do that, you sometimes also adopt false beliefs from them, like that we don't know why cats purr because they stop purring when we cut them open.
So while I got the belief from Ellen, afterwards there's an analogous relationship with scientists, right? Since I'm not a scientist, I have to believe the scientists the way that I believed my friend. And I have to believe the journal article the way that I believed my friend. Even though the scientists are experts and we were in fourth or fifth grade, there's an essential parallel in the fact that, because we don't acquire the knowledge ourselves, there is an essential trust relationship that goes on in the transfer of beliefs.
Yeah, exactly. I mean, so even scientists tend to find out about only a very, very small subset of information on their own via their own research. So even for a scientist, the vast majority of the things they believe about science, they believe because someone else found it out and told them and they trust what that person told them.
I mean, so there are two things to emphasize here about science and the social nature of belief. One is the one that we've just been discussing, which is absolutely right, that scientists do rely on others for most of what they know, even about science. But that's not the only social element of science, because part of the process of science is a process of argument and disagreement and criticism. And so it's not a situation where scientists say things, they publish articles, and then every other scientist just believes whatever they've published. So on the one hand, most of what they learn, they learn from other people, but it's also the case that their job is to criticize what other people say and what other people argue based on the results that they publish. And so part of what makes science a powerful and successful practice for learning about the world is this social element of disagreement and criticism and evaluation of other people's work, even though a big part of science being social in that way means that scientists are getting a lot of their information not directly through their own experience, but from others.
So you would think, hearing that, that every scientist would be inclined to disagree with every other scientist, that being the lone wolf would be a badge of honor. But you talk about something called conformity bias, and this is different than confirmation bias. Conformity bias is a problem in the system. Can you talk about conformity bias and why it is a problem for scientists? And then one of the experiments you mention in the book, about how people reacted to the debate over how big the crowd was at President Trump's inauguration?
Yeah, so there's a lot of experimental evidence showing that people in general like to conform with each other. We don't like to stick out from the pack, usually, and this applies to beliefs too. So, for example, if you're in a group and everybody is espousing one belief, it's really uncomfortable for you to then say something that disagrees with everyone. And as we point out in the book, this is something at work in our everyday social networks. It's also at work in scientific communities. So scientists, like other people, are human, and it's often uncomfortable to disagree with scientific peers. And sometimes this can have an effect, and not always a positive effect, on how science is done. As you mentioned, in the book we talk about a recent case of this in politics, which is that after President Trump was elected, there were all these pictures of his inauguration crowds and various claims that the crowd was much smaller than for the Obama inauguration. That is, in fact, true; it was smaller. But there was a study done where social scientists showed people pictures of the two crowds and asked them which one had more people in it. And they found that Trump supporters were much more likely than Clinton supporters to say that the picture that just clearly had fewer people in it in fact had more people in it. And we argue that this might be a case of conformity, where all the people who are Trump supporters want to stick with their team, and what their team has said they believe is that there were more people at the Trump inauguration. So they're willing to ignore the evidence of their senses to say the thing that goes along with their friends.
But it's also deeper than that, right? Because you have two other examples that I was thinking of while you were talking. The first is an example of lines where one is longer than the other, and people would get it right if they were just asked which was longer. But if they were told that other people agreed that the short line was longer, they would change their answer. And also the story of the idea of washing your hands before attending childbirth. Would you talk about that and show how? Because it's not just, right, I think that our listeners will hear you saying, well, Trump supporters are going to pick the picture that's wrong. But it's not just that Trump supporters are going to pick it. It's that when anyone hears what their cohort believes, they're more inclined to at least say they believe it, if not actually believe it, right?
Yeah, so we draw a connection to this very famous experiment known as the Asch conformity experiment, where basically what the experimenters did was they would have subjects in a room and they'd show them three lines on the left, and then one line on the right, and the line on the right was the same size as one of the lines on the left, so maybe it'd be the same as line A. Then they would have seven confederates, so actors, in the room, and they'd have them go one at a time. And each one would say, oh yeah, it's the same as line B, it's the same as line B; all seven of them would say the same thing. But the line was not the same as B. Now, the last person, who's the subject, has to decide: am I going to say what everybody else said? Or am I going to say what my eyes are obviously telling me? So in this experiment, it's totally clear when you look at it that the line is in fact the same as A, and they found that about 30% of their subjects would just say it's the same as line B. So they would agree with everyone else, even though they could see that wasn't true. And even the ones who wouldn't agree, if you look at the transcripts of these experiments, they'd be saying things like, oh, gosh darn it, I don't know why I always disagree with everyone. So they would be feeling really uncomfortable about saying something different than what the other seven people had said. So this is this tendency to want to conform, or at least say you're conforming, even when you know something isn't true. In the book, we describe this kind of famous historical case where Ignaz Semmelweis was a physician in Vienna. He was put in charge of an obstetrical hospital where a lot of women were dying of childbed fever. He was really concerned about this. And he had this kind of breakthrough when one of his colleagues died of something that looked like the same sort of fever after conducting an autopsy and accidentally cutting himself.
And it got Semmelweis thinking: well, I have all these physicians in training who are performing autopsies and then going and delivering the infants; maybe there's something on their hands. There was no germ theory of disease yet, but he thought maybe there was something he called cadaverous particles, and if they washed their hands, maybe they would wash off these particles and wouldn't transfer whatever it was. So he had them start washing their hands, and the death rate totally plummeted in his clinic. He published his findings. But what happened was that other physicians didn't pick up on the findings. So they, first of all, were really offended at the idea that there was something unclean about their hands. They thought his ideas were weird. But basically, they decided to conform with each other, rather than sticking their necks out to pick up this new practice from this other doctor. And so although there was really good evidence that the practice worked, it didn't spread until much, much later.
You know, I think there are maybe two points to pull out here. One is just to emphasize: when we talk about conformity bias in the context of science, we are not claiming that scientists are just conforming with one another, and therefore, you know, scientific beliefs don't have some strong, evidentially supported status. We're pointing out that, look, conformity bias is something that is well documented in humans. There are case studies historically where it seems to have affected scientific outcomes. You should expect this to be something that is part of science, and something that may make it somewhat more likely, in some cases, for scientists to take longer to, you know, get to the right answer, or maybe end up in the wrong place. That's the first point.
The second point is that there are really good reasons to have a kind of conformity bias. Look, if you walk into a room and everyone is doing one thing, and you're not doing that thing, you probably don't know something that they do know. Right? I mean, you can imagine the physical discomfort you would feel, but in part it would be because, like, what are they trying to do? And why don't I know to do that too? You know, imagine, historically, our ancestors: if there are these various berries around and no one else is eating these berries, it's probably a good idea for you not to eat the berries either, even though you don't have any direct experience of whether or not the berries taste good, whether or not they're poisonous, and so on. And so you can see why there would be cases in which you would want to conform, or where you would want it to be the case that we had a tendency to conform. And now the question is, well, how does that interact with other things that ought to matter to belief, like evidence?
So first off, this is one of the places where modeling is really important, because you can create these pictures and these hypotheticals where you say, well, what happens if this scientist doesn't conform, or what happens if this person is a bit of a gadfly, how is that going to change the possibilities? But also, what you're pointing out is that this conformity bias is both good and bad, right? One of my favorite expressions, and I've probably used it on the show a handful of times, is an old Yiddish proverb that says, when three people tell you you're drunk, go home. Right? There's something really important about listening to what people say. And so, since there's no neutral standpoint, since there's no Archimedean place to indicate those moments when conformity is good and when conformity is bad, the conformity bias becomes a factor in the model, but it doesn't become good or bad in and of itself, right? It just depends on the situation, and therefore it's really complicated.
Well, I think there's something more specific you can say about when it might be a good thing and when it might be a bad thing. So conformity can be helpful in cases where there isn't another way to spread knowledge, or a way for you to get evidence by yourself. So Jim was bringing up this berry case. In that case, what conformity is doing is allowing someone to get information, in a way, from other people without hearing it from them. So if no one's eating the berries, you think there's probably some good reason; you get information from their behavior about what might be a good behavior for you. Where conformity is not as good is a case where there is good evidence and information that you can get. So for example, if you can look at studies about something, if you can perform a study yourself, if you can test the world, if you can look at the lines and actually see which one is longer, then you don't want to be conforming. There's a better way to form your beliefs, and conformity can sometimes lead you the wrong way.
And this leads to an effect that you cite called the Zollman effect, right? What's the Zollman effect, and why is it relevant here?
So that actually is kind of a separate issue, but there are relationships, for sure. So the Zollman effect is something that can happen in this sort of model, where you would think, in general, if you had a bunch of scientists who are gathering good evidence from the world and sharing it with each other, you would want them to be more connected, to get more evidence from each other, to share more. Paradoxically, Kevin Zollman, a philosopher of science, showed in these models that sometimes more connection is actually worse, for the reason that, you know, when evidence is probabilistic, sometimes it's misleading. So, I keep using tobacco as an example, but if you think about studies about cancer, sometimes they show that tobacco smoking doesn't cause cancer, because the evidence is probabilistic. Not everyone who smokes gets cancer. Some people get cancer and never smoke, right? When you have this probabilistic evidence, it can be misleading. And if you have a really connected community, a few pieces of misleading evidence can get everyone to believe the wrong thing. So there's too much social influence in a situation like that. What you might prefer in that situation is to have some people who aren't that influenced by others, who are less connected, who stick with a different belief for a longer period of time, and then can eventually share their evidence back with the community.
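[Transcript note: the kind of model being described can be made concrete with a small simulation. What follows is a minimal sketch in the spirit of Zollman's two-armed bandit models, not the code from the book; the success rates (0.5 for the old action, 0.55 for the new one), the network sizes, and every other parameter are illustrative assumptions.]

```python
import random

def run_trial(edges, n_agents, rounds=100, p_old=0.5, p_new=0.55, pulls=10):
    """One community of agents choosing between an old action (known success
    rate p_old) and a new action (true, unknown rate p_new). Returns the
    fraction of agents who end up favoring the genuinely better new action."""
    # Random initial beta(a, b) credences over the new action's success rate.
    cred = [[random.uniform(0.1, 4.0), random.uniform(0.1, 4.0)]
            for _ in range(n_agents)]
    neighbors = {i: {i} for i in range(n_agents)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    for _ in range(rounds):
        # Agents who currently think the new action beats the old one try it;
        # probabilistic evidence means some runs are misleading.
        results = {}
        for i, (a, b) in enumerate(cred):
            if a / (a + b) > p_old:
                results[i] = sum(random.random() < p_new for _ in range(pulls))
        # Everyone updates on the results of their network neighbors.
        for i in range(n_agents):
            for j in neighbors[i]:
                if j in results:
                    cred[i][0] += results[j]
                    cred[i][1] += pulls - results[j]
    return sum(a / (a + b) > p_old for a, b in cred) / n_agents

random.seed(0)
n = 8
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]  # everyone linked
cycle = [(i, (i + 1) % n) for i in range(n)]                    # sparse ring
f_complete = sum(run_trial(complete, n) for _ in range(50)) / 50
f_cycle = sum(run_trial(cycle, n) for _ in range(50)) / 50
print("complete graph:", f_complete, " cycle:", f_cycle)
```

Comparing a densely connected community to a sparse one over many runs is how the Zollman effect is typically probed: in Zollman's published results, the less connected community is slower but more reliable, because a few misleading early results can't sweep through everyone at once.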
You know, there's a historical case, similar in some ways to the berry case, but a real example. So, for many years after tomatoes were introduced to Europe, they weren't eaten, because people believed that they were poisonous. And in part there was good reason for this, because they were clearly related to other nightshades, and belladonna in particular, which was native to Europe and which was poisonous. But, you know, it's a case where a belief that ultimately was not true (tomatoes are not poisonous) spread widely, in part because of conformity, in part because it seems that wherever tomatoes spread, the network of people who were spreading the tomatoes were also spreading this belief that they were poisonous. And so this ended up meaning that there was a very tightly connected group of people in Europe who had access to tomatoes, all of whom also had this belief that tomatoes were poisonous.
So, I want to shift in just a second from the discussion we're having, where error is accidental and unintentional, to your discussion of propaganda, and how misinformation is intentionally created. But that's getting a little ahead of things. So I want to ask a very basic question that I think a lot of people might be wondering about when we talk about the tendency to conform. And I'll put it in the most colloquial way. Does the tendency to conform mean that people are just, as the internet calls them, sheeple? I think you use the word at least once in the book as well. That people are just going to follow the crowd, and that ultimately what is happening is that our tendency to follow others is stronger than our tendency to stand by our convictions? Is the conclusion of this part that we're just sheeple and we're easily manipulated?
Well, I think we would conclude from the book that we're pretty easily manipulated. But it's clear that conformity isn't the only thing that matters to people. Evidence matters to people too. And people want to have true beliefs about the world in general, because it helps them act in more successful ways. One thing that we point out in the book is that conformity seems to matter a lot less in some cases. In particular, when the belief is really, really important to successful action, people tend to conform less. And when it's not as important, they tend to conform more. So if you think about, like, evolutionary theory: if someone doesn't believe in evolution, there are almost no day-to-day consequences for them. It just doesn't hurt them not to believe that. So that's a case where wanting to conform can be really important in shaping people's behaviors and their stated beliefs.
And if you don't mind me interrupting for just a second, you contrast that in the book with the idea that the doctors who weren't willing to try washing their hands, they weren't at risk, right? It wasn't their kids who were being born. This was a hospital for the poor, and there was no danger to them or their family. So there was no risk for them, and so they didn't try anything new. But if it had been their own kids, or if they had themselves been at risk, that personal risk would have been much more powerful than the conformity. So some of it is just how much skin you have in the game.
Yeah, so something different might have happened if those physicians had been, you know, working on their own families, people they actually cared about, right? And that would be a case where the belief matters a lot to them. We discuss another case in the book, about smallpox variolation. So variolation for smallpox, this is a lot like vaccination, but it just involves kind of an arm scratch and introducing a little smallpox pus into it. This idea was spread in England by an aristocrat named Lady Mary Wortley Montagu. And when she was trying to spread it, a lot of physicians were resistant for the usual conformity reasons, you know. They had never heard of this. They were all gentlemen, and she's this woman trying to get them to believe something; it seems weird. But that practice ultimately did spread. And part of the reason might have been that there was an actual smallpox outbreak in London at the time. And so there were real consequences to people from taking this action or not taking this action. It was really important to make the right choice in that case.
So then let's talk about that, because obviously it touches upon vaccination and the anti-vaccination movement. But it also touches on the way that people can use authorities and famous people and gender to sort of manipulate the process. This leads to propaganda. And one of the things that's really fascinating about the book is that when you talk about propaganda, you don't talk about political propaganda per se. You talk about, for lack of a better term, scientific or industrial propaganda. You talk about the tobacco companies again, and you talk about other organizations that are intentionally manipulating the process to sort of, and you don't use this phrase, but also just to teach the controversy. So how do you intentionally manipulate the scientific process in order to communicate misinformation?
Yeah, I mean, so I want to, you know, emphasize first that what we say applies just as well to political propaganda. So political propaganda often has to do with your political values, your moral values, things that don't necessarily have to do with scientific facts. It's really interesting and striking that even in cases where the facts are what's at stake, you can come up with historically very effective strategies for shaping and manipulating public opinion on matters of fact. And, you know, tobacco is really the best example here, in part because there were a series of lawsuits in the 1990s, in connection with secondhand smoke, where during the discovery process just boxes and boxes of documents from the tobacco industry, going back to the '50s, were made public. And so in fact we have, you know, historical documentary evidence of who knew what, when, and what decisions they made to try to shape public belief about science. And what happened was, beginning in the mid-1950s, there were a series of very powerful, very tangible studies that seemed to make a completely clear and convincing case for a link between tobacco smoke and cancer. These were widely reported in mainstream media, you know, New York Times articles, Reader's Digest articles, things like that. And the tobacco industry realized that this was a huge problem for them. Cigarette sales for the first time in decades started to decline; the public perception that their product was killing people was going to affect sales. And they explicitly adopted a strategy to try to combat that by using science to fight science. And all of this is explored in much more detail than we give in a wonderful book called Merchants of Doubt, by Naomi Oreskes and Erik Conway. What they show is that the tobacco industry did a few different things.
One was to argue, over and over again, that we weren't certain, that there wasn't enough information yet to make decisions about whether or not we should regulate tobacco, whether or not tobacco was really dangerous. And then they came up with a targeted effort to produce science, and share science, that would create the impression that tobacco smoking wasn't as dangerous as other people said.
So let me just interrupt for one second, because this brings back one of the earliest parts of our discussion, which is, we talked about the shift from certainty to probability. And we talked about the fact that certainty was a very, very high standard, and it was perceived as the norm. And if you weren't certain about stuff, maybe something was wrong with you. And so the tobacco companies are using this as a weapon. They're saying, look, knowledge is certainty. We're not certain, therefore we don't know it, so you don't have anything to worry about. And part of what you do in the book is say, this is why certainty is probably not the standard that we want to investigate belief with, because certainty is too high, and it leads people astray.
That's exactly right. There was an inference that you just made there, right? It's true, we don't have certainty. I think it's just right to say: look, certainty is impossible. That is not something we can get about scientific matters of fact. But that doesn't mean we don't have anything to worry about, right? Having a whole lot of evidence, having reason to think that the probability is very high that it's true that tobacco causes cancer, that should be all you need to make the decision not to smoke, or at least to regulate how tobacco is, say, marketed to minors, or under what circumstances it's sold, and things like that.
So, okay, so we have the first move from the industry, which is: okay, only certainty counts, and that doesn't work. And then there's a second move, which is to create or to emphasize science that brings in more doubt and moves it away from this standard of certainty. And you talk about a couple in the book: you talk about biased production, and selective sharing, and industrial selection. Can you talk a little bit about them? Because I think the way that the layperson tends to think that science is corrupted is, you know, from the movies: someone shows up with a briefcase with $500,000 in $10 bills and says, hey buddy, you know, make me a study, right? And they make them a study that says the thing, because the scientist is really a criminal. And of course there are certainly people like that, but that's not the main way, or the sole way, that the scientific process gets usurped. So could you talk about the less dramatic, less Hollywood-esque ways in which the industry can sow doubt?
Yeah. So this was something that, in researching the book, just sort of came up again and again: there are these subtle, surprising, insidious ways that industry influences belief about scientific matters of fact. So one that Jim was getting at we call selective sharing in the book, which is that you go out into the scientific community, you find real data that, because of probabilistic findings, happens to support the thing you want. You find the actual studies where people painted mice with cigarette tar and they didn't happen to get cancer, and you share those really widely. So you haven't even influenced a single scientist. All you've done is shared a subset of real scientific data, but a misleading subset. So that's something that's been done a lot. Another thing we discuss in the book is what two philosophers of science, Justin Bruner and Bennett Holman, call industrial selection. Again here, no individual scientist is influenced to change their practices. But what industry can do is look at the different methods and assumptions being used in the scientific community, pick scientists who are more likely to generate findings that are good for them, and then give money to those scientists. So maybe someone's using a method in their study that's going to be more likely to find that tobacco is safe or that a certain drug is effective, and industry can flood that person with money, and then they generate all these findings, and then all those findings are out there in the literature influencing people. A third thing we describe, which is kind of related to this, is that sometimes industry will fund basically distracting science. So tobacco would fund research on asbestos, and asbestos causes lung cancer; or big sugar in the '80s funded a lot of research into the connection between fat and heart disease. So again, this is real science done by real scientists. They don't have to change their practice.
But now everyone's looking at this other cause, not the one they originally should have been focused on, like tobacco causing cancer or sugar causing heart disease.
In fact, you know, the asbestos case is fascinating, because in early lawsuits it was very important for the tobacco industry to be able to identify other possible environmental causes for cancer. And so having this now much larger body of research on the health effects of asbestos played an important legal role in questioning whether any particular case of cancer was caused by tobacco.
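[Transcript note: the industrial-selection mechanism just described can be sketched in a few lines of code. This is a hypothetical toy model, not Bruner and Holman's actual model; the number of scientists, the bias range, the baseline 20% false-"safe" rate, and the funding amounts are all illustrative assumptions.]

```python
import random

random.seed(1)

# 20 scientists, each with a methodological "bias" that raises the chance
# their study wrongly finds "product safe" (the true answer here: unsafe).
# An unbiased method would find "safe" 20% of the time from noise alone.
biases = [random.uniform(0.0, 0.3) for _ in range(20)]

def safe_findings(bias, n_studies):
    # Number of studies, out of n_studies, that come out "safe".
    return sum(random.random() < 0.2 + bias for _ in range(n_studies))

# Without industry money, everyone runs 5 studies.
base_safe = sum(safe_findings(b, 5) for b in biases)
base_rate = base_safe / (5 * len(biases))

# Industrial selection: fund the 5 scientists whose methods happen to be
# most favorable, so they run 25 studies each. Nobody changes their own
# practice; the literature's mix of findings shifts anyway.
funded = set(sorted(range(len(biases)), key=lambda i: biases[i])[-5:])
sel_safe = sum(safe_findings(b, 25 if i in funded else 5)
               for i, b in enumerate(biases))
sel_total = sum(25 if i in funded else 5 for i in range(len(biases)))
sel_rate = sel_safe / sel_total

print("share of 'safe' findings:", base_rate, "->", sel_rate)
```

The point of the sketch is that no individual study is fraudulent: the published share of "safe" findings can be tilted purely by who gets the money to publish more.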
So the integrity of the process is such that when you have a probabilistic disease, right, some people get cancer, some people don't, you can always find the examples of the people who don't, and you can exploit it. And so, again, science and even the scientists involved think that they are virtuous and following the method, but there are cracks. And that leads, in some instances, to an ironic conclusion. And I wonder if you'd talk about this: there are some instances, and this is not a general rule, but there are some instances when many small studies are worse than fewer large studies, right? Can you explain why that is? Because again, I think that the average person, when they think about science, will think, well, if this one study had 50,000 people and found this conclusion, and these ten studies had 5,000 people each and found those conclusions, well, surely the ten studies are better, because more is better than one. But that's not always the case, right?
Yeah, that's right. So when you have probabilistic evidence, it tends to be the case that the smaller your sample size, the greater the chance that it's misleading. That's why scientists often emphasize getting really big samples if you want to be sure about some effect. So what we point out is that when you have a bunch of people doing these low-powered studies, ones with small sample sizes, a higher number of them are going to be misleading, so they're going to point in the wrong direction, show that maybe tobacco smoking looks safe. And once those are out there, they become weapons in the hands of industrial propagandists, who can use them to mislead the public.
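[Transcript note: why small samples mislead more often can be checked with an exact binomial calculation. The true effect rate of 0.6 below is an illustrative assumption, not a real tobacco statistic.]

```python
from math import comb

def prob_misleading(n, p=0.6):
    """Chance that a study of n subjects sees the effect in at most half of
    them, i.e. points away from a true effect rate of p (exact binomial)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1))

small = prob_misleading(10)     # a low-powered, 10-subject study
large = prob_misleading(1000)   # a 1,000-subject study
print(small, large)
```

With these numbers, roughly a third of 10-subject studies point the wrong way, while a 1,000-subject study essentially never does, which is exactly why a pile of small studies hands propagandists more misleading results to share.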
And this means that, okay, let me ask the question that was in the back of my head. Someone may hear what you say and interpret it this way: well, you just said all of these small studies show that these mice didn't get cancer, and yet you insisted in the same sentence that tobacco isn't safe. Isn't that a contradiction in terms? So how would you respond to someone who hears what you just said and says, well, look, she's just being biased, she already has decided that tobacco isn't safe, and so she's dismissing these other studies that say otherwise, and it's really just smoke and mirrors, and we really don't know?
I think that the crucial point is that
it's the total body of evidence that should ultimately be what influences your belief, and that's what's going on in the examples being discussed right now. So suppose you have these 10 studies, each with 5,000 participants, and say two of them suggest that, you know, tobacco is safe, and eight of them suggest that it's not safe. When you take them all together, that looks like a pretty strong case that tobacco is not safe. But if you only share the two that show that it is safe, then you're going to get a pretty misleading impression about what the science has shown. Now, I think that there's a background problem here about how many people think about science, certainly how many journalists report about science, where individual studies are taken to establish something once and for all. Articles are written in newspapers where a single study is the story. And so, you know, a great example just from this year: there was a big study that was published in The Lancet, a major medical journal, that many journalists reported as showing that no amount of alcohol is safe, that any amount of alcohol you drink is going to have negative
health effects. I remember that one.
Yeah. There was another study that was also widely reported that basically concluded, from a group of nonagenarians, right, so people who are over 90, that drinking one glass of alcohol a day can increase your longevity. Now, you can point to lots of stories over the last year that would write about one of these or the other. That's just not the right way of thinking about science. Neither of them has established something once and for all. What you need to think about is the quality of the study, what question is being asked, how this fits into the context of other research that's been done. And that's not the way that we are accustomed to thinking of science, which is, again, a way in which industrial groups who would like to get you to believe something can sort of capitalize on it, right? You know, they just show you the one study that seems to show what they want you to believe, without giving you this broader context.
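[Transcript note: the gap between the total body of evidence and a selectively shared slice of it can be made vivid with a toy Bayesian calculation. The likelihoods below, that a study comes out "harmful" 80% of the time if tobacco really is harmful and 20% of the time if it is safe, are illustrative assumptions, not figures from the book.]

```python
def posterior_harmful(pos, neg, prior=0.5, p_pos_harm=0.8, p_pos_safe=0.2):
    """Posterior probability that tobacco is harmful after seeing `pos`
    studies that found harm and `neg` that did not, assuming each study
    independently finds harm with probability p_pos_harm if tobacco is
    harmful and p_pos_safe if it is safe."""
    like_harm = p_pos_harm**pos * (1 - p_pos_harm)**neg
    like_safe = p_pos_safe**pos * (1 - p_pos_safe)**neg
    return like_harm * prior / (like_harm * prior + like_safe * (1 - prior))

total = posterior_harmful(8, 2)   # the full body: 8 of 10 studies found harm
shared = posterior_harmful(0, 2)  # only the 2 "no harm" studies get shared
print(total, shared)
```

On the same underlying science, the full evidence base pushes a rational updater to near certainty of harm, while the selectively shared pair pushes the same updater toward thinking tobacco is probably safe. Nothing shared was fake; the subset was just misleading.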
So, okay, so we're asking the question, how does misinformation spread? And we've gone through a variety of options. We talked about the ways in which people naturally make errors, and this includes the tendency to conform, the tendency to be resistant to new ideas, the tendency not to have skin in the game. We've also talked about the manipulations by industry, to selectively either fund or advertise different studies, or use smoke and mirrors to distract. And this is a lot of information to balance out. And this is why you need a model, right? Because you can draw all of this and see how it interacts. And then you can say something really important about the sociology. And I want to read, actually, and I don't do this very often, but I want to read a passage from the book, on page 152, because I think it underscores, for lack of a better term, the scientific nature of what we're talking about. You say: emotion plays no role in our models. Neither does intelligence nor political ideology. We have very simple, highly idealized agents trying to learn about their worlds using mostly rational methods, and they often fail. Moreover, they can be readily manipulated to fail, simply by an agent taking advantage of the same social mechanisms that in other contexts help them to succeed. So at no point have you said the other side is stupid or evil, although I suppose some of the corporate agents are evil, right? And it's not a political ideology; you don't appeal to the left or the right. What you're doing is showing how the mechanisms of information transfer sometimes work well and sometimes don't work well, because in a certain sense the model is neutral on whether a belief is true or not. In order to make that judgment, you have to make the human judgment, but the human judgment isn't the model. The model is drawing, or mapping, the human judgment.
Is that right? You know, I want to say, like, 75% of the way through what you were saying, I was ready to just, you know, say amen. This is exactly it, right?
I'm okay with that. That's C's get
degrees, as the kids say, right? So the thing that we're really trying to get at here is, look, think about the example that we've already discussed, the picture we had for how scientists could be influenced by industry, right? It's a very straightforward thing: you give them money, they produce a study showing what you want it to show. And what we try to argue in the book, using these models, is that sure, that can happen, but there are way more subtle ways in which this can happen, such that if you don't know about them, you're not going to solve the problem simply by getting rid of, you know, the explicit fraud or the bribery-type cases. And, you know, the thing about our idea that people who disagree with us are ignorant or stupid or somehow have a moral failing: you know, probably some of them do have moral failings, probably some of them are ignorant. But that's not the whole story. And what the models allow us to do is pull out, like, isolate the intelligence of people, isolate, you know, moral stuff, isolate emotional stuff, say, let's not think about any of that, let's just focus on this very specific social aspect of how knowledge is spread, or how conformity works, or how trust works. And ask: is that enough to get these bad outcomes? And the answer is, in a wide range of cases, yes, just that is enough to get these bad outcomes.
There's something really interesting here about this modeling strategy. So a lot of philosophers of science have asked these questions, like, well, how could a simplified model tell us about real human societies, or anything in the world, given that it doesn't include all these important factors in the world? But other people observe that sometimes the fact that the models don't include other factors is really important to the way that they can tell you something. So the fact that in our models the actors aren't stupid, they're really good at changing their beliefs on the basis of evidence, they don't have a lot of the psychological biases that we do, the fact that we cut those things out allows us to say, well, just this kind of conformity effect, or just this tricky information sharing by industry, can really lead people astray.
So I'm going to ask two questions in the future. I'm going to tell you what they are right now, and then I'll tell you why I'm telling you before I do. I'm going to ask you: what about Facebook? And I'm going to ask you: does this mean that democracy is failing and we're all screwed? But in preparation for those questions, I want to say something really complimentary about the book. You do something I think that's magical, that I'm sure you did intentionally, although I don't know that you would have used this word, which is: you have these chapters, these large, accessible but complex and sophisticated chapters on science and on modeling. And then you get to the question of social networks in the last chapter. And you do such a good job of describing how misinformation works in science that you almost don't need the chapter on Facebook, because the moment you said in the book, okay, well, let's talk about social networks now, I thought, oh, you don't have to. You've been talking about social networks the entire time. I know everything that you're going to say. You almost don't need that last chapter, right? What I want to communicate to the listeners is that even though we've been talking about science and scientists, we are talking about social networks, because they work the same way. They're just faster and more all-encompassing. So can you summarize for us how the scientific model predicts how Facebook goes wrong?
Yeah, well, thank you for that. That's really flattering.
Yeah. So this was part of why we thought that writing this book would make so much sense now, because these models of scientists do tell us about other kinds of information spread on online social networks. Of course, there are ways in which social networks don't totally map to science. So for example, one thing that's different is that, you know, social effects probably matter even more than they do in scientific networks. Another thing that's different is that often the individuals on social networks aren't doing experiments the way scientists are; they aren't getting sort of direct evidence from the world. So they're a little more divorced from that kind of immediate input coming from the world. But we think that a lot of the lessons that we develop in the scientific context do transfer really nicely to understanding why the advent of social media, and the fact that now we have this social media connection as part of our belief-forming mechanism, has in some ways led people wrong. It's an opportunity for conformity to matter a lot, and it's an opportunity for people to choose who they're interacting with and who they're conforming with. So for example, whereas before, you know, if you maybe had doubts about vaccines, you'd only be talking to the people, say, in your family or in your local neighborhood, or the grocer, about that, and those people might push back. Now you can go online and create or join a community of people who are vaccine skeptics and conform with them instead, for example.
You know, we've also been talking a lot about propaganda and misinformation, and the mechanisms that we've focused on so far in that discussion, the historical cases that we've focused on so far in the discussion, have been sort of pre-Facebook, right, pre social media. And so those mechanisms, of course, still apply in the context of social media, but there are ways of employing them, and strategies that you can use to spread information, that make them even more sophisticated. And so, you know, I like to think of this sort of problem of how misinformation is spread on social media, how industrial or political organizations can manipulate public belief using social media, as the next step in what you might think of as an arms race. And I'm borrowing this metaphor from a philosopher of science named Bennett Holman at Yonsei University. So the idea is that there's a constantly evolving set of technologies that allow new strategies, or, you know, new evolutions of old strategies, to be employed successfully, outpacing the sorts of responses that we've maybe, you know, come to have that protect us from certain kinds of misinformation. Right? Think about old tobacco ads, right? You know, nine out of ten doctors smoked Chesterfields. No one in contemporary American society is going to look at one of those ads today and say, oh, well, actually, probably Chesterfields are pretty good for you because doctors are smoking them. But, you know, the 21st-century version of that is misleading memes on Facebook, which again package the way information is being shared and, you know, invoke figures you trust, and use the same sorts of ideas, but in a new and more sophisticated way. And so much of what we want to do in this last chapter is ask, what does the contemporary version of these problems that we've studied historically and using models look like?
So that leads us to the final question. You give a lot of answers in the last section as to recommendations, but we don't have time for that, so I just want to ask the very basic question. If the scientific community can be manipulated in this way, sometimes for good, sometimes not, if people on Facebook can be so sophisticatedly manipulated and so siloed and so polarized, does that mean that democracy is obsolete? Can we no longer govern based on the idea of the rationality and the knowledgeability of the masses? And can we no longer count on the irrationality and the misinformation to cancel each other out?
So let me be clear, we do not think democracy is obsolete. But we do think that there's a real problem for our democracy, which is that we often vote as if we're voting about matters of fact. So we vote for a candidate who says, I don't believe in climate change, and then we all act, you know, we legislate as if climate change isn't happening. And, you know, our beliefs and the way we legislate don't change the fact that it is, right? So that's a problem, especially when, as we've outlined in the book, there are all these reasons that people become misinformed and that we're vulnerable to those who are trying to influence us, often not in our own best interest. So we don't want to get rid of democracy. But we think that it might be good for people to consider: what are ways that we could have democracies that are more protected from these kinds of negative influence and from false beliefs of the public? For example, are there ways that we can aggregate the values people have, you know, I value personal freedom, or I value protection from harm? Can we aggregate those values, but then use the best scientific information to shape legislation and policy to carry out the values that people would like to have in their democracy?
So, you know, we think of democracy as a kind of ideal, where government is representative of the people. You know, it has to do with where authority comes from and so on. But we also think that there are many institutions that you can create to implement democracy. And you can just see, state to state in the US, there are different ways of voting on things, there are different ways of implementing democracy. And of course this is also true from, you know, the US to the United Kingdom to France; other Western democracies do it differently. And so our idea is that, given this set of problems, what we need to do is think hard about which institutions are working and which institutions are failing, given the current sort of informational environment that we're living in.
So, for example, an institution like independent fact checkers is working very, very well, but maybe a referendum on the large scale, where things are very abstract, doesn't work so well. Relying on the masses to make decisions about emissions on the national level doesn't seem to work well, but people do tend to have better beliefs, or more accurate beliefs, when they're talking about emissions right by their houses. Right, so these are some of the things you talked about. And I said that was the last question, but I'm always a liar, so I have one more. And I'll say off the bat that this is a totally unfair question, and you are completely within your rights not to answer it. And neither of you are political philosophers or political scientists, so it's even more unfair. But given what you're saying, can't one interpret you as making an implicit argument for the Electoral College? And what I mean by that is, are you saying that because we have to aggregate these values, the one-person-one-vote popular vote model is so susceptible to manipulation that we need some sort of professional representative to stand in or mediate the gullibility of the individual, you know? And in fact,
Not for the Electoral...
Look, this is the argument that Alexander Hamilton did give for the Electoral College, right? I mean, it's not just an argument that one might give; it's the reason we've got the Electoral College, right? Which is why I asked.
Right. So, but it doesn't actually end up doing that. I mean, no one in the Electoral College, or it's extremely rare that they don't, votes against whoever got the popular vote in their state. So it's not actually carrying that out.
Well, no, this situation is just one where there is an institution that was designed for a certain purpose, and the practical implementation of that institution doesn't really align with how it was intended to work, because of basic contradictions in American democracy and the way in which we think about how votes should work and what it means to represent votes. Right? So the Electoral College in principle could play a certain role, but it doesn't play that role; it plays another role instead, right, a role of changing the weightings of voters' votes in different states.
I mean, but also just our system of representative democracy, instead of direct democracy, is something that is meant for a similar purpose, right? We don't have everyone in our country vote on every policy decision. The idea is that we're supposed to have leaders who are better at making decisions and better at forming beliefs than the masses are, and they're supposed to be the ones who are able to then shape the desires of the masses into good policy and legislation.
And it doesn't work. It doesn't work in part because the Electoral College is not made up of people who are more rational or more informed; they're just electors that were elected. And then there are all the problems with manipulation on the political side, which we can't get into because we're out of time, and it's not your field anyway, and it was an unfair question. I asked it simply because I think it's worth thinking about what you mean by reconceiving democracy and looking at the various institutions. But
Well, you know, as philosophers, we're always very happy to talk at length about things we know very little about.
I've made a career out of it. This is our 10th year of the radio show; if I had to know what I was talking about, I'd still be on the second episode. Cailin, Jim, thank you so much for joining us on Why. Thank you so much for sharing this novel but incredibly important approach to what's going on in the world. Thanks, Jack.
Yeah, thanks for having us.
You've been listening to Jack Russell Weinstein on Why, philosophical discussions about everyday life. My guests have been Cailin O'Connor and James Owen Weatherall, and I will be back with some more thoughts right after this.
Visit IPPL's blog, PQED: Philosophical Questions Every Day, for more philosophical discussions of everyday life. Comment on the entries and share your points of view with an ever-growing community of professional and amateur philosophers. You can access the blog, and view more information on our schedule, our broadcasts, and the Why Radio store, at www.philosophyandpubliclife.org.
You're back with Why, philosophical discussions about everyday life. I'm your host, Jack Russell Weinstein. We were talking about the spread of misinformation. So why does Facebook spread so many lies? Why does Facebook condemn us to these silos that just reinforce our worst tendencies and our worst beliefs? Well, it's because we like to conform to other people. It's because we want to be liked. It's because we are selective in what we share. It's because we're easily distracted. It's because we trust authorities, and if we think they're good at one thing, we listen to them on something else.

Now, a lot of this isn't new. We probably could have articulated this before. But what is new is that all of these things are not unique to Facebook. They exist in the scientific pursuit of knowledge. Facebook may make it faster, may make it more efficient, may make it more easily manipulable, but it's still representative of the tensions in the process of the transfer of knowledge, the process of knowing what to believe and not to believe. Scientists are human beings, and the human problems transfer to science. Scientists exist in a complicated world of reading studies, trying to get funding, trying to get attention and promotion. We have lots of studies that don't offer us clear conclusions. Knowledge is probabilistic. All of these things that we saw on Facebook exist on the scientific level.

Why is this important? It's important because it helps us reconceive of why we disagree. Sometimes people are dumb; I think that's just a fact of life. Sometimes people are closed-minded; that's just a fact of life. But more often than not, people are just susceptible to manipulation. And they mean well, and they have the right intentions; they just don't have the right information. How do we fix that? Well, the first way we fix it is to map it out, to draw a picture, to create a model of how it works, so we can create a hypothetical.
And that's what our guests did in their book. I can't summarize the model here; we did as best as we could on the show, and I suggest you read the book. The most important thing is to understand that misinformation is a problem at all levels. But that doesn't mean that science is corrupt, and it doesn't mean that science is unreliable. It only means that science is human, and we have to do as good a job as we can to perfect science and to be aware of its shortcomings, just as we have to do as good a job as we can to protect ourselves and to overcome our own shortcomings. You've been listening to Jack Russell Weinstein on Why, philosophical discussions about everyday life. Thank you for listening. As always, it's an honor to be with you.
Why is funded by the Institute for Philosophy and Public Life, Prairie Public Broadcasting, and the University of North Dakota's College of Arts and Sciences and Division of Research and Economic Development. Skip Wood is our studio engineer. The music is written and performed by Mark Weinstein and can be found on his album Lua e Sol. For more of his music, visit jazzfluteweinstein.com or myspace.com/markweinstein. Philosophy is everywhere you make it, and we hope we've inspired you with our discussion today. Remember, as we say at the Institute, there is no ivory tower.