"What does Science Policy have to do with Democracy?" Why? Radio Episode with Guest Heather Douglas
6:21PM Sep 29, 2020
Jack Russell Weinstein
Disclaimer: This transcript has been autogenerated and may contain errors, do not cite without verifying accuracy. To do so, click on the first word of the section you wish to cite and listen to the audio while reading the text. If you find errors, please email us at email@example.com. Please include the episode name and time stamp where the error is found. Thank you.
Why? Philosophical Discussions About Everyday Life is produced by the Institute for Philosophy and Public Life, a division of the University of North Dakota's College of Arts and Sciences. Visit us online at whyradioshow.org.
Hi, I'm Jack Russell Weinstein, host of Why? Philosophical Discussions About Everyday Life. On today's episode, we're asking Heather Douglas the question: what does science policy have to do with democracy?
When my father thinks he's getting a cold, he takes a dietary supplement called Airborne. Airborne is a fizzy pill containing vitamins, minerals, and herbs, and it claims to have been created by a schoolteacher. Since teachers have to deal with little kids who never cover their mouths when they sneeze, the theory goes, they're best situated to lead the battle against germs. Airborne sells very well. What is particularly odd about this is that otherwise, Americans don't seem to trust teachers at all. Teachers are blamed when their students don't learn, we are constantly told that parents, not professionals, are their own kids' best teachers, and elections are won and lost on endless criticisms of so-called failing school systems. How is it that we do not trust teachers to teach, but do trust them to do the specialized chemistry and biology that is required to cure the common cold, the notorious illness no one else has ever been able to crack? There's overwhelming evidence that Airborne doesn't work. The company that makes it has been fined by the FTC and settled a $24 million lawsuit. But all of that data comes from scientists, not teachers, the same people who are dismissed despite their virtually unanimous agreement on the effectiveness of vaccinations and the reality of human-caused climate change. Airborne's appeal to a questionable authority, and consumers' rejection of reliable data, is very odd, but not at all surprising. The fact of the matter is that people want to be optimistic. They take Airborne because curing a cold feels small and manageable, and they do eventually get better. But they reject climate change because it's just too large to do anything about. Airborne celebrates what we can do, while climate change emphasizes what we can't. Our beliefs represent our values, not just the evidence before us. But we live in a democracy, and all our beliefs affect everyone else. Those who take Airborne have the same right to vote as those who work in laboratories. Does this mean that science is doomed and that our collective conclusions are destined to be inadequate? It certainly feels that way right now. Now, the predictable path of this discussion would be for me to argue that scientists are objective in a way that the average person is not; it would involve calling for better science education and demanding a more informed citizenry. But I don't want to do that. It would just be another instance of blaming the teacher. And frankly, such a claim just isn't philosophically interesting. Instead, I'd like to ask what it means to be objective in the first place. And I want to do so while considering the values that permeate scientific research. Scientists are people too; putting on a lab coat doesn't make someone infallible. But at the same time, I don't want to support the doubt that fuels anti-science backlash, the stuff I already condemned. We should all recognize that whatever it means to be objective is achieved because of the very values that science embraces. Objectivity is itself a value, and it's in tension with effectiveness, elegance, novelty, plausibility, and other criteria scientists can aim for. The context of any experiment has as much to say about its success as the conviction that science is above politics and free from the burdens and biases of everyday life. Science and democracy feed off of one another.
To give a brief example, some philosophers of science have argued that women bring different values to research than men do, and that the history of science is rife with sexist conclusions. Others claim that science is racist, that what it considers good evidence, cogent argument, and even meaningful results are bound up in that same racial baggage that we all struggle with. If we did not live in a democracy, if we did not support diversity, these accusations might not matter. But we do, and we want our science to be just as well as truthful. It works the other way around, too: science helps inform democracy. Policies can only be inclusive when they recognize the best knowledge that we have: that proper nutrition has a profound effect on our ability to think, for example, or that certain diseases are tied to poverty. The studies, evidence, and conclusions we prioritize are our choice, not a foregone conclusion. On today's episode, we're going to look at the relationship between science and democracy, and ask about the values and policies that affect research. We're going to consider what any citizen should know about science and examine how flexible scientists ought to be in order to respond to political realities. We're going to look at the scientific method itself and investigate just how deeply democratic values inform it. Yes, objectivity is important. It tells us that Airborne does not prevent or cure anything, and people should know this. But Airborne's widespread use suggests that science is missing something very important about freedom of choice. Its users are not simply gullible dupes. Instead, maybe Airborne illustrates what people have actually learned from science: that nutrition has significant health effects, and that proactive behavior leads to long-term wellness. This isn't the complete story, of course, but completeness is only one scientific value among many. Accessibility is another; so is understandability. Maybe people choose Airborne simply because it's there. Maybe it's scientific enough when the headachy sneezes start. And maybe when we are sick, we simply value doing something more than doing nothing. When we feel sick, truth just doesn't feel that important.
And now our guest. Heather Douglas is the Waterloo Chair in Science and Society at the University of Waterloo. She's the author of the book Science, Policy, and the Value-Free Ideal. Heather, thanks for joining us on Why.
Thank you for having me, Jack.
We've pre-recorded the show, so we won't be taking any questions. But if you'd like to send your comments, tweet us at whyradioshow, post at firstname.lastname@example.org slash why radio show, or visit our live chat room at whyradioshow.org. So, Heather, I've been taught that science is objective, that its goal is to pursue truth, that it reveals nature, and that once you get a conclusion, it is above and beyond the politics and the biases and the details. Yet I just suggested that's not true. Was I taught too simplistically? Did I miss a day in school? What's going on? Why wouldn't science be objective?
Okay, so it all depends on what you mean by objective. I do think that science aims at the truth when properly done, but that doesn't mean it gives us the truth. So one of the things that studying the history of science makes really clear to us is that scientists get it wrong, and they get it wrong in really interesting ways. When I was growing up, I was taught that you cannot inherit acquired characteristics; it was impossible. That was Lamarckian, Lamarckianism, and no organism inherited acquired characteristics. And then during my lifetime, scientists discovered epigenetics, and discovered that the environment can change methylation patterns of DNA, that changes how the DNA functions, and those changes are heritable. So something that was a foundational, bedrock truth of science when I was growing up is no longer.
What's an acquired characteristic, so that people know what we're talking about?
So in some of the studies, there are things like predilections towards certain kinds of stress levels in rodents, for example. Those are characteristics that can be inherited in rats from one generation to another, but can also change in response to the environment. And those changes can be inherited by the offspring of the parents, regardless of whether the environmental conditions have continued.
So if my father is a really stressed-out person, even if I don't live in New York City with him, there's a chance that I will be stressed because he's stressed, because I inherited it biologically or genetically?
Yes. I don't know if it holds for humans, but it does hold for rodents. You know, running those kinds of controlled conditions in experiments on humans is not so easy, and so we still haven't worked out exactly what it means for us. I think scientists are still trying to unpack the complexity of epigenetics, because we're still trying to unpack the complexity of genetics, much less epigenetics. But it does show how something that can be held as absolutely firm can be overturned by further study and investigation. You know, if you go back 100 years, scientists were absolutely sure the continents didn't move. And now we know otherwise: there's continental drift theory, and that drives plate tectonics, a foundational theory of geology. But 100 years ago, it would have been considered ridiculous. So science changes. And I think that's a really important thing for everyone to understand about science: science doesn't give you fixed, permanent truths. It gives you the best theory at the time. But the theory can always be overturned and challenged by future evidence. So if you're taught science in school as if it's a permanent body that will never change, and no evidence can challenge these views, that's incorrect. And you have to have citizens who have that kind of open view of science.
But doesn't this breed distrust? I mean, we hear in the media: well, bacon is bad for you, bacon is good for you, eggs are bad for you, eggs are good for you, chocolate is going to solve all your problems, don't eat chocolate, right? I mean, doesn't this changing of conclusions make people skeptical of whatever the latest results are?
Against the backdrop of thinking that science should never change, it does. But thinking that science should never change is a huge mistake. Historically, we know that's not true. In my lifetime, science has changed, and I expect it to continue to change. I don't expect, you know, the sort of extremes of Lamarckianism to prove to be true, but epigenetics does change our understanding of how evolution might work, and that's super exciting. So it's only against a backdrop of thinking that whenever an expert changes their mind, they must have done something horribly wrong, and so they're not worth trusting, that you end up with this sort of distrust coming from changing conclusions. I think if the public understood that science was always open to new evidence, that it was always potentially challengeable by evidence, and that when experts change their mind it's very often because new evidence has emerged, then that would be an inconvenience, but not something that would cause distrust. It shouldn't, anyway.
I think most Americans, I think most people, frankly, are Newtonian, although they wouldn't call themselves that, in that they think science's job is to reveal nature, that part of what science does is push away the clouds, put things into focus, whatever metaphor we want to use, and that when all is said and done, we have the math, we have the diagrams, we have the predictability. And that the goal of science, and the way we should think about a conclusion in science, is whether or not it corresponds to reality, whether or not it reveals nature. Is that still a helpful way of looking at it? Or does that complicate things and bring up red flags?
Well, I mean, it does reveal aspects of nature; I think that is part of what you do when you do science. But the idea that somehow the revealing, once it happens, is done, as if now we have the whole picture, as if what we think now corresponds to nature in a perfect way. Frankly, we can never actually even assess whether our current theories correspond to nature, because we can't access nature without going through our theories and our methodologies. If we could just hold up nature to our theories and compare the two, why would we need our theories? We'd just have nature. The whole point of developing theories is that we don't have that direct access to the Galilean book of nature. So in some ways, I think it's a very misleading understanding to talk about correspondence with nature, as if we just have this perfect image of nature. But I think the attempt to continually figure things out about nature, the drive to understand and to get better predictability and deeper explanation of nature, is exactly what science is about. It's just that I don't think we can think we're done.
So this criticism of correspondence theory goes way back to Aristotle, in this thing called the Third Man argument, but I don't really want to spend time on that. Instead: a few years ago, I think in our third series, we had a discussion about constructivism, this notion that science is incredibly social, that theories are bound up in social ideas. Is this the same thing? Is this a criticism that comes from the layered nature of scientific theory and how it's ultimately social?
So I don't take a stance either on realism or anti-realism, or constructivism or anti-constructivism, because those aren't questions that particularly interest me, partly because they don't particularly interest policymakers. Policymakers and people who work in government don't sit around and go, gee, is science constructed, or is the realist position right? Those are just not questions that people tend to ask. And in terms of the philosophical debate on constructivism, I think constructivism shows how very often the social context influences science. Science also needs its own internal social context; that's part of what makes the knowledge claims robust, because you have a scientific community that provides criticism to each other and encourages that kind of criticism. So that's really important for the development of reliable knowledge. I'm interested in just thinking about science as producing reliable knowledge. It doesn't mean it's necessarily true or that we're done, but it's the best available knowledge we have right now about the empirical world. And so if we're making decisions about what to do in our world right now, we really can't do better than to look to the science first and foremost, to figure out what the scientists are saying. It doesn't necessarily determine what we do or determine policy; we still might take an Airborne if we have nothing else that we can do. But turning to science first, because it's the most reliable source of knowledge we have, that's what I'm interested in, and how to do that better, and how to think about that moment when you utilize scientific knowledge to help make decisions in democratic societies.
I think this word reliable is a great word, because it is easy to contrast with the notion of objective. So talk about what you mean by reliable, and whether or not reliable is on this list of values. One of the things your work does is articulate what values mean in society and these competing values that people can aim for that help us (and we'll talk about this later in the conversation) negotiate the values of society and the values of democracy. So is reliability a value? And if so, what does it mean? And how do we contrast it with these other values, such as objectivity?
So that's a great question. Let me talk a little bit about objectivity first. In my own work on objectivity, I have argued that when someone says something is objective, what they mean is: I trust this, I endorse this, and you should too, for the same kinds of reasons that I do. And it turns out that there are a number of bases, eight or nine at least, depending on whose work you're looking at, for making those kinds of endorsement claims. So objectivity is like a shared basis of trust. Reliability is more about functionality, and different kinds of reliability might show up as a basis for making the claim that something's objective. So for example, if I use this knowledge, will it enable me to act more effectively in the world? That's essentially what reliability is about. And science has been tremendous in producing reliable knowledge. The fact that I'm talking to you right now, reliably, over however many hundreds of miles, using the kinds of technology we have in front of us, is an amazing feat. This would have been unheard of 200 years ago. So that kind of reliability, where we can almost black-box it in some cases, deliberately use it and see whether or not something in fact pans out, that's the kind of thing that I'm interested in. And sometimes that rests on a basis of objectivity, and sometimes it's just, you know, sort of functional predictive success.
So this leads to another question, which longtime listeners will know I often frame with the turtles-all-the-way-down example. This is the turtles-all-the-way-down question: reliability means effective. Is effectiveness a value? Does science determine what effective means? Is that a personal decision? Is that a government decision? If reliability is a form of effectiveness, what determines effectiveness?
You have to ask, effective for what? Right? So effectiveness isn't a blanket condition; it depends upon the particular context in which you want to use the knowledge. To use your opening example again, Airborne is probably effective at making someone feel like they're doing something to fight a cold, but it's not effective at actually shortening or preventing colds, given the evidence that you talked about at the beginning. So, effective for what? It probably has a great placebo effect. We know that placebos are somewhat effective; they're not the kind of thing you want to prescribe in cases of serious illness, necessarily, but they are effective, in certain cases and certain contexts, at making people feel better. And so, you know, great, Airborne has produced a wonderful placebo that people like to rely upon, and it probably makes some people feel better, because that's what placebos do; there is the placebo effect. So when we're talking about effective, we have to actually talk about particular instances. And I think the thing that gets people riled up about science in democratic contexts is that sometimes, when people want to use science to do something, they don't want it to actually accomplish a goal, like, say, making someone feel better when they have a cold. What they want to do is use it to win an argument. They want it to be effective in an argument. And when that's the goal of effectiveness, that's when it's really easy to undermine the integrity of the science and the process, because then you're much more likely to, say, cherry-pick data, or cherry-pick conclusions, or cherry-pick studies, or ignore evidence that goes against what you want to say, or pick your experts because you know they'll say the thing you want them to say. And then you're not looking at the body of evidence in a way that a scientist would find remotely acceptable.
You mentioned the word integrity, and I had just written that down in my notes. So I wanted to ask you, in this context, when things are so contextual and things are so fluid, what does integrity mean? There are people who are going to resist the idea of justifying Airborne with the placebo effect and just say, you know what, fine, but that's not what it claims to do, and we shouldn't support that. And then of course there are all sorts of moral arguments, which we'll get into later. What is integrity in the sciences, especially when the values that we can choose from have such different goals and such different outcomes?
So this is actually something that I have written a couple of papers about. Let me talk about it first from just an individual perspective, because what does it mean, for example, for an individual scientist, or an individual who's using science, to have integrity with respect to the science? I think that first and foremost, it means having an underlying respect for inquiry. So you're not going to try to make the study say just what you want it to say, or have just the outcome you want it to have. That's the core idea of integrity. And in more detail, it also means keeping values in the right sorts of roles, in the right places, in science.
What does that mean, keeping values in the right places, in the right roles?
Philosophy of science for the last 20 years has had a really good discussion on values and science. And the things that we've come up with are, first of all: since the 1940s, or even earlier, it's been very clear that values are part of what philosophers of science call the context of discovery. Depending on what you decide to study, and how you decide to think about it or look at it at the beginning stages of your research, when you're first thinking about the phenomena you're interested in, values shape that, and that's very uncontroversial among philosophers. So why do scientists look at the phenomena they do? It's because they're interested in that phenomena, or because society is interested in that phenomena. Why do we fund so much disease research? Because society is interested in grappling with disease. But then when you get deeper into the scientific process, to what Reichenbach used to call the context of justification, the thing that's come out in the last 20 years is that that realm of science is not value-free either: you can't decide how much evidence is sufficient, or when your evidence is strong enough, without considering value judgments. What I've done in my work is to say that what you see in the context of discovery versus the context of justification are two different kinds of roles for values. In the context of discovery, you might have a direct role for values; the values actually shape what you want to do. They tell you, I want to study whales because I'm really interested in the majesty of whales and I want to keep them from going extinct, or I want to study disease because I really care about curing this particular disease, because maybe I have a relative who suffers from it. But once you're in the practice of doing science and you're evaluating evidence, you shouldn't say, oh, my values say that I really want the evidence to have this sort of implication, so I'm just going to say that it does; that would be a terrible violation of scientific integrity. Instead, what you have to say is, well, is the evidence strong enough to make this particular claim? And how do you decide when the evidence is strong enough? The way you have to do it is to think about the consequences of error, whether you're going to risk a false positive or a false negative. And that involves social and ethical values.
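To make that trade-off concrete, here is a minimal simulation sketch (an illustration added to this transcript, not from the episode; the noise model and threshold values are invented): the more evidence you demand before accepting a claim, the fewer false positives you accept, but the more real effects you miss.

```python
# A minimal sketch of the inductive-risk trade-off discussed above.
# Assumptions (invented for illustration): a real effect shows up as a noisy
# measurement centered at 1.0, "no effect" as noise centered at 0.0, and we
# accept a claim whenever the measurement clears a chosen evidence threshold.
import random

random.seed(0)

def error_rates(threshold, n=10_000):
    """Estimate false-positive and false-negative rates for one threshold."""
    # False negative: a real effect whose measurement falls below the bar.
    false_neg = sum(random.gauss(1.0, 1.0) < threshold for _ in range(n)) / n
    # False positive: no real effect, but the noisy measurement clears the bar.
    false_pos = sum(random.gauss(0.0, 1.0) >= threshold for _ in range(n)) / n
    return false_pos, false_neg

for t in (0.2, 0.5, 1.0, 1.5):
    fp, fn = error_rates(t)
    print(f"threshold {t:.1f}: false positives {fp:.2f}, false negatives {fn:.2f}")
```

Raising the threshold protects against acting on spurious findings at the price of missing real ones; which direction to err in is not settled by the data alone, which is where the social and ethical values Douglas describes come in.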
And one of the things you point out in your work, and we'll talk about this after the break, is that scientists have to choose whether they would rather risk a false positive or a false negative, if one is more dangerous or preferable than the other. But we'll get right into that, and into the relationship between science and democracy and policy specifically, after the break. You're listening to Jack Russell Weinstein and Heather Douglas on Why? Philosophical Discussions About Everyday Life, and we'll be back right after this.
The Institute for Philosophy and Public Life bridges the gap between academic philosophy and the general public. Its mission is to cultivate discussion between philosophy professionals and others who have an interest in the subject, regardless of experience or credentials. Visit us on the web at philosophy and public life.org. The Institute for Philosophy and Public Life: because there is no ivory tower.
You're back with Why? Philosophical Discussions About Everyday Life. I'm your host, Jack Russell Weinstein, and we're talking with Heather Douglas and asking about the relationship between science, democracy, and policy. You know, while she was talking, I was thinking about this thing that I end up saying to my students quite frequently. We'll be talking about technology, and they'll be talking about cell phones and computers and the latest app, and they'll talk about spaceships and all of these other things. And I'll look at them and say, pencils are a technology too. And they'll be shocked. And I'll talk about how we think of technology as the newest thing, the cutting-edge thing, but that's because those things are novel and interesting; pencils are an incredibly important technology, just like wheels are an incredibly important technology. We take them for granted because they're so familiar to us. They feel natural. How could life have ever existed without a pencil, right? I mean, that seems baffling to us, despite the fact that most people are typing now and not using pencils. So the question that this leads to is not just what is technology, but what is science? Do we consider it science when we open the refrigerator? Do we consider it science when we think, oh, I have to keep this milk cold rather than on the counter? Do we consider it science when we think, I'd better walk my dog at a certain time of day, because then he or she will go to the bathroom? And so, Heather, I guess I want to ask you, before we dive into the question about the relationship: what do we mean by science, and does science mean different things in the political context than it means in the research context, or even, let's say, the academic context?
So I can only say what I mean by science, and I think what I mean by science maps onto what a lot of other people, philosophers especially, think about science these days. That is, it's a systematic inquiry that is very careful and rigorous in its treatment of evidence, and that also takes place within a social context, the community of science, that is very critical and does not allow for any sort of dogmas, for views to be held in an unchallenged way. So there is no particular claim in science that cannot be challenged by other empirical evidence. Now, it doesn't mean that we think it is likely that a particular claim will be challenged by empirical evidence. We can think of, for example, the current expansion of the universe as something that is really well confirmed and not likely to be challenged by particular pieces of empirical evidence. I don't think we're going to find out that the universe is actually somehow contracting, or that, you know, the broad scope of evolutionary theory doesn't turn out to actually be accurate. But no particular claim couldn't be challenged by empirical evidence that someone could go out and gather in a careful way. So I think it's the combination of the individual reasoning process around evidence and theory, set into the social community of science, that gives science its particular character.
Can someone do science by themselves? The legend of Newton, and this isn't entirely accurate, but the legend of Newton is that he isolated himself and did all these experiments on his own, and then came back to the community. He actually wasn't that far away, and he had servants and all these other things. But is science impossible individually, the way that, say, Wittgenstein thinks that there's no such thing as a private language? Is there a private science?
Yeah, yeah. I mean, I've seen Newton's quarters in Cambridge. He was in the middle of a university.
Well, faculty tend not to talk to each other, especially when they're angry. So maybe he didn't know anyone at all.
Yeah. And, you know, in one of those early manuscripts, De Gravitatione, he's responding to Descartes. So he has intellectual interlocutors that he is thinking about, and that's really important. I mean, if Newton had just tried to, like... there's a legend of someone who decided to donate a bunch of money to a university if only he could have an observatory, and he decided to go into the observatory to map the heavens, and he reproduced, you know, a perfect Ptolemaic universe 2,000 years later. That is not science. It's great that you can go out there and collect data on your own, but if you don't interact with other people who are also working on it, if you don't engage with, or even try to challenge, existing theories, then you're not doing science. So I don't think that you can do science completely in isolation. I don't think Robinson Crusoe, for example, can do science.
And actually, Daniel Defoe needed another character right halfway through the book; he needed to introduce Friday. So I don't even know that Crusoe could have existed in pure isolation. Which leads me to start asking the questions about democracy: if science is part of this larger community, and part of responding to what came before, and has these necessary conditions of integrity and thoroughness, et cetera, is science too complicated for the average person to participate in? Is this notion that we want an informed citizenry to have a sense of what's going on in science, and to be able to make determinations about priorities and inform their legislators about what's important, based on a false assumption that science can be communicated to non-specialists? Is science simply too complicated to understand if you are not fully immersed in it?
So I don't think so. Of course, I'm a philosopher of science, so it's my job to understand science without being fully immersed in it, because you don't really want me in your lab; I tend to break equipment. I'm not someone you actually let into the lab. And I've spent, you know, my career understanding and grappling with science, despite the fact that I don't actually do it myself. Now, that might be easier for me, because I am someone who's spent decades trying to grapple with science. But I think it would help a great deal if, in our science education system, we taught science differently. I think it's really important, for example, that science isn't taught as a body of fixed facts with answers in the back of the book, but instead as the process of investigation that it actually is. With that sort of process of investigation in mind: if every student who graduated high school had a clear sense of that open-ended investigation, and how scientists go about being careful with data and being rigorous with the data and trying to answer questions that aren't yet answered, if they had a clear sense of that, then they would probably understand better when science journalists reported news on new findings and there was a dispute between experts, because there usually is. There's usually, you know, oh, they found this new evidence, but expert B over here says there might be another explanation, and expert C says the evidence might have a confounder. And you know what, that's really confusing for the public if the public doesn't think of science as an ongoing practice of investigation.
And this is true for politicians, too, who need to be nudged in a certain way and given a sense not just of what to support, but of how to think about the debate about what to support. So then we get this phenomenon of the science advisor, right, the person who's there to help the non-expert, to help interpret data. Is having a science advisor a good thing? And is this a new thing, or were there science advisors in classical Rome?
I have no idea. I mean, of course, the whole idea of a scientist, as distinct from a natural philosopher, didn't develop until the 19th century, so it's really hard to say. But science advisors, as far as I can tell, were a phenomenon that became predominant starting in the early 20th century. Around the world, you know, the First World War had science advisory bodies; they got more formalized in the Second World War. War is one of the sort of catalysts that makes it really clear that you have to bring the scientists in so they can help you figure out things about weapons and defensive strategies, and that can lead to some pretty horrific results. But those were sort of the first experiments with science advisory systems, at least in the US. In fact, the National Academy of Sciences was started as an advisory body by, I think, Lincoln in the Civil War. So wars tend to create these things. And then how the science advisor gets instantiated or plugged into the institutions of government varies greatly across different democracies. We could talk about comparing the British and the Canadian and American systems, and how they have very different sorts of institutional structures, or the German system. And there's a group now called the International Network for Government Science Advice that is trying to figure out how to think about all these different systems and compare them. The individual science advisor can be really valuable, because that can be a person who is trusted by a politician and has the politician's ear, so they can really bring certain understandings of science to that particular politician. And when you have a good relationship, like FDR had with Vannevar Bush in World War Two, it can really make a big difference. Or you can talk about science advisory committees, where you have groups of scientists who sit and try to figure out how to think about complex problems that desperately need to be addressed, and that can also be a really successful strategy. So what has happened in the last hundred years is lots of experimentation with different science advisory forums, to try to get information in a manageable way in front of decision makers so that they can use it effectively.
So I have to ask now: Wikipedia said this thing that I couldn't confirm anywhere else, and I'm really curious. According to your Wikipedia page, you were a science advisor for the movie Transformers. Is that true? Or is that some weird thing?
That's not true! Well... what?
Too bad, because I really wanted to ask what that was like compared to the politics. The first sentence of your Wikipedia page says that you are a science advisor for the movie Transformers, and what I was curious about is how that would differ in a literary context, to be generous, versus a political context. But I guess it's not true. Which actually leads me to a question about science and democracy. One of the standard claims about Wikipedia, and there's some evidence to suggest this, is that crowdsourcing is more accurate than people think. There was one study a while ago, and I didn't read the study, I just read the articles on the study, that suggested that overall, Wikipedia was just as reliable as the Encyclopedia Britannica; that there's variation at any given moment, right, especially when there's a controversial figure, apparently like yourself, but in the end crowdsourcing is really effective. So, (a) does that ring true to you? And (b), if it does, can we call crowdsourcing a form of science? Or is it something different?
So I think crowdsourcing is a great way to gather information. I haven't seen this study comparing Wikipedia and the Encyclopedia Britannica, but certainly in the age of the internet, where information is moving so fast, I would shudder at the thought of the number of editors you would need to keep an encyclopedia up and running online and accurate. It would be a tremendous effort. And so Wikipedia crowdsourcing that is really smart, right, to have these groups of people who are interested in keeping Wikipedia pages updated. I have no idea why someone thinks I was involved in Transformers. That's hilarious.
I guess I should go tell Wikipedia that's not true.
If they said that about me, I think I'd be ten times more interesting.
Yeah, but you know, accuracy, I care about it.
So is crowdsourcing the legacy of, maybe, Karl Popper? For our listeners: Karl Popper had a very famous two-volume book called The Open Society and Its Enemies, in which he looked at democracies versus authoritarian governments and suggested that one of the keys to democracy was open inquiry, and that a democracy had to really preserve the possibility of debate. This, of course, also has its legacy in John Stuart Mill and classical liberal theory. Is crowdsourcing an example of that? Is crowdsourcing the open society in action? And even if it's not science, is it supportive of science and supportive of the culture of science?
So I think crowdsourcing could be, but need not be. And I certainly wouldn't think that crowdsourcing is; if it's the legacy of Mill and Popper, we're in trouble, because it's terribly thin. It just sort of asks the crowd to put information in. And some citizen science initiatives are similar to this. There's a lot of citizen science that's done; there are all kinds of ways in which an average citizen can participate in science, from having data analysis programs run in the background of their computer, to having citizens report in bird counts, as I think we're coming up to the big annual Christmas Audubon bird count, which is a huge crowdsourcing of information. Those are really great ways for citizens to be involved in science, but it's very thin. The citizen just follows the instructions of the scientist and puts information in. With the crowdsourced Wikipedia, you have to sort of, I've never done this, but I guess people do this, sign up to be a contributor and make contributions, and there's a format, a way in which you put the information in. But it's all already structured for you. That's not going to be the kind of open inquiry and debate that I think Mill and Popper were talking about. What Mill and Popper were talking about has to be a much more robust sort of participation, where you don't just gather data or put data into an already-produced, pre-existing format. It's where you might actually challenge the questions being asked; you might challenge the format as being the best format. Those are the kinds of things that have to be part of a full open inquiry, so the ability to challenge has to go all the way down, and it doesn't with crowdsourcing.
So you mentioned that Wikipedia has a format and is structured in a certain way to make use of the data; whether the data is reliable or not is a different question. And I would assume that the same thing happens at the government level. So I guess the next question is, what is science policy? When we're talking about that data, does science policy decide what questions we ask? Does it decide how we ask them? I know that at universities there's this thing called an IRB, an institutional review board, that oversees the faculty when they engage in human-subject research, for example, and makes sure that certain ethical rules are followed. Is that the role of the government in a democracy, where we have as much freedom as the culture thinks is appropriate? What role does policy play?
Yeah, so there are sort of two directions that are thought about in science policy. One is science in policy: how should science inform the policy decisions that we make? The other is policy for science: what kinds of policies should structure the science that we do? And the IRB is a policy for science. It was actually created, mandated by Congress, following the Tuskegee scandal. So when we realized
that the Tuskegee experiment was the injection of syphilis into African Americans to see the result, against their will and without their knowledge...
And even worse, they were not treated, even though we had the antibiotics to treat it, and syphilis is a fatal disease. So they watched the course of the disease in these men without their consent. It's an absolutely horrific story, and there were numerous examples of this kind of medical research done without consent. There are also cases where patients in hospitals were injected with plutonium in the 1950s. They were cancer patients who were dying anyway, was the justification, and the doctors didn't want to bother them, but they wanted to use them to find out where the plutonium actually ended up in their bodies. So there were doctors doing these kinds of experiments, and they're very disturbing. And when the public found out about them in the 1960s, there was a huge public debate about what to do about this kind of unethical experimentation. Remember, this is after the Nuremberg trials, so the public was very sensitized to these issues, rightly so. And some medical scientists actually argued that they could not be expected to get informed consent, that it would kill medical research if you demanded they get informed consent. But, I think rightly, Congress decided they were having none of that, and that they were going to demand that people doing research on human subjects get proper informed consent. And so that's why we have IRBs. So there actually was a really interesting debate; it's kind of disturbing to go back and look at some of the arguments made about why we have this policy for science. Other kinds of examples of policy for science include: what do we decide to fund, and what do we decide not to fund? How much money are we going to give to the National Institutes of Health versus the National Science Foundation versus the DOD and the DOE? The US actually has a really complex science funding system compared to other countries, because we have so many different government sources of funding for science. Whereas Canada has NSERC; I'm sorry, there are two, there's one for health and there's one for science and engineering. But in the US, if you're doing a certain kind of research, the DOD might fund it, the DOE might fund it, the NSF might fund it, depending on what you're working on. There might be multiple funding sources. So that makes our science policy really interesting. And then you can also have policy decisions about patents: what kinds of things are patentable? How does patent protection work? So there are these different interfaces where science and policy meet, and people who study science policy have a really fun time trying to figure out how all those interfaces should fit together.
I'm not sure how to ask this: are there internal standards by which we can determine a good science policy, for example, that it promotes freedom or promotes stability or promotes accuracy? One of the things I'm thinking of is the legislation that says that Congress is not allowed to fund any research on gun violence in the United States. And actually, in California, a lot of state institutions are not allowed to take demographic numbers related to race and ethnicity into account. My instinct is that these are bad science policy, but I'm not sure that I have any justification for that, other than: if we're not allowed to look at the questions, we're not allowed to find out whether the stuff is good or not. Is that all there is? Or are there internal standards by which we can determine whether something is good or bad policy?
This is actually a really hard and fraught question. The ban on gun research in the US is a really good example. Another example to think about is any sort of dual-use research. So what do you do about research that...
What is dual-use research?
Dual-use research is research that is pursuing a particular question but could also be easily weaponized, in ways that might create really deep national security instabilities. So you might think about examinations of pathogenic organisms, trying to figure out whether or not it's easy to make them more virulent.
Right, so in the process of trying to cure Ebola, can you invent an Ebola weapon? Right.
So, you know, there are cases, for example, I've heard of cases where you have people doing research into what are sort of basic physics questions, condensed matter physics or ion trap experiments, and realizing that if they keep pursuing the research, they will actually figure out a way to make it even easier to separate fissile material, which is what you need to make a nuclear bomb. And the thing is, we don't want to make that easier.
Right. And back when our students watched Star Trek regularly, and I guess with the new movies they do too, this is one of the questions that repeats over and over again in Star Trek and in many science fiction movies: oh, well, this is this great discovery, but we're not going to pursue it because it could produce a great weapon. Doctor Who asks this question over and over again. But I derailed you, and I didn't mean to do that. You were talking about how the policy about the NRA is a really good example of how fraught this debate is, whether or not there are internal standards for good or bad policy.
So I think in looking at cases like that, the gun debate and gun research, or dual-use research, the same kind of thing, what you're trying to do is say there are some areas of research that we want off limits. This is what some philosophers have called the question of forbidden knowledge: that there are some things that we shouldn't try to know. And I think in order to examine this question, you have to think about what in general would make sense for structures that would help you think about this, given that what you're saying is, I don't want to try to go there, even though I'm not sure what I'll discover, what I'll find, when I get there. So you're essentially forbidding yourself from even trying to discover something, even though you don't know how it's going to work out when you go in to do the process of discovery. And this is why this is a really nice, complex philosophical problem, because you have to make a decision about something that you don't know the details of, by definition, because the pursuit of the knowledge is going to be somehow locked off. And in my thinking about this, I thought, okay, if you're going to do this: number one, you have to make the decision after a robust public debate, because you have to come up with really good reasons to say that this research should not be pursued. So there has to be a good public debate about it. And number two, the decision itself has to be made public, because you're going to ask all of the scientists in the scientific community not to pursue it, which means they have to know that they shouldn't pursue it. So in some ways, the very decision itself has to be public. And then it also has to be revisable; it's not something that should be set in stone forever, because the conditions might change. And the reasons have to be persuasive to the majority of the scientific community, that these are good reasons. And I'm not sure that the gun debate meets those standards. The decision is certainly public, and that's a good thing. But I'm not sure that we know why we shouldn't research these things. I haven't heard anyone give any arguments about why we shouldn't research these things. And furthermore, in the case of the US, the decision to ban federal funding for research into gun violence doesn't actually prevent research into it; it doesn't forbid research in the same way that, say, we have made research into biological and chemical weapons illegal. Regardless of your source of funding, if you are doing research to pursue chemical or biological weapons, that's illegal. Whereas in the gun violence case, you can pursue that research, just not with federal money.
I want to push you on this a little bit, with both a sort of philosophical and a rhetorical point, because we are all to some extent liberals, in the sense that we all value individual freedom; we wouldn't be living where we do if we weren't, in some sense. And the notion of forbidding something raises our hackles, right? The moment you're forbidden something, you want to do it. And so my instinct is that, while I get the point about calling it the problem of forbidden knowledge, that begs the question to a certain extent; it already gives us the answer. If people had called it the problem of Pandora's box, then people would be more reluctant: oh, I don't want to get into that, I don't want to open that. And one of the things that I think about, and I've always meant to write this down and never did, so I guess this episode will be the record that I thought of it, is that the phrase home invasion really bothers me. It used to be called breaking and entering, but now, when someone comes into your house against your will or breaks the lock and comes in, we call it a home invasion. What that does is make it a war metaphor, and then justify the use of guns in a way that breaking and entering doesn't. And so the phrase home invasion begs the question in the way that forbidden knowledge begs the question, at least rhetorically. So I guess the question I have for you is: is it possible to have these public policy debates in a way that isn't already so steeped in a political context that the results, the consequences, are already to a certain extent inevitable?
Okay, so I think actually calling it forbidden knowledge is sort of the right rhetorical slant in some ways, because that makes the job hard for those who are arguing that we should carve off areas of research and not go there.
And it's also about burden of proof; it's a burden of proof issue. Okay.
Or burden of argument, in this case. Right. So, like...
You're the scientist; there's no proof. Yeah.
So, you know, forbidden knowledge means that if I want to say this area should be declared forbidden for research, then I think the burden of proof should be on me; the burden of argument should be on me. Because we consider knowledge a general good, and we're saying that in this particular, circumscribed case, we don't think it would be good to pursue this. And, you know, I can talk you through a bunch of horrifying science fiction scenarios, particular cases where we're like, oh, yeah, no, it's a really good idea not to go there. I think calling it Pandora's box would make the job too easy. So in some ways, I like calling it forbidden knowledge, because it makes the job hard, and I think it should be hard. I think you're right about the term home invasion: you can do anything to defend yourself, it's an invasion. And we should probably think, well, maybe not, maybe that's a little too extreme. I know that for people who have had their homes broken into, it can feel very invasive, like there's been a massive violation, so maybe it's trying to capture that. And this, I think, speaks to your general question: I don't think there's a neutral place. There is no neutral place where we can have this discussion. These discussions are always already fraught with the history we bring to them, with the values we bring to them, with the understandings we bring to them. So I think we have to just muddle through as best we can, trying to listen to each other.
To what extent does the scientific community reflect the politics of the culture? I know that there are a fair number of philosophers who, despite their claims otherwise, are fairly ideological, and their research represents not just their choices but a particular filter of understanding. And then there's the philosophy blogosphere, which can be very interesting and very horrifying at the same time, and which presents very strong political boundaries for acceptability, both on the left and the right; it's not one or the other. Is there this same sort of tension in the scientific community that there is politically in the culture as a whole, or are scientists able to bracket and compartmentalize the larger political questions when they're engaged in their research?
Oh, I don't think scientists are able to compartmentalize that well. I mean, I think scientists are both held by their community, and hold themselves, to an internal standard to actually pay careful attention to the evidence, and frankly, that can be really hard in itself. But I think scientists are all over the place in trying to figure out when they should bring forth a criticism of a well-established theory. So if you look back, for example, at the debate about what caused ulcers, which I think occurred in the 1980s: before the 1980s, the view was that it was all about stress and spicy foods, those are what caused ulcers, and so your job was to drink milk, take Tums, and reduce your stress. And then these two scientists thought, no, actually; they were finding this particular bacterium that kept showing up in their ulcer patients, and then they managed to actually cure some patients by giving them antibiotics that killed this particular bacterium in their stomach. And the whole idea of a bacterium that could live in your stomach was, first of all, anathema, and then the idea that a bacterium might be causing an ulcer was thought to be just crazy. And eventually, in order to gain traction within the scientific community, because their claims were so outlandish, they gave themselves an ulcer and then cured it. You see?
I would volunteer for that experiment.
Well, I mean, you can imagine, like, you're trying to get the attention of the scientific community and just being so frustrated that you might go to those lengths after a certain point. Like, fine, we'll show you.
Okay, I want to interrupt for just a second, because I laughed at that. But isn't this what Dr. Jekyll and Mr. Hyde is about? Hasn't there been a long-standing moral condemnation of the idea of experimenting on oneself, and the sense that it really is at most a last-ditch effort, a last resort? I mean, the people who experiment on themselves in movies are almost always evil, except on the rare occasion of the hero who no one will listen to and who then does it to prove they're right. So I ask this because this is also a question about democracy, about the integrity of the individual, and the notion that we really have to justify any time we sacrifice an individual for the good of the community, that the burden of proof is on justifying the sacrifice, not pushing it away. And so isn't that, in some sense, opening up a Pandora's box? I'm not even sure what I'm asking, other than to say that there's something about giving yourself an ulcer that seems to run counter to the moral core of what the modern scientist needs the IRB to do.
Well, giving yourself an ulcer is certainly better than involuntarily inflicting it on someone else. Right? Because then you're the agent who's making the decision. The thing that we worry about, in political contexts, is the imposition of a harm or a burden on someone else without their consent. So when you're doing self-experimentation in these contexts, you know, in our stories like Jekyll and Hyde, the scientist is doing stuff that is not well confirmed; they're not being very careful, and they go off the deep end. In this case, these scientists already had a lot of evidence; it just was so against the grain of what scientists thought was the case that it was really hard to get their attention until they did this very dramatic thing. Now, you might think that's exactly how the scientific community should not work, that it should not hold on to views that scientists think are well confirmed but that turn out not to be. Yes, and this is why doing science alone is so problematic: you have to have a critical community to take those kinds of risks and to pursue these ideas that most of the time don't pan out, but sometimes they do, and they change how we think about things. It's really important to have both the criticism and a sort of diversity in the scientific community functioning so that you get those kinds of results. And in this case, I have a hard time seeing that they did the wrong thing, but that's probably because, at that point in time, I think they had a really good handle on the phenomena.
That makes me think of Thomas Kuhn's The Structure of Scientific Revolutions, which, if our listeners haven't read it, is a very wonderful and easily understandable book about how scientists think about theory and change. One of his arguments, oversimplified, is that this vision of the scientist as an incredibly open-minded person who is always willing to change is not as helpful as we would think, because what you need are these tenacious people who are willing to defend the theory until it is completely untenable. And this is, in a certain sense, the same debate about what happens in a democracy like the United States, an interest-group-based democracy, as opposed to something like France, which is more based on Rousseau, and that's a whole different conversation. There's this notion that a special interest group has to fight and fight and fight to get people's attention, because only by being tenacious and not caring about other interest groups do you get your needs met. And so the question, which is the question of both democracy and science: is it better to be open-minded or closed-minded in terms of the things that you believe? That's a question of science, and it's a question of democracy. Are they genuinely parallel, as I'm describing? Or does the context change both what it means to be open- and closed-minded and the balance of power between the two?
I suspect that the context changes the balance of the two quite substantially. In practice, it's very hard for scientists to be fully open-minded all the time. They have to assume, for example, that their equipment is mostly working well; they tune it and they check it, but they assume it's doing what it's supposed to be doing, otherwise what they're doing doesn't make any sense at all. They have to assume that their underlying methodologies are mostly reliable. And when evidence arises that challenges that, their first thought isn't, oh well, the thing I've always been relying upon is complete bunk; their first thought is, really, I'm not so sure that what you've produced is actually a reliable challenge. I'm not so sure about your evidence. So they're going to be skeptical in that direction. In practice, scientists hold different things closer to their hearts than other things, but I think that if you asked them, they would say, yes, everything is open to challenge in principle. And that's a really important difference from democratic states, where for some people their deeply held views, their ideological commitments, are such that nothing will ever challenge them; they are deeply held, ideological commitments, and they are not open to challenge. So there might even be a qualitative difference in how we proceed. In a democratic system, when you have people with deeply held commitments who disagree with each other, they still have to find a way to muddle through and live together, and that's part of the messiness of democratic decision making: how do you, in a society where people do hold really different fundamental beliefs, live together? In Canada, they talk about the problem of pluralism and how to make a pluralist society work. In the US, people are less interested in accepting pluralism and more interested in debating and debating and debating until shifts occur. It's a very different culture in that sense, but it's also one that has a very robust culture of discussion and debate, and there's a kind of open-mindedness in being willing to engage in debate. But there's also, in the US, the sense that I will just keep engaging until I win.
My wife accuses me of using the argument style of tiring people out, that my goal is just to keep arguing until someone gives up. And I, of course, would argue with that, but that would only prove her point.
And sometimes you sort of see this happening in our legislative bodies, and we're really grateful that they have to go on recess, or they have to go up for election, so that this sometimes just has to stop. There are other differences between science and democracy. For example, scientists would not stop a debate and take a vote on it when, say, a bare majority of scientists thought something was true, and then use the vote to close debate off. The procedural details of how democracy functions are partly the result of the need for temporarily making decisions that everyone will abide by, at least for the next few years, until we potentially revisit the issue. Science doesn't have that in the same kind of way. So there's a possibility for always continuing to pursue certain lines of research or certain lines of debate, as long as they're legal and as long as you can keep getting your lab funded. No one's going to say the time for debate is over, we've stopped, the community has decided, it's done.
And that's a strength of science.
Are there any deeply held ideological beliefs in science that the community, or individuals in the community, will let trump all other results? I have in mind that there are folks who are so profoundly opposed to abortion that they will vote for a candidate even if they don't agree with anything else the candidate says or believes, as long as the candidate is against abortion. Is there anything like that in science? Not just practitioners saying, well, I'm not going to investigate those things, I'm not going to ask those questions, I'm not going to become a pharmacist if I don't want to give people birth control, but practitioners who are so committed to a particular notion that they just reject all other conclusions unless they align with it?
It does happen, but they rapidly become outliers within their scientific communities.
So does that happen?
Yeah. So I was talking to a colleague of mine, and there's apparently a debate, and this is wonderful, because it's not something policymakers usually pay attention to, a debate about the origin of birds. Are birds dinosaurs? Or did birds and dinosaurs diverge before dinosaurs came into existence? I think the debate is about that.
My best friend Gail talks about this all the time, actually. Yeah, I am well versed in this debate.
Great, so you can correct me when I say something wrong, because I am not well versed in the debate. But there are apparently some people who are continuing to argue that birds are not dinosaurs, and they have become sort of sidelined, because most paleontologists think, no, birds are dinosaurs, we have a lot of evidence. And if we keep using this idea that birds are dinosaurs, it keeps being fruitful: we keep getting new lines of investigation, those lines of investigation confirm the original hypothesis, and it keeps being productive, developing deeper and richer explanations as we go. The birds-are-not-dinosaurs people seem to be sort of stuck and not able to move their views forward. Now, what that means in practice is that people will start paying less and less attention to the papers written by those people, because they don't really say anything new; they're not adding to the debate, and they get sidelined like that. Now, if the birds-are-not-dinosaurs people make a substantive empirical and theoretical breakthrough, such that people go, oh wait, you've discovered something completely exciting...
Here's a five-million-year-old parakeet.
Right, right, like, oh my God, or whatever it might be, then presumably the people who hold the opposite view would go, wow, we have to start paying attention again, and we have to engage in this debate again. But until they do that, they get sidelined. And so that's a sort of practice that's not as dogmatic as in our political sphere, because it's not like people say, I will never speak to you again, regardless of what you find. I don't see scientists doing that. They would just be like, well, this person keeps saying the same thing over and over again, they don't really have any new findings, it's not very interesting, and I'm going to go do this research over here that is counter to that, and it's exciting, and it's producing results. And that's how things die on the vine in science.
I want to return to something I said in passing, in part because I promised to, but in part because it was something you talked about a little bit in one of your papers, and it really intrigued me, and it offers a very nice bridge to this connection between science content, science policy, and political necessity. You talked about the choice of devising tests that have either false positives or false negatives. If you don't have a test for, let's say, HIV, that works perfectly, that is 100% reliable, and one test is going to give you 5% false positives while the other test is going to give you 5% false negatives, then it's probably better to have the false positives than the false negatives, because you'd rather have people think they have HIV when they don't than think they don't have HIV when they do.
Could you talk about examples of this, and how, at least as I understand it, this is really the intersection between public need, effectiveness, reliability, necessity, and pure research?
Yeah. So this is the decision of whether or not you think your evidence is sufficient. And that decision, as William James pointed out over 100 years ago, involves the tension between seeking truth and avoiding error. A false positive versus a false negative is another way of expressing that. You want to avoid error, but you also want to seek truth; you want to have a test that you can deploy, but you also want to avoid some of the worst-case scenarios among the kinds of errors. So a trade-off between false positives and false negatives is about finding the right trade-off between getting it wrong in one direction and getting it wrong in another, while you're also trying to produce a reliable result. And that has to involve an ethical and societal evaluation of the implications of the knowledge, or of the test, in practice. It's in this sense that science can't be value free, because social and ethical values are essential to thinking about whether or not you have sufficient evidence in any given case. In any situation where a scientist is deciding whether or not a claim is well supported, you are confronted with this kind of question: is the evidence strong enough to make the claim, or do you wait? If you wait, you might actually be causing harm, because people should know the claim and be acting on it. Or should you state the claim when you have some good evidence, but maybe it's not as strong as it could be? Maybe, in fact, you'll state the claim prematurely, and it won't turn out to be well supported, and people will act on it, and there'll be other consequences in that direction. There's always this tension in deciding to make an empirical claim; you can't avoid the social and ethical context, which is why science can't be value free, unfortunately.
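[Editor's note: a minimal sketch, not from the episode, of the trade-off Douglas describes. Moving a diagnostic test's decision threshold trades false negatives against false positives; all scores and threshold values below are hypothetical, chosen only for illustration.]

# A minimal sketch (hypothetical numbers) of the false positive /
# false negative trade-off: moving a test's decision threshold
# trades one kind of error for the other.

def count_errors(threshold, infected_scores, healthy_scores):
    """Classify scores against a threshold and count both error types."""
    # False negative: an infected person whose score falls below the threshold.
    false_negatives = sum(1 for s in infected_scores if s < threshold)
    # False positive: a healthy person whose score reaches the threshold.
    false_positives = sum(1 for s in healthy_scores if s >= threshold)
    return false_positives, false_negatives

# Hypothetical test scores: infected patients tend to score higher.
infected = [0.70, 0.80, 0.85, 0.90, 0.95]
healthy = [0.10, 0.30, 0.50, 0.60, 0.75]

for threshold in (0.40, 0.65, 0.90):
    fp, fn = count_errors(threshold, infected, healthy)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")

# A low threshold produces no false negatives but several false positives;
# a high threshold does the reverse. Which error is worse is the ethical,
# not purely empirical, judgment Douglas is pointing to.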
Okay, I was going to ask the final question, but now I have to slip one in. Why is it unfortunate? Because you yourself are very skeptical of preserving the ideal of objectivity. So what is it we lose when we accept the fact that science can't be value free? What are you reluctantly jettisoning?
So this goes back to Du Bois and some other commentators, W. E. B. Du Bois in the early twentieth century, who thought that it would be really great if science could be held to be value free, because then it could be the basis on which we could have democratic discussions, a basis that could be shared. And you see this as well in Rawls and his discussion of public reason: science is one of those things that he thinks can, uncontroversially, be used in our public debates as a basis that we all share. But if science isn't value free, if values shape both what you look at and what you choose to research, and not just that, but also what you end up thinking is a sufficiently justified claim at the end of the day, if values influence that, then there might be some claims in science that are sufficiently well supported for some members of democratic society but not sufficiently well supported for other members, and legitimately so. So it makes our democratic decision making more complex. Now, that's in theory. In practice, I think we've been here for a long time; science hasn't been value free, it only feels like...
That's only been a little over an hour, but...
Like, you know, in our democracy, science has been contested ground for many decades, at least since the end of the Cold War. So I don't think it's huge news, but there's still a kind of yearning among philosophers: if only we could have this space where we could all agree on the basis of our discussions, an empirical basis, and just go from there. And science is not going to give us that easily. I think when you have really strong evidence, it's really hard to make claims that the evidence isn't sufficient; when it gets really strong, you have to have really strongly skewed values, and that should be part of the discussion when people want to make those kinds of claims. But it is a more complicated discussion to have, using science in democratic decision making, when it's not value free.
So I want to ask one last question, and it's a massive question, but I'm going to ask it in a way that means you have to answer it as a soundbite. Given all that we've talked about, is there something that we missed, or something that you want to emphasize, that science can learn from democracy that doesn't get enough attention, and that democracy can learn from science? If you had to sum up the message of what each can learn from the other, what would that be? What would you want to call the most attention to?
That's a great question. I want to actually answer a different question first, and then I want to answer that question.
Yeah, just go right ahead. I never claim authority on this radio show at all, so we'll probably even keep this in the conversation, because it's fine; I want you to go where you want to go.
So, what I'm most interested in these days is how to help scientists think about the fact that science is embedded inside of societies, and, in Canada, the US, and Europe, inside of democracies. The thing that I think scientists have been struggling with for the last 20 years or so is how to think about their relationship with society, now that the deal that was struck at the end of World War Two has ended. That was the deal that said, look, society will give scientists money and autonomy, scientists will research what they want and produce knowledge, society will use that knowledge, and science will not be responsible for the uses of that knowledge. This system has completely broken down; what is called the linear model is dead. And now scientists have to think, okay, what are scientists' obligations to society? Certainly to speak the truth as they know it, and to not ignore evidence that might be uncomfortable for society. Those are clear responsibilities. But I think scientists are also struggling with whether or not they have active responsibilities to society in thinking about their research agendas and how those agendas are formed. You can see this in changing statements about scientists' responsibilities to society. Twenty years ago, in something like the National Academy of Sciences report On Being a Scientist, it would say things like: scientists are obligated to communicate their findings to the public and to make their findings clear, but it never mentioned any sort of responsibility for the implications of the research for the public. I think that science has gotten to be so powerful in our society, and so widespread, that scientists can no longer just ignore the implications of their research for the public in deciding what to do. And I think this is a really hard thing for a lot of scientists. If they want the public to trust science, they're going to have to start thinking about it really hard, so that they don't pursue research, for example, that is easily weaponized, or that could be used to further enrich the top 1% of the wealthy elite without helping the bottom 20% at all. And I think that is a hard thing for scientists to grapple with. So that's my little piece on what scientists should think about, given that science is embedded in society. But then there's your question of what science should learn from democracy and what democracy should learn from science. In some ways, democracy is a different sphere, because some ideas, for some people, are held above reproach, while for science there are no ideas that can be held above evidential challenge, though it's a very specific kind of challenge that scientists bring to bear. So I think democracy should learn more openness to ongoing change, to ongoing debate, to experimentation with policy, to being willing to say, explicitly, let's try a policy in a particular location, and let's really track its impact and see if it does what we want it to do. That sort of thinking about policymaking for democratic societies should be more explicit and more widely utilized.
And I think, for scientists, what they can learn from democracy is perhaps a better understanding of the full range of human commitments and beliefs, and that science is not necessarily the most important thing for most people. In fact, it's not the most important thing for most people, and perhaps rightly so. Science is a hugely important human endeavor, but for most of us, our families, our loved ones, our communities are going to be more important. And if scientists want to have equal importance, then they have to do projects that help those communities. How about that?
That's a great answer, because it clarifies the distinction between the two. It answers my question, but it also gives us an agenda for the future. And I think that's often where philosophers end up getting stuck, at least when we're engaged in the descriptive aspect of philosophy rather than the prescriptive: this is how the world is, this is the tension, this is where science fails, this is where it succeeds, this is where democracy fails, this is where democracy succeeds. Where do we go from here? And most people aren't very good at that, because most people, unless they're desperate, are actually reluctant to change, and desperation has a power unto itself. But that's a whole other conversation. This has been incredibly rich and really exciting, and I've learned a tremendous amount. Heather Douglas, thank you so much for joining us on Why.
Thank you so much for having me, Jack. It's been a lot of fun.
You've been listening to Jack Russell Weinstein and Heather Douglas on Why? Philosophical Discussions About Everyday Life. I will be back with a few thoughts right after this.
Visit IPPL's blog, PQED: Philosophical Questions Every Day, for more philosophical discussions of everyday life. Comment on the entries and share your points of view with an ever-growing community of professional and amateur philosophers. You can access the blog and view more information on our schedule, our broadcasts, and the Why Radio store at www.philosophyandpubliclife.org.
You're back with Why? Philosophical Discussions About Everyday Life. I'm your host, Jack Russell Weinstein. We were talking with Heather Douglas about the relationship between science and policy. We started out a little far afield: we started talking about science itself, how science internalizes values, and how there are conflicts in what we are trying to do with science and what we're trying to achieve. And eventually, through the conversation, we got to this point where we were looking at the similarities and the differences between democracy and science. Because this isn't just a matter of policy, right? This isn't just a matter of making rules and regulations and deciding who gets money and who doesn't. It's about two different methods of inquiry trying to find solutions to their problems. Obviously, science is a method of inquiry. It's a way of experimenting and investigating and reproducing results, and building knowledge upon knowledge upon knowledge, so that we can learn stuff, so that we can cure things, so that we can develop new technology. The goal of science is both pure, in that it's about intellectual curiosity, and about effectiveness and pragmatic results. Democracy is, in fact, the same. Democracy builds upon itself: the knowledge we learned from the Revolutionary War fed the Civil War, which fed, I don't know, the Great Depression, which fed World War One and World War Two. We've learned about individual freedom, we've learned about the equality of the races and the sexes, we've learned about how big and how small government might have to be, and of course, we've learned about science policy. Democracy is itself a method of inquiry. So what do the two have in common? What can they learn from each other? And what does each do better than the other? I'm not going to repeat Heather's answers; I think they're worth revisiting if you don't remember them. But what I am going to suggest is that we learn about each from the other, and that we tend to think about politics and science as siloed, as fundamentally separate. What Heather has underscored over and over and over again is that they are not separate, that they fundamentally influence one another, that they increase each other's possibilities and tighten up each other's limitations. And we have to be attentive to that. We have to focus on the influence one has on the other: seek the truth, seek some form of objectivity, but also seek responsiveness, attentiveness, urgency, beauty, and efficacy in both fields. If we learn about one, we learn about the other. It's not enough to just argue conclusions: I believe in global warming, I don't believe in global warming; I believe in abortion, I don't believe in abortion; the NRA should do studies, the government should ignore guns. That's not enough. We have to take the notion of integrity and the goal of each and see how one can inform the other, because only when science and democracy work together can both science and democracy be well functioning and give us excellent results. You've been listening to Jack Russell Weinstein on Why? Philosophical Discussions About Everyday Life. Thank you for listening. As always, it's an honor to be with you.
Why? is funded by the Institute for Philosophy and Public Life, Prairie Public Broadcasting, and the University of North Dakota's College of Arts and Sciences and Division of Research and Economic Development. Skip Wood is our studio engineer. The music is written and performed by Mark Weinstein and can be found on his album Lua e Sol. For more of his music, visit jazzfluteweinstein.com or myspace.com/markweinstein. Philosophy is everywhere you make it, and we hope we've inspired you with our discussion today. Remember, as we say at the Institute: there is no ivory tower.