Recognizing Inequities in Artificial Intelligence with Meredith Broussard

    3:00PM Aug 26, 2023

    Speakers:

    Keywords: ai, journalists, technology, people, data, bias, work, question, systems, policing, generative, problems, artificial intelligence, newsrooms, broussard, terms, facial recognition, feel, computational statistics, math

    Welcome to this hour's featured session. The show will begin in just a few moments. Please find your seat.

    Please take your seats. Our show is about to begin.

    Welcome to Recognizing Inequities in Artificial Intelligence with Meredith Broussard. Please welcome to the stage the founder and executive director of Clarity Media, Michelle Faust Raghavan.

    Good morning. Good morning. So happy to see everybody today. I'm Michelle Faust Raghavan, really thrilled about this session. We're going to be recognizing inequities in artificial intelligence with Meredith Broussard. I think that this has been an amazing conference so far. Yes, it has, absolutely. Yesterday I was looking at the schedule, and I noticed we had more than a dozen, about 15, sessions around the subject of AI, and many others that were talking really more about people and the needs of people, particularly issues and conversations around equity, inclusion and belonging. Right now, it is vital that we continue those conversations around AI, where we are melding the conversation of both. Meredith Broussard is a data journalist, an author, and an associate professor at the Arthur L. Carter Journalism Institute of New York University. Her research focuses on artificial intelligence in investigative journalism and ethical AI, very important issues. Her new book came out this year: More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Now, the book, which I had the pleasure to read most of on the plane over, really clearly demonstrates the dangers that are posed by overreliance on algorithms and AI to take care of the most sensitive parts of our lives. It also offers humane recommendations for how we can interact with tech moving forward in this moment. In my favorite quote from the book, Meredith writes, "Race is a social construct, but it's often embedded in computational systems, as if it were scientific fact." Very important things to consider as we move into this moment. Moderating our conversation is Imaeyen Ibanga, an ONA board member, a presenter, a documentarian and senior producer with AJ+. As we get into this conversation, you're going to have many questions; please feel free to go on the schedule, click on the link to ask us questions, and I'll be posing those to the stage as we move forward. And not now, but later today, remember to vote in the board elections that are going on right now. Without further ado, I want to welcome to the stage Imaeyen Ibanga and Meredith Broussard.

    Good morning everyone. Thank you so much for joining us. I am very excited, Meredith, to have this conversation, where I plan to be the alarmist foil to you, and also to reference the fact that, for those of us who were in the Marriott this morning and had that alarm go off at 6am, I blame AI specifically.

    Well, you know, it's thrilling to not be the alarmist on stage. So I'm really looking forward to this.

    Before we begin, by a show of hands in the audience, I was wondering how many of you are in newsrooms or news organizations that are currently using AI? Okay, and then how many of you are in newsrooms where they're talking about using AI? Okay. I mean, do these numbers sort of surprise you? What do you think of that? How do you think we've adapted to this technology?

    So something that's really interesting to me is that we're all using AI without knowing it. Right? Every time you do a Google search, you're activating something like 250 different machine learning models, right? When you record a Zoom call and you get an automated transcript, that's using AI. But it doesn't feel like we're using AI, it just feels like we're using technology. Right? So we expect AI to feel like something special. We expect it to, I don't know, like, be bumpy or something, but no, it's just technology. And so that's one of the things that I write about over and over again in my work, the idea that AI is not special. It's not magical. It's technology. It's math. It's transforming real life into mathematics, which is an amazing trick, but it's not magic.

    I want you to keep that same energy, because I'm going to read a question. I asked ChatGPT, and I'm gonna read the exact question, right? So she doesn't like it; remember the answer she just gave. So I asked, what question should I ask Meredith Broussard about AI, and it gave me 13 questions. It's your favorite fan. And this was the first one: how do you see the current state of AI in terms of the impact on society? And are there any particular areas where you believe it has made significant positive or negative changes? It gave me two questions as one.

    Wait, are you just going to ask me all of the GPT questions?

    I wish I had been that smart. No, just the first one, I thought it was funny.

    Okay, wait, give it to me again.

    Yes. It said, how do you see the current state of AI in terms of its impact on society? And are there any areas where you think it's had positive or negative changes?

    Well, that's a good question, ChatGPT. And, in fact, I wrote a whole book in response to that question. In fact, I wrote two books in response to that question. And the reason I wrote books is that I feel like the answer demands more than 280 characters. So let's start with what we're talking about when we talk about AI. AI is just math. It's very complicated, beautiful math, but it's just that. Another way of talking about it is computational statistics on steroids. So what we do when we build a machine learning system, an AI system, is we take a whole bunch of data, we plunk it into the computer, and we say, computer, make a model. The computer says okay and makes a model. And that model is just the mathematical patterns in the data. And then you can use that model for a variety of amazing things. You can make decisions, you can make predictions, you can generate new text, you can generate new images, you can generate audio. It's very flexible and multipurpose. But the problem is that the data that you have used to train the model has all of the biases of the real world embedded in it, because we do not live in a perfect world. Unfortunately, we live in a world where, you know, the elevator malfunctions at six o'clock in the morning and the fire alarm goes off and the flashing lights don't get turned off for another three hours. Like, that's it. That's how the world works. We're journalists, we know that there is dysfunction and disaster everywhere. So we can't expect that computational models, that machine learning systems or AI systems, are going to be better than the rest of the world, because they're just pattern recognition systems, pattern reproduction systems.
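    To make the "data in, patterns out" idea concrete, here is a minimal sketch, not from the talk, of the loop Broussard describes: plunk data into the computer, get a model back, and watch the model reproduce whatever skew the data contained. It assumes scikit-learn and NumPy are installed, and the data is entirely made up.

        # Minimal sketch of "computer, make a model": the model is only the
        # mathematical patterns in the training data, so a skew in the data
        # comes straight back out in the predictions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        group = rng.integers(0, 2, size=1000)       # hypothetical group membership (0 or 1)
        income = rng.normal(50, 10, size=1000)      # arbitrary second feature
        # Hypothetical historical decisions that were skewed against group 1:
        approved = ((income > 45) & (group == 0)).astype(int)

        X = np.column_stack([group, income])
        model = LogisticRegression().fit(X, approved)   # "computer, make a model"

        # Same income, different group: the model reproduces the historical skew.
        print(model.predict([[0, 55], [1, 55]]))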

    I think what I've seen that both journalism, Big J, the industry, and tech, Big T, the industry, have is what I call the fallacy of neutrality, the idea that the neutrality of something makes it, like, the good thing or the balanced thing or the right way. And I'm wondering your advice for journalists who are using this technology or about to embrace it. Should they be approaching it with neutrality, and what happens when they do, or if they do, what problems will we be running into and seeing?

    Well, the debate over journalistic neutrality and the view from nowhere, or, you know, the false notions of objectivity in journalism, that is a whole other conversation, right? So where I stand on it is, I am averse to something I call technochauvinism, a kind of pro-technology bias that says that technological solutions are superior to others. So what I argue is that we should use the right tool for the task. And sometimes the right tool for the task is a computer. Absolutely. Sometimes the right tool for the task is something simple, like a book in the hands of a child sitting on a parent's lap, right? One is not inherently better than the other. So I think that when we push back against technochauvinism, we make better decisions about using technology. So one of the things that people talked about a lot when ChatGPT launched, or when generative AI launched in general, was, oh, it's going to replace journalists. And I mean, how many years have we been hearing, oh, yeah, technology is going to replace journalists? And yet here we still are. And what we're doing still has enormous value and enormous importance in the world. And the computers are not doing cutting-edge accountability reporting. The computers are not autonomously, you know, holding decision makers accountable. They're just tools in the hands of journalists who are holding power accountable.

    Do you see this generative AI as having the same impact across the industry equally? I think someone could make the case for, like, oh, it feels like it would be more impactful for someone like a data visual journalist. But what does that mean for other types of journalists and other people in the editorial space?

    Well, the thing about text generated by generative AI is it's deeply boring. Right? It's really mundane. Has everybody... raise your hand if you've used ChatGPT or another generative AI system? Yeah. I'm so glad to see all the hands go up. Because, you know, six months ago, that was not the case. But by now, everybody's used it. And it's very nifty for like 10 minutes, and then it gets kind of boring, doesn't it? Because what it's doing is it is reaching for the middle. Right? It's reproducing the most common patterns that it sees in this enormous text corpus that it has been trained on. You know, it's been trained on millions of pages from the internet, millions of books. And of course, the legality of that is working its way through the courts right now. And what we want out of good journalism, good writing, is we want it to elevate us. We want it to entertain us. We want it to be a little bit different than everything else we've seen out there. And all generative AI is gonna give you is reproductions of stuff you've already seen.

    I love that you brought up the legality of this, because, I don't know how many of you know, the New York Times right now is considering some legal action against ChatGPT for scraping its work. It's not clear if they will do that or what that looks like. But what do you think that means for writers, creators, curators, if something like that turned into a lawsuit? And also, how do we protect, you know, the work that we're doing?

    Well, Lina Khan, the chair of the Federal Trade Commission, recently published some guidance with some very strong words, and she said, there is no legal exception for tech. One of the really interesting things to me is that for many years, people have been laboring under this technochauvinist assumption that what happens online is different than what happens in real life. And so this has led to tech companies feeling like they're exceptional and like they don't have to follow the law. If you ask your average software developer what laws govern the space that they are developing software for, most of them don't know. Right? And so tech companies are skirting the law a lot of the time. That's good for journalists to know, because when you go and you look for problems in technological systems, you're going to find them. Right? So algorithmic accountability reporting, which is the kind of reporting that I have been focusing on the past couple of years, I won't say it's easier than other kinds of investigative reporting, but, you know, in investigative reporting you kind of assume that there are problems and then you go look for them. It's the same thing in algorithmic accountability reporting: you go look for problems in tech systems, and you inevitably find them.

    I feel like that dovetails really well into my next question. I want us to turn toward things like policing and crime, which relies on a lot of data and stats, and what you said earlier about how we're already using AI in all these different kinds of ways. I feel like some people do not recognize that the police are also using this in terms of predictive policing. So can you tell us a little bit about that and what the impact has been in real life?

    I started thinking about this a couple of years ago, when I wrote a story for, I think it was The Atlantic, and it was called When Cops Check Facebook, because I found out that cops were catfishing teenagers on Facebook. So the kids would be involved in some kind of ruckus and they would post about it on Facebook, and the police were making fake Facebook profiles and friending these kids so that they could see the private videos that the kids were posting to their friends, and then the police could use that to identify the kids who were involved in the ruckus, and they were, you know, claiming that this was a way of fighting crime. I had been under the impression, and this was totally naive of me, but I had been under the impression that there was really, really super high tech stuff being used by the police, in the same way that there was really super high tech stuff being used by criminals. And it was just a reminder that, oh, yeah, everybody is using the same technology. Right? Like, there is crime happening via WhatsApp; it's not happening on, like, some secret dark web. So I got interested in the ways that police were using ordinary technology, and then it moved on very quickly from, you know, understanding the cops on Facebook to looking at facial recognition technologies. Facial recognition is really biased. You know, we know from Joy Buolamwini and Timnit Gebru's work on the Gender Shades project that facial recognition is better at recognizing light skin than dark skin. It's better at recognizing men than women, and it generally does not recognize trans and nonbinary folks at all. And so when police rely on these biased technologies, they are exacerbating preexisting problems of bias in policing. Or we take something like crime statistics. A very popular thing for a while was to build these systems that would take crime statistics and then make a model and try to predict where crime was going to happen next. Well, guess what, crime statistics are not actually records of crimes. They're records of arrests, right, and they reflect where policing has been applied. When you feed a predictive system on where arrests have happened already, well, then it just predicts that more crimes are going to happen in the places where crimes have already happened, which leads to more over-policing of already over-policed neighborhoods, and intensifies, you know, America's problem with over-policing, and, you know, intensifies the carceral state. Is that a good idea? Probably not.
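    Here is a minimal sketch, my own illustration rather than anything from the talk, of the feedback loop described above: the "crime data" is really arrest data, arrests follow patrols, and a model trained on that data keeps sending patrols back to the same neighborhoods. All of the numbers are made up.

        # Two neighborhoods with identical underlying crime rates, but an
        # uneven history of arrests. Patrols are allocated wherever arrests
        # were recorded before, and new arrests can only happen where patrols
        # are, so the disparity feeds itself.
        import random

        random.seed(1)
        true_crime_rate = {"A": 0.10, "B": 0.10}   # identical in reality
        arrests = {"A": 5, "B": 1}                 # historical policing was uneven

        for year in range(5):
            total = sum(arrests.values())
            # "Predict" crime wherever arrests were recorded, and patrol there.
            patrols = {n: round(100 * arrests[n] / total) for n in arrests}
            for n in arrests:
                arrests[n] += sum(random.random() < true_crime_rate[n]
                                  for _ in range(patrols[n]))
            print(year, patrols, arrests)
        # Neighborhood A keeps getting most of the patrols and most of the new
        # arrests, even though the two neighborhoods are identical.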

    I mean, it reminds me of the story, I believe it is in Chicago, of this young Black man where the police were using this predictive technology and went to his house and said, our technology says that you will either be the victim of a crime or you will commit a crime, so we will just be following you and watching you. And he gets arrested on some marijuana charge later on, but eventually he gets shot, not once but twice, because the people in his neighborhood thought he was snitching to the police. I think you all know the concept of snitching, I don't have to explain that. Okay, great. So they were correct, he was the victim of a crime, but only because they used the predictive technology. I think some people would suggest, well then, why don't we just give AI more data? Why don't we just fill it with data? Why is that not the thing that could make it more equitable?

    So people do often say that. They say, okay, well, you're saying that the system is inaccurate, let's make the system more accurate. And it is absolutely true that you could increase the accuracy rate by feeding in more data. But the problem is that these kinds of technologies are disproportionately weaponized against communities of color when they are used in policing. So with something like facial recognition, it's not going to help to make it more accurate for darker skin. It's actually just going to intensify existing problems. So a better solution is not to use facial recognition in policing at all.

    But we've seen, like, in the last year and in 2022 particularly, several cities that had had some form of ban on facial recognition turned back on those bans. I know, like, in New Orleans, two years after they did the ban, they sort of reinstated it, and actually they've never said how many arrests that technology has led to. I think Vermont was the last state with a total ban, and then it has since gone back and said, well, you could use facial recognition technology for, you know, investigating sex crimes. What does that mean in terms of, like, the general populace? Like, how much risk are we at? I was joking earlier about being alarmist, but this stuff, when we talk about policing, which we have been doing in journalism since forever, but I think even more focused in these last few years, and then hearing about the advancement of technology and how cops are using it, what does that mean for our society, and a democratic one, an equitable one?

    Well, I come at it as an accountability reporter. My job is to look at the technologies and see how they're being misapplied. From my perspective, they're being misapplied just as often as they're helping. So one of the things that I find really useful is, I feel like we've passed the point where we can say, oh, AI should absolutely be used for this, AI should absolutely not be used for that. Like, that binary thinking does not serve us well. So I really like tying the use of AI to a particular context. Right? So saying that facial recognition should not be used at all is not really going to work, because, guess what, there's facial recognition in your phone, and using facial recognition to unlock your phone, that's actually probably a really low-risk use of it. You know, it doesn't work for me half the time anyway. If I'm wearing a mask, it doesn't really work.

    My friend's baby, his face opens her phone because he looks like her. I heard that at this conference like a day ago, you guys, and I was like, oh no.

    Yikes. Oh, yeah. There are also those stories of people who are identical twins, right, who can open each other's phones. You know, just kind of fun. So at any rate, it's helpful to tie AI to a particular context and make rules about the use of AI in that particular context. Right? So one of the things about the EU AI Act that I find particularly useful is the division of uses into high-risk and low-risk cases. So a low-risk use of facial recognition is unlocking your phone. A high-risk use of facial recognition is police using facial recognition on real-time video surveillance feeds, right, because that is going to misidentify people of color more often and is going to get more people swept up into the legal system. So that high-risk use would have to be monitored, registered and regulated, and a low-risk use, you know, people could just do that.

    What are the biggest traps that you think journalists could fall into when using this technology? And then how can we prevent ourselves from doing so?

    Oh, God, there are so many, there are all these traps waiting for us. I'll take the top three. Okay, so technochauvinism is really the starting place for me. Don't assume that technological systems are working well. Right? So assume bias in the technological systems instead of assuming, you know, assuming that they work. Like, AI systems generally do not work as well as people claim, or as well as the marketers claim. So you can also audit any AI system. Auditing comes from the compliance world. I swear, like, you know, if your eyes are starting to glaze over when I say the word auditing, I apologize. It's actually really interesting. So what you do is you kind of test a system to find out how well it's working. There was a Bloomberg story recently where they used a generative AI thing with Stable Diffusion, and they asked it to make pictures of a doctor, pictures of a nurse, a picture of a dishwasher, a picture of a CEO, and the pictures that the AI generated were, like, every stereotype that you can imagine. You know, the doctors were men, the nurses were women. It was mostly people with light skin. You know, if you asked for a dishwasher, it would give you a person with dark skin. Like, it was the most basic stereotypes you can imagine. So we can do this with every single AI system, and guess what, you're probably going to find a story every single time. And so we need more of those stories. And then in terms of going beyond bias and figuring out how are these systems really harming people, you can think about the ordinary ways that you know from regular reporting that people are harmed, right, the ordinary harms that people have in the world, and then you can look at, okay, well, how are people experiencing those same harms inside technological systems, and you will find them. So public assistance is a useful example. You know how people have trouble accessing public assistance in the world of paper forms. Well, guess what, in the world of digital forms, it's even more elaborate. So there was a case in Michigan where they discovered that there was this form that people had to fill out in order to access public benefits. And it would ask things like, you know, on what date was your child conceived? Like, in order to get SNAP benefits for your kid or whatever. And it was like, who knows that? What are you asking me for next, like, the phase the moon was in when, you know, conception occurred? It was ridiculous. And so there are all these things that are actually technological barriers to people accessing benefits. And what they found in Michigan, and it's actually a public interest technology success story, is that when they reduced the time that it took to fill out this incredibly onerous form with all the unnecessary information, then, yeah, they got benefits to more people.
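    As a rough illustration of the kind of audit the Bloomberg story describes, here is a sketch, my own and not Bloomberg's actual pipeline, of prompting a text-to-image model with occupation terms and saving the outputs for human coders to tag. It assumes the open-source diffusers library, a local GPU, and an illustrative model name and sample size.

        # Generate a small batch of images per occupation prompt; the audit
        # itself happens afterward, when humans code each image for apparent
        # race and gender and compare the distributions to reality.
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        pipe = pipe.to("cuda")

        occupations = ["a photo of a doctor", "a photo of a nurse",
                       "a photo of a dishwasher", "a photo of a CEO"]

        for prompt in occupations:
            for i in range(10):            # a real audit needs far more images
                image = pipe(prompt).images[0]
                image.save(f"{prompt.replace(' ', '_')}_{i}.png")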

    Obviously, our society has inequities in race and gender and class and geography. And I wonder if you think, if journalists and newsrooms don't use this technology in the right way, if there is a chance, a very high chance, a low chance, you tell me, that we end up amplifying those inequities, that we end up pouring into them, because we're using this information and using AI to amplify and exacerbate problems that already exist?

    No, that's a very real possibility. I was really thrilled to see that there were so many sessions at ONA this year about confronting white supremacy, you know, looking at bias, kind of looking at ourselves as journalists and using that reflection to make our work better, right, because we all have unconscious bias. Like, we're all working to become better people, but we're not there yet. You know, and so we have unconscious bias, and what everybody does is they embed their unconscious bias in what they create, in what they write, in the technologies that they create. So one of the things that we can do in order to help our writing be better and more inclusive is we can examine ourselves and say, okay, like, how do I need to be a better person?

    What do you think that most of us get wrong about AI?

    I think there's magical thinking around AI. In any room where we're talking about artificial intelligence, like, half the people are secretly thinking about the Terminator or thinking about Star Wars or thinking about Star Trek or...

    Minority Report, but I feel like that's the one.

    Minority Report. Yep. Yeah, Hollywood is so deeply embedded in our brains, right? So here's something interesting about how our brains work. We are better at remembering stories than we are at remembering facts and statistics. You know, so Hollywood is really good at telling stories; they're geniuses at telling stories. And so we do think about Minority Report or the Terminator, because they're really well told stories, and we think about that first before we think about AI being math. Right? And so we can use this as reporters when we are writing data-heavy stories. We can make sure to couple the data with anecdotes so that it sticks better in our readers' minds.

    I love that you keep saying that AI is math, and, like, that it's just math, because we're journalists, so you know the big joke is always about how I became a journalist to not be around math, which is always the first lie you realize, right, when you get out of school and get your job: this job is so much math. So I wonder, like, how do we approach this without fear, and how are we smart about it? Because oftentimes many journalists are so fearful around math, around data. You know, we have a lot of, like, general assignment reporters who are not necessarily beats or specialists. So maybe I'm not dealing with AI or generative AI today for this story, but for the next story, or maybe I'm not thinking of how it intersects with my beat in a real way, like we just talked about the police beat and crime beat.

    Well, math anxiety is very real. You know, I talk about this on the first day of every class that I teach, and I tell my students, listen, it's not that you're bad at math, it's that you have been taught math badly. Right? And nobody has told you that math anxiety is a real thing. And so the reality is, if you are okay with math up until about sixth grade, right, then you'll be fine as a computer programmer, as a professional, right? So as a journalist, you already know all of the math that you need in order to be, like, a really competent data reporter or an algorithmic accountability reporter. So just hearing that often makes people feel like, oh, okay, maybe this is not so scary. And I really do want people to not be scared of AI and not be scared of technology. And I also want to normalize the idea that technology is almost always broken. Right?

    It is, almost. So it's always broken, right?

    Like, I was helping my friend's mom yesterday with something that was broken on her Outlook. And she's an enormously, enormously talented, brilliant, multi-credentialed, you know, like, world-changing person, but her Outlook was messed up and she was like, oh, I can't figure this out. Like, raise your hand if there is not a single thing in your life, technologically speaking, that is broken right now. Right. So, yeah, everybody has broken technology all over the place. So I don't know why we expect that the technology is going to be better.

    Because it's all about how it's sold to us all the time, right? It's totally sold to us that way: it will make things easier, it will be faster. And then we get in there and we're like, this is neither faster nor better. Please don't give me error 777. I don't want to hear, restart this computer.

    It's going to make it worse, because they're gonna ask you, did you turn it off? Yes, I turned it on again.

    That's not a solution to me. I want you all to start thinking about what questions you have, and I encourage you to put them in the app, because we're definitely going to be taking those later. When we had the introduction, she mentioned that, yes, we've had a lot of, like, seminars and stuff here on AI. And I want to read this from my notes to make sure I got it right, because I heard someone say in one of the sessions, AI cannot be an achievement for all of us, because we're not all in this together; people have different advantages and disadvantages. And I just wonder what your thoughts are on that statement.

    Yeah, people do have different advantages. I think that that is a good opportunity to reflect on privilege and how privilege is embedded in AI. One of the things that I do in my classes is I teach using the computers in the classroom, which frustrates a lot of students. They're like, oh, I want to bring my laptop, I want to use my personal laptop, because, you know, I'm so much more comfortable with it. But I discovered many years ago that there is a lot of inequality in the classroom around who has what technology. And so I'll have a student who, you know, is really excited about using their, like, brand new souped-up Mac laptop, and then have another student who is on a PC that's, like, 10 years old and takes 10 minutes to boot up and can't run any of the modern programs. So it would take an awful lot of time in the classroom to troubleshoot everybody's individual laptop, and that would take away from the learning experience. But also, the disparities in the students' ability to learn based on how much money they have, which is reflected in the technology they have, it just doesn't feel right to me to make that part of their educational experience as well. So I teach on the computers that are in the classroom, because NYU is generous enough to have these incredibly souped-up computers that everybody can use, and it puts everybody on the same footing.

    I mean, that digital divide, as you all know, starts very early. It's definitely a class issue in every state and region and territory in the US, and then it can have impact. I'm wondering if you see that playing out also in journalism. I hadn't considered it, as you said, in the classroom sense. But then, do you think it plays out, too, when these students graduate and they get their first journalism jobs or newsroom jobs or adjacent jobs, and the impact that would have?

    Yeah, absolutely. Absolutely. Newsrooms are very differently resourced. I've worked in newsrooms where, you know, there is an expense account for buying new technology, and if you need a particular app or a particular piece of software, people are like, okay, just buy it and, you know, we'll reimburse you. And then I've also worked at places where there's absolutely the minimum. Like, at one point, I remember my corporate email would only accept one gig of stuff. And so my email box kept getting filled up, like, every other day, because people were sending all these press releases with fancy attachments and whatnot. And I was like, why don't I have more storage space? Like, why is the newsroom not buying a better email system? And they just didn't have the money or the know-how to do that. And so newsrooms are just really differently resourced. And so we need to, you know, we need to be sensitive to that.

    How do you think we came to this place of, I like the term that you use, technochauvinism, am I saying it correctly? How do you think we came to this time and space and place? And was it inevitable, having that be an issue?

    I mean, it was not inevitable, but it was definitely a strategic decision by a very small and homogeneous group of people. So one of the things that I covered in my last book, Artificial Unintelligence, is I looked back at the history of computing, because I had these beliefs about what the future was going to look like, technologically speaking. When I started tracing back the history of those beliefs, I discovered that most of what we've been told collectively about the future of technology actually comes from a small and homogeneous group of people who are mostly Ivy League and Oxbridge educated white male mathematicians. Right, which, you know, like, some of my best friends are Ivy League educated white male mathematicians, so it is not necessarily a problem.

    Famous last words. Some of my best friends are... Yeah.

    But again, we all have unconscious bias, and we embed it in the technologies we create. And so the vision of the future, the vision of, okay, a kind of sleek world where algorithms govern everything, where there are self-driving cars that bring us our groceries and we get to stay home and play video games all day, and, you know, we don't have to interact with people, that is a vision of the world that comes from a very specific type of person. That is not my vision of the world. That is not the future that I want. And it's a vast world. There are lots of potential futures. And so by resisting the future that's articulated by a small and homogeneous group of people, we can expand the voices who are articulating possible futures and we can start to see different futures. The other thing that I found so fascinating in my research was that the idea that there shouldn't be geographies or rules on the internet came from hippies in California. Really? Yeah, from the 1960s.

    I knew you were gonna say the LSD years. I just felt it.

    100%, 100%. So if you remember the Whole Earth Catalog... Alright, so the Whole Earth Catalog, I'm really getting deep into it. Yeah, I'm into it. Alright, so the Whole Earth Catalog was the hippie Bible. It was this magazine that people read on the communes, and it had things like how to build a geodesic dome, how to handle your clothing, like, all of this fun stuff. And in the back, there was a comment section, right? So people would write in with their commentaries on, you know, commune living, and their, you know, tool recommendations, and their geodesic dome, you know, troubleshooting tips, like, here's what you do when you get a leak in the roof in the middle of the night. And people loved the interaction in the back of the Whole Earth Catalog. So, you know, the idea that there should be comment sections on the internet, that people should be able to chitchat back and forth at the bottom of newspaper articles, yeah, that's where it comes from.

    I need a time capsule to the 1960s to battle, like, the person who made this book.

    Well, actually, he's still around. Great. Yeah, he's still kicking around. He still, you know, shows up at conferences, he's still, like, buddies with Jeff Bezos, oh, you know, all of that stuff. So the thing is, like, the communes failed, and a lot of the folks who were on the communes said, okay, well, you know, our communes failed, but here's this new uncharted world called cyberspace, and we're going to take our values and put them into cyberspace. Right. And so the idea that government doesn't belong on the internet and tech companies can just do whatever they want, because, you know, first principles, that comes from, you know, these kind of privileged hippies.

    I love that connection, because it provides context, which I think should be our primary job in any story that we tell, and often we're telling stories and things in a vacuum. And I feel like we can't have this conversation about AI without the context of this and misinformation, because we are in that era; perhaps we have always been in that era. But I think, ahead of, you know, more than a year out from a major election, it's, you know, on more minds. I wonder, what are your concerns and your hopes for us in terms of covering it and using AI with all this dis- and misinformation out there?

    Well, I think the proliferation of generative AI is a disaster in terms of the production of disinformation. I mean, it's a disaster for us, because it's so easy to produce dis- and misinformation now. I think that one of the major ways that we could combat the misinformation crisis, or the coming information crisis, is by funding the media better. You know, if we have more journalists, if we have more disinformation researchers who are empowered to raise, you know, raise concerns, then yes, that would help combat the flood. Is that going to happen? I don't know. You know, I'm not in charge of that. If I were, I would just throw a lot of money at it and I would fix the problem, but unfortunately, I am not the boss.

    We can make that happen. I mean, I think a lot of us approach this from an individual level, even as an individual journalist or an individual news organization. And I'm wondering, is that the right thing? What is the role of government in all this, and what do you think the role of government should be, in terms of how we manage AI, what we allow it to do, how we weed out inequities, make things better? What should government's role be?

    Well, I think government should have a role in this. I think that getting people more technologically literate is a way of strengthening democracy, of getting people more involved. And I'm also really optimistic about the work that's come out of the White House Office of Science and Technology Policy in the past several years. I don't know if all of you are familiar with the Blueprint for an AI Bill of Rights, out of the White House Office of Science and Technology Policy under Alondra Nelson. It is fantastic. It comes from a human rights framework, and it articulates, basically, that the rights that you have in the real world should also be preserved inside AI systems, right, which sounds really simple, but it's extremely powerful. And you also realize that, oh, wait, that was not happening before. And honestly, it's still not happening. I see the Blueprint for an AI Bill of Rights as a call to arms. It is not legislation. It is an important step toward legislation and regulation, but it does not yet have the force of law. So something interesting, I think, that's in it is that it says if an AI makes a decision that goes against you, you have the right to remedy.

    I read that, and for those who don't know, you can find this on the White House website; it's easily searchable on the web. And I did find that fascinating, like, if the AI rules against you, there should be someone that you could go to and say, hey, this isn't it. It also was very big on the right to privacy, which I think we've been talking a lot about, like, nationally and societally, and it wasn't really, to me, meshing in the same way as what we've been talking about and living. What do you think are must-haves in a Bill of Rights?

    Well, I'm pretty happy with what they came up with. You know, and I love that idea that there is remedy because right now, like how many times have you been frustrated by a technology and you want to call somebody and like have something done about it, and there is no phone number?

    I hate that. And it's the worst, like, three or four times a year? Yeah, yeah,

    Yeah, it's terrible. I mean, the fact that there is no phone number anymore is a kind of technochauvinism. People think they're being really cool by, like, being only digital, but really, you're just being annoying. And, like, also, people are trying to save money on customer service. Well, before you start trying to save money on customer service, make sure that's actually what your customers want, because, you know, a lot of the time they don't. It's super annoying. So, I am really optimistic about having a right to remedy, because with cases like mortgage approval algorithms, right, like, people are trying to use these automated mortgage approval algorithms, which will decide if you get a massive loan; of course, a mortgage is a major way of building generational wealth, right? And traditionally, there's been bias in who gets mortgages. The Markup did a really terrific investigation into mortgage approval algorithms and found they were 40 to 80 percent more likely to deny borrowers of color as opposed to their white counterparts. And so, under the vision articulated in the Blueprint for an AI Bill of Rights, when you get turned down for a mortgage, you would have the right to talk to somebody who would explain why the machine made that decision. And, very importantly, the person would be empowered to change the decision. Because right now, too often, people are like, oh, well, that's what it says in the computer.

    Yes, yeah. And it's totally opaque and no one knows why it says that thing. I just want to check in and see if we have any questions. Okay.

    So, we've got a question from Ben and Alyssa that are similar. Alyssa mentioned that there's the elephant in the room of newsrooms that want AI so they can pay fewer journalists and editors, and the question is similar from the two, basically asking, how can we best evangelize around these issues and make it clear before it's too late?

    All right. So I think that that idea that we want to use AI instead of a journalist usually comes from the business side. It doesn't usually come from the newsroom. So we need to be clear about where that imperative is coming from, and we need to not let the business side make knuckleheaded decisions about technology. Because, you know, we all make knuckleheaded decisions about technology sometimes, and we can blame the business side of the newsroom, but, you know, it just happens. Yeah, but let's maybe not make that particular knuckleheaded decision this time.

    There was a question about language: we use the term artificial intelligence when there may not be actual intelligence; we use machine learning when the machines are not doing learning in the same way that we would think of it. Do you think that language is part of the issue? That question comes from Jack in Illinois.

    Oh, yeah. That absolutely factors in. When you say machine learning, it makes you think there's a brain inside the computer, right? When you say artificial intelligence, it makes you think that there's a brain. Or even when we say something like smart speaker or smart thermostat. I don't know, do you have one of those smart thermostats?

    Like, I don't. I live in a building that's 1,000 years old, so no.

    Yeah, I bought a bunch of them, and, like, one of them just didn't work. Like, the circuit board was busted, and I was like, this is not smart.

    And was your dumb thermostat then mocking you, like, it could have...? Yeah.

    The dumb thermostat was like, listen, why did you give up on me? Yeah, exactly.

    It could have been me. You wanted the new thing.

    Yeah, exactly, exactly. I felt very foolish. So anyway, right, yes, the language matters. We can also think about who chose those names, and what were they trying to do by choosing those names strategically. So artificial intelligence as a field got started in 1956, at a summer meeting at the Dartmouth math department, and the reason they called it artificial intelligence is they were trying to distinguish their field from a previous field called cybernetics. Right? So there was this guy Norbert Wiener, who kind of owned the term cybernetics and was an older gentleman in the field. And Marvin Minsky and John McCarthy and a bunch of other younger folks were like, oh, yeah, Norbert, he is, like, old and irrelevant. We're doing the new hotness, and so we're gonna call our field something different. We're gonna call it artificial intelligence. And they also chose that name because they were big sci-fi fans. Right? I mean, the tech world is actually uniquely bad at choosing names sometimes. Python, for example: you think Python is named after a snake. Yeah, it's not. It's named after Monty Python.

    You know what? Yeah, it's too much. Too much, right? These inside jokes. Yeah.

    People choose names because they just like things, and they're like, oh, I want to name my computer program after this thing that I like, you know, kind of like you name a dog after something you like. But, you know, words have impact. So artificial intelligence is a very badly chosen name. Machine learning, actually, a better name for it would be computational statistics, but that does not sound sexy. Machine learning sounds really sexy, and it sounds like the kind of thing you want to throw money at, and computational statistics does not. And so it was strategic.

    I mean, it also sounds like, if we were using the terms that you're using, like computational statistics, it feels easier to hold that accountable, because, right, statistics, we know we can run back to that, as opposed to, like, the machine is already learning, like its brain is already going, so it feels a bit more fearful.

    100%

    We've got a number of questions, kind of two-sided. One is, can we use better data, larger data sets, to train AI better? And then, on the other side, is it possible to have better-trained AI that can identify bias in itself and in other algorithms?

    Okay, so yes, it is possible to make these systems more accurate if you use more data. In the case of something like the mortgage approval algorithms that I mentioned earlier, it is not possible, because there is no such thing as a world where there is not financial discrimination in housing, right? Like, you theoretically could...

    but the data doesn't exist

    The data does not exist, because the world is not a perfect place. Right? So you just cannot do everything that you imagine with math and with data. I mean, we know this as data journalists, right? Like, often you'll go and look for the data on something and it just doesn't exist, and then you have to do a different story that comes from the data that does actually exist. Oh, and I should say that I was at a tech conference the other day, and somebody was talking about how they achieved really good results on getting better output from generative AI for their coders. Right? So they were using one of these collaborative coding things, where they fed in all of the code that had been written by their top engineers, and trained a model on that, and then used that model to help other coders at the company generate new code. And they were really happy with the results. Right? So customizing these larger generative models with small data from your organization, people are having a fair bit of success with that. Again, we can't expect that it's going to replace journalists, but maybe it's going to, you know, help you generate very small things like code segments faster. So can we make it more accurate? Yes. What was the second part of that question?
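    For what that kind of customization can look like in practice, here is a rough sketch, my own assumptions rather than anything described at that conference, of fine-tuning a small open model on an organization's own code snippets. The base model, file contents and training settings are all placeholders, and it assumes the Hugging Face transformers and datasets libraries.

        # Fine-tune a small causal language model on in-house code so its
        # suggestions follow the house style. Everything here is illustrative.
        from datasets import Dataset
        from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer,
                                  TrainingArguments)

        snippets = ["def parse_date(s): ...", "class Fetcher: ..."]  # stand-in code
        tokenizer = AutoTokenizer.from_pretrained("gpt2")            # placeholder base model
        tokenizer.pad_token = tokenizer.eos_token
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ds = Dataset.from_dict({"text": snippets}).map(
            lambda x: tokenizer(x["text"], truncation=True, max_length=512),
            remove_columns=["text"])

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="house-style-model",
                                   num_train_epochs=1,
                                   per_device_train_batch_size=1),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        )
        trainer.train()   # the tuned model then backs the coding assistant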

    I think the first part was about whether, if we fed it more data, it could be better. And I think earlier you had said, right, part of the issue with that is that the data is flawed. But, you know, if you want to restate it...

    Well, what was the second part? There was a second question. That's the second part; I'll answer the second one.

    If AI can find bias in itself, can we get... Oh, yeah.

    Okay, so this is about algorithmic auditing, and about mathematical methods for adjusting for bias. So let's stick with the mortgages. When you have a system where you've identified the fact that it gives more mortgages to, or approves more white people for mortgages than, people of color, you can put your finger on the scale. Like, you can use mathematical methods to say, okay, give more mortgages to people of color. Is anybody doing that? Not to the best of my knowledge, right? Most people have only gotten to the point of investigating algorithms for bias. They have not gotten to the point of remediating these things. I think that it's probably going to happen first in heavily regulated industries like finance.
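    As a very rough sketch of what "putting a finger on the scale" could mean mathematically, and not a description of any production system, the snippet below adjusts decision thresholds per group after an audit finds a gap in approval rates. The scores, group labels, and target rate are all made up.

        # Audit, then remediate: pick a per-group threshold so the approval
        # rate is the same for both groups instead of using one global cutoff.
        import numpy as np

        rng = np.random.default_rng(0)
        scores = rng.uniform(0, 1, 1000)        # the model's approval scores
        group = rng.integers(0, 2, 1000)        # 0 / 1 group labels (illustrative)
        scores = np.where(group == 1, scores - 0.1, scores)   # systematic gap

        base = 0.5
        print("one threshold:",
              [float((scores[group == g] >= base).mean()) for g in (0, 1)])

        # Group-specific thresholds that equalize approval rates at 50%.
        thresholds = {g: np.quantile(scores[group == g], 0.5) for g in (0, 1)}
        print("adjusted:",
              [float((scores[group == g] >= thresholds[g]).mean()) for g in (0, 1)])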

    I think we can take one more question quickly.

    Okay, I can summarize several questions. Journalists want to know, as we're using these tools in newsrooms, how can we be more transparent about how we're using them, and the potential deficiencies, as we're covering AI? What could go wrong? Great question.

    Well, I think we can cover AI as if there are problems in it. Right? Like, we can stop just reproducing press releases from tech companies.

    Turn that mic up. Say it again. Like really close.

    Yeah, I mean, the time of fawning press coverage for big tech companies should be over. And I think that newsrooms should definitely have conversations about, what's our policy around disclosing use of AI? Because we don't tell readers that the reporter used Google while researching the story; like, nobody cares. Of course you used internet search. Big whoop. You absolutely should disclose if an AI is writing an article. Like, the AP does a really good job of labeling which articles are written by AI. And so this is a policy issue, and it's probably going to change over time. So we should expect to be dynamic in how much we disclose to our readers about our uses of AI.

    Meredith, I want to thank you so much for giving me your time today, and the audience your time today. Guys, clap it up for Meredith.

    Thank you, everybody.

    And I want you to know, I think you gave us several takeaways, but obviously the most important is that AI is just like a McDonald's ice cream machine, and all technology: it's always broken. Thank you guys so much. Have a great day and a great rest of the conference. Thank you.