Identifying and Dismantling Tech's Deep Systems of Bias
8:06PM Mar 3, 2021
Speakers:
Devin Coldewey, Haben Girma, Mutale Nkonde, Safiya Noble
Keywords:
people
technology
ableism
conversations
companies
world
accessibility
safiya
disinformation
knowledge
question
kinds
frankly
problem
ai
design
exist
thinking
accessible
facial recognition
Hey, everyone, welcome. Thanks for joining us here today. This panel is about the systems of bias that are deeply embedded in tech, the kind we can't just patch out, as it were. I'd like to start by letting Haben Girma, our guest today, introduce herself and her means of communicating. And maybe, while you're speaking, you can also tell us a little more about the benefits of the latest technology compared with the barriers it introduces for people with disabilities.
Hello, everyone, this is Haben speaking. I'm a Disability Justice advocate. And one of the reasons I came to this work is because I'm deafblind, and the world is mostly designed for people who can see and hear. So most of the technology that's built was not imagined for disabled people, which is frustrating, and also absolutely ridiculous. Tech has so much potential to exist in visual forms, in auditory forms, in tactile forms, and even smell and taste. It's up to the designers to create tools that everyone can use. And I want to describe one of the tools I'm using right now. To speak, I'm using my own voice. But to know what the other panelists are saying, I'm using a Braille computer. I'm holding it up right now; along the bottom are Braille dots. As people speak, I have an assistant typing what they're saying, and the words pop up in Braille. So I'll be reading the words and then responding by voice, and you might notice a delay between when someone speaks and when I respond. Back to you, Devin.
Thank you very much, Haben. So the whole idea that technologies, as you've mentioned, can and do harm or benefit different groups specifically is very much at the heart of your work, Mutale. Have you found that the bias there is primarily in the technology itself? Or is it in the people that are using it? Or is that distinction not really meaningful anymore?
Um, hi, everybody.
Thanks for that great question. So I came to this work really through policy. And in the policy world, we're thinking about impact, right? So we're not necessarily thinking about technical infrastructure, or HR, but one of the things that I found throughout my career, both in industry and in research, is that the two are linked. So there is a problem of technologies which are inherently racist, or sexist, or ableist, as Haben so beautifully pointed out. But there is another part, which Haben really spoke to as well, which is an imaginary for technologies that could actually serve all people. And if the scientists who are creating those technologies don't have experience outside of their own experiences, or we're sitting in a moment where Google AI has gotten rid of Mitchell and Gebru, both of whom were technologists and researchers from minoritized communities who were thinking about new and different ways that tools could be designed, then you may not see them coming to products. I'd say that the two are definitely married. And I'm really looking forward to the rest of this conversation to get much deeper into that.
Absolutely. Thank you for that. And Safiya, your work has found bias that's both more obvious and more subtle, specifically in, among other things, search engines. What would you say is the main failure by companies like Google and others who have attempted to organize and present information from the internet?
Well, thanks so much for inviting me here. And I also feel, you know, so strongly that these conversations about the harm and the neglect, quite frankly, in technology design should be the most important conversation that is going on in Silicon Valley right now. And I'm really grateful to people like Haben and Mutale, and so many women, in particular women of color, LGBTQ women, and scholars and journalists who've really pushed these conversations into the mainstream, quite frankly. So I'm really grateful to be in their company. And, you know, my concerns were with what we might think of as just banal technologies, things that we really don't give a second thought to, and that also present themselves as broadly neutral and valuable. And of course, this is where I became interested in looking at Google search, because Google's own kind of declaration that they were interested in organizing all the world's knowledge, I think, was a pretty big claim. Of course, I'm coming out of the field of Library and Information Science, and thinking about, I don't know, thousands of years of librarians, for example, around the world who have indeed been organizing the world's knowledge, and what it means to have an advertising company, quite frankly, data mine our knowledge, but also commingle it with things like disinformation, propaganda, patently false information and ideas, and really flatten our ability to understand knowledge and good information. And of course, we know that knowledge and good information are essential for things like democracy. So, you know, I think that, for me, this has been a really important place to look. And I stay a bit focused on that, because I see how profoundly that technology is designed in discriminatory ways, in misrepresentative ways. And I think that the public deserves to have strong information and knowledge institutions and organizations, especially ones that fairly represent people who are oppressed and marginalized. And that knowledge needs to be used in service of liberating people from oppression and marginalization. So, you know, this is kind of what brought me to looking at that specific technology.
Yeah, you know, the three of you have all mentioned, basically, the deepness of the problem. This isn't just a recent thing, and it's not a simple thing; it goes way back to the origins of the technologies and the companies themselves. Haben, you mentioned that the internet and the web were not designed from the start with people with disabilities in mind. Mutale, you mentioned that the companies are not involving the people who need to be involved in order to produce equitable software. And Safiya basically just said that it's an advertising company that's attempting to organize and present the world's information, but that's a fundamental conflict of interest. Do you think it's possible for the tech industry, or a single company, to accomplish a goal like organizing and presenting information, or creating equitable AI? Maybe Safiya, you can start with that?
Well, you know, in libraries, we understand, for example, that there are many parameters and constraints to deciding what to keep, what's valuable, what's credible, what's been vetted, what is scientific, what has artistic value, and those ideas are, quite frankly, contested all around the world. I mean, it isn't that there's a, you know, unanimity of values. And of course, this is one of the reasons why, for example, librarians of color, and libraries in other parts of the world outside of the United States, have really struggled to not have English-language dominance, you know, white American dominance, European dominance, in, let's say, all of the knowledge spheres that are legitimated in the world. And so these things are very contested; they're political and important issues around, again, being part of the cultural record of a society and having one's knowledge be used to further a more equitable society. So I find it really troubling that those with the most money get to buy their way to the top, which is, of course, what happens in a search engine. Those with the most capital win, and also if you happen to be a product or a service or a website that furthers Google's own interests, or the others', you know, all the commercial search engines really have the same shared logic, which is helping them to be as profitable as possible. And this is one of the reasons why you can find disinformation, pornography, misrepresentation, racist ideas, things that are titillating and profitable. I mean, of course, we live in the United States; racism and sexism are big business, extremely profitable, ask anyone in Hollywood. So you have these kinds of technologies that are driven by a profit imperative, and it's actually profit at all costs. And this is where minoritized voices, and people who are not part of the profit driver, are really, quite frankly, disposable. And one of the ways I think that we've seen that most profoundly is the way in which the worst kinds of ideas, sometimes in very important social crises, come to the fore. Because, you know, disinformation is also profitable; it goes viral. And, you know, it makes search engines a lot of money. So I think these things are extremely dangerous, obviously. And, you know, I tried to document many, many, many instances of this in my book, but of course, I could write a whole other book about these kinds of failures since that book came out. I'm not doing that, by the way, but
I will expect it anyway. You heard it here first. That's the news. I actually want to, you touched on disinformation, and that aspect actually dovetails well with something I was going to ask Haben. Haben, you mentioned this really kind of wild conspiracy theory that has been spreading on TikTok, that Helen Keller didn't exist. And it's especially difficult because TikTok's accessibility is so bad that deafblind people such as yourself can't get on there to combat this disinformation. So what kind of new biases and problems does it create when there is not only disinformation, but inequity in your ability to address that disinformation?
We have to go back to the explanation that tech does not have to be inaccessible. Blind and deafblind people have used technology for quite a while, and were early users of technology, including as designers and engineers. We are on many of the social media platforms; there are blind and deafblind people on Twitter. TikTok was not designed with accessibility in mind, so there are very few of us on TikTok. I don't know if there are any deafblind people on TikTok. So when you have a space where there are few disabled people, ableism grows. Ableism is the assumption that disabled people are inferior to non-disabled people. It's widespread. It comes up in design, laws, policies. So people on TikTok have questioned the existence of Helen Keller, because the people on the platform can't imagine how a deafblind person would write a book, or travel around the world, things that are well documented that Helen Keller did. There are even videos of her giving talks and signing, videos demonstrating how she read and wrote through Braille. And there's also lots of information on how blind and deafblind people are doing these things today, writing books today, using technology today. So when you have these spaces where there are no disabled people, or very few disabled people, ableism and negative biases grow more rapidly. And that's incredibly harmful, because the people there are missing out on talented, diverse voices. Our communities are richer when we have all kinds of voices, including disabled voices, participating in conversations. And in terms of tech accessibility, guidelines on how to make tech accessible have existed for decades. There's the Web Content Accessibility Guidelines for making websites accessible. Apps and mobile platforms have their own accessibility guidelines; there are Android accessibility guidelines and iOS accessibility guidelines. So the tools already exist for designers. It's just that many designers themselves suffer from ableism, the assumption that disabled people can't use smartphones, can't use websites, so they feel like they don't need to bother to think about how to make it accessible. So ableism gets built into the products. And then the people who use the platform also spread ableism. And that's not the kind of world we want.
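For the designers in the audience, the guidelines Haben mentions translate into small, concrete decisions in everyday code. Below is a minimal sketch, in TypeScript with React, of the kind of semantic, labeled markup WCAG asks for; the MediaCard component and its props are hypothetical illustrations, but the alt text, the native button element, and the ARIA attributes are standard web accessibility practice.

```tsx
// Hypothetical media card, written so screen readers and Braille displays get
// real text alternatives, visible labels, and native, focusable controls.
import React, { useId } from "react";

interface MediaCardProps {
  title: string;
  coverUrl: string;
  coverDescription: string; // meaningful alt text, not a filename
  onShare: () => void;
}

export function MediaCard({ title, coverUrl, coverDescription, onShare }: MediaCardProps) {
  const titleId = useId();
  return (
    <article aria-labelledby={titleId}>
      {/* Alt text gives non-visual users the same information the image conveys */}
      <img src={coverUrl} alt={coverDescription} />
      <h2 id={titleId}>{title}</h2>
      {/* A native <button>, not a clickable <div>, so keyboards, screen readers,
          and switch devices get focus, role, and activation for free */}
      <button type="button" onClick={onShare} aria-label={`Share ${title}`}>
        Share
      </button>
    </article>
  );
}
```

None of this is exotic; it is the kind of thing the WCAG and platform guidelines have documented for years, which is Haben's point that the tools already exist.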
No kidding. Thank you for that. And that also addressed a couple of the questions that I have from the audience about resources for designing for accessibility, and we can get to some more of those later. And actually, a couple of the ones that I have here from the audience are very much what I was about to ask you, Mutale, about racial bias in, for example, the justice system. Obviously, you work very specifically with facial recognition, among other things, and this is an especially urgent problem because it's being deployed by a racist justice system. So in addition to that, what do you think are some other technologies that are being deployed against Black communities, against marginalized communities, that we might not be aware of?
So I'm going to say two things: I'm definitely going to answer that question, and I definitely did not bring a charger for my computer. So after I answer the question, I'm going to go dark for a couple of moments and then get my charger. So I apologize.
No problem, thanks for the advance warning.
But thank you for the question. Yes. So back to my policy days, one of the bills that I was responsible for introducing was the No Biometric Barriers to Housing Act, which came in the last Congress, remember that one, where we had Mitch McConnell and Paul Ryan and Trump. So we knew we were just speaking to each other and it wasn't going to go anywhere, but we introduced it anyway, because why not, we can always try to have a better world. And one of the things that really brought me to was this whole host of technologies that, when used by security forces or policing, reinforce these discriminatory impacts on Black communities. So that could be the way license plate readers were used by ICE to identify cars, and when they pulled people over, they would do these additional biometric checks, whether it was fingerprinting or iris readers, and then use that to criminalize these people on the road to deportation. Which is an example of how technology being used in the name of security, in this case national security, would have this horrible human rights impact. Or we saw at the Capitol riot, which I'm sure we're all already familiar with, the way that cell phone towers could be used. And I'm getting into an, ah, I'm at TechCrunch, talking about technology, and I've forgotten the name of the technology, but that's on me, I also don't have a charger, it's a thing. You know, where security forces can actually tap into your phone and get your location data from it. And that was what was used largely at the Capitol, right, to put people on no-fly lists. We saw this same thing during the uprisings of 2020 with George Floyd, where police forces were able to see who had been at a demonstration. By the way, it is our constitutional right and our First Amendment right to protest our government, right? But we often think about First Amendment rights as the right to say that Helen Keller doesn't exist on a private platform like TikTok, which is a misreading of the law, a different question, I can get to that later. But we were seeing how all of these technologies, on their own, are impacting Black lives. But imagine when all of those technologies are together. Imagine when, here in New York, I walk to the subway to take a train because I have to go to work, this is old days, not these days, and my face is captured by a CCTV camera that could wrongly put me at the scene of a crime, because it does not recognize my humanity, because Black faces are not recognized by those systems. That's a very old idea that really takes us back to this idea that Black people aren't human, they're in fact three-fifths of a human, which was at the founding of this country, right? But we're reproducing that idea through technology. And then I log on to a system where I'm again faced with similar technologies that discriminate, and it's that kind of
interlocking weave of oppression that used to exist in the analog world and has now become systematized, now become just part of how we decide who gets benefits, just part of how we decide who gets bail and who doesn't get bail. And lastly, before all my charge runs out, the worrying thing about that is we don't have the case law or the policy to combat this. So, very quickly, in the criminal justice system, there is a case, Loomis versus the State of Wisconsin, where COMPAS, which is a sentencing algorithm, was used to decide how much time this defendant would get. In the ruling memo, the judge says that part of how the decision was made was using this algorithm. And when that case is contested, when that case reaches appeal, the Supreme Court of Wisconsin then says, well, you know, Loomis was a pretty bad guy who did all of these things, and it wasn't the only thing that was used, it was just one thing that was used. So we're allowing the algorithm to decide what happens. That's scary, particularly because that was in 2017. But fast forward to 2021, and Nijeer Parks, in New Jersey, sat in jail for 10 days when wrongly identified by facial recognition. He then was able to use data to fight data and prove he was 30 miles away from the scene of the crime, of course, the algorithms that we look to... that's not a joke, by the way, that's horrifying and terrible. Yeah. And because he didn't have bail money, which is another oppressive system, he sat in jail for 10 days. So I really could over-talk on that one point. But, and this is something I would love to get into conversation with Haben about, it doesn't follow that just because the technology is deployed in a way that is harmful in one sense, we can allow that to justify technological ableism in another sense. So even as we're thinking about these issues, we have to be nuanced. And it's actually one of the reasons why I left white papers behind, I still write academic papers, but I left white papers behind and started to think about how I can use art and culture to tell these stories, because the nuance is also getting lost. And it goes back to something Safiya said: this knowledge is being generated by mostly feminized folks, mostly folks of color, many Black women, there are three Black women on this panel. And the fact that we are generating the knowledge means that automatically the people in Silicon Valley don't believe us. So we also have these communication barriers that we have to fight, and I am going to go get my charger
Go grab your cable, don't worry.
Yes, I'll be right back.
No worries. Thank you for that. Yeah, you touched on a lot of things that I was going to ask about in the first place. But I want to hand it to Safiya for a minute, and then Haben as well. Safiya, it seems like, as Mutale mentioned, people and companies are so eager to embrace the new technologies that are coming down the road, machine learning, AI, facial recognition, that it feels like there's not enough time to address the potential problems with them, to document them, to make policy decisions around them. How do we, as someone just asked in the comments here, bring these conversations to companies and make sure that they're as excited about equity and accessibility as they are about bringing that technology to market? And Haben, I would also love to hear your thoughts on Mutale's question of accessibility for technology as well. We'll start with Safiya and then go to Haben. Well, I don't want to overload here.
Yeah, great. Well, I think that one of the things we're seeing, and you know, I really can appreciate the audience of people who would be listening to this session today, you know, people who likely are designers or work in tech companies. So my comment will be kind of geared assuming that kind of audience, versus what I might say, let's say, to lawmakers. I think that, first of all, there's a tremendous amount of risk in not taking up these issues. You know, I've heard that the risk management profile, for example, for a company like Facebook, in terms of harm, what they can't solve with software and AI, where they use, you know, human beings, moderators, quite frankly, to sort through it, the risk that they face is probably estimated around $2 billion, right? This is what I've heard from experts who work on Facebook. So if you're talking about a $2 billion risk, I think then this is a decision that exceeds the design desires of software engineers; now we're talking about the market position. If you're a startup, and you're facing significant risk, let's say legislatively, you know, you might find yourself the subject of a Federal Trade Commission investigation or a Department of Justice investigation. Now in California, you know, we're increasingly seeing more privacy and different types of protections come into existence. And I think you have to think far beyond, you know, what you can do versus what you should do, or what's ethical and responsible to do, and I think these conversations now can no longer be avoided. This is a place where founders, venture capitalists, everything, every VC in the Valley on Sand Hill Road, should have a person who is responsible for thinking about the adverse effects of the products that they might invest in, or the companies that they might fund. How did we get, you know, Elizabeth Holmes? Of course, you know, no Black woman would ever enjoy the opportunity to just burn through millions of dollars on nonsense. So you think about the kinds of discrimination, also, that women, people of color, women of color in particular, experience as founders trying to start up their ideas. They're certainly much more mindful, I think, across the board, around these kinds of risks and conversations than others. And, you know, sorry to everybody who's watching who's friends with Elizabeth, you know, I almost wore my black turtleneck, because that was my look. So, you know, I think these questions about, you know, morality and ethics and justice are sometimes, you know, posed to the wrong arm of the organization, and new employees should be hired. I mean, I often say these companies should be hiring people with master's degrees and PhDs in African American studies, in ethnic studies, in gender studies. I promise you, they're going to have an analysis of, hold on, let's contextualize how this might roll out. I've looked myself; people ask me, they show me their technologies all the time. I remember, you know, someone developing a technology and asking me for my opinion, and in about 30 seconds I was like, you know, this is a stalker app, right? And they were like, what? And I was like, yeah, no woman is going to want this, and then I explained to them all the ways this could be used powerfully for anonymous stalking.
So it's these kinds of, you know, this lack of ability to see and understand from many points of view, including the kinds of things Haben is talking about, but also including, you know, the ways in which certain kinds of products and services can get inverted and weaponized. There have to be more people in the conversation. I think a lot of things shouldn't even get out of the gate, quite frankly, and they wouldn't; they wouldn't pass muster if they had to go through a rigorous kind of evaluation. And those are things that companies can do right now, before they wait for regulators or the Department of Justice to knock on their door.
Gotcha. Thank you for that. I want to give Haben the chance to talk about this, because that is exactly what you're talking about: including every stakeholder from the beginning, and including accessibility from the beginning. Haben, can you tell us how to balance the revolutionary capability of technology like facial recognition, like AI, with the risks? Because it can be so transformative for people with disabilities, for anybody really, but we have to balance that with the risks. Can you help me contextualize and sort of figure out the balance there?
The answer to most of these questions is: have the people involved. Nothing about us without us is the saying in the Disability Justice movement. So if these VCs and companies are thinking about investing in a solution that they think will be good for the world, ask Disability Justice advocates, get us involved. Nothing about us without us. In the past few years, there have been AI-based accessibility solutions, heavily invested in with millions of dollars by VCs, on the assumption and claim that they'll make websites and apps more accessible to disabled people. I've tried them; they're terrible. They actually make websites worse for me to use. And VCs continue to invest in these solutions, and these companies continue to promote these claims that you can add one line of code and it'll make your website automatically accessible to blind and other disabled people, and automatically compliant with ADA requirements. I've tried those websites; it really makes things worse for blind and disabled people. We need the VCs to also connect with Disability Justice advocates, and really find someone with knowledge and background in accessibility and tech. Same thing for any company: all the companies should have their technology, both existing tech and tech in the process of being built, reviewed for accessibility. It's easier to make something accessible if you design for accessibility, rather than trying to make it accessible afterwards. It's like having an elevator in a physical building. You don't build the structure and then think about adding an elevator; you think about the elevator before you design it, so it fits in smoothly. Same thing with digital accessibility: design for it from the start, and include us. Nothing about us without us.
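One concrete way teams try to bake this in from the start, rather than bolting on an overlay afterwards, is to put accessibility checks into the test suite that runs on every change. Below is a minimal sketch, assuming a React codebase tested with Jest, React Testing Library, and the jest-axe wrapper around the axe-core scanner; the MediaCard component is the hypothetical one sketched earlier. Automated scanners only catch a subset of issues, so this is a floor, not a substitute for involving disabled users, which is exactly Haben's point.

```tsx
// Hypothetical test: accessibility is checked on every commit, from day one,
// instead of being patched on after launch with a one-line overlay script.
import React from "react";
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";
import { MediaCard } from "./MediaCard"; // hypothetical component from the earlier sketch

expect.extend(toHaveNoViolations);

test("MediaCard has no detectable accessibility violations", async () => {
  const { container } = render(
    <MediaCard
      title="Haben: The Deafblind Woman Who Conquered Harvard Law"
      coverUrl="/covers/haben.jpg"
      coverDescription="Book cover featuring a portrait of Haben Girma"
      onShare={() => {}}
    />
  );

  // axe scans the rendered DOM for issues it can detect automatically,
  // such as missing alt text, unlabeled controls, or missing landmarks.
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```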
Absolutely. Thank you for saying that. That absolutely seems right to me. I wish that we had more time to talk about this, but I have to wrap this panel up. I think we've already gone over time; my producers are going to kill me. But this has been super interesting. I would love to continue the conversation with all of you after the fact, and I'm definitely going to pick up the thread with you. Thank you so much for joining us and helping me understand some of this stuff way better than I did half an hour ago.