As a software engineer in the thick of data-driven, data-obsessed Silicon Valley, one of the first lessons I learned was that it's actually very difficult to be data-driven when you have no data. Sometimes lack of data is unavoidable or definitional. But in other cases, the lack of data is a choice meant to minimize or obfuscate a problem. Let's take, for example, the issue of diversity in the tech workforce. We as an industry are falling far short of what we aspire to do, to design and build great technology, because of the makeup of our teams. It is attractive and seductive to believe in technology as an objective, neutral implement for progress; to believe that the code that defines our products and algorithms, and the data that feeds them, cannot be wrong or biased. My experiences as a software engineer at startups like Quora and Pinterest, back when they had fewer than 10 or 15 employees, showed me the stark opposite. We were coding all of our very human biases right into the products we were building. It alarmed me, for example, when I was called upon as a woman to represent the viewpoints of all women, which I knew I couldn't do fairly. And I also worried about all the other perspectives we were missing: Black, Latinx, older people, people with disabilities, the less economically advantaged, and so many other lived experiences.

The more I thought about it, the more I was struck by the irony and hypocrisy that for all Silicon Valley's obsession with data, there was no data at all on diversity, which meant no benchmarks, no metrics for success, and no way of knowing if we were doing better or getting worse. In October of 2013, I wrote a Medium post posing the question, "Where are the numbers?", and I set up a GitHub repository to crowdsource data collection on women in engineering. Unexpectedly, I sparked a wave of diversity data disclosures from companies small and large, including companies like Google, Facebook, and Apple. With the data out there, the problem was undeniable, and suddenly very urgent. Four years later, with the now-annual cadence of diversity data reports showing little progress, it is also undeniable that change is hard and will require committed effort over a long period of time. And unfortunately, the solutions aren't straightforward. However, now that we have measurements, we have a framework for experimentation as a means of iteration and improvement. And we have accountability. We've used data to formulate the problem of diversity in tech. Now we're using data to find the solution. My name is Tracy Chou, and I'm a founding member of Project Include. Thank you.
Artificial intelligence reflects human values. First, AI is powered by data. AI models learn about the world by analyzing data, extracting the statistics and patterns of the world from that data. Consider, for example, my PhD work on the ImageNet project. We set out to build AI models that could recognize all objects in the world, and the limiting factor was lack of data. So we collected and annotated hundreds of thousands of images spanning hundreds of different objects, and with this data, we could now build AI models that could actually recognize objects in an image. That work led to a revolution in AI, where deep learning convolutional neural network models, which had been in development since the 1990s, could now be applied at large scale to real-world data. ImageNet also inspired the creation of many other datasets: ShapeNet, SpaceNet, medical image sets, MusicNet, EventNet, ActivityNet. These datasets define what AI is learning about the world.

Second, data reflects society. These datasets ultimately encode some of our human biases: racial biases, socioeconomic biases. Where does the data come from? Do we use the internet to get data? Which search engines, which queries? Do we put cameras into schools to build AI models for better education? Which schools? Because this data contains the biases of our society, AI models will ultimately learn these biases, and these biases will then influence the decisions that AI models make. Some of my ongoing research in the Princeton Visual AI Lab is around understanding, revealing, and, to the extent possible, mitigating some of these biases.

Third, researchers shape AI, and our human experiences ultimately shape us as researchers. I bring my own biases to the table, and you bring yours. Because AI is such a broad field with so many possible applications, it is important to have people with comparably broad worldviews and experiences working on AI.
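The point that a model simply reproduces whatever skew is in its training data can be sketched with a toy example. The labels and counts below are hypothetical, purely for illustration, and the "model" is the simplest possible baseline, not any system discussed in the talk:

```python
# Toy illustration of bias inherited from data: a trivial baseline
# "model" fit on a skewed label set reproduces that skew in every
# prediction it makes. The data here is hypothetical.
from collections import Counter

def fit_majority(labels):
    # Minimal baseline model: always predict the most common training label.
    return Counter(labels).most_common(1)[0][0]

# Hypothetical skewed training set: 90 of 100 examples carry one label.
train = ["label_a"] * 90 + ["label_b"] * 10
model = fit_majority(train)
print(model)  # the baseline now answers "label_a" for every future input
```

Real models are far more expressive than a majority-class baseline, but the failure mode is the same in kind: if the data over-represents one group or pattern, the learned behavior will too.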
We know that AI will change the world, but who will change AI? AI4ALL is a nonprofit foundation that I co-founded this year, which partners with universities and companies to educate high school students from underrepresented groups about AI and to inspire them about all the good that AI can do for the world. I'm passionate about both building AI technology and educating a diverse next generation of AI leaders. My name is Olga Russakovsky. I'm an assistant professor at Princeton University.
Please join me.
You've both raised important issues. I'd like to speak further about the issues of bias, diversity, and inclusion in AI, and in technology in general. So, Olga, you've been active in calling out the need to examine algorithmic bias and the threat that it poses in AI. When did you first realize the field of AI needed to examine that issue?
Well, I'd realized that we have a lot of diversity issues in the people building AI, but for me the conversation around how this affects bias really started when I came to Princeton and talked to my colleague, Professor Arvind Narayanan, who has actually been active in this space for a few years and has published some very interesting work. It was really through conversations with him that I started realizing more and more how much this affects even the technologies that I'm building in computer vision, for example.
Tracy, Olga, this is a question for both of you. I'm wondering, can AI also be used to reduce bias? We've seen new AI tools and services that claim to do things like reduce bias in resume screening, help with job interviews, and write more inclusive job descriptions. In what ways can AI be used like this, and are those tools likely to be effective?
I think they can definitely help. But again, it's very important to look at who is building these tools, what biases they are bringing to the table, what they end up using as heuristics in their algorithms, and how they're checking themselves for bias. So it is very important that we get a diversity of people in the room when we're designing these algorithms, as well as when we're collecting the datasets that feed into the models we're learning.
I completely agree with that. I'll also say that there's a little bit of a danger in that we perceive AI as being unbiased. So when AI is making decisions, we have a tendency to think: well, this is unbiased, it's based on a lot of data, these are machine learning models, so there's no problem. When in practice, there can actually be a lot of hidden underlying biases that, again, have been learned from the data.
Right, so we have to watch out for that. Now, Tracy, as you mentioned in your presentation, you kicked off a movement back in 2013 of companies tracking and disclosing their diversity data. Most of the major technology companies have been releasing these diversity reports for several years now, but one might ask, where are the results? So what is the next step? What should companies do after they have released a diversity report?
I think people underestimated how hard the problem would be. It's going to take foundational cultural shifts. If you look at all of human society, we've had the patriarchy for millennia; it will take a while before we can undo some of these cultural values. In terms of what companies should be doing, I think the data is the first step, and there's a lot of data we can be collecting at different stages of company building and within companies. So it's not just demographic mixes, but also: what do candidate funnels look like? What does retention look like? What does promotion look like? Collecting all this data will help companies understand where the biggest leverage points are for interventions. There are a lot of different approaches that can be taken. I think it's important that we approach diversity in a very inclusive way, so we're thinking about not just gender diversity, but racial, socioeconomic, and so many other different forms of diversity. And we need to think about comprehensive solutions; we're not going to have silver bullets. Things like anonymizing resumes sound great, but we're not going to get to true diversity with just a couple of easy fixes.
Right, there are these different stages: the pipeline and education stage, recruiting, and also retention. You're both involved in nonprofits, Olga with AI4ALL, Tracy with Project Include and others. What institutions do you think will lead the way in tackling the technology industry's diversity and inclusion problems? Do you think it will be nonprofits working in partnership with companies, or nonprofits working in partnership with educational institutions? For example, with you, Olga, is that going to be an effective combination?
I think we need all the help we can get. I think it will come down to everybody working together. I think we need everybody, we need universities, we need companies, we need nonprofits, we need collaborations between all of them. I don't think any one of us can do it alone.
Yeah, I think it's very important that we have the whole ecosystem of players. Just some examples of the different ways these institutions can work together: when I was at Pinterest, we were working with an external diversity consulting firm that was designing experiments we could run, informed by research coming out of Stanford and other institutions. So we were applying research that had been done in an academic setting, around minorities and stereotype threat and belonging, and starting to apply it in our industry setting. And because that diversity consulting firm works with a number of different companies, the people there could see what was happening across the industry. All these different players in the ecosystem bring different perspectives, and hopefully we can work together to drive lasting change.
Okay, so I have one more question for Tracy and Olga, and then we will take audience questions. This time, I will actually take audience questions, sorry about that. So please think about what you might like to ask them. My last question for you two is: what has been effective thus far in terms of creating change? You've both been working to increase diversity and inclusion in the technology industry for years now. What organizations, besides the ones you have co-founded, seem to be getting it right to some extent? Or, if none come to mind, what practices, strategies, and approaches seem most promising thus far?
I'm happy to take that. There are a couple of practices in particular that are proving very effective. One is teaching AI and computer science from a humanistic perspective, emphasizing the good that it can do for the world. There have been studies showing that women, for example, are more motivated, more excited, more attracted to fields that have an impact on social good, rather than just "let's write some code because it's fun." That's not to say that coding is not fun. But there's a lot we can do to attract diverse people to the field by really emphasizing the humanistic impact. The other thing is that there are some great studies by Professor Sarah-Jane Leslie, actually at Princeton as well, showing that fields where innate brilliance is believed to be required tend to be less diverse. If you believe that you can only be a mathematician or an AI researcher if you're brilliant, that's actually going to drive a lot of people away. So we need to really emphasize that these fields are not inaccessible; they are just a set of skills that you learn, and once you learn those skills, you can start being a great researcher and start making an impact. It's not the innate brilliance of researchers that drives this field forward.
Right. And in fact, that humanistic approach is a foundational element of the AI summer camp for young women that you founded, called SAILORS; I guess you were telling me it's now going to be called Stanford AI4ALL. Do we have any questions? Raise your hand and we have people with microphones who will come find you. Okay, it looks like we have one over here, and then we will come to you in the front row. Yes.
I think you guys touched on this a little bit, but I'd like you to address it a little more. Data itself is very biased. Given that it is so biased, how do you parse out some of those biases, so that when you do build AI models, they don't take into account the issues that we already have in place?
I think that's a great question. The short answer is that this is an active research direction; we're only now starting to figure out how to go about it. Part of the answer is that we just need to be asking these questions, we just need to be cognizant of this. We could also draw on existing work. For example, how do we study bias in humans? We know humans are biased, and we use things like implicit association tests to try to get at human biases. Can we apply some of these to AI models? What other approaches are out there? It's a very good question and a very active research direction.
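The idea of porting implicit-association-style tests to models is real research (word-embedding association tests measure whether target concepts sit closer to one attribute set than another in embedding space). A minimal sketch of the arithmetic, using hand-made toy vectors rather than real embeddings, might look like this; the vectors and set names are purely illustrative:

```python
# Sketch of a WEAT-style association test on toy word vectors.
# Real tests use learned embeddings; these vectors are hand-made.
import math

def cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assoc(w, A, B):
    # How much more strongly w associates with attribute set A than with B.
    return (sum(cos(w, a) for a in A) / len(A)
            - sum(cos(w, b) for b in B) / len(B))

def weat(X, Y, A, B):
    # Positive score: target set X leans toward A while Y leans toward B.
    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)

# Hypothetical "embeddings" constructed so the skew is visible by eye.
career = [[1.0, 0.1], [0.9, 0.2]]   # target set X
family = [[0.1, 1.0], [0.2, 0.9]]   # target set Y
male   = [[1.0, 0.0]]               # attribute set A
female = [[0.0, 1.0]]               # attribute set B

print(round(weat(career, family, male, female), 3))  # positive: biased association
```

A score near zero would indicate no differential association; a large positive or negative score flags a learned bias worth investigating.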
Alright, yes.
They asked me to sign up, all right. My question is: I've been working on women's empowerment since I was with eBay, leading the programs for them, since 2008. And one big flaw, I feel (it's my personal opinion, everybody may agree or not agree) is that when I see all these women's movements, they are again biased, because they are run by women. I see all three of you are women. When will these movements start including the men that we work shoulder to shoulder with?
So I think male allyship, and allyship across many different dimensions, is very critical to moving the movement forward. We've seen this with other civil rights movements, with LGBTQ movements; there are a lot of other movements we can learn from. And I do think we are trying to include male allies in gender diversity issues, and white allies in racial diversity issues. It is tricky to play in that space, and I know a number of people who have come to me and said they're afraid to step into these waters sometimes, because it is so easy to say the wrong thing, even with good intentions. I think it's on us, as people who are trying to make progress, to work effectively together and understand that even if we sometimes disagree on the exact steps to take to move forward, or have some slight disagreement with the way somebody is presenting an issue, we generally are still trying to move in the same direction, and we should assume good faith of each other when it makes sense to.
Absolutely, I completely agree with that. I'll also mention that the camp I started at Stanford as part of AI4ALL is for girls, and the camp I'm starting now that I'm a professor at Princeton, the Princeton AI4ALL program, will be for racial minorities. So that exactly addresses your question. I think it's very important to be a white ally: I can still make an impact on the issues of racial bias and racial underrepresentation. You just have to go there, read the literature, and try to figure out how to do it right. But I think it's absolutely very important to have male allies in gender issues, and to have allies from the majority group in any minority issue.
All right, we are out of time for now, but thank you both very much for being here today. We will be following these issues closely.