The Future of Us, AI + Gender
5:22PM May 5, 2020
Good morning, good afternoon, good evening, and welcome to episode six of the AI for Good Webinar Series. We hope that you, your family, your friends, and your colleagues are all keeping healthy and safe. My name is Fred Werner from the ITU, the International Telecommunication Union, and it's my privilege to introduce today's webinar. The ITU is the United Nations specialized agency for ICT, and we're also the organizer of the AI for Good Global Summit, hand in hand with the XPRIZE Foundation, 36 UN sister agencies, ACM, and Switzerland. The goal of the AI for Good Global Summit is to identify practical applications of AI to advance the Sustainable Development Goals, and scalable solutions for global impact. Now, like most of the world, the AI for Good Global Summit has gone digital, and we're moving forward with digital programming every week, allowing us to reach even more people than before. This week, we're hosting three breakthrough sessions on gender, food, and the environment, and today's webinar could be considered part one of the AI and gender inclusivity breakthrough session that would have been hosted in Geneva this week, were it not for the virus.
So before introducing today's webinar, I'd like to go over a few housekeeping rules. First of all, your microphone has been disabled, so please use the chat and the Q&A function if you wish to communicate. It's the responsibility of the moderator to identify and ask the questions to the panelists, and we're counting on your active participation to have a very interactive session here today. So without further ado, I'd like to introduce today's moderator: his name is Neama Dadkhahnikoo, and he's the technical lead for the IBM Watson AI XPRIZE at the XPRIZE Foundation. So Neama, welcome.
The show is all yours.
Thank you, Fred. Hello to everyone from Los Angeles, California. Our webinar today is titled The Future of Us, AI + Gender. Over the next hour, we're going to be discussing the various issues around AI and gender. In particular, we're going to be identifying solutions where AI can help empower underrepresented communities and enable an equitable future for humanity moving forward. First, I'm going to ask each of the panelists to introduce themselves. Let's start with Andy.
Hi everyone. I'm Andy, the CEO of Elektra Labs, and I work in digital medicine. I work with a lot of sensors and wearables that use AI to capture biometric signals, and then use those signals to inform care, clinical trials, or different components of the healthcare system. I formerly served at the FDA, and I've been really active in thinking about how to use these sorts of tools in a safe and secure manner, both from the data rights side and the security side.
And where are you joining us from?
I'm sorry, this was my one prompt: I'm joining everyone from San Francisco.
Excellent. Next up is Kishau.
Hello, everyone. I'm Kishau Rogers. I am the CEO of Time Study. Time Study is a healthcare enterprise platform: we essentially allow hospitals to understand how people spend their time at work and how that impacts the work that they do. I'm a computer scientist, and I am joining you from Richmond, Virginia.
Excellent. Thank you so much.
Hello, everybody. I'm Ida, talking to you from Berlin, Germany. I am CEO and co-founder of Clue, an app that allows people with cycles to track their menstrual health, provides them insights based on their data, and really helps them understand what's going on in their bodies.
Excellent. Finally, Caitlin,
can you unmute?
I'm unmuted! Hi, I'm calling in from Geneva, Switzerland, and I run Women at the Table, a civil society organization that focuses on systems change by helping feminists make an impact in technology, sustainability, the economy, and democratic governance.
Fantastic. Thank you, everyone, for joining us today. Before we get started and dive deep: Caitlin, can you set the stage for us and explain what we mean by bias in algorithms and datasets? Can you give us some examples of how this can impact our everyday lives?

Yeah, sure. So, to put it simply — and of course I'm the activist among everybody on the panel, not a technical expert — I think part of our work is about how we make AI accessible to citizens, because it's really our right and our need to start to interrogate the algorithms and the algorithm makers, and see how the technology really interacts with us. We know that everything begins with data and datasets, and most datasets are historically exclusionary: they've left out women and other historically excluded groups. So already the data is a little bit skewed. The algorithms are based on that skewed data, so the algorithms get really smart, but they're working in a skewed way. And then there's machine learning that actually learns from the first algorithms, and you end up with a system where patterns that maybe we thought were just weak signals in our lives become embedded and exacerbated in this machine learning that is part of the automated decision making we're seeing more and more in our lives, and in every government, going forward. So that's one thing.

What does that mean concretely? There's an anecdote that I think many are familiar with, but I will share it again. Amazon took ten years of its employment data, because they wanted to deal with the thousands of resumes they receive, and they wanted to do something good about hiring women. So they benchmarked ten years of incredible high fliers, who happened also to be white men from Stanford with engineering degrees. That was the data they were starting with about what the most excellent candidate would be. And sure enough, the algorithm, with no intentionality whatsoever, started to throw out women's resumes. As a matter of fact, anything that said women's chess captain or women's swim team flew to the bottom of the queue, and the system started to throw women out of the queue altogether. That is, of course, horrifying. The big lesson of all of that is that once Amazon figured it out, they were not able to have the machines unlearn the bias. And that's really where we're at now: we understand that once those patterns are accepted or viewed, the machine can't unsee them, because it's only seeing ones and zeros.
There's also another very funny example of the resume company where, when there was an audit, it turned out that it privileged people whose name was Jared or who played lacrosse, because that was the cohort of data that the algorithm was trained on. So we need to be very, very careful. There's also the more recent example of the Apple credit card: even though gender was not even an input, women who were equally qualified were given one tenth or even less of the credit limit from this card. So this has implications all the way through, from the criminal justice system, to whether your resume is being read, to whether you're seeing ads for high-paying jobs on Facebook, to the digital ID systems being implemented across the world that have these unintended consequences. So that's very, very scary, hair-on-fire stuff. However, it's also an opportunity: if we're aware, I think we can take algorithms and correct for a lot of this historic exclusion, and if we're really thoughtful, we can redo this for the better.

Thank you so much.
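The feedback loop Caitlin describes — skewed historical data teaching a model to penalize proxy features — can be sketched in a few lines of Python. Everything here is a toy illustration: the tokens, the hiring history, and the naive log-odds scorer are invented assumptions, not Amazon's actual system.

```python
from collections import Counter
from math import log

def token_weights(resumes, hired):
    """Learn naive per-token weights from historical outcomes: tokens seen
    mostly on hired resumes get positive weight, tokens seen mostly on
    rejected resumes get negative weight (Laplace-smoothed log-odds)."""
    pos, neg = Counter(), Counter()
    for tokens, label in zip(resumes, hired):
        (pos if label else neg).update(set(tokens))
    return {t: log((pos[t] + 1) / (neg[t] + 1)) for t in set(pos) | set(neg)}

def score(tokens, weights):
    """Rank a new resume by summing the learned token weights."""
    return sum(weights.get(t, 0.0) for t in tokens)

# Hypothetical skewed history: all past hires happen to match one profile.
history = [
    (["stanford", "engineering"], True),
    (["stanford", "engineering", "robotics"], True),
    (["engineering", "womens_chess_club"], False),
    (["stanford", "womens_swim_team"], False),
]
weights = token_weights([r for r, _ in history], [h for _, h in history])

# Two otherwise identical candidates: the proxy token alone lowers the
# score, even though gender was never an input.
plain = score(["stanford", "engineering"], weights)
with_proxy = score(["stanford", "engineering", "womens_chess_club"], weights)
assert with_proxy < plain
```

The point is the last comparison: the model never saw a gender field, yet the proxy token inherited a negative weight from the skewed history — and, as Caitlin notes, deleting the token after the fact doesn't remove the pattern the model has absorbed.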
Andy, in a recent article you wrote, you argued that high-stakes algorithms and prescription drugs should be treated the same. With prescription drugs, we have instructions, we have warnings, we have doctors prescribing them. But with high-stakes algorithms that can affect our lives in very similar ways, we don't have any of these. Can you talk a little bit about this, and about what you've learned from the drug development process and how that can inform the way we deal with high-stakes algorithms?

Sure.
So I think this is an analogy that works about 80% of the time, and then of course algorithms are different from drugs. So stay with me for the reason why this makes sense, and then I'll tell you where it breaks down. In many instances — and this is one of the things I drew from Caitlin's work — there are, for sure, algorithms that have biases and that perform differently across groups. The trouble with algorithms is that you also want them to do that; that is their design. By definition, what algorithms are supposed to do is group different clusters so that you can find out what's happening within each group in a way that perhaps a human can't see. The question is: are those groupings happening in a way that you like and think is ethical, or are they happening in ways that you don't want? If you think about the drug development cycle, drugs perform differently on different types of people, and what you want to figure out is which drugs work on which people, and how. In many ways, algorithms and drugs are similar: they're both trained or tested on a population, they work well on some populations, they might not work as well on others, and they can have adverse events. Maybe that adverse event is a physical event with a drug; maybe the adverse event is that somebody didn't get a job with an algorithm. One of the things that's always been really interesting to me is that a lot of people argue that you have to know what's in the black box of an algorithm — that it has to be interpretable. But for many drugs, we have no idea how they work. Think about drugs like SSRIs: we know that they work reliably, but we don't know how. And so one of the ways that you could think about it is the way that
in scientific literature we develop things like clinical trials: we understand what populations drugs are tested on, we track different types of adverse events, and we have warning labels. Even if you don't know the mechanism of action for a drug, you can still use it; you just put more controls on it. I think that could be a really important model for algorithms: really articulating what dataset an algorithm was trained on. If you understand the mechanism of how it's working, you might be able to use it more liberally; if you don't, you might put more controls on using it.
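Andy's drug-label analogy maps naturally onto the "model card" idea from the machine learning literature. A minimal sketch of such a label, with hypothetical field names and no claim to any regulatory format, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    """A drug-label-style datasheet for an algorithm — a sketch in the
    spirit of model cards, not an official schema."""
    name: str
    intended_use: str
    training_population: str
    known_limitations: list = field(default_factory=list)
    mechanism_understood: bool = False

    def required_controls(self) -> str:
        # Mirror the drug analogy: if the mechanism is unknown (as with
        # SSRIs), the label calls for tighter controls on use.
        return "liberal use" if self.mechanism_understood else "strict controls"

label = AlgorithmLabel(
    name="sleep-stage classifier",
    intended_use="estimate sleep from wrist accelerometry",
    training_population="adults 18-65, mostly North America and Europe",
    known_limitations=["not validated for shift workers"],
)
assert label.required_controls() == "strict controls"
```

The value is less in the code than in the discipline it encodes: the training population and known limitations travel with the algorithm, the way contraindications travel with a drug.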
Excellent. Very, very interesting.
Kishau, we've been talking about data inclusion and how data needs to be reflective of the populations it serves, but I wouldn't consider that the whole panacea. A lot of the research we do comes from around the world, but the major conferences — the NeurIPSes of the world and so forth — are very North American and European centric. Oftentimes the researchers and developers of these technologies from Africa, from South America, from other parts of the world can't get visas to attend and showcase their technologies and their breakthroughs. Why is it important that the researchers — not just the data, but the actual people building the algorithms — are representative, both gender-wise and geographically, of the communities that need to be served by these algorithms?
I think it's a really good question. I hear this question a lot when we have sessions regarding technology and gender or women. I think the best way to answer it is: why would the people that you're building solutions for not be involved in the process? We spend a lot of time trying to prove whether people should be involved or included in the process, and I think we are way beyond the time where we have to justify including the people that we impact as part of the process. Earlier, Caitlin mentioned being an activist. I actually believe that everyone working on a technology project should be an activist for the people that they're building solutions for, and sometimes that means involving people who understand the communities that you impact. There's a lot of work in the tech space around AI for good, and you can be in the business of doing good and not actually doing any good — you can be in the business of doing good and actually doing harm. That's why it's important that across the board, even at the table where we decide who, where, and how AI is relevant to solving problems, you have to include people every step of the way. Another point I wanted to make: a lot of times when we talk about AI and technology, we hide behind the labels — AI, technology, bias, datasets. We also have to put the human at the center of these conversations: humans created the datasets, humans build the systems. We have to get in front of the AI a bit, because it can often be used as a scapegoat. If you read the news, a lot of times you'll see headlines that the AI is biased — oh my goodness, this AI is penalizing women. It is no longer acceptable, and I don't even think we have to articulate why it's not acceptable. If you're building a system that should be equitable, then you have to have people at the table to highlight that along the way.
So if the dataset is not balanced, certainly there are tools out there to highlight that. But what about the humans? I don't think we have an algorithm crisis; I actually think we have a crisis of caring. People have to care enough about the people they build solutions for to include them in the process. That's the first step to actually solving this problem.
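A balance check of the kind Kishau alludes to can be as simple as counting group shares — a toy version of what open-source fairness toolkits do, with an arbitrary 30% threshold chosen purely for illustration:

```python
from collections import Counter

def balance_report(groups, threshold=0.3):
    """Flag any group whose share of the dataset falls below `threshold`.
    A toy check, not a production fairness audit."""
    counts = Counter(groups)
    total = len(groups)
    return {g: {"share": round(counts[g] / total, 2),
                "underrepresented": counts[g] / total < threshold}
            for g in counts}

# Hypothetical dataset with an 80/20 gender split.
report = balance_report(["m", "m", "m", "m", "f"])
assert report["f"]["underrepresented"]
assert not report["m"]["underrepresented"]
```

The tooling is the easy part; as Kishau says, someone still has to care enough to run the check and act on what it flags.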
That's an extremely valid point. And from a technical development standpoint, you'd never not talk to your customer when you're building something, right? So it makes complete sense. Ida, we're talking about data and customers, and one thing that comes up is personal control of your own data. With the Clue app, this is very relevant and very important. Can you talk to us a little bit about how this personalization of data influences data collection and analysis, especially with underserved geographic, socioeconomic, and gender communities?
Well, first I want to pick up on something Kishau said: you absolutely have to care about your customer. When you think about how to make something that is truly helpful and valuable for people, you've got to think about what every individual needs. So when we think about making Clue helpful for every single person, enabling them to customize the app and actually figure out what data they need to track is a huge part of it — and making that simple. One thing I love about that is that it brings home this idea that data can really help me. For many people, data can be this sort of fluffy, abstract thing, but by enabling people to control what data they collect and what insights they care about, you bring that message home on a very tangible level. One of the beautiful things about working in women's health is that we all have this knowledge that we share — we all have cycles for many years of our lives. So when you build a product that helps with some of the fundamentals of what it is to be a person with cycles, when people pick it up around the world, it's helpful for them. It's not all the same problems that we each face, but many of them we share. Clue has users in 190 countries, so we're very privileged to gather a dataset that is very broad and reaches different groups of people. I'm not saying that our dataset is representative of the planet by any standard — we have, of course, all the skews towards people with iPhones in developed countries — but we still get a sense.
And so that's one of the reasons we decided very early that we really wanted to participate in the scientific community and make this data safely available, in anonymized form, for scientific research, because we could see that scientists didn't really have access to this kind of diverse data in any other way. We have learned a lot, and I think they have learned a lot. And of course, we really want to make an effort so that the people who gave the data also get to learn from these new insights.
Fantastic. A reminder to our audience, which is over 200 people right now: you can ask questions in the Q&A function at the bottom of your screen, so please submit those questions. Andy, we were just talking about representative datasets from all around the world. This is, of course, very important in medical fields when we're doing clinical trials, but oftentimes clinical trials are not representative, because it's hard to get people to participate in them. You've talked about this concept of virtual clinical trials and how it can improve data collection. Can you expand on this?
I was muted. There are lots of different terms that people are using, so I'll send a link with more details. There are virtual clinical trials; there are decentralized trials — which are not on blockchains, by the way; this is not a blockchain session — there are remote trials, and then there are more digital trials. Really, you can think about two things. One is where the person is. With COVID, a lot of people are sheltering in place, so they now have to stay at home. If you're doing a remote trial or a direct-to-patient trial, they're physically in their home, perhaps using something like what we're doing now with telemedicine. That's different from how you're collecting the data. You can also use wearables and sensors to gather biometric signals from people. For example, if somebody has a tremor, like in Parkinson's, accelerometers and gyroscopes will often pick up the tremor faster and better than a doctor, especially in the early stages — and the same with picking up changes in temperature or heart rate. A lot of these things are not as perceptible to humans without instrumentation, so there's a lot of movement towards more sensor-based data. And all of those are algorithms. If you think about a Fitbit or a smartwatch, they don't actually know if you've taken a step; they don't know if you've slept. They're using AI — an algorithm — to predict whether these biological signals represent sleep or a step. So one of the things you should think about is how they're making those predictions. One thing people have really thought about, and are looking into more, is that pulse oximeters often use PPG, a green light, and this light absorbs differently in different skin tones.
So if you train these algorithms without looking at different skin tones, people's heart rates — or what looks like a heart rate — can register differently. There are lots of opportunities for unintentional bias. And then if you're running a clinical trial and distributing these devices, certain groups might accidentally not get the same level of data collection, just because the signal is being represented differently across groups. I'll send some more information about both of these.
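Andy's point about fixed algorithms meeting variable signals can be illustrated with a toy peak-counting step detector. The threshold and signal values here are invented for the sketch; real wearable algorithms are proprietary and far more sophisticated.

```python
def count_steps(signal, threshold=1.2):
    """Count local peaks above a fixed threshold in an accelerometer-
    magnitude trace — a toy stand-in for a wearable's step algorithm."""
    steps = 0
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if cur > threshold and cur >= prev and cur > nxt:
            steps += 1
    return steps

# Same gait, but the sensor registers a weaker signal for one wearer
# (analogous to green-light PPG absorbing differently in darker skin).
strong = [1.0, 1.5, 1.0, 1.5, 1.0, 1.5, 1.0]
weak = [s * 0.7 for s in strong]  # peaks now at 1.05, below threshold

assert count_steps(strong) == 3
assert count_steps(weak) == 0  # the same activity is silently missed
```

A threshold calibrated on one population quietly undercounts another — exactly the kind of unintentional bias that then propagates into any trial built on the sensor's output.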
Absolutely fascinating. Of course, now that we're talking about clinical trials and you've mentioned COVID, we've reached the mandatory COVID section of every panel — by law, we are required to discuss the virus. Kishau, your work with Time Study is around using algorithms and AI to optimize and find productivity in the workplace, track burnout, and things like that. With COVID, as we are all currently experiencing, we're working from home, and that's kind of broken down the barrier between the private and public spheres of life for pretty much everyone, women and men. How has this affected productivity? Have you seen any trends or interesting things come up from this reality that we're living in?

Yes.
So our platform actually collects data to articulate how people spend their time at work. It was designed to uncover the administrative overhead that drives down what we call top-of-license time. Top-of-license time is basically the time that you spend working on things that represent your highest skill — for a doctor, that's time spent providing patient care. Our platform is designed to highlight not only how much time is spent providing patient care, but what is driving that time up or down, and how that impacts the things that really matter, like the provider's satisfaction with their job, patient satisfaction, and clinical outcomes. What we've seen during COVID is that, as with anything, a crisis amplifies situations that are already poor. In healthcare, there's already a lot of administrative overhead, much of it driven by regulation and bureaucracy. What we're seeing is that the COVID crisis, layered on top of an already stretched system, causes more of the same unintended consequences: instead of spending time serving patients, now you're spending time trying to find masks, because your hospital doesn't have enough PPE. Or nurses and doctors who are women find a mask and the mask doesn't fit them, because there's a default male body that is used not only to test cars but also to design masks — it turns out that for a lot of nurses the mask doesn't fit, so they have to adjust for that. Situations that are already undesirable actually become worse during a crisis like COVID.
That is driven by limited resources. We're seeing limitations of resources driving down the time you can spend doing what matters and stretching people thin, and for people who are already at risk of that dynamic, it creates an even worse situation — like nurses and doctors who identify as women. They're on the front lines with patients, and now they not only don't have masks, they don't have masks that fit. So the whole cycle of gender inequity influences the whole process from start to finish.
Absolutely — it makes complete sense. When the default crash test dummy is male, even the one used for women is essentially a small male. And I wouldn't be surprised, as you say, that masks are made for men, even though it's probably more women using them in those settings. You're absolutely correct on that.
One thing I'd love to pivot towards is startups. Rather than large companies, let's talk about small companies. Ida, you're someone who has built a successful startup that caters towards women. A lot of the startup world is male dominated and male centric: the investors are mostly men, which means the founders are mostly men, because men pick men, and then the products that are produced are male focused as well. Yet you've coined the term femtech, which is about female-oriented tech startups that bring products to underserved communities — it's very profitable, and it serves a real need. Can you talk to me about how you break through in this male-dominated world?
Well, I don't know if I have broken through the male domination — I think we all have a long way to go. But what I will say is that the problem with a lack of diversity is that you have blind spots, whether as an organization or as a person. Having had many conversations with many groups — brilliant tech investors, smart people who come with good intentions — what I've been most struck by is that there is a profound blindness to the reality that half of the world's population is living every day. It's actually staggering. It's a blindness that is embedded in culture; it becomes norms, it becomes taboos, but it's also just not on the radar — it's like seeing this part of the world but not that other part. So when I come and speak to them about the questions that come with this biology — am I normal, am I healthy, what is this pain, what's happening in my body? — there's a burning question that changes throughout the life of a changing body. And men want to understand, but how would they know if nobody told them? They wouldn't even think to ask. So when you think about who builds products to solve problems — that's why we build products and services — unless you have a diverse group building products and companies, you're going to miss things. And it happens that a huge part of our need space was missed, because the ones who felt that need didn't build companies and products, or had a hard time getting to build companies and products. What's most encouraging is that over the last few years there has been this sort of crack, this opening into: oh, there is this wealth of female health where maybe technology could play a massive role.
So it has been hard — it is hard — but I care a lot about radical inclusivity. I believe that we need diverse teams to build for the world, and we need to really include men in this world of female health. And when we talk about building technology, of course, we talk about building algorithms, and we absolutely, desperately need diverse teams asking these questions. It's fundamentally important, it really is. Otherwise you end up with very strange ideas — you end up building something that's not for everybody, essentially.
Yeah, absolutely. The statistic I always go back to is that female-founded startups are twice as successful at generating revenue as male-founded startups. So even at a dollars-and-cents level, it makes sense to be inclusive. Caitlin, last individual question — I want to come back to you. You started us off by discussing what bias is, and a lot of what we've talked about so far has been bottom-up approaches: how startups and companies can improve the situation. But what's the role for policymakers, NGOs, and governments in this arena? What is the top-down approach that can help, and where should they focus?
This is why I never unmute myself! I do have two very specific policy suggestions, but first I want to add to what was just said, and to a lot of what Andy was talking about. Those of us who are fans of Invisible Women — this extraordinary book that really talks about the world of standards and how they're made to a default male standard — will know the story of the invention and marketing of Viagra, which had actually been found to relieve severe menstrual pain. But none of the people in the room thought that there would be any market or any need for resolving cramps — which, of course, if they had had any women there, they would have seen that there might be billions of women around the world who would buy this. Instead, they thought there was a real need for Viagra, so that's what went to market. That, I think, is the kind of loss we suffer for not having lots of different kinds of people in the rooms. We've been working on this for quite some time: there was a group on gender-responsive standards, and these are not only standards like the IEEE standard somebody mentioned in the chat — I know, so it is working — we're also talking about the physical standards that we live with every day, from the size of a piano key to the beakers in science labs. This group, based in Geneva, has a gender-responsive standards declaration, and in May all the major international standards makers of the world signed on — including ISO, ITU, IEC, and others — to make a gender action plan, so that they would look not only at who they should include in making standards, which I'll get to and which we see a lot in the chat, but also at what those standards themselves actually address. Because that's what's really important in the end: why are you making standards, and how are they, in particular, gender responsive? So when we talk about inclusion, we're starting to do a lot of work around public procurement.
Because one way to look at the inclusion of more women and more diverse teams — and let's talk about diversity writ large, because when we talk about gender, and when we at Women at the Table talk about women, we look at gender equality as a stealth app for democracy, and we're really talking about including all peoples who have been excluded, though in this particular case we are talking about women — is to look at public sector monies being put towards automated decision making, towards all sorts of algorithms that are helping us automate our governments. If there were set-asides for women-owned businesses, and if those women-owned businesses also included women on the human teams — as the designers, the coders, the makers — then it's not only about making a whole bunch of new female oligarchs and Mark Zuckerbergs, which would also be really wonderful for those among us who want to join the oligarchy; we would also really start to extend the way that women are part of the design process writ large. We think that's a policy decision that could be socialized and could absolutely work. The other thing is to look at open datasets, collaborative datasets, and new datasets. We know that there are certain groups making datasets just for their particular ethnic group, because nothing has ever been studied. We know that medical clinical trials in general have not even included women in most of the trials. So re-evaluating what was in the original data, and what we need to do to augment it, is also a really important thing for policymakers. And then the final thing would be, let's say, education — because it doesn't sound quite right, but policymakers have to have the courage, and they have to be encouraged, to ask questions of the algorithm makers. This isn't about math, and it isn't about who did best at math at all.
It's really about: what are the assumptions that you're dealing with? Why does your data give me this answer and not another answer? There are a lot of decisions made in this process that are, I think, passive decisions, and if there were real discussion and real dialogue all the way down the road, we wouldn't get ourselves into some of the problems that we have.
Excellent — really, really interesting initiatives. I think the one takeaway I always have is that the ability to envision a better future often involves seeing a role model or an example. So if you can have a female Zuckerberg, it will inspire so many people and help them move in that direction. Of course, it can even be a teacher, or a middle manager, or a startup founder — but you need that as part of the ecosystem to even start thinking: oh, I can do that.
Right. But it's not only about making more unicorns. I think we really have to go wide, we have to go horizontally. And it's not only about owning the company, right? It's also about having great ideas; it's about talking to your community. And I think we can also reconceive what an inspirational model might look like. Exactly.
That's it. All right, so I'm going to ask a final closing question before I turn it over to some of the Q&A questions. This is for everyone on the panel. If you can imagine a future state, a better future state in terms of gender equity, what is the specific big breakthrough, or breakthroughs, that you most want to see happen, and that maybe some of the people attending here should take up the challenge of solving? I'm going to start with Andy.
I mean, I would say the biggest breakthrough is definitely not going to be tech; I actually think tech and algorithms cause more of the problem. I would say one of the biggest challenges we have is that everybody wants yes/no answers: things are good, they're bad, I can fit this in a tweet, this thing is fine. And for whatever reason, we do not have really good ways of talking about the nuance and the gray space, and all of this is gray. I mean, I think there are some instances where I feel like it's pretty clear, but others might disagree. And when we live in a world of virality and misinformation, where people are just arguing two totally different things, I think there need to be a lot more ways that people can come together at the table rather than fighting with each other, which is probably obvious for most things in society.
Thank you.
I see a huge opportunity for us to redefine what it means to work in this space. A lot of times when we talk about AI, we're very technically focused, and you know, I have a computer science degree; I've spent 25 years building software. What I'd like to see is a broader vision for who makes AI, but also some recognition that there are other forms of intelligence. Some of these projects are successful to the degree that the people on the project know that they don't know everything, that they don't know half of what they need to know in order to deliver a great AI solution. And if you can show up that way, then you're open to actually hearing the ideas of people that you may not have considered to be a team member on your project. I think this is a great opportunity to expand how we think about technology, but also, I think what's required is that people have to be willing to be uncomfortable long enough to get to the other side. Because even if you think about datasets, everything is biased, so you go in not to see if it is biased, but to see where it is biased. And that's a different way of thinking. What I'd like to see is that people think about this entirely differently. I don't think we have to demonstrate or prove that women and underrepresented populations should be included; of course they should be. If you're building something for them, they should be involved in the process. I think what we need to be thinking about is how we work together to make sure that AI is beneficial. And there are tons of organizations already working in this space; AI Africa is one, and there are organizations around the globe that are already thinking about this. So what I'd like to see is technologists actually reaching out to the communities that are already aware of the things that they're not aware of. But in order to get there, you have to be willing to admit that you don't know.
Next one is Ida.
When I think about what future I'm hoping for, or what breakthroughs, I will say I think we need a really deep conversation about ethics, and about what we actually want to create. If we only care about money, we're going to end up in very funny places, whether it's building algorithms or products or anything. So again, I think that's also why it's important to have diversity, because then people have different things they value, and you end up with some good balance, something that works for everybody. I say that on a high level; I think that's a systemic problem we're facing. And actually, maybe corona might be breaking things open, and we're reconsidering everything a little bit. If I could choose where I'd love AI to make a big breakthrough, I would say in female health. There is so much we can do for preventive care. I would love to see people building many more algorithms to understand, you know, even just the dataset that we hold at Clue, probably one of the largest on human cycles. I think we could learn so much if we could learn more about what it can tell us to help people stay healthy, rather than treat disease. So I'm very curious about what great things we can do with female health, if we have good data. Yeah.
Yeah, I mean, to follow on, I agree about the ethics. The others know that I feel very strongly that we have the Universal Declaration of Human Rights, and that this is such a fabulous document of settled law that we can actually start from there and just say we hold those values close to our hearts; and therefore, what kind of tech reflects that? So that would be my first question: that we start looking at what the settled law is in this Universal Declaration of Human Rights. The second thing would be a few less pizza delivery apps, and a few more sustainable, maintainable water wells built in different places. I mean, there's just so much that needs to be solved and is needed in the world, and I think we're really somehow lacking imagination about using the tech to solve it, probably because we're very focused on its being able to be monetized at this point. But for me, in this future state, I would have tech that actually looks at historic inequities, social inequities, racial inequities, gender inequities, and actually uses the tech to correct for those inequities, instead of merely saying, as privacy activists are now starting to do, that we just need to preserve the status quo. The status quo isn't good enough for us to fight for what we have at 2020 levels, right? So what can we do with the tech to upend the system in a way that makes it really more equitable for everybody? And citizens have a big part in that. That's my final point: I see that a lot of technologists, especially university students, don't see that it is their right. They go, "Who am I? Who am I to decide? Who am I to say?" And who are any of us, right? We're citizens, and together we decide on the kind of world that we want to live in. So this is a call to those technologists out there to invent what is going to serve people.
Absolutely. Excellent. So, we had over 30 questions submitted, so I apologize in advance if we don't get to your question; there are just too many of them. But here are a few that we flagged. Speaking of students: for students and researchers starting out in the relatively new field of machine learning and bias and fairness, what do you think are the specific aspects of this field that need to be addressed more rigorously? So, for those students and researchers who are starting out, what aspects of this field should they be working on?
Definitely transparency. Also data privacy and ownership. There's a lot to know around data privacy and ownership, and it gets really complicated when you start to think outside of where you live, because as you go across regions, things get very complicated. So I'd like to see more technical institutes address that in their curriculum. We've already mentioned ethics, which I think should be a core fundamental that's taught in technical classes. I think it's also fair to note that a great percentage of people in AI research are from a university. What does that look like for people that did not go to a university, that graduated high school and learned tech on their own? What does this look like for them? So, as we're talking about technologists and people that work in AI, again, I think we need to broaden our vision of who we're thinking about when we talk about who can make great AI.
I'll echo that. I think sometimes when people say computer ethics, people kind of go, "Yeah, of course, you should add that in." But there's a really good piece that I would recommend reading that came from cryptographers. Cryptography, for people who don't know, is effectively what underpins a lot of encryption; it decides who gets to see what, and when. And effectively, cryptography changes power dynamics. A lot of academic researchers were saying, well, I'm just building different encryption systems; it doesn't matter, people pick how they use them. But there is a very strong moral imperative: if you're going to release a new system into the world, you have to decide what your social responsibility is. And I think this is very true also for algorithms, which are shifting power dynamics and have very meaningful impacts on people. So things like ethical training and other pieces, I think, are core to determining whether or not you're going to ship something into the world. Excellent.
Someone else asks: how can we clean up data that is already biased? Is that even possible?
I think I would start by trying to figure out why it is biased, because the dataset represents what it represents. Data tells a story; it's more than just fields and values and labels. It was collected the way that it was collected, from the people that it represents, for a reason. So when you encounter, quote unquote, biased datasets, my first step would be to understand the data: how did it get to this place? I think also we should always be open to saying no when AI is not a relevant tool to solve a problem; I think that should be a part of the conversation as well.
It's not only the data, though; it's also the model, isn't it? Right? So you want to interrogate this data, which we're kind of stuck with, because people aren't going to stop building things while we build these purer and more equal datasets. So it's also in the modeling, and in that process there are some opportunities too. I think it's awareness.
Yeah, you actually bring up a really great point. There was a speaker, I forget her name, a researcher at Microsoft in New York, and she mentioned exactly this point: we shouldn't just say "algorithmic bias," because the bias can be in the dataset, it can be in the algorithm, it can even be in the implementation. My favorite example of this is a tool that was built for judges to determine how much bail should be set for defendants. The results were still biased, and the builders asked, what happened? What's wrong with the system? They actually noticed that the judges would accept the recommendation of the system for white defendants, but for African American defendants, they would overrule the system. So again, you can build a perfect system, but if the users are bringing in their own biases, you can still break the system. So you're absolutely right about this; you have to interrogate all the pieces of the puzzle. Absolutely. Um, someone asks: much of the data that algorithms use is company proprietary. How do we balance dataset and algorithm availability with company privacy and confidential information?
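As an aside, the judge-override pattern described above can be sketched as a tiny audit over decision logs. Everything here is hypothetical and invented purely for illustration (the records, the group labels, the field names); the point is only that bias in the implementation layer can be measured by comparing outcomes against recommendations, per group.

```python
# Hypothetical decision log: each record is one case, with the system's
# recommendation, the human's final ruling, and the defendant's group.
decisions = [
    {"group": "A", "recommended": "release", "ruling": "release"},
    {"group": "A", "recommended": "release", "ruling": "release"},
    {"group": "B", "recommended": "release", "ruling": "detain"},
    {"group": "B", "recommended": "release", "ruling": "release"},
    {"group": "B", "recommended": "release", "ruling": "detain"},
]

def override_rate(records, group):
    """Fraction of cases in `group` where the human overruled the system."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    overridden = [r for r in in_group if r["ruling"] != r["recommended"]]
    return len(overridden) / len(in_group)

# A large gap between groups suggests bias entering at the human/implementation
# layer, even if the model's recommendations themselves were unbiased.
for g in ("A", "B"):
    print(g, override_rate(decisions, g))
```

With this toy data, group A is never overridden while group B is overridden two times out of three, which is exactly the kind of disparity the anecdote describes.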
Maybe I can give it a go. I mean, this is highly complex, right? I feel like the whole world of data flows is completely difficult for consumers to understand, and I don't know if anybody really understands how data flows at all. But one thing that you can at least start by doing as an entrepreneur is to be transparent and have a really clear agreement with people: what do I get, and what do you get? Because I feel one big problem in the economy is that it's built on the premise that users don't understand how we make money, and so they become the product. It's a big educational project to make users understand how data actually flows and how money is made. But I feel that's at least the beginning: that you actually write terms of service and privacy policies that are meant to be read and understood by users. From there, people understand what data they are providing, what will happen with this data, how this data is going to be kept private, and how it might be used for different purposes. And as a user, you can then choose the companies that you agree with. You know, there needs to be transparency.
Yeah, absolutely. But there are also bigger power dynamics at play. There's a power dynamic when Google has all the data, right? And then you're a startup, or you're a researcher, or you're an entrepreneur, and you're trying to change the system, but they have the data.
Really, I can speak for myself: we really want people to understand how data flows, and it's a big challenge to explain. Right? So maybe we need some sort of certification, where there is a label like we have for, you know, organic food: this is a company with good data practices. Then, as a consumer, you can navigate, because I think it's literally impossible for the average consumer to understand how data flows and what's actually happening with their data.
The beginnings of that exist. From the FAT conference on fairness, accountability, and transparency, since renamed FAccT, there are the Datasheets for Datasets, to start labeling data so that at least we know the provenance, the way we know where our food or drugs come from. And I think there are also Model Cards from the same team. Timnit Gebru, who is sort of a goddess in this area of fairness and runs a team at Google, has Model Cards for models as well. So people are beginning to do this. But I also want to say I love this idea of a Hippocratic oath for AI, to start bringing in the values aspect from the very beginning, which Andy had brought up.
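As a rough illustration of the model-card idea mentioned above, a card can be as simple as structured metadata shipped alongside a model and checked for completeness. The field names and values below are hypothetical, loosely inspired by the published Model Cards proposal rather than any official schema:

```python
# A sketch of a "model card" as plain structured metadata.
# All names and values are invented for illustration.
model_card = {
    "model_details": {"name": "bail-recommender-demo", "version": "0.1"},
    "intended_use": "Decision support for human reviewers; "
                    "not for fully automated rulings.",
    "training_data": {
        "source": "hypothetical court records, 2010-2018",
        "known_gaps": ["group B underrepresented"],
    },
    "evaluation": {
        "metric": "accuracy",
        "disaggregated_by": ["group", "sex"],  # report per-subgroup, not just overall
    },
    "ethical_considerations": [
        "Human overrides may reintroduce bias; monitor override rates.",
    ],
}

REQUIRED_SECTIONS = (
    "model_details", "intended_use", "training_data",
    "evaluation", "ethical_considerations",
)

def missing_sections(card, required=REQUIRED_SECTIONS):
    """Return the required model-card sections that are absent or empty."""
    return [s for s in required if not card.get(s)]

print(missing_sections(model_card))  # an empty list means every section is filled in
```

The useful design point is that provenance and limitations become machine-checkable: a release pipeline could refuse to ship a model whose card reports missing sections.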
Excellent. Since we are being hosted by the ITU, our last question is about policy: should we have more policymakers trained in AI? And should an international agency take the lead and create mandatory laws, or some sort of guidelines, in this area?
I mean, more knowledge is better, more awareness is better. And yes, I think policymakers really should be trained; that would go a long way toward their understanding what's coming at them, and the ramifications of a lot of the decisions they're making. I'm personally not sure who should govern it at the moment, but more information is better, for sure.
I'd add checks and balances between policymakers and the people that are actually building, delivering, maintaining, and using these products. Policy is one thing, but a lot of times it's in hindsight: something bad has happened, and then policymakers create policies to prevent it from happening again. The problem with AI that can do harm is that the people who are the most vulnerable actually take the hit, right? Some people are in a position where they're never hit by these biases and these missteps. So yeah, policy is one of many things that have to take place to ensure that people aren't harmed.
Absolutely. Excellent. With that last question, I just want to thank all of our panelists from all around the world for joining us today. I also want to thank XPrize, and I want to thank the ITU for hosting us. If this conversation has inspired you and you want to get more involved with AI's empowerment of gender equity, I want to point you to a few practical next steps that you can take. XPrize's Gender Data Gap initiative is a multi-year effort to launch a series of challenges to close the gender gaps in data. We'll be hosting a webinar on May 21 at 9 a.m. Pacific time, the same time as this panel, for this initiative; to learn more about the Gender Data Gap initiative and the webinar, see the links being posted in the chat right now. Also, this webinar is one of three focused on the AI for Good breakthrough tracks for this year, the purpose of which is to identify practical applications of AI to advance the UN Sustainable Development Goals. So tune in on Thursday and Friday of this week, at the same time, 9 a.m. Pacific, for talks from experts on food and the environment, respectively. Thank you again, everyone, for joining us, and we hope to see you at the future of food webinar on Thursday. And one last thought: the future of AI can be good; we just have to make sure that that future becomes a reality. Thank you, everyone. Goodbye.