AI for Gender Inclusivity
8:14AM Jun 24, 2020
Good morning, good afternoon, and good evening, and welcome to the AI for Good "all year, always online" webinar. My name is Ayda Dabiri from the ITU, the International Telecommunication Union, and I have the privilege of introducing today's webinar. The ITU is the United Nations specialized agency for ICTs, and we are also the organizers of the AI for Good Global Summit, alongside the XPrize Foundation, in partnership with 36 UN sister agencies and ACM, and co-convened with Switzerland. The goal of the summit is to identify practical applications of AI to achieve the Sustainable Development Goals and to scale those solutions for global impact. Now, like most of the world, the AI for Good Summit has gone digital, and we're moving forward with weekly online programming, allowing us to reach even more people throughout 2020. Today's webinar could be considered part two of the AI for Good gender breakthrough track that would have been taking place in Geneva if not for the virus. We have a distinguished set of panelists today, but we're also counting on you, the participants, to help create an engaging discussion. For this we will be using the Q&A functionality, which you can find just left of center at the bottom of your screen. In addition to the Q&A, there's also the chat functionality. Please make sure to set the message recipient to "all panelists and participants" and not just the panelists; you can select this option just above the message box. Now, before we turn to our panel, I would like to ask everyone to use the chat and let us know where you're connecting from — which city or which country. So I will begin: I will write in the chat, to everyone, "Geneva, Switzerland." Okay. And make sure to send it to all panelists and attendees, not just the panelists, because otherwise not everyone can see you. Okay, I see we have Toronto, we have the UK, Colombia — this is moving fast — Spain, Italy, Mumbai, lots of places. Great. So I see we have a very international crowd today, and I am looking forward to introducing our facilitator now, Henning Koloff, who is the principal project manager at XPrize. Over to you, Henning.
Thank you so much. Hi, everyone. Welcome to today's webinar, where we will discuss how to leverage AI technologies to achieve gender equity. My name is Henning Koloff; I'm the principal project manager at XPrize, and I'm also leading our gender initiative, which is an effort to close one of the most invisible problems in the world: gender data gaps. Today I'm joined by our amazing gender brain trust, whom you'll meet very shortly. But before we start, I just want to level set: the purpose of this webinar is to introduce the topic areas in which we're seeking project submissions from the public. As part of the AI for Good gender breakthrough track, a team of thought leaders known as our brain trust is using their expertise in data and AI systems to help drive global initiatives in the area of gender equity. By the end of this webinar, you will find out how you can participate. And again, the goal of the gender breakthrough track is to generate meaningful projects that have a short- or medium-term impact on advancing the SDGs by using AI, data, ecosystem and community building, and collaboration. So before we dive into the topics, I'd love for our brain trust members to introduce themselves. Please feel free to start your video and audio, brain trust members. We'll start by having Francesca Rossi introduce herself, please.
Thank you, Henning. Hi, everybody. I'm Francesca Rossi; I work at IBM Research in the US. I have a background mostly in academia, actually, because I joined IBM only five years ago. Throughout my career I've always worked in AI, trying to advance AI capabilities. But in recent years I've also been trying to advance our capability to make AI as beneficial as possible, either with technological solutions — research-based, technology-based — or with non-technological solutions, like working together with many other stakeholders: policymakers, even UN agencies, and many other important actors in the space of AI. At IBM, I am the AI ethics leader, and I co-chair, for example, the IBM AI Ethics Board, which is our governance body to put in place concrete actions to make sure that the technology we deliver has the right properties regarding bias and the many other desirable properties we may want.
Amazing. Caitlin, please.
So, I'm Caitlin Kraft-Buchman. I run Women at the Table, which is a civil society organization based in Geneva, focused on systems change in the economy, sustainability, technology, and democratic governance. We've had a very, very hard-core focus for the last two years on AI and gender, so we're very, very happy to come in as gender experts and colleagues to the technology sector.
Welcome, Caitlin. And Nicole, please.
Hi, I'm Nicole Washington, and I have a background in engineering and management consulting. I've been an angel investor with one of the largest angel groups in the US, and I've done that for over 10 years. In my spare time, I have been a staunch advocate of STEM initiatives, and I've worked to increase awareness and excellence in STEM in communities that are often underrepresented — communities that include African Americans, Latinos, and women. So I'm glad to be here today. Thank you for having me.
Glad to have you, Nicole. Thank you. And Aylin, please.
Hi, everyone. I'm Aylin Caliskan, and I'm an assistant professor in the computer science department of George Washington University. My expertise is in machine learning and AI systems, especially how we can make AI for social good. I have been investigating AI ethics, especially with respect to fairness and privacy, over the last couple of years, and I'm trying to understand how bias propagates in AI systems: for example, how do we communicate bias through language, and how does that end up passing into AI systems? Or how can we analyze the fairness properties of AI systems to see whether they exhibit any discriminatory outcomes, and so on? As a result, I'm very excited and happy to be part of this event and organization, and I'm looking forward to having this discussion today.
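[To make the point about bias passing through language concrete, here is a minimal sketch of a WEAT-style association test of the kind this line of research uses, assuming word vectors loaded from some pretrained embedding model. The tiny hand-made vectors below are placeholders, not real embeddings, and the word lists are illustrative only.]

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two word vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, A, B, vec):
    # mean similarity to attribute set A minus mean similarity to set B
    return (np.mean([cosine(vec[word], vec[a]) for a in A])
            - np.mean([cosine(vec[word], vec[b]) for b in B]))

# Toy 4-d vectors stand in for embeddings loaded from a real pretrained model.
vec = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":  np.array([0.1, 0.9, 0.2, 0.3]),
    "he":     np.array([0.8, 0.2, 0.1, 0.1]),
    "man":    np.array([0.7, 0.3, 0.2, 0.1]),
    "she":    np.array([0.2, 0.8, 0.1, 0.1]),
    "woman":  np.array([0.3, 0.7, 0.2, 0.1]),
}
male, female = ["he", "man"], ["she", "woman"]

for w in ["doctor", "nurse"]:
    score = association(w, male, female, vec)
    print(f"{w}: {score:+.3f}  (positive = closer to male terms)")
```

[With real embeddings such as word2vec or GloVe, this kind of test typically reproduces the stereotypical occupation–gender associations the panel discusses.]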
Amazing. Thank you, Aylin. So welcome to our amazing brain trust. We are missing two members, Stephen and Acer, so I just want to do a shout-out to them; they're with us in spirit today. So before we dive into the actual topics: Francesca, can you tell us more about the gender breakthrough track and what some of its goals are?
So this track is about gender equity and gender issues in general, as related to technology and AI. The point is that our world has been shaped over the centuries mostly by the point of view of men. And so they embedded, in all the policies and decisions that govern our world, their point of view, their habits, their priorities, and even their physiology, which is of course different from women's. This generated a lot of decisions made according to that point of view. We see examples in many different domains, from car crash tests designed to protect mostly men, to clinical trials, to even protective equipment in the pandemic age, where the equipment is usually fitted to the physiology of men rather than women. But to go to the technological domain: voice recognition systems that work better for men than for women, or face recognition and face detection systems that, again, work better for men than for women. So all of this generated a lot of decisions and a lot of data that reflect these choices and these biases. And as you know, in the current age, data is really what feeds the technology that helps people make better decisions. So if we rely on these kinds of data without checking that they really reflect what we want of our society, then we may use biased data that generates decisions that are biased, unfair, and discriminatory — and not just about gender. Of course there are many kinds of bias, but here we wanted to focus especially on the gender one. So this track's goal is to really reflect on possible solutions to this problem, whether technological or non-technological, possibly in a multidisciplinary way. We welcome projects that are very multidisciplinary. And the idea here is to get closer to Sustainable Development Goal number five, which is about gender equality — a more equitable world. That's the overall goal, and again, we welcome the participation of everybody to really achieve the goals of this track along the three dimensions that we have identified, which I think we will go into in more detail a little bit later.
Exactly. Thank you so much; that's very helpful. And like you said, we will dive into those specific topics in a little bit. But before we do, Francesca, can you just tell us what the next step is for people who do want to be involved? I know you mentioned a few different topics, and we'll get into more detail in a second, but who are we hoping will get involved, and what kinds of projects are we hoping to see?
Yes. So we will talk more about the three topics that we have identified to give structure to the set of projects that can be proposed; each project proposal will have to choose one of these three topics. But in general, we are going to prioritize short-term projects where we can see an impact in a short time, like three or six months. And given that this is a gender-related track, and we think that technological solutions should be complemented by diversity in project teams, we will also prioritize teams that are very diverse in their composition: diverse in terms of gender, but also balanced in terms of the different skills and backgrounds they can bring to the specification and the work of the project. So, in general, multi-stakeholder teams that involve academia, research people, technology people, domain experts — a mix of the different voices that need to come together to really understand the best way to frame the problem and also to find the best solutions. We have deadlines that we will be more precise about later on, but the call is already open for receiving these projects. And then, at the end of September, the top three will be presented at the event. Thank you.
Great, thank you so much. So, Nicole, before we dive into the actual topics, can you describe a little bit of the journey of this breakthrough track up until this point? What kind of work have you and your fellow brain trust members been doing?
Thank you. That's a great question. First of all, I just have to commend my colleagues, because we've done all of this through a pandemic, through social unrest, through our incredibly busy schedules. It has all been because we believe so passionately in this idea of removing as much gender bias as possible, not just from AI data but, as you've heard, from the models for AI as well; we believe it is so important that we have assembled to do this incredible work. We all come from different backgrounds, which has been really lovely, and we have different subject matter expertise. As we've gone through the process of designing this track and its outcomes, we wanted to think about what our goals are — what will be the goals we need in order to accomplish the objectives that Francesca just talked about. And as we went to design the track, we realized there was more expertise we needed, not just for the people who would compose the project groups and present their projects, but also for mentors; we wanted to gather mentors for those folks as well. So we knew we needed more expertise in different areas than what we all had as a group, and luckily enough we have tremendous networks, so we could easily go out and get those people. That was really comforting to know; I think this team was well assembled. How did we do it? We used technology, right? We used email, we used Zoom, we used various platforms and forums to communicate and put ideas down in real time so that we could discuss them. So, in my opinion, it has been a really wonderful labor of love. And I'll say that because people have still gone about their regular lives — they've had jobs and careers; some people have moved thousands of miles away from their homes during all of this time. So I think it has probably been the next best thing to gathering together in Switzerland, which we're hoping to be able to do one day.
Thank you so much, Nicole, and we seriously cannot thank you enough for all of your time, energy, and passion toward making this whole project a reality. So thank you. Pleasure. So in the next section, we're about to unveil the three topics we are seeking projects within. Once we get through all three, there will be a slide that summarizes them — so don't worry, you don't have to catch every single word, but please do listen. Is everyone ready to hear the first topic? Okay, let's go. So, Francesca: the first topic area focuses on identifying technical and non-technical ways to define, detect, and evaluate algorithmic gender bias. How did you come up with this topic, and why is it so important?
Well, yes. That's the first topic that came to our minds: to define innovative ways of detecting and mitigating gender bias — and, as Nicole said, not just in the data, but also in the models, in the uses of these systems, and in the policies around the AI systems. So the whole ecosystem around AI — whether it is data collection, tracking, and evaluation, or the design choices made during the development phases, or the deployment, the users, and the policies around AI — should be taken into consideration. And we hope to see a combination of actors who play roles in these various phases around the AI, proposing innovative ways to understand, identify, and address gender bias, also recognizing that many different factors come into play. So intersectionality, for us, is a very important concept: putting together the various factors that can play a role in gender bias. The role of culture, which may differentiate the ways that different regions of the world and different cultures understand and look at gender bias; the role of language; the presence of women and different genders in research environments, in social media and online presence — for example, on Wikipedia — as well as in economic environments. Really, every aspect of our everyday life, whether in real life or in the virtual and online life. And we also welcome projects that try to evaluate gender bias and then mitigate it, using a mix of qualitative and quantitative ways to measure and define it. We know that in some scenarios there are quantitative ways to define gender bias with numbers, but in other scenarios it may be better to use a qualitative way to define the issues. So we welcome any one of these, and possibly a mix of them — everything that can help us understand how to identify and mitigate gender bias, with the various factors and aspects at play around this topic.
Right, very clear. Aylin, do you have any specifics you'd like to add in terms of what you're looking for under this topic?
Francesca gave a great overview of our first topic, which is a very important topic in this age of AI: AI is basically incorporated into every aspect of our lives; it's in every part of the social fabric we are dealing with. And the AI ecosystem is quite complex. Accordingly, understanding how we can promote gender equity through AI requires understanding the different components of this AI ecosystem. And when we are talking about AI in the social domain, we cannot decouple it from humans; accordingly, we need to start understanding what kinds of biases we might be dealing with, especially with respect to social groups, intersectional groups, or however you would like to define gender. As a result, once we are using AI tools, we might also be able to define and detect new types of biases, or discover biases that people are not even aware of. And once we are able to define and detect them, then we can start evaluating them. And when we build the tools to evaluate AI bias, especially with respect to gender — and when we are talking about gender, we are talking about basically the largest social groups in the world — how can we use these tools to promote gender equity? Because once we are able to measure these biases, that means that in the future we will also be able to adjust AI systems so that they lead to the outcomes we desire, especially to promote gender equity. And accordingly, as Francesca mentioned, there are many domains in which we can start exploring these AI systems. Research is one of them, an obvious one; but at the same time, how are women represented in social media? How are they represented on Wikipedia? How much involvement by women can be observed in research, in corporations, in government? And how do these affect the AI systems being developed by diverse sets of machine learning developers, for example? Accordingly, how can we make use of all of these different components and knowledge domains to build tools that promote gender equity through AI and also mitigate gender biases or the discriminatory effects of AI? So this is what I have for now, and I will be happy to go into details in the Q&A section.
Perfect. Thank you so much. So just to summarize, the first topic is: identify technical or non-technical ways to define, detect, and evaluate algorithmic gender bias. Now we're ready for the unveiling of the second topic. So, Caitlin, the second topic asks: how can AI systems be designed and used to help human decision making be more gender inclusive? So again, why did you choose this topic, and why is it so necessary to address?
Thank you. Well, as you see, we all look at this as a continuum, so all three of the questions really work together. The first question is top level: it's about detecting and mitigating bias. This next one has data as a part of it, which comes later, but really speaks to the model and to the design of the system: how do we design fair systems in AI? This goes to many different areas of fairness. It goes to the binary, literally and figuratively, of male and female. But it also goes to the very interesting and dynamically important question of intersectionality. It's not really just one dimension or another; it is how all these things intersect in different ways and wind up becoming systemic bias, which we're taking from the analog world and transplanting, all too frequently, into the digital world. So as an example, in terms of models, if we just look at medical systems, which Francesca mentioned: we know that the norm study population in the US was always the Caucasian man, and that led everybody to think that basically whatever worked for white men worked for women, for black men, for black women. But what happens is that we've seen, through research that's gone on since, that cardiologists show even implicit gender bias in clinical simulations: testing male and female patients, they diagnose them totally differently. We also know, from another set of studies, that implicit biases in healthcare professionals toward women of color, particularly African American women, are a contributing factor in disparities and adverse maternal and child health outcomes: disparities in rates of contraception use, in access to and quality of prenatal care, in clinical decision making, intrapartum and postpartum — basically the entire experience of being an African American woman, of being a woman altogether. So this is models writ large, and we're looking to see how we can design systems that include all of the fabulous diversity of experience that we all have, to be fair and to really deliver value. So that's that question.
There it is. Thank you. And Aylin, can you get more specific about exactly what you're looking for under this topic?
Sure. So as I mentioned earlier, AI is making consequential decisions about every human being nowadays, and it scales: it's so cheap that a system is developed once and can be deployed at world scale immediately, and then it starts making decisions. Okay, I have these resumes — which candidate should I invite for a job interview? Okay, I invited this one particular candidate to the job interview; now I can automate the video screening process as well. So let's record this candidate for half an hour and try to understand: am I going to hire this person? Is this person going to proceed to the next round? And what does this include? How does this person speak? What do they look like? What are their facial expressions? What kind of language are they using? So we see multiple modalities of AI being used to make very important decisions about humans. And as Francesca, Nicole, and Caitlin mentioned, most of these systems are designed around men, and in particular white men. As a result, for example, when a resume screening system gets to parse the resume of a woman who wants to become a doctor, she will implicitly be associated with not being a good fit as a doctor, because historically the statistics show that white men are the successful doctors. And once we are aware of these types of biases and are able to detect them, then we can start making use of AI for gender inclusion purposes. Accordingly, when the resume screening system is able to detect such a bias, it can notify a human in the loop — for example, from human resources — to suggest that, look, there might be a biased decision going on here, and you might want to take a closer look to make sure you're actually making fair decisions. In this way, we can use AI to enhance human decision making, as opposed to causing more discriminatory decision making that ends up not only perpetuating bias in society but also amplifying it. And as I said in the beginning, since these tools are so cheap and scale so widely, if we learn how to use them to promote gender equity and intersectionality, then we will be able to get much better outcomes — to promote the well-being and success of women so that we can achieve some notion of fairness. The example I gave is based on human resources, but as I mentioned, this can apply to any area of our lives, because AI is automating our entire lives nowadays: our phones, our computers, job screening, health insurance, loan decisions, criminal risk prediction, and so on. Basically, the better we understand how to use AI systems and datasets to promote gender equity, the faster we can get there, as opposed to dealing with every individual human being's or institution's implicit or explicit biases.
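[As one concrete illustration of the human-in-the-loop idea described above, here is a minimal sketch of an automated check that flags a screening model's outcomes for human review. It uses the well-known "four-fifths rule" as the trigger; the decision data and the 0.8 threshold are illustrative assumptions, not part of any system the panel describes.]

```python
from collections import Counter

def selection_rates(decisions):
    # decisions: list of (gender, selected) pairs produced by a screening model
    totals, selected = Counter(), Counter()
    for gender, was_selected in decisions:
        totals[gender] += 1
        selected[gender] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flag_for_review(decisions, threshold=0.8):
    # Four-fifths rule: flag if any group's selection rate falls below
    # 80% of the highest group's rate, so a human can take a closer look.
    rates = selection_rates(decisions)
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Hypothetical outcomes from an automated resume screen.
decisions = [("woman", True), ("woman", False), ("woman", False),
             ("man", True), ("man", True), ("man", False)]
rates, flagged = flag_for_review(decisions)
print("selection rates:", rates)
if flagged:
    print("notify human reviewer, possible disparate impact:", flagged)
```

[On this toy data the women's selection rate (1/3) falls below 80% of the men's (2/3), so the system would route the batch to a human reviewer rather than decide silently.]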
Incredible. If you're not ready to submit your projects yet, I don't know what's going to get you ready. So just to summarize, the second topic is: how can AI systems be designed and used to help human decision making be more gender inclusive? All right, we are ready to unveil the third topic. It's a little long, so just bear with me, and then we will show it on the screen. So, Caitlin, our third topic asks: how can diverse datasets be identified and collectively leveraged to give a more complete picture of gender inequality, to allow for evidence-based policymaking? Can you explain why you chose this topic, give some practical examples of how this plays out in daily life, and tell us why it's so important?
Yes. Well, of course, we've all come to understand that data is the holy grail — although data is what informs the model, and the model is shaped by the system, so again we're looking at an entire ecosystem, which I want to reiterate. And I'm not the techno-solutionist in this incredible group, so I must say, another view of what's happening in recruitment is that it isn't so much "oh, there's bias" as a question of how we actually unpack the bias. Very famously, we know about the Amazon algorithm — never deployed — that was benchmarked against its engineering department; but its engineering department was all Stanford graduates — sorry to all the white male engineers in the audience. What happened was that women who were chess captains or went to women's universities or colleges were all downrated, downranked by the program, because the benchmark — the original dataset — was skewed. We also saw the Facebook ad delivery system delivering ads for lumberjack jobs and for police department jobs — fabulous middle-class jobs, or formerly fabulous middle-class jobs — really only to white males. So people — black women, white women — who might have wanted to join the police force never knew that there were openings or that they were welcome. The ad delivery system took a certain sort of database and then recreated the problem online. We know the Apple Card; we've all just lived through that. Nothing about gender whatsoever in that Apple Card application, but we saw two equally ranked people — in this case a husband and wife — where one got ten times the credit limit of the other, because the system proxied for gender. And that's because the data in the system wound up spitting that out: even though gender was not an explicit factor, the system still picked up that there was something gendered in the dataset. And that's exactly what we're looking to ask: how do you design something that doesn't have those proxy variables? I just want to say something else here, going back to the medical — we're doing something on medicine now, so I'm very into medicine. There was an algorithm that detects skin cancer more accurately than dermatologists. That's fabulous; that's an AI for Good hit. The system found 95% of melanomas versus the doctors' 89%, so how could we not love that? But then it turned out that it had not actually seen anything but light skin tones, so it was not able to detect melanomas in darker skin. So even when these things are incredibly successful — let's say the model was fabulous — if the data is corrupted because it's incomplete, it's not going to work. So we're looking to see what the really innovative fixes to that particular problem are.
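[The Apple Card story suggests a simple audit one can run on any supposedly gender-blind dataset: if a probe classifier can recover gender from the remaining features well above chance, those features are acting as proxies. Below is a minimal sketch of that idea on synthetic data; the feature names, the encoded correlations, and the 60% threshold are illustrative assumptions only.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: gender is never given to the downstream model, but
# two "neutral" features are correlated with it, mimicking the proxy effect.
gender = rng.integers(0, 2, n)                   # 0 = man, 1 = woman
income = rng.normal(60 - 8 * gender, 10)         # encodes a pay gap
shopping = rng.normal(0.2 + 0.5 * gender, 0.3)   # gendered purchase pattern
X = np.column_stack([income, shopping])

# Probe: can gender be recovered from the supposedly neutral features?
X_tr, X_te, g_tr, g_te = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_tr, g_tr)
acc = probe.score(X_te, g_te)

print(f"gender recoverable from 'neutral' features: {acc:.0%} accuracy")
if acc > 0.6:   # comfortably above the 50% chance level
    print("these features likely act as proxies for gender; audit before use")
```

[A high probe accuracy does not prove the downstream model is unfair, but it shows the information is available to it, which is exactly the condition under which "we never used gender" stops being a defense.]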
Amazing. And Aylin, I know you have some more specifics for us under this topic. Can you tell us more, please?
Oh, yes. So Caitlin was mentioning how, for example, these systems can be more accurate than clinicians or other human decision makers. That can sound like a threat at first — that robots, or automated systems, are going to take over the workforce. And they are indeed more accurate in, for example, clinical diagnosis and many other tasks. But at the same time, they have these problems of systemic bias getting embedded in them. In many cases, even when we think we have collected the highest-quality dataset in the world, we don't really know what a high-quality dataset is, because we don't know what kinds of implicit biases or historical injustices might be getting embedded in it. And when we look at it from this perspective, we can see AI not only as a system that might end up amplifying or perpetuating bias, but also as an explanatory or exploratory tool, so that we can discover new types of biases. Once we are able to do that, this can also be used in evidence-based policymaking, so that we can allocate resources in fair ways, or to deliver targeted education initiatives in certain areas and domains, and so on. But this basically requires that we collect all the datasets out there that we can get our hands on. We have probably millions of public datasets out there. All of these datasets come from different domains; they have different types of noise; they come from different countries and cultures, representing their values, their implicit biases, and all kinds of systemic things that get embedded in them in one way or another. And we also have all kinds of private datasets — how can we promote open-sourcing these datasets so that we can start combining them and get a better understanding of the scope of gender bias in the world, and not just in these datasets? When we look at AI from this perspective, we see that it's a double-edged sword: we don't only look at AI bias, we also look at societal bias altogether. So by nature this is a very multidisciplinary, interdisciplinary area to explore, and accordingly the proposals we expect in this area can be very creative. They can be technical or non-technical, but there are so many different ways that society can contribute to gender equity through the use of these AI tools, and especially these datasets, because the datasets out there give us a clear statistical picture of what is happening. Being able to make use of these things; coming up with systematic ways to combine these datasets; maybe starting to create our own datasets to investigate particular types of bias; and maybe even simulating — okay, I'm able to detect and evaluate this bias in this type of dataset; what kind of simulation do I need to run to understand what kind of affirmative action, or other action, is going to mitigate these types of biases by promoting gender inclusion? And when we say gender inclusion, I want to again emphasize, as other brain trust members mentioned, that we are not talking about the male/female binary; it's about all representations of gender, because in datasets, people in the social domain are usually represented in categories as either male or female, and it's not clear how effective that is. At the same time, it's not clear how ethical it is: can we really categorize people and use those categories as proxies for their properties?
Can we come up with better ways to improve the representation of individuals, as opposed to categorizing them? And how do we use all of this information to promote gender equity and the United Nations Sustainable Development Goals?
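[To make the dataset-combining idea concrete, here is a minimal sketch of pooling two sources with incompatible gender encodings and surfacing per-group representation and outcome rates. The two toy datasets, their fields, and the encoding map are hypothetical; real projects would face far messier harmonization.]

```python
import pandas as pd

# Two hypothetical public datasets from different domains, each with its
# own (inconsistent) encoding of gender -- a common obstacle to combining.
health = pd.DataFrame({"gender": ["F", "M", "F", "F", "M"],
                       "outcome": [1, 1, 0, 1, 0]})
hiring = pd.DataFrame({"gender": ["woman", "man", "man", "man"],
                       "outcome": [0, 1, 1, 1]})

# Harmonize the gender encodings before pooling.
norm = {"F": "woman", "M": "man", "woman": "woman", "man": "man"}
for df, domain in [(health, "health"), (hiring, "hiring")]:
    df["gender"] = df["gender"].map(norm)
    df["domain"] = domain

pooled = pd.concat([health, hiring], ignore_index=True)

# Representation counts and outcome rates per group, per domain and overall:
print(pooled.groupby(["domain", "gender"])["outcome"].agg(["count", "mean"]))
print(pooled.groupby("gender")["outcome"].agg(["count", "mean"]))
```

[Even this tiny example surfaces the point made above: a binary harmonization map forces every record into two categories, which is precisely the representational choice the panel is questioning.]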
Beautiful. Thank you so much, Aylin. So to summarize, our third topic is: how can diverse datasets be identified and collectively leveraged to give a more complete picture of gender inequality, to allow for evidence-based policymaking? We're now going to show you a slide with all three of the topic areas, because I know some of you may have missed the first two or some of the wording. I will share that slide right now. Amazing, thank you — it's slide number three. There we go. Perfect. Okay, so again: topic one, identify technical and non-technical ways to define, detect, and evaluate algorithmic gender bias. Topic two, how can AI systems be designed and used to help human decision making be more gender inclusive? And topic three, how can diverse datasets be identified and collectively leveraged to give a more complete picture of gender inequality, which will lead to evidence-based policymaking? We can share these after the webinar is over. And again, just a reminder: if you have questions, you can start asking them in the Q&A area. But before we get into the Q&A section, I wanted to introduce my amazing colleague, Amir Banifatemi, who will tell us a little bit more about the rest of the process — how to submit your projects, and some key deadlines and dates. Amir?
Thank you very much, Henning. Glad to be here. I'm Amir Banifatemi, Chief Innovation Officer at XPrize, and I've been partnering with the ITU for the past four years on the AI for Good Global Summit. I'm currently the program chair and content curator for the AI for Good Summit. As we designed the breakthrough sessions over the past three years, we realized that there is ample desire and need to collaborate on projects that have immediate visibility and can lead to action. So one of the outcomes of this breakthrough track, whose launch we're witnessing today, is for everyone to propose ideas, projects, and initiatives that fall under the gender equity lens, and we'll try to give them some form of visibility in the September 21st–25th week of the AI for Good Summit. How this works is that there is a link that is going to be shared with you, and you can submit project ideas, ask questions, and tell us what you think we should be doing in the three topics that have been mentioned and shown on the screen. You have until August 1st to submit your ideas and projects, and the brain trust gathered here — along with Acer and Stephen, who aren't with us today, and about 20 other experts — will look at those projects and help you refine them. We will select the top three projects or initiatives, which will be given visibility during the AI for Good Summit week in September, to be workshopped and hopefully launched. So the goal of this breakthrough track and this breakout series is to help projects that are amazing and have an impact through this gender lens to be launched after the summit, and we have about three months to work on them. We give you the chance to propose a project; we will give you access to an amazing brain trust and experts to accompany you and give you advice, opinions, and support in getting ready. And we select three because we cannot present more during that week in September. I hope this is clear. We are very excited by this topic. XPrize has been at the forefront of gender equity: we have launched a full gender equity lens, and for the past three years we have been talking about gender biases and ethics in AI. We figured that the best way to continue this conversation is to give project initiatives a chance to surface, to listen to all of you, and to see how we can collectively build meaningful projects and launch them — because in the end, solutions have to be sustainable, beyond conferences and white papers and anything else. We hope that we can collectively identify ways to progress in this direction, and the brain trust is here to give some direction as well.
Thank you. Thank you so much, Amir. So, to repeat the deadlines: August 1st is the deadline for submissions. August 5th is when our brain trust and mentors will start reviewing the submissions, and the brain trust experts will then mentor the teams that are selected until August 25th. The top three submissions will be chosen on September 1st, and those teams will then be guided through a preparation process, from September 2nd through the 21st, ahead of the workshop. I did share the link in the chat, which is the link to the website; at the very bottom of the page, you can write in your name and email and select gender as the track, and once you hit submit, you will receive an email with more details. We will share that link again after the webinar if you didn't catch it in the chat. But I think it's time for our Q&A, and we have some really great questions coming in. So, the first question: what set of readings would you propose to accompany the launch of the initiative — readings, for example, that would help us better understand the biased systems around us?
Brain trust, any suggestions on readings?
Well, I'll jump in there. There's so much, and it depends on what way you want to approach the topic. If we want to talk about data, just to get yourself all fired up, Invisible Women is a very, very good kind of primer, just to show you how this is systemic, and ways that you might want to unpack data bias. We can come up with more — Algorithms of Oppression, by Safiya Noble, is another that's amazing. There are a bunch of incredible books that have been coming out, and we could maybe put together some PDFs too. But quite frankly, what I'd love to know — just in the chat, I think we'd all like to know — are you looking more to learn about systemic problems, or are you interested more in hearing about some technical solutions that already exist, which are also available?
Anyone else? I mean, we can definitely put a list together of some helpful readings. But Aylin, do you have any?
Oh, so I teach bias in AI in the spring. As a result, I have a comprehensive list of research papers, as well as books and blog posts, on the topic. Bias in AI is a relatively new research area, and luckily, as a result, we have a list that is feasible to read. If you would like to get an idea of the technical background in this area, fairmlbook.org, which is an open-source book on fairness in machine learning, might be a good starting point. Other than that, Caitlin mentioned amazing books; there is also Weapons of Math Destruction by Cathy O'Neil — that's another one. I would be happy to share my reading list, which is ever growing. Another option is looking at the few conferences in this area of fairness in machine learning — there aren't that many of them; there's also AI, Ethics, and Society, for example — and looking at the most recent papers in those conferences will give you an idea of where the field is, what the open problems are, and what kinds of problems are being handled and solved nowadays.
Also, I think that in order to get a handle on what's going on with the biases, it might be good to start with just AI in general — a colleague of ours wrote The AI Revolution, I believe, is the name of the book — and also some blogs, online blogs. I know Ernst & Young has a good piece on women and gender bias in AI. So I think some of your current, real-time information is going to come from the blogs, but I think it's a great idea to put a list together and add it to the website. That's a great question.
Amazing, yeah. So our homework will be to create a list, and we'll share it with all the attendees after this. Does that sound good? Perfect. Okay, so the next question we have is: can you tell me what you all think about doctored datasets that purposefully account for representation — is that impractical, because we need some sort of massive data pipeline for training and ongoing data after deployment?
So, when I saw that question, I thought it was interesting — "doctored datasets." I'm curious to know whether they're referring to intentionally diverse datasets, or something that's not real, where you're kind of getting the data from different places. To that end, I think intentionally diverse datasets are very important. That's what we're talking about: we need to figure out how we make those datasets diverse, and we need to make an intentional effort to do so.
One thing that we haven't brought up — and I think we would be remiss, coming from the UN community and the feminist community, not to — is that there isn't actually any real system to get sex-disaggregated data worldwide. Even in a place like the US, we saw that you weren't able to disaggregate who was getting COVID and who was not, and this is worldwide. And this means that policy decisions, of course, are made just on the back of an envelope, because nobody knows how women are affected or not affected. And that's not something we really talked about in the questions. So it's not only how you can use the datasets that are there, and design new datasets; it's also how you can unpack the data that we do have, to know where the women are and where the intersections are.
Right — I would also like to add one more point. There is a policy aspect to this question as well, because for many legal purposes, using protected attributes in datasets while making decisions is prohibited — it's illegal. So we cannot represent people with protected attributes in order to make decisions about them. But at the same time, these datasets have proxies for these protected attributes that are influencing the decision-making processes. So I understand that we don't want to make decisions based on protected attributes; but at the same time, if we don't include them in these datasets — in a way, "doctor" these datasets — then we cannot easily analyze the gender bias, or whatever type of bias is leaking into these datasets. When a dataset is doctored in that sense, it gives us a more structured way to handle these things and measure them. But there is this dilemma between non-discriminatory decision making and being aware of these protected attributes while making decisions. And this stems from the issue that our law and policy were designed for human decision makers, not automated decision makers; policy will need to somehow progress and adapt to this new decision-making realm, so that we can come up with standardized ways of generating and collecting these datasets.
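[This "blind in training, aware in auditing" pattern can be sketched in a few lines: the protected attribute is excluded from the model's inputs but retained so outcomes can still be measured per group. Everything below — the synthetic data, the gendered gap baked into the history, the simple parity-gap metric — is a hypothetical illustration, not a legal recipe.]

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Hypothetical loan applicants; "gender" is the protected attribute (1 = woman).
df = pd.DataFrame({"gender": rng.integers(0, 2, n)})
df["experience"] = rng.normal(5 - 1.0 * df["gender"], 2)   # gendered gap leaks in
df["approved"] = (df["experience"] + rng.normal(0, 1, n) > 4.5).astype(int)

# Train WITHOUT the protected attribute (legally "blind")...
model = LogisticRegression().fit(df[["experience"]], df["approved"])
df["pred"] = model.predict(df[["experience"]])

# ...but KEEP it aside so outcomes can still be audited per group (aware).
rates = df.groupby("gender")["pred"].mean()
print(rates)
print("demographic parity gap:", abs(rates[0] - rates[1]))
```

[The model never sees gender, yet the audit exposes a gap, because "experience" carries the gendered history — exactly the dilemma described above, where the attribute must be retained somewhere for the bias to be measurable at all.]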
Right, thank you. Francesca, anything before we move on to the next one? Great. Okay, so the next question: is anyone on this panel, or among the mentors, a machine learning or AI engineer, so that we can verify our algorithms and do some concrete testing?
I mean, Aylin really is the expert in that field. I work in AI, you know; I have teams of people working on all sorts of AI techniques. But I'm not sure — during the mentoring phases, we can help the projects understand how to test and how to evaluate what they are planning to do, so that's definitely something we can do. I'm not sure the question was really about us testing their solutions; it's more like mentoring, I think. But I also like the point that we should integrate multidisciplinarity — and multiple genders — into our own mentoring. We already have it in the brain trust, but we should also have it in the other experts that we will identify.
Right. Francesca gave an overall summary: we can, of course, try to help and mentor. But I would also like to point out that being able to test these systems for bias is a research project in itself, because we don't really have the methods or definitions to understand and test these types of biases. So this is one of the goals: how can we collaborate to collectively create these systems and datasets, and come up with definitions that can at some point be standardized to promote gender equity?
I'd just like to add that we have reached out to some AI engineers to be part of the mentoring team. So to the extent that we hear back positively from them, we will have them available; we have thought about that as a component we need in order to round out these teams.
Definitely. Great. So, moving on to the next question: what are your best-practice recommendations for dealing with legacy data fields that have only collected binary data, when broadening to an open gender reporting system?
Well, isn't that what one of the projects could be? I think that's exactly the kind of great question we're looking for somebody to answer — to give us a methodology to try to answer it, and maybe bring the rest of the world along with you. Maybe it's already been done; I don't know.
Either way, that's when you can submit your project and let us know that it's happening.
Next, a question from Kenny Chen: he'd like to hear about your $5 million reskilling prize and how it connects with your gender equality work.
That's a great question. Amir, do you want to say more on the Rapid Reskilling prize? And we can share more information, too.
Absolutely. The Rapid Reskilling prize is, of course, about accelerated job placement and reskilling, and in that prize we are fully adopting the gender equity lens that we have developed at XPrize, along with the ethical framework that we adopted from a number of university researchers, to give all the competitors a baseline from which to propose solutions that are gender-unbiased, or equitable. But it's up to the competitors to come up with the solutions; as Caitlin says, we're looking for those amazing proposals that will be fully inclusive.
Right — and the Rapid Reskilling XPrize just launched yesterday, which is very exciting, so we will know more in the coming months about what teams will be doing to tackle this issue. Okay, I think we've touched on all of the questions, but of course, if you have any more, you can always reach out to the AI for Good team, and we'll be happy to answer them. Just a reminder — if we could actually share the screen that has the website on it, Junu, please?
I just wanted to follow up and say that what we're dealing with is a complex issue, and I think you can see from the questions that our team is not the team that should have all the answers. We're posing a lot of these questions and hoping the crowd will take them up — we have people online today from all across the globe. And we are hoping that the structure we've set forth will allow everyone to come together, with some support from us, and help solve, or answer, some of these questions. I also just want to impress on everyone that this is really life and death. It's not just that we want to be equal, so to speak; when you look at what my colleagues have talked about — car crash tests that don't help prevent injuries, or the medical technology industry — those things are life and death. So this is a really important initiative that we've assembled around,
just to re-emphasize.
Thank you so much. I appreciate that.
Yes, we do not have all the answers; that's why we convene the entire world to come together and come up with the solutions. Okay, I think we are done with our Q&A portion. I just want to reiterate: our deadline is August 1st for submissions, so visit the website to plug in your information, and we'll send an email with more information. August 5th is when our brain trust and mentors will start reviewing the submissions, and they will mentor teams until August 25th. The top three projects will be selected on September 1st, with workshops happening the week of September 21st to 25th. So, we are so grateful that you have spent your morning, afternoon, or evening with us for the last hour. And I just want to say thank you so much to our ITU partners — you absolutely blow us out of the water with every webinar we've ever done with you. Thank you so much. I think I'm handing it over to Ayda to do a close-out. And I also want to thank our brain trust members for your time and your passion for solving this issue. Thank you.
Thank you very much, Henning, and thank you also to all of the panelists — there were a lot of great questions and a great discussion going on. Before we wrap up, I'd like to highlight a few things that may be of interest to all of us, on slightly different topics but all in AI. This Wednesday — tomorrow — we are excited to host a webinar with the Global Esports Federation on the power of esports for good. And then on Friday, June 26th, is the AI for Good Innovation Factory live pitching session. So be sure to tune in throughout this week for even more AI for Good. We're going to be pasting the links to these events in the chat, and you can also find out more at any time at aiforgood.itu.int. With that, we have indeed reached the end of this webinar, but we'd like to once again thank everyone involved: our panel, all of you participants from around the world, our partners and sponsors, and our co-convener Switzerland. Thank you very much, and we hope to see you tomorrow.