So, DAIR was really born out of trial and a bit of fire. When Dr. Timnit Gebru was fired from Google after publishing a paper about large language models, it was an upsetting moment for this field that we call AI ethics, because it was really saying, "Well, if you want to really tango with this work, you have to think about the vested interests of corporate actors and other people in quote-unquote AI." And so, DAIR is a bit of an experiment, which is both exciting and also terrifying. But it's great; I wouldn't change a thing about the journey so far.

DAIR, the Distributed AI Research Institute, is this experiment in understanding what can happen with AI if we resituate it, from a conversation about technology to a conversation about what it would mean to use technology in our communities in ways that actually work for our communities. The idea here is that it's a nonprofit, because corporations have their own issues with this, and universities have a lot of constraints as well. Timnit and I have worked in both; she's worked more on the corporate side, I've worked more on the university side. We wanted to really understand what it would mean to focus on AI that would do two things: first, point out the harms coming from AI and other sociotechnical systems, and do research on where those harms exist; and second, develop new technologies that would work for people. Our focus is on Africa and the African diaspora, the idea being that we really need to situate Black folks worldwide in thinking about AI, since so much of AI leaves Black people out of it. What DAIR brings to this conversation is really thinking about what it would mean to do both of these things and to focus on the most marginalized people in these conversations.

We started out focusing on three groups of people: refugees, data laborers, and gig workers. These folks are often at the very bottom of where these technologies are being deployed, or subject to their harms. We have people involved in DAIR who are researchers, but we also have people involved who have that lived experience. Meron Estefanos, one of our Fellows, is a longtime refugee advocate who has done advocacy for refugees escaping Ethiopia and Eritrea, typically moving either within Africa or toward the EU, who are often kidnapped and held hostage along the way. She has literally freed hundreds of refugees by raising funds for them and by putting their cases on the radar of the West. We see these technologies being deployed at the border and within refugee camps for ostensibly humanitarian ends, but really they're being tested on refugees, who have very few rights.

In terms of gig workers, we have Adrienne Williams, a former Amazon driver and current labor organizer, who has been focusing on the use of surveillance technologies, often AI-powered, to monitor drivers' motions, even their eye contact in the vehicle. If drivers are adjusting their phone while parked, the system will ding them, call them out by name, and flag when they're not driving or not looking at the road, and it all adds more to their workload.
We also have a Fellow, Mila Miceli, who's been working intensively with data annotators in places where they're the most precarious: in working-class areas of Argentina, where Mila's from, as well as with data annotators who are refugees from places like Syria, who are doing these annotation jobs with instructions typically given in English, even though their main language is Arabic, and having a hard time interpreting them. So often, when we're in this conversation about AI, people are like, "Well, why would you hire this person?" Well, this person understands AI more intimately than any engineer, because they're experiencing these systems and their outcomes firsthand. And that's an incredibly important perspective that needs to be drawn into the conversation.