Hi everyone, I'm Aishwarya, and unlike a lot of people at the conference, I am not a policy person. I am a content person, but I'm very passionate about the bridge between policy and content. So I really hope what I'm about to share is helpful for all of you, and I'm happy to talk about it after the talk as well. I'm a Senior UX Designer at the Wikimedia Foundation. For those of you who don't know us, the Wikimedia Foundation is the nonprofit that enables Wikipedia and 13 other free knowledge projects to exist. We provide the legal and software infrastructure for Wikipedia.

Our story today starts with Megan. Megan is a tech quality advisor, here at the conference as well, and she wants to do some surface-level research on the history of AI. So Megan googles the history of AI, she lands on Wikipedia, and she starts reading. How many of you in this room have done this before? Yes, we all have, lovely to see it. Now, how many of you understand how Wikipedia works? Not that well? Well, Wikipedia, for those who don't know, is a global community of over 265,000 volunteers who create and maintain Wikipedia.

Now let's take a peek behind the scenes of this article, at what we call its revision history. The revision history is a live feed of every edit that has ever been made to this article on the history of AI. You can see who made the edit, when it was made, and what it was. But, and this is the unfortunate part, not all edits to Wikipedia are productive. The edits that I've highlighted here are instances of what we call obvious vandalism.

This is where Amina enters our story. Amina is a Wikipedia volunteer, and she patrols articles for instances of obvious vandalism. Now vandalism is an increasing problem, and the work Amina does is ever more crucial, but she is feeling burnt out. And readers like Megan depend on Wikipedia for reliable and accurate information. So we at the Wikimedia Foundation asked ourselves: how can we use AI to support editors like Amina and readers like Megan? To prevent Amina from burning out, to increase her impact as a volunteer, and to free up her time so that she can do more complex tasks.

My team is building AutoModerator, which is an AI-powered tool that human editors can use to detect and remove vandalism. Now, one thing to note before we go further: AutoModerator is not the first AI tool that Wikipedia's volunteer community has used. They have been using bots and AI tools since 2002. However, what has remained true is that we have always ensured there is a human in the loop. And so AutoModerator will be a tool that our volunteers use, with them making the final calls.

We do product design a bit differently at the Wikimedia Foundation. We conceive, design, develop, and deploy tech in very close collaboration with the open source community. That's the Wikimedia open source community, with our readers, with our editors, and with our colleagues. And we do this collaboration because Wikipedia is a collaborative project that belongs to all of us.

So my team, in collaboration with our machine learning research team, has been designing AutoModerator in an open and community-centered way. Open means that we publish the progress of our designs and solicit input from the community. Open means that we test with our community of volunteers. For example, the machine learning model that powers AutoModerator has actually just been tested by volunteers, so that they can understand how this model behaves under different conditions. And we ran these tests with the full awareness that if our volunteers didn't like our model, we wouldn't build it.
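For anyone who wants to poke at the same raw material the volunteers and the model work with, every article's revision history is available through the public MediaWiki Action API. Here is a minimal sketch that fetches the most recent edits to an article; the article title is just an example, and error handling is omitted for brevity.

```python
# Minimal sketch: fetch the recent revision history of a Wikipedia article
# via the public MediaWiki Action API. Illustrative only; not part of AutoModerator.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "History of artificial intelligence",  # example article
    "rvprop": "user|timestamp|comment",              # who, when, and the edit summary
    "rvlimit": 10,                                   # ten most recent edits
    "format": "json",
    "formatversion": 2,
}

# Wikimedia asks API clients to send a descriptive User-Agent.
headers = {"User-Agent": "revision-history-demo/0.1 (example script)"}

response = requests.get(API_URL, params=params, headers=headers, timeout=30)
response.raise_for_status()

page = response.json()["query"]["pages"][0]
for rev in page["revisions"]:
    print(f'{rev["timestamp"]}  {rev["user"]}  {rev.get("comment", "")}')
```

Each line of output mirrors what the revision history page itself shows: when the edit was made, who made it, and the edit summary describing what it was.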
So a few of us got together at the Foundation and developed a resource called the Inclusive Product Development Playbook, and my team is using this playbook to ensure that AutoModerator is designed and developed in an inclusive and equitable way. For example, we have these design principles that guide all of our decision making, and I'll just quickly go through the first three.

The first is transparency. We've been talking about this a lot at the conference, but what does it look like in practice? For an AI tool to be transparent, stakeholders need to be able to easily discover, understand, and audit it. The second is the human appeal principle. It's guaranteed that AutoModerator will make mistakes, so it's crucial that human editors can close the loop and appeal its decisions. And lastly, volunteers who set up AutoModerator should feel a sense of agency over the tool. What this concretely looks like are three features: volunteers turn AutoModerator on and off, they set its threshold, and they customize it to their wiki. So we build the tool, but they control it.

Because AutoModerator is such a powerful tool, we've been using a safety-by-design approach, which means we think about trust and safety at the very beginning of the project rather than at the tail end, as an afterthought. We've conducted a pre-mortem, where we went through hypothetical scenarios of how AutoModerator could go wrong or cause harm, and how we might mitigate those scenarios.

So once we launch this tool, per our calculations, it will revert roughly 1,000 instances of vandalism a day across all the wikis. In the age of misinformation and disinformation, and in the face of declining trust in media, Wikipedia still works, and it's still trusted. And it works because our model is entirely different from that of for-profit companies. We are proving that community-led content moderation works. We are proving that you can develop AI projects in an ethical and transparent way. And our effort to develop a bottom-up model instead of a top-down one should, I think, be celebrated and protected. Thank you.
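To make the configuration point above concrete ("volunteers turn AutoModerator on and off, they set its threshold, and they customize it to their wiki"), here is a purely hypothetical sketch of what community-controlled decision logic of that shape could look like. The names, the 0-to-1 score scale, and the config fields are assumptions for illustration only; this is not AutoModerator's actual code or configuration format.

```python
# Hypothetical sketch of community-controlled moderation settings.
# None of these names or values come from AutoModerator itself; they only
# illustrate the idea that volunteers, not the tool, hold the controls.
from dataclasses import dataclass

@dataclass
class WikiModerationConfig:
    enabled: bool            # volunteers can switch the tool on or off
    revert_threshold: float  # volunteers choose how cautious the tool is (0.0 to 1.0)
    exempt_user_groups: tuple = ("sysop", "bot")  # example of per-wiki customization

def should_auto_revert(score: float, editor_groups: set, config: WikiModerationConfig) -> bool:
    """Return True only if the community-configured rules say this edit should be
    reverted automatically; otherwise leave it to human patrollers."""
    if not config.enabled:
        return False
    if any(group in config.exempt_user_groups for group in editor_groups):
        return False
    return score >= config.revert_threshold

# Example: a wiki that enabled the tool with a fairly conservative threshold.
config = WikiModerationConfig(enabled=True, revert_threshold=0.97)
print(should_auto_revert(score=0.99, editor_groups={"newcomer"}, config=config))  # True
print(should_auto_revert(score=0.80, editor_groups={"newcomer"}, config=config))  # False
```

Even in this toy form, the design property from the talk is visible: the model only produces a score, while the decision to act on it is governed entirely by settings the community sets, changes, or switches off.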