SOTN2023-16 Horizons In Trust, Safety, & Transparency
9:31AM Mar 12, 2023
Speakers:
[Participant]
Rebecca MacKinnon
Ashkhen Kazaryan
Tomer Poran
David Ryan Polgar
Suzanne Nossel
Berin Szóka
Keywords:
transparency
platforms
question
oversight board
content
people
ai
government
moderation
companies
suzanne
talking
community
law
moderators
wikipedia
point
happening
called
regulation
All right. Good evening, everyone. Thank you for sticking around, even though I know there are some really exciting panels happening at the same time. The good news is I think they're all being recorded, so you can always learn everything when you get home. Hi, I'm Ash. My full name is Ashkhen Kazaryan, but that is my mother's fault, so you can call me Ash. Today we're here to talk about trust and safety and the future of trust and safety. It's a very wide-ranging topic, which is why we have an expert panel whose members come from very different backgrounds and work for very different organizations, to hopefully cover all of the issues and solve them in an hour. I'm going to let my panelists introduce themselves. We'll just go down the line. Tell me what is your...
Thank you. Yeah, I don't know about solving, but definitely getting to some of them. My name is Tomer. I've been at ActiveFence for the past three years. ActiveFence is a company that builds software solutions, AI detection models, and proactive protection services for online platforms to deal with all forms of harm, specializing in the more egregious topics like terror, child safety, and disinformation, but covering a whole wide spectrum of them. I hope to bring an industry perspective today: I come from a vendor perspective, working across many trust and safety teams, but not being part of one of the online platforms, which makes me possibly a little less partial. Glad to be joining this panel.
Looking forward to hearing what your favorite trust and safety team is. David?
All right, I'll continue on. Hello, everyone. I'm David Ryan Polgar. I'm with a nonprofit based in New York called All Tech Is Human. We're focused on strengthening the responsible tech movement and ecosystem so we can better tackle wicked tech and society issues like we're hearing all about today, like Section 230, which was discussed earlier, and also so we can co-create a tech future that's aligned with the public interest, if I can underline public interest. In order to align technology with the public interest, we need to have the public involved. A lot of what All Tech Is Human does that I would argue is different, and I'm biased, is that we set up a big-tent strategy that gets people across civil society, government, industry, and academia all over the world coming together to try to tackle some of these hard issues. And we do that in three main ways: multistakeholder convening and community building, with our Slack group of 5,000 members across 62 countries; multidisciplinary education; and lastly, diversifying the traditional tech pipeline with more backgrounds, disciplines, and perspectives. Outside of All Tech Is Human, I also sit on a bunch of boards. One relevant to this would be TikTok's Content Advisory Council for the United States.
Rebecca?
Hi, I'm Rebecca MacKinnon. I'm with the Wikimedia Foundation. We are the nonprofit organization that hosts Wikipedia and about a dozen other volunteer-run and volunteer-governed free knowledge platforms. I am currently Vice President for Global Advocacy, so I lead the team that advocates globally for laws and policy environments that make it possible for people to edit Wikipedia and to share knowledge across borders, which, as it turns out, is something that has wrinkles being thrown at it all around the world. And of course, preserving intermediary liability protections through Section 230 is very important; it protects the right of volunteers to actually set and enforce rules on the platform, since most, if not all, of the content moderation is being done by volunteers. And like David, we are trying to advocate for policymakers to think not just about who they want to punish and what the bad behavior is, or how to fix the Internet, but really, what kind of Internet do we want to create? What kind of Internet will be best for open, democratic, equitable societies that everyone can participate in and where everyone can fulfill their right to be educated? We need to think about what types of public policies will make that possible and how we protect the models that make that possible. So that's what I'm here to talk about.
Awesome. On behalf of all millennials, thank you, Wikipedia, for my homework. Suzanne?
Sorry, I'm Suzanne Nossel, and I wear two hats in this discussion. One is as CEO of PEN America, the free expression organization. At PEN America, we've become increasingly focused on threats to free speech online. We have significant programs working on how to combat disinformation without repressing freedom of expression, and how to address digital safety issues in the same manner without curtailing freedom of speech. That's part of the wider work that we do on free expression issues here in the United States and around the world. And then there's the Oversight Board. I wasn't one of the original members, but I've been on it for about a year and a half, as part of its effort to adjudicate tough calls on content and also delve into some of the very thorny policy issues that arise on Meta's platforms. So I'm looking forward to talking a bit about that.
Very excited about this panel. Let's start with the first question. The word transparency is often thrown around as a sort of get-out-of-jail-free card, and as a counter to regulation. Some jurisdictions, the European Union right now, with other countries following suit, are actually requiring transparency, so we're seeing more and more of that trend around the globe. Often it's the bigger tech companies that have transparency reports, but honestly, I don't think many people read or understand them, aside from a couple of regulators and researchers. My question is about regulated and mandated transparency. Is that the way forward? If it is, is there a good example? Is there a bad example? What are the benchmarks you think we should be hitting? We'll start with Rebecca, but this is an open question for everyone.
Well, as it happens, before I came to Wikimedia, I ran a program called Ranking Digital Rights that reads transparency reports and terms of service and privacy policies so you don't have to. And what's interesting, and what I've always felt, is that transparency is the first step. It's a means to an end, not an end in itself. What is the point of transparency from a human rights perspective? If our goal is to ensure that the Internet better enables and protects human rights, that should be the purpose of transparency. So when it comes to speech, I need to know who is manipulating what I can see and not see, who has the power to prevent me from saying certain things, or to amplify certain things that show up in my feed, so I know what I don't know, and I know who's manipulating what I know, and then I can make informed choices about it. Similarly with data: I need to know who has the ability to access information about me and my communications, under what authority, who I can hold accountable if that's being abused, and whether anybody can even know it's being abused. That's the first step. That's not to say that transparency should be a replacement for, say, a strong federal privacy law. It's one thing to know what's been collected about you so that you can be targeted with ads, and it's another thing to make sure that you're protected from discriminatory practices. So there's definitely a role there.
Now, Wikipedia and other Wikimedia projects are a little bit different from, say, Facebook. We do publish transparency reports that focus primarily on government demands: what governments are demanding that we take down, which we do very little of because we don't moderate the content, the community does, and what data governments are requesting, and because we collect almost nothing about people, we can't really hand over much. But where the real debate about transparency reporting is right now is less about what governments are demanding of companies, although I'm going to come back to that in a minute, because I think we've actually started to focus too little on that, and we need more government transparency about what's being demanded of companies. So I hope we do come back to that. What's really at the center of the debate around transparency right now is what companies ought to be disclosing about content moderation. So it's less about what governments are demanding you take down, and of course in the US that's kind of moot anyway, and more about what you are taking down: what's the nature and volume of the other stuff you're taking down? With Wikipedia, every edit that anybody has ever made is public; you just click on the History tab of any Wikipedia page. You can also click on another tab to see all the debates about what should stay up and come down, and you can click on another tab that shows you what the rules are for that particular page or that category of content, and who has authority to edit and in what ways. I think there are at least a couple of Wikipedians in the room, or in the building today, who could show people more if they're interested. But that's all public, for anybody to discover, if you want.
And so that's, of course, very different from commercial platforms, where there is a real question of how much the public needs to know so that we can understand who is exercising power over our speech, and under what circumstances.
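As a minimal illustration of how public that edit history is, here is a short Python sketch using the standard MediaWiki API to pull the most recent revisions of an article, along with who made them and their edit summaries. The page title is just an example, and error handling is omitted.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def recent_revisions(title, limit=10):
        """Fetch the most recent public revisions (timestamp, user, edit summary) of a page."""
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "timestamp|user|comment",
            "rvlimit": limit,
            "format": "json",
            "formatversion": 2,
        }
        data = requests.get(API, params=params, timeout=10).json()
        page = data["query"]["pages"][0]
        return page.get("revisions", [])

    # Example: print who last touched the "Section 230" article and why.
    for rev in recent_revisions("Section 230"):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))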
Absolutely. Do you want to go next?
Sure, I'll make a few quick points. I think transparency is definitely fundamental: before we decide what companies need to do or don't need to do, at least we should all be privy to what they're doing. And I think it's also notable that, unlike what we heard earlier about 230, and most topics in content moderation, which are very partisan these days, transparency is one of the only issues where everyone, on whichever side, wants further access to what companies are doing. So it's great that it's bipartisan, and I think it has higher chances of moving forward. Transparency reports are important. I can tell you, again bringing the industry perspective and talking to trust and safety leaders, that they're often frustrated, because they know that the efforts they're making versus their peers, maybe the additional investment, the additional headcount, don't really translate in the public eye; they're not going to be seen as doing more or working harder to prevent these harms. And that creates an inverse incentive to do more: when no one sees it, and their peers can invest a tenth as much and be seen the same way by the media or by the public, why invest more? The final thing I'll say, and this is the perspective from ActiveFence, as we monitor dark web communities and generally dark chatter from hate groups, terrorist groups, and child abuse groups: they're often looking at companies' rules, the enforcement guidelines that moderation teams have, in order to bypass them, find loopholes, and so on. So there is a pitfall in transparency, in that you're giving the enemy more knowledge of exactly how you're enforcing.
It sounds like your company is the Batman, just watching over the dark parts of the city. So I wanted to go to you with the same question, with one caveat. In the US, there's currently some litigation, the NetChoice and CCIA cases. There is this bit about the transparency requirements in the Florida bill, which I believe was the only part of the Florida bill that both the district court and the Eleventh Circuit said was constitutional. And that's still not the last word; we still have to hear from the Supreme Court on whether transparency requirements are constitutional. But I think in the United States, the First Amendment work and the free speech work is quite different from, let's say, even Europe. So when it comes to transparency regulation and requirements in the United States, do you think they will even withstand the First Amendment test?
You know, I do think so. I mean, I think it's one of the ways that you can begin to regulate here in the United States without walking right into the First Amendment. So it's an easy and sort of obvious starting point; it's in many ways where the EU is starting. So I think if there's any hope for progress on the regulatory front, it would be with transparency. I also think, though, that transparency has to be paired with intelligibility. If you can't parse and understand and analyze what it is that you're getting, it can be virtually useless. And this is something we see with the board, just how challenging that is. You really need to know what questions to ask; transparency is in the eye of the beholder. A company will say they're being very transparent about something, and we'll ask another question, and there will be different sorts of barriers. For example, in our cross-check policy advisory opinion, we wanted to see the list of who was eligible for and afforded cross-check, and we weren't able to access that. So I don't think it's as simple as passing a transparency bill. I think questions of enforcement are going to be complex. There's also the tricky question of how you make this information meaningful for users. We see such an enormous forfeiture of control and privacy just out of people wanting access, wanting to move forward, not wanting to trouble themselves, for convenience reasons, with all of these really nettlesome questions about what the long-term consequences might be of being on these services and undertaking certain activities. So how do we balance that out? I don't think we can just assume that you put this in the hands of users and they're going to radically reshape how they operate. And I think the other challenge is that as these systems become more sophisticated, it's just going to be harder and harder to know what questions to ask and to digest what we get back. On something like generative AI, for example, what does it really mean to have transparency about how that content is sourced and what its effects are?
So, spoiler alert, we're definitely going to talk about AI in a bit. Also, the room has really filled out, which makes me feel great, so thank you, everyone. We're going to have a good chunk of time for Q&A, so please stick around. David, your organization is All Tech Is Human, and when I think about transparency through that lens, I think of honesty. So what do you think about honesty, tech companies, transparency, and regulation?
Well, sunlight is the best disinfectant. I think that's what we need right now. One of the side effects, sometimes, of sitting on boards outside of All Tech Is Human is that you get a lot of really strongly worded letters and strongly worded emails. But being surrounded by people's pain points also allows you to understand different experiences. And the experience that a lot of people who are not sitting in this room have is that they feel the weight of these decisions. On one hand, we frame social media as if it's a luxury, not a kind of human right, but for a lot of individuals it's very central to their expression, and oftentimes central to their livelihood, their financial livelihood. So transparency is essential in the sense that you have to ensure, even if you take it through a free-expression type of lens, that decisions are not made in an arbitrary and capricious fashion. That's how we normally think about something like free speech when you're constructing a statute: you make sure that you and another individual are treated the same under the law, or else it's biased. And when I back up and think about my exposure to a diverse range of people through All Tech Is Human, what it has really caused me to learn is that the public's relationship to social media is not like a consumer-business relationship. They are viewing social media as if it's a quasi-governmental body. We can argue all we want about that, and I'm sure we will. But that's also because people like Elon Musk frame it in that way by calling it a public square. And when you think about a public square, you say, my God, well, then I should have three co-equal branches of government, then you have accountability and transparency. Everything we talk about in a well-functioning democracy is now where this argument is going with social media: social media democracy, if you will. So, Suzanne sits on the Oversight Board; when the Oversight Board was launched, when Zuckerberg was talking about it in April 2018, it got picked up in the media as a Supreme Court. Well, that's very intentional in the way we're thinking about it, because we are thinking that somebody needs to make the laws, somebody needs to interpret the laws, and somebody needs to carry them out. If you don't have that, then you would be a democratically challenged country where you'd have judge, jury, and executioner done in a hierarchical fashion. The friction point that a lot of the public has toward social media platforms is that they're saying, well, okay, you told me that I violated something in the community guidelines or your terms of service, but aren't you the ones who are making the law? And then you can change the law, and then you're interpreting the law and executing it. That in itself can be problematic; they're expecting some separation there. And that's where I am very bullish on the intentions behind the Oversight Board and saying, well, we need to segregate this. Because, again, social media is currently a private business, but our relationship to it is not that of a private business. And that's a big, big point that I want to underline.
Professor Jack Balkin actually writes about this; he says that there is a fiduciary duty between tech companies and platforms and their users. So I see the direction you're going. Since you've talked about the Oversight Board, I'm going to go to Suzanne next. Suzanne, let me know if I need to call you Your Honor or Justice. You've been on the board for almost two years. Do you think, in general, these third-party bodies are the silver bullet, or maybe at least a partial answer to the trust and safety shortcomings? There are other examples of this. There's the Digital Trust and Safety Partnership, which is a very exciting new venture that is also trying to organize and be an independent body that gets more transparency, better understanding, and coordination between tech companies of different sizes. What has your experience been in all of this?
Yeah, sure. I'll talk just a little bit about the Oversight Board, a few areas where I think it can contribute, and what the limitations are. When I joined, and still now, I've viewed it as very much an experiment. I'd say after two-plus years the results are not all in, but we can begin to see some patterns. I'd highlight four positives, each of which has a big caveat. The first is really what you were touching on, which is offering this kind of reasoned jurisprudence for our cases. Questions like: can you call for death to the Supreme Leader of Iran on the platform? Should that be protected? Should you be able to post unsubstantiated accusations of attacks on civilians in Ethiopia when those might provoke reprisals? What about questions of nudity based on gender or transgender identity? Really thorny questions. And we go about it in this very rational way, we apply human rights law, it's thoughtful, and there's an opinion. My feeling is that whether you agree with the opinion or not matters less than the fact that somebody went through the exercise of coming up with a rationale, which never existed before and was completely inaccessible before, and it offers a degree of predictability. Now, what's the limitation? Well, we've dealt with some dozens of cases out of an enormous ocean, and not all of those cases and appeals really raise novel issues. It's actually been a process for us to hone in on those that do. In the beginning we would take some cases, but because we don't have lower courts, nobody is distilling the issues, and sometimes we would get to the bottom and find there was actually no real debate there. Facebook also makes a lot of enforcement errors, so oftentimes we'll go to them with a case and say, provide us with your rationale for what you did, and they say, oops, it was just an error, an automated error, a human error. And that sort of voids it out. So that's piece one, in terms of our value and the limitation of it. Second is really our ability to probe and shed light on what the company does. I'll give this example: we're now working on a policy advisory opinion on COVID misinformation, where you can get to the bottom of how they've actually handled that and how it's changed over time, and when the opinion comes out, everybody will be able to learn some of what we did. With the policy advisory opinion on cross-check, there were pieces of information we didn't get, but there was also an enormous amount that we did learn that, honestly, was surprising about how the system worked: a two-tiered system in which those who are beneficiaries have their content remain up on the platform, even when it's found to be violating, while it goes through a whole sequence of appeals, and during its peak period of virality. So it is a large loophole that I think a few of us, myself included, didn't understand well. So there's great value there, and we have the ability to ask questions, to make recommendations, and to get them to respond to those. So I do think it's something we need to take advantage of, and we take that responsibility seriously.
At the same time, they remain a private company, and ultimately they dispose of our requests: some of them they act on, some of them they don't, and where they don't, we don't necessarily have much recourse. That will get me to the fourth thing in a minute. The third is our ability to drive policy changes, and I think there have been some important ones. They have to respond to all of our recommendations, and we always state in our recommendations what it would take for us to consider the recommendation fulfilled. So they've accepted a bunch of our recommendations on cross-check, including on the kinds of users who are going to be eligible for protection under the system. They've just reevaluated and reissued their strikes policy to make it more transparent and less punitive. They've adopted a new crisis protocol. They've done a lot more on translation. So there are a lot of concrete things that they've actually done in response to those recommendations, but ultimately, again, it's up to them, and there are important things we suggest that they don't do. I'll just say the last one, which is our ability as a board to spur regulators and other players. I think this is the next frontier for the things we can't accomplish ourselves: can we be a kind of springboard for those who have powers beyond ours to see what needs to be done, and to use their authority to achieve it?
I will disagree with you a little, because I think you do have power even when they don't want to comply. The analogy would be ringing the bell and screaming shame: I think just the publicity of Facebook not complying with something the Oversight Board recommended is a good enforcement mechanism, for what it's worth. Yes, Rebecca?
Yeah, I just want to add something. One of the things that I've found so meaningful about the Oversight Board's work is the application of international human rights law, universal human rights standards, to the actions of a global platform, including in the US context. I remember when the ruling on the Trump deplatforming decision came out, and that analysis, based on international, universal human rights standards, of what the implications were, was really important. One experience I've had in working with folks from Silicon Valley is that, especially ten or so years ago, when I would go to meetings with people in Silicon Valley and mention international human rights standards, people didn't even know what they were. So the fact that we now have the Oversight Board making very clear, detailed rulings on how you apply human rights law to the platforms is really significant. And while in the US, because of the First Amendment, there's nothing the US government can do, in Europe human rights law is actually being used as the basis for a lot of regulation, and for the regulation that holds up to challenge. So when you have an analysis that shows ways in which a platform is failing to protect the human rights of users, in accordance with international human rights standards, that potentially has a regulatory impact globally that can't be ignored, even if there's no direct application here. It can also be used as a way for other stakeholders, investors, and others to hold platforms accountable. David?
Yeah, I think we know how difficult these decisions are, right? One of the early cases, preceding the Oversight Board, dealt with Facebook getting into political trouble for taking down the Napalm Girl photo. If you just view that on its face, it's an underage, naked female. But because of its political significance, it's something we'd obviously want to protect, political speech being more central. The point I'm trying to underline is that speech is nuanced and speech is complex, even when we think about the thoughtfulness that goes into the Oversight Board. One of the things I'd like to emphasize is that one of the Achilles' heels of social media, that we can never seem to get over, is the startup ethos: if you're talking to a startup person, they're always concerned with scalability, and the founding of a lot of these social media platforms was based on this kind of hockey-stick growth and scalability. Speech is the antithesis of scalability. And I think now we're reaching this friction point where we see that speech takes a lot of time and energy and nuance. To go back to the strongly worded emails that I'm sure we all receive, the misconception the general public has is they say, oh my God, why did this get taken down? Why am I being penalized? Didn't you look at the three posts before and understand the context? Didn't you see the poster behind me and what it references? Didn't you do your research? Didn't you go to Wikipedia and read something? And you say, well, have you seen the stats on how quickly content moderation decisions are made? It's merely seconds. So how can a thoughtful decision be made in a framework that is based on scalability? And obviously AI has not proven itself there yet.
Yeah, we're going to go straight into AI, but I do want to say that David has an article called "What if tech companies were nonprofits?", so I could feel some of those themes here, and I recommend everyone read it. So let's talk about AI. AI has been top of mind this year, partially because there's a Supreme Court case about algorithmic recommendations and whether they're protected by Section 230 or not. There are three different panels today on Section 230, so we're going to try to talk about other things, but this is all related, obviously. When is it okay? What is the line? When can you use AI? Because obviously there are not enough people on the planet to moderate the content we have on current platforms, let alone on a platform that maybe pops up tomorrow. So where do you see the line? Where do you see the correct use and the incorrect use? Should there be a human moderator checking, whatever the case is? Just in general, how do you see AI operating within this? And obviously, it's already being deployed.
So, I mean, ActiveFence builds a lot of AI solutions, and just to frame it, we're talking specifically about AI for content moderation. As David said, it's extremely hard, especially considering context, for algorithms to make decisions on what's right or not. I do feel like AI is improving to a level where it's able to take into consideration a lot of the context you mentioned: did you see what I wrote in my previous post, did you see the picture behind me, do you see the org I'm a part of, do you see who's in my network? All these things are, more and more, being considered by AI. What we put a lot of emphasis on in the AI tools that we provide is explainability: showing why the AI decided what it did, because of something at minute two and minute ten that this person was saying or doing, or the gun that he was holding up, or whatever that is. We're trying to give moderators as much context as possible, so the moderation decision can still happen in a few seconds, but with a lot of this context and data presented to the moderator, so they can make the most informed decision, granted that many mistakes are still going to be made. Where do we see AI fail? Generally, we see it when there are novel current events or geopolitics involved, which is a lot of the area of misinformation. And we see it, probably more predominantly, when bad actors are involved. Bad actors, not the 15-year-old saying something racist in a chat group, but a neo-Nazi trying to recruit or a pedophile trying to groom, folks who are on these platforms in order to cause harm. They are sophisticated, they are keen on understanding how your AI and your algorithms are detecting, and they are constantly poking at the system to find out what they can get away with. Those are the cases where platforms can't just lean on AI and need to deploy other means of protecting their users, for example through investigations, and you see a lot of these departments forming in trust and safety teams. Just in the several years that I've been in the industry, I've seen the number of folks coming from three-letter agencies, folks with experience in counterintelligence, coming into the trust and safety industry grow, because the industry understands that you're dealing with an adversary. And when you have an adversary, AI alone is going to break.
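To make the pattern Tomer is describing concrete, here is a short, hypothetical Python sketch of a moderation pipeline that pairs a model score with the contextual signals it used and an explanation, and routes anything below a high-confidence threshold to a human reviewer rather than acting automatically. The class names, fields, and thresholds are illustrative only, not any vendor's actual API.

    from dataclasses import dataclass, field

    @dataclass
    class ModerationResult:
        score: float                                   # model's probability the item violates policy
        signals: dict = field(default_factory=dict)    # context the model considered
        explanation: str = ""                          # human-readable rationale shown to the moderator

    def route(result, auto_remove_at=0.98, send_to_human_at=0.60):
        """Decide what happens to a piece of content based on model confidence."""
        if result.score >= auto_remove_at:
            return "auto_remove"      # only the clearest cases are actioned automatically
        if result.score >= send_to_human_at:
            return "human_review"     # moderator sees score, signals, and explanation together
        return "no_action"

    example = ModerationResult(
        score=0.74,
        signals={"prior_posts_flagged": 2, "group": "known_extremist_forum"},
        explanation="Recruitment language similar to previously removed posts",
    )
    print(route(example))  # -> "human_review"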
Suzanne, do you want to talk about AI and content moderation? What are your thoughts?
You know, we certainly deal with a lot of automated content moderation that's very flawed, and we deal with a lot of human content moderation that's very flawed, and oftentimes the same piece of content will go through multiple layers of review with indeterminate results and mistakes along the way. I mean, it's incredibly buggy. I hope you're right that it is going to be honed over time. And I can imagine that, for detecting changes in a bodily organ, whether there are early signs of cancer, it's easy to understand why AI would ultimately be better at that; it can incorporate millions and millions of data points to see whether the most subtle changes might be indicative of anything, and I can sort of see some of the same here. But I also think it really calls for robust and immediate appeals processes, because error is inevitable. There are words that have multiple meanings, or words that have different meanings in different dialects, all of those subtleties we have to get into. We have to have interpreters, often several of them on the same piece of content, to explain how the meaning might be a little bit different for different communities. So how long it's going to take for these systems to integrate that level of sophistication is, I think, a big question, and there's obviously great scope for error in the meantime. And there's also the faith we place in AI, and I think the risk of over-deference to those systems.
Rebecca? Yeah, I think we need to be careful when we're talking about content moderation, and of course automated systems in relation to content moderation, not to talk about content moderation as if it's one model. You all are talking about content moderation done by large commercial platforms that are kind of general purpose; people go on them to share pictures of cats, organize a revolution, and anything in between. And there, people are paid to moderate the content, and the rules are set up by the company. Wikipedia and other Wikimedia projects are a very different model, where you have a very specific purpose for the platform that volunteers have come together to create, and they've set rules around what counts as reliable sources for well-sourced, neutral content, whether it's on COVID-19 or a historic event or whatever. And it's the community that's enforcing that. So it's not paid moderators; it's people who tend to actually have some expertise, or who are from or adjacent to the language community or cultural community that the content relates to, and people have debates. You were just talking about "didn't you see my other post": the community, the volunteer editors, are all pointing to the context, and what did they actually mean, and what's its purpose, and making decisions based on that context. You can do that when you have a platform that's for a more specific purpose, whose community has some general consensus about what the purpose of the platform is. The reason I'm going into all this detail is because I'm concerned that when policymakers start talking about legislation around automated content moderation, or content moderation in general, they're only going to be thinking about what you all are talking about. They won't be thinking at all about content moderation models such as Wikipedia's, and then policies will get made and laws will be passed that will be quite harmful to the communities that have set up platforms for public interest purposes. And just going to the machine learning and AI piece for a moment: people in the accountable AI space talk all the time about the need for humans in the loop. We talk about a machine in the loop, because the content moderation is always controlled by humans from start to end, but the moderators, the volunteers, lean on some machine learning tools to help them detect abusive behavior on particular pages, or what's known as sock puppetry, when people are creating multiple accounts. So there are some machine learning tools that are used to help Wikipedians maintain the integrity of the content, but it's always with the humans in control, and the models for these things are discussed with the community. It's open source code; it's very openly developed and shared. So I guess my point here is that we need to make sure, as we're developing public policy, as we're talking about what the trends are, to envision what we'd like to support. Let's say a community around a university wants to set up something very specifically for dealing with the environment in their community.
They want to set rules that are very related to their community's environmental issues, they want to have their own enforcement mechanisms, and they want to use some machine learning tools to do their content moderation. We should make sure we're not setting laws and regulations that actually make that harder.
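For a concrete sense of the machine-in-the-loop pattern Rebecca describes, here is a rough Python sketch against ORES, Wikimedia's openly documented edit-quality scoring service: the model estimates how likely an edit is to be damaging, and the tool only flags the edit for human patrollers, it never reverts anything itself. The endpoint and response shape follow ORES's public v3 API as documented, so treat the details as approximate.

    import requests

    ORES = "https://ores.wikimedia.org/v3/scores/enwiki/"

    def damaging_probability(rev_id):
        """Ask ORES how likely a given English-Wikipedia revision is to be damaging."""
        resp = requests.get(ORES, params={"models": "damaging", "revids": rev_id}, timeout=10)
        score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
        return score["probability"]["true"]

    def flag_for_patrollers(rev_id, threshold=0.8):
        """Humans stay in control: the model only decides whether to highlight the edit."""
        return damaging_probability(rev_id) >= threshold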
Absolutely. Time flies when you're having fun, so I'm going to do one more question before we go to audience questions. Rebecca, you mentioned governments interacting with platforms and sometimes jawboning them. There was actually a hearing, I think a few weeks ago, that was very entertaining; I highly recommend you watch it. If you want to hear which Chrissy Teigen tweets the Trump administration asked Twitter to take down, Google it, I can't say it on air. So what should be done? There's actually, I think, a bill right now that ties this to the Hatch Act and basically limits using your government power, just being a government official, to have this playing-the-refs kind of power over platforms and what they take down and what they keep up. What are your thoughts on this? Do you think maybe we can stop this? Maybe we should have a public database that says the French government asked us to take this meme of Macron down, or something like that. We'll start with you, David, and then we'll go this way.
Well, I think this is the reason why we're talking about a splinternet: how do you have these kinds of platforms that don't have a border, but are still confined by where they started from? And obviously we know a lot of the debate going on with different social media platforms and their origins. I think we're at a pivotal moment right now as a society of deciding: are we going to take a global approach with this, or is it going to be dramatically different from country to country in how they create restrictions around it? One of the things that I didn't hear in the earlier discussions around Section 230 that I attended was really this shift of power and what that's going to mean. If you really think about the last decade, power that traditionally was confined to government bodies is now going to social media platforms. But at the end of the day it wasn't a power grab; the platforms just wanted to sell you ads, right? I'm referring to some of the major social media platforms, not the ones with a more social purpose. So that itself leads to some of the complexity of how we even think about this. But the larger point is going to be: where do we take this? Is this something where government is now going to get more embedded within each of these major platforms, whether we're talking about Meta or TikTok or Twitter or Snap or Roblox, you name it? Or is it something where there's a clear separation between the two? I think that's going to be the struggle.
Yeah, that quasi-jurisdiction. Rebecca?
Well, you know, English Wikipedia is the same all over the world, right? It's not different in the UK and in the United States.
Even the spelling?
Yeah, it's one way. It doesn't change depending on what ISP you're using. But, you know, it's problematic. We're dealing right now with the Online Safety Bill in the United Kingdom, which wants us to age-gate. The current proposed wording would force all platforms, and it doesn't really differentiate between types of platforms, to age-verify users before they visit. And while for adults it's only requiring takedown of illegal content, for children there's this whole category of lawful-but-harmful content that platforms are supposed to be proactively removing. Basically, it would break our model. And it's not like we can do one thing for the global community of people who edit Wikipedia, English speakers all over the world; it just doesn't work, right? So these are the problems. But the other thing is, and we've encountered this with a lot of governments, democratic governments, who say, well, we're trying to solve this problem in our country, and it's a big issue in our country. We're sympathetic to many of the problems, but first of all, there's a question of whether what you want to do will even solve the problem in your country. Plus, if we were to comply, which we won't, it would force us to collect data on users that could then be discoverable in other jurisdictions, and in any country, you're one bad election away from some really awful things happening to people's data, pretty much anywhere. So I guess the point is that this is where I come back to holding governments accountable. We need accountability and transparency from any government that claims to be democratic and in the business of protecting its citizens' rights. We need transparency around what's being required, and we need clear accountability around that, and we're not seeing that. And when you get to things like network shutdowns, where governments are demanding that ISPs shut down the Internet across large swaths, entire provinces or entire cities, and India is the biggest culprit, the ISPs aren't even allowed to tell their users why the Internet was shut down, or even that it's being shut down. That kind of lack of transparency and lack of accountability with citizens, around why people's access to information is being restricted, that's even in democracies, and it's highly problematic.
Or developing democracies. I think at UNESCO in Paris a month ago, they were talking about how Turkey keeps designating journalists as terrorists. Suzanne, PEN America does a lot of great work trying to fight back on governments limiting speech, so I feel like this question about government and transparency, when government officials are trying to twist the arm of platforms, is a great one for you.
Yeah, look, I think it's a huge area of concern. We have some degree of transparency reporting about formal requests for takedowns; I think there's a whole layer of less formal interactions that happen with the big platforms that we know very little about. Part of it is governments pointing out content that violates the platform's own terms of service, but doing that at scale, so that content gets pulled down fast and furious, and preemptively, just to keep things on good terms, while critics and dissidents are not subject to the same kind of protection. So to me, when you talk about transparency, it's not just about the data, it's about how those relationships work, and about the people who worked in these companies who have things to tell us about the nature of that cooperation and collaboration. And it's complicated, because there are some areas where you think it may be necessary, on national security grounds, to deal with disinformation in this country; that has been in debate, whether there should be more interaction and even cooperation, and obviously there are dangers with that. I think everybody has come close to the brink of trying to facilitate that, and it does happen, but there's also an enormous amount of leeriness about how it can come back to bite politically. And one of the really interesting things has been how this perception of anti-conservative bias has fueled legislation. I mean, you're talking about what's happening in the UK, but what about Texas? If that law stands, will there be a Wikipedia, or for that matter a Facebook, in Texas? Because it'll be a very different set of rules, and I haven't heard any discussion about it at this conference. It seems like the one legislative proposal that's getting the most traction right now is to ban TikTok, and I know they're one of the sponsors of this conference, but that has gotten an enormous amount of momentum over the last couple of months. And I don't think the way we traditionally think about transparency actually shines a light into these particular corners and corridors. I think that's something we need to focus on, and it's something we're discussing as an Oversight Board.
I just want to see how many questions we have. We have about ten minutes left, so, oh my god, okay, I'm going to take them two at a time, and then my panelists can address whichever they want. Let's go: Berin, and you.
So, the Eleventh Circuit at first blush upheld Florida's transparency rules. We and others have asked the court to weigh in, because there's a real lack of clarity in the Zauderer line of case law about whether it applies outside the context of advertising, which is why this is controversial. So I'd like to hear the panel's thoughts on what you think the key constitutional concerns are. What's happening here is that social media platforms have bent over backwards on political speech and disinformation because they've been worried about the political consequences. It's not an accident that Republicans have embraced the discourse around transparency, because they believe the more insight you have into companies, the more the companies can be pressured over how they handle content. So transparency, while it seems separate from content moderation, is deeply intertwined with it, and the First Amendment risk is that transparency mandates, whether it's Florida's or some federal law, will often be weaponized against content judgments.
And the second question? We're just going to take them two at a time and cluster them. What was your question?
I wanted to go back to what Suzanne was talking about in terms of human rights, international human rights law being worked into the Oversight Board's rulings, and how that may spur additional legislation domestically and influence regulation around the world, given that bigger international view versus the domestic, national one. I just wanted to see if you had any examples of that.
So Rebecca, Suzanne, you want to take us away?
Well, the Digital Services Act in the European Union cites international human rights law, I believe, in a number of places. I'm not talking about US law, but in Europe, international human rights law and also European human rights standards are frequently cited and codified in a range of different ways.
There are some individual court cases that have cited Oversight Board opinions. I can't rattle them off, but it is something that we look at, and it has happened. So I think the juridical mind is being applied to these questions, and when other jurists look at them, they want to see where the board has come out on an issue. That's something we thought about in our nudity case, because that's an issue that has been the subject of many court decisions in different jurisdictions, so we were definitely thinking about the fact that we were weighing in in that arena. Right, so I saw some more questions there. Tomer, maybe?
Yeah, just one comment, because I can't really comment on what would be constitutional and what wouldn't under any kind of regulation that's passed. What I'd like to bring up, something I didn't hear too much in this panel and in all the 230 talks, is what's happening in the field when there is absolutely no requirement around what kind of trust and safety measures, means, and processes platforms have to have in place, and no transparency reporting on what those means are. What we're seeing now, in the past six to nine months of recession, is that these companies are thinking much more about cost efficiencies, and these departments are getting cut, slashed, if not gutted, by whoever's buying them or whoever owns them. So you can definitely point at a lot of adverse effects from any kind of regulation that would come in, whether transparency or procedural regulation, but what's happening in the field right now, I feel, should be a concern to every American citizen, every citizen around the world.
Suzanne? I was just going to say one thing about your point about transparency; I think it's very interesting, and it's something that we're seeing at PEN America in the offline arena, which is transparency laws, for example, about what's going on in schools, requiring teachers to post all of their syllabi and their curriculum and all their readings online. And it's like, oh, well, it's just transparency. But why is it being done? Of course, it's being done to empower people who may object, people who are self-appointed police for these new curriculum restriction laws. So I think it is something that we have to be attentive to, as a community that's accustomed to being all in favor of transparency.
Well, this goes to the question of what the purpose of transparency is. Is it an end in itself, or is it a means to an end? Is it in service of civil liberties and human rights, or is it in service of something else?
Maybe it can be dual use? Yeah.
Yep, exactly.
Let's get some more questions.
Yeah, hey, I'm with the National Science Foundation. I have a human rights question, actually cutting into Tomer's point. I was curious why these conversations don't include more discussion about content moderation as an industry that gets outsourced, and about the heavy responsibility that's involved, whether the larger commercial providers should be accountable for that, especially as this work gets outsourced further and further outside our country, and whether there should be international law or actual labor policies that cover what is a very exploitative process. So, protections for these workers?
That's a great question. And, you know, I think the most recent reporting was about OpenAI, so it's not even just the big tech companies.
Everyone uses BPOs, from small companies that have 50 moderators to giant platforms with 50,000.
I think this just goes back to scalability, right? The reason it's outsourced is to save money. But if we actually took the thoughtfulness, and, let's say, it was American-based, the expense starts adding up. You've seen that over the last few years: every time there was a controversy at Facebook, they would say, hey, we're going to hire more moderators, we're going to hire more moderators. But just start doing the math, think about how many moderators we would need, then take out a calculator; it's going to add up to a lot of money. We've seen the tech downturn recently, so no company is immune. So I think that's a larger issue. I think you're referring to Casey Newton's work, or before that Sarah T. Roberts, who's done a lot of great work in this commercial content moderation field. But no, I do think they have a responsibility. The larger point is that the social media founders didn't get into this to get into content moderation, but they found themselves knee-deep in complex speech issues, where everyone hates them, and it's coming from the left and it's coming from the right. And then, to go back to Berin's earlier point, you even have the complexity now where people are trying to work the refs, and that's its own issue. I think the larger question is that governmental bodies are going to have to decide: okay, are they separate, or is it something where government is going to have to become a lot more embedded into the system? Which is why you referred earlier to my article about whether social media platforms should be nonprofits, and to this larger concept that we're expecting the major platforms to act in the public interest. We know Wikipedia is in the public interest, but for the other ones, like TikTok and Snapchat and Twitter, we're expecting them now to act in the public interest, to care about the wellbeing of children, and we know we're in a mental health crisis. But how do you do that with a private business that's trying to maximize profits for shareholders? That's a natural tension, as we've seen.
Well, the thing is, the global aspect of it also means you need local moderators who understand the local language and the region's issues, so there's a balance there. We are running out of time, and we have a hard stop, so I'm going to go from Suzanne down and give each of my wonderful panelists 30 seconds for final thoughts. Plug your Twitter, your Mastodon, your company. Suzanne, let's go.
I'll just plug my article in The Wall Street Journal from a few weeks ago, where I said there's no quick fix for social media. And I think that's the bottom line; these things are really messy. I would just say, stay in touch with the board, because we do have certain abilities that are quite unique, and if there's a way we can link up to your work, let us know. We would love to have that dialogue. Thank you.
Yeah, well, I'll just say, look, let's think about the kind of Internet we want to have, beyond just punishing the bad things we don't like, and let's get away from thinking we can fix the Internet. Saying we can fix the Internet is like saying we can fix crime in New York City. Maybe you could, but you'd become Pyongyang, and there's a reason we don't want New York City to be Pyongyang. If you want to have rules, and enforcement of rules, that are appropriate and that respect the rights of the communities they're in, it's like policing debates: they never end, and they will never end as long as humans are human, and let's hope we remain human. Similarly, we're not going to solve content moderation as long as humans are human. But there are models that, for some communities, seem to be working better than others. So let's try to empower and protect the things that are working, that are creating good in the world, that are valued. And we're going to continue fighting till the end of time.
All right. In my life, I get to deal with a lot of the worst of humanity, but also a lot of the best of humanity. Something to point out is that you might strongly disagree with people on this stage, and you should, because the future of technology is intertwined with the future of democracy and the human condition. That is a big deal that is worth fighting and disagreeing about. But that's where we come together: even though we disagree on the surface, underneath it we know that the status quo is not sustainable, and in order to co-create a tech future that is aligned with our interests, we need to come together in some manner. So you can do that with All Tech Is Human; we've got a Slack group and lots of ways to get involved.
Thank you. Yeah, I definitely agree with a lot of things that have been said on this panel. I do think that we're always going to be fighting and that it's impossible to fix, but that doesn't mean we shouldn't try. Section 230 has been kind of static for the 26 years since it was passed. So doing things, experimenting, even breaking things, like was said in the previous 230 panels, and understanding what we broke and trying to fix it, I think that's the path, the roadmap, to creating a better Internet, perhaps not fixing it. That's what I would like to see. And again, I think it's worth spotlighting that the decision of inaction is also dangerous, because what we're seeing today is that there's no protection around any of these departments; companies could cut all of trust and safety tomorrow and not face any accountability. So, okay, thank you, everyone.
Thank you for coming. Join me in thanking the panel.