SOTN2022 12 What's Next for Online Content Moderation?
11:07AM Mar 3, 2022
Speakers:
Rebecca Kern
Will Duffield
Rebecca MacKinnon
Julie Owono
Keywords:
platforms
moderation
content
companies
oversight board
people
ukraine
rules
governments
harm
war
rebecca
transparency
state
community
russia
thinking
human rights
internet
debate
Thank you, guys, for joining a very timely session, I would say, on What's Next for Content Moderation Online. I'm Rebecca Kern. I'm a reporter with Politico. And we have a really distinguished panel with us today: Will Duffield, a policy analyst at the Cato Institute; Adam Kovacevich, founder and CEO of Chamber of Progress; Rebecca MacKinnon, Vice President of Global Policy at the Wikimedia Foundation; and last but not least, Julie Owono, a member of the Meta Oversight Board and Executive Director of the Content Policy and Society Lab at Stanford.
So we're going to start with the news of the day, which is developing by the minute: the handling of Ukraine and Russia, and the massive increase we've seen on the disinformation front from Russian state media and their accounts on a lot of major US tech platforms. Just over the weekend, we've seen a really interesting, varied approach by various social media companies. We've now seen YouTube and Facebook announce they're not going to allow monetization on RT's state-run media accounts on their platforms. That was one development; there are several more. I just want to open it up to the panel: how are you observing these US companies' approaches? Do you feel like they're more on the defensive, or do you feel like they're following a playbook here? It's obviously happening very quickly, so I just wanted to get your observations. We can go down the line, and hopefully it can be more of a conversation.
All right, well, to start, nearly all platforms have found themselves with new content moderation problems as a result of this war. I've seen businessmen organizing international brigades on LinkedIn, teenagers creating tank-driving tutorials on TikTok; these aren't the sorts of things these platforms usually deal with in their moderation. But already, there are a few broad lessons for content moderators that seem to be emerging from this conflict. Firstly, making harm prevention the objective of moderation just breaks down during wartime. In a just war, one side, whoever you're supporting, is attempting to do harm as part of that conflict; it's inherent. So if you as a platform set a policy that aims to prevent harm, full stop, it will render perverse outcomes in wartime. There's a good example of this with Patreon over the weekend: the Ukrainian government had created a Patreon fundraiser to support the purchase of weapons systems. Now, Patreon prohibits fundraisers for activities that could lead to harm, and pulled this fundraiser. Of course, this struck a lot of observers as wrongheaded or illegitimate in a context in which the purchase of these weapons was understood to be necessary to prevent harm to Ukrainians and the destruction of the Ukrainian state. Secondly, I think this sort of wartime exigency has really scrambled the traditional justifications for counter-misinfo. We've heard a little bit about platform efforts to counter Russian information operations and propaganda. We've heard very little about Ukrainian propaganda and information operations, and both sides are very much spreading fake news in an attempt to bolster their countrymen, run down the morale of the opposition, that sort of thing. In war, misinformation-borne myth can be a valuable resource. And so platforms don't really have a good way of dealing with that, other than taking sides in this conflict, given their current counter-misinfo policies, which treat falsity itself as the harm rather than looking downstream to consider which particular harms we're trying to prevent. And finally, I think the failure of Russian misinfo attempts, or just Russian efforts to shape the narrative around this war, should cause us to look back skeptically at 2016 and some of the understood effects of Russian propaganda campaigns in this country. It seems in retrospect as though that relied much more on their claims, and the divisions that they were poking at, being live, low-hanging fruit than on the actual efficacy of the propaganda. It worked then because people wanted to believe it. It hasn't worked now because people don't; it isn't mind control.
Great. So I want to say a couple things to Rebecca's question about the playbooks around content moderation. Companies' trust and safety operations, platforms' moderation operations, tend to be, in most cases, a very rule-bound process, which is to say that if you're doing content moderation at scale, you need to give moderators a real sense of what's allowed and what's disallowed, so that they can make decisions at scale. Everyone who's involved in that profession, which is a growing field, is really trying to do that. One of the inherent challenges, however, and we'll touch on this, is the question of novel issues and novel questions that come up. I think, as Will has explained, when it comes to war, that's different from most of the kinds of policy questions that trust and safety teams have had to deal with, which is why, frankly, I think if we were to peek inside the headquarters of most of the companies today, they are probably all having essentially running conversations and debates about new questions and new things that come up.
When I was at Google, where I was for about a dozen years, on the content moderation side, the trust and safety team adopted kind of a spectrum approach, which is to say that on certain types of services, they would err on the side of leaving information up, for example in the organic search results. But when it came to monetization, the company might be quicker to demonetize certain information. I think what you see here is that you're going to continue to see, frankly, this debate about how platforms handle different speech playing out in real time. Senator Warner encouraged platforms to demonetize advertising from Russian content, which was probably not actually bringing in that much revenue, but that was a relatively easy decision, I think, for many of them to make. On the other hand, you have the Ukrainian government encouraging the platforms to essentially remove access to all of their services within Russia, on the theory that that would help contain the spread of propaganda within Russia. I don't see the platforms being very eager to do that, for the reason that banning propaganda also risks banning good information. So you're going to continue to see these things, and frankly, if you're an outside voice, or you're a government or regulator, you're pretty incentivized to make these suggestions publicly, because, having been in the companies, they respond to and deal with those proposals and requests as they come in publicly.
Okay. Well, I think we have to be careful not to think of platforms as one monolithic thing. Different platforms exist for different purposes and have different business models. Some exist to monetize traffic and content. And some, such as Wikipedia and the other platforms supported by the Wikimedia Foundation, are nonprofit, and the sole purpose of these platforms is to enable people to share knowledge. And so with that, those of you who have your laptops up or phones with browsers easily at hand might want to check out the 2022 Russian invasion of Ukraine Wikipedia page; just go ahead and search for that. That page was created on Wikipedia on February 23. Over 700 editors have worked on it since then. And the reason why I'm bringing this up is that this is an example of how community-driven content moderation works on a platform whose purpose is knowledge. The community has come up with rules about what constitutes knowledge, what kinds of sources are considered credible and not, and enforces those rules. And so you can see, completely transparently on that page, the tab where you can view every single edit that was made, and then there's a talk page where people are debating what's allowed and what's not allowed, and there's also a whole bunch of information about the rules under which this particular page is being governed. Because it's dealing with very sensitive content, it's getting a lot of attention. In fact, over 580 editors as of this morning were watching the page, keeping a lookout for efforts to vandalize it, disinformation, etc. There are pages on this topic in over 90 languages, and millions of people -- over seven and a half million views -- have already come to this article alone. And so this is a community that's dedicated to, again, compiling -- they're not reporting from the ground, people have to cite sources that are verified elsewhere -- but the purpose is to ensure that the public has access to facts about this particular situation. So it's a very different purpose.
The point of all this, since we're at State of the Net and there's a lot of discussion of regulation, legislation, and how we as policymakers should think about platforms, is: don't think about platforms as monoliths. Think about the purpose that a platform is serving, think about what kind of business model it has, because they don't all have the same business models; some of us don't really have a business model, we're just trying to serve the public interest. And make sure that you're protecting the right of communities to form online in pursuit of the public interest, in service of the public interest, and to set rules and enforce them without getting sued or prosecuted into oblivion.
As we're thinking about what types of regulation need to be put in place to deal with the ills that we all recognize online, we really need to keep this crisis in mind. It's not just about who we need to punish; it's about what kind of Internet we want to build, and how we ensure that open and free societies are protected and supported through our Internet regulations globally. Thanks.
Yes. So I completely agree with what you just said, Rebecca, and I would say that, in general, what strikes me, particularly when it comes to online content moderation, is this kind of disproportion in preparation. Or rather, to put it another way: serious challenges are going to exist -- that should be a principle for any company, no matter your size, no matter your business model; wherever humans are involved, there will be crises. And we are seeing a disproportion between when the consequences happen in the United States or in the European Union and when they happen outside that space. I think Ukraine is an example: we all saw it coming, we all read the news the past three, four months. It was obvious that there was going to be a conflict of some sort. But yet, here we are, again, in a situation where emergency measures are being announced after the fact. In comparison, when the George Floyd verdict was to be announced last year, there was more communication from many, many platforms on what measures they had in place to make sure that the necessary public debate would happen without necessarily entailing harms for some fragile communities. We're not seeing that even in the case of Ukraine, despite the fact that, again, we saw it coming.
So what I'm trying to do personally, in the work that I'm doing at the Content Policy and Society Lab at Stanford, but also at the Meta Oversight Board, is to say: the crises are going to come, and they're probably not going to come where you think they will. The novel crises, which you alluded to, Adam, are in some places of the world not that novel; they're the norm. It would be interesting to see how we can learn from those crises in order to be better prepared when they unfold their effects the way we're seeing, for instance, in a conflict like Ukraine. When we're talking about the Russian invasion of Ukraine, we're not just talking about two countries. I can tell you right now that Russian disinformation is extremely well organized in French, and French is spoken not only in France but by hundreds of millions of speakers around the world. These are people who are going to be influenced into thinking that there was one aggressor, in this case the United States and NATO, and a victim, which is Vladimir Putin, the big tsar whose mother was supposedly saved from, I don't know, some facility during the Second World War -- and it's been verified on Wikipedia that that's pure disinformation. But yet, I've seen it so many times on my timeline in French, and not being challenged. So I guess the message I would like to convey is: better preparation means looking beyond the direct environment where we are, and also listening more to signals in places where we think, yes, it's going to cost us too much to invest in that. But yet, it can avert more serious crises in the future.
I have to agree with Julie about these problems pre-existing the current crisis, and in many cases just happening in parts of the world that we didn't seem to care about as much. In particular, many platforms -- I'll call Facebook out here, because I'm thinking of them -- have been very state-centric in their governance in the past, and when non-state armed groups come up against governments, there's usually a presumption in favor of the state, regardless of the aims of either party. We saw this rather viciously in Syria when Turkey joined the fight against the Kurds and they came into conflict, and Facebook ended up removing certain YPG pages. Now, God forbid this happens, we pray it won't, but Kiev may fall, and the Ukrainian Armed Forces may look more like a non-state armed group a few months from now. I would hope that platforms are now looking back to the problems the Kurds faced in Syria and vis-a-vis Turkey, and thinking hard about how to draw lines that provide for continuing support of Ukrainian resistance, even if their government is decapitated.
I have another example.
Fine. Yeah.
We all remember, of course, that some parties in the United States contested the election online. There are so many examples around the world -- I can cite at least two where that's happened, and for more valid reasons (that's a fair personal opinion), with really rigged elections -- where Facebook is being used to push that message. So, again, a missed opportunity to train, and to think through what happens if that happens here, or in France, or anywhere else.
I think we're seeing that you kind of can't remain neutral right now. These platforms are put in a position where you either go and take down the directly harmful misinformation, or you don't. And then you have members of the Ukrainian government, and we saw a letter from four Balkan state leaders, calling for direct, immediate action from the top tech companies to remove RT and disinformation. So I'm wondering: how do they handle the free speech aspect while also not serving an authoritarian dictator by supporting the spread of that disinformation? That's a fine line to walk. But like you've all been saying, this isn't the first time these platforms have had to struggle with this, and it seems that it's hard for them to learn some lessons. I don't know, I'm just curious about your views.
To start, we've been thinking a lot about this at the Oversight Board. One lesson that I think can be learned is that focusing too much on the content is not necessarily the way to arrive at answers everyone will agree on; rather, focus on processes. And one of these processes is being more transparent, first of all, about what you're asking of your users -- well, not even being transparent, just having the rules published. It seems so obvious, but I was surprised, shocked even, to realize that a big platform used internationally, like Facebook, was not translating its community standards. These are the rules that you have to respect when you sign up on Facebook. They were not translated into Punjabi. Punjabi is a language spoken by 100 million people around the world, and many of them are located in a country that's so important, which is India. So that's the first thing: telling people your rules.
Secondly, being transparent with regards to your content moderation processes, and having in place the necessary safeguards to avoid arbitrariness. The decision should not rely on just one person, or just one way of thinking, but rather should be the result of principles, the result of a transparent process. There should also be some form of accountability. That type of conversation seems more interesting to me than debating the content, because we will always disagree on the content.
Yeah, if I could just add to what Julie says -- and I've known Julie for a very long time -- one of the things that I really appreciate about the Facebook Oversight Board is that the rulings have really been drawing on international human rights standards. In the opinions on what Facebook should be doing differently than it's doing, in reading the opinions that you all have issued, you're drawing not on the First Amendment, which of course doesn't apply to any users outside the United States, but rather on international human rights standards, and Article 19 of the Universal Declaration of Human Rights on freedom of expression. As with all rights -- basically the way human rights doctrine works -- free speech can be restricted in order to protect other rights, for example privacy, the right to life, non-discrimination, etc., but any restriction has to be necessary and proportionate, and there has to be due process, transparency, and a logical approach to enforcement. I think it would be very helpful if all of us as platforms made more of this. The Wikimedia Foundation has a human rights policy; I know Facebook does, a few other platforms do, not all of them do. And not all of us really use them as well as we could to really consider -- yes, you're balancing a lot of issues, but there's a way to try and do it more openly and fairly.
I was going to say, I think all this speaks to the idea that content moderation is almost a discipline in transition, which is to say that most companies, most of the platforms, certainly started with a default of kind of Western liberal tolerance towards most views. But the fact is, we've seen many challenges to that norm. And I think what you're seeing is different companies and different platforms taking different approaches to developing what comes next -- what, if not replaces, at least builds on the liberal democratic norm of allowing most speech. I think Rebecca's point about the Oversight Board rooting its jurisprudence in human rights is one. Rebecca also talked earlier about the Wikipedia community rooting its edits in a set of rules about citations -- what counts as a citation, what counts as a source. That's its own set of values, which is another important experiment. You see Twitter, for example, exploring things like Bluesky and decentralized content moderation, but at the same time experimenting with things like what I would call speed bumps: are you sure you want to retweet this before reading the article? And I think the Oversight Board is a really interesting example, because some people criticize it for not having the force of law, but the fact is, it's grappling with these issues, and I think that's the most significant thing about it. What you really want is all the services grappling with what comes after Western liberal openness, which is not always getting the job done or yielding the results that people want. And I think it's great that you see all these different approaches to it. I don't know which one is going to win the day.
If I could just add on to that, or kind of qualify it a little bit: human rights are universal. There are authoritarians who will try and claim that human rights are a Western concept, but actually there are human rights defenders in every single country on Earth. Some of them are more free to self-identify that way than others, but these are really universal values, and very much necessary for open and free societies. So I don't think it's a progression away from open and free societies so much as perhaps a decolonization of it, to use a term, or more of a diversification of freedom and openness.
I just want to return to the idea of neutrality and whether that can be some kind of goal to aim for, if one must pick a side. I see neutrality here as existing on a kind of continuum. There's a difference between removing Facebook's services in Russia, removing RT entirely, or demonetizing it. And this may feel like a pretty clear-cut war in terms of who's right and who's wrong, but there will be other, much messier conflicts in the future; there are messier conflicts going on in other parts of the world right now. And so I think, rather than expecting platforms to pick and choose a side each time, I'd like to see goals for moderation during wartime that are simply cognizant of the fact that these are states engaged in war -- that military recruitment will more directly lead to harm, that men will pick up arms and use them to kill other men, but that that is, again, expected during wartime. Instead, the goals of moderation should fall back to things like preventing harm to civilians, or punishing violations of the rules of war, rather than laying out harm as an unrealistic thing to be prevented.
Yes, there's just one idea that I forgot to develop on your question, about what we would want to see with regards to moderation and focusing on content. The other thing that's important to mention is that there has been a criticism of the way content moderation is done now: that it does not take into account the context, the linguistic diversity, and so much other very granular information that is necessary to make an informed decision. I've been thinking about this a lot, and one of the things that I'm exploring right now is how the lessons that we've learned from multi-stakeholder approaches in other sectors of activity could help bring some responses to that issue. I'll give more specific examples. I know personally -- and when I say personally, it's not me merely using Facebook, it's me as a person who's been in this field -- that many companies do have engagement departments and people working to engage with more communities around the world. But I hear a lot of frustration being expressed by those communities, who say: we spend lots of time engaging with these companies, we spend resources even to be at those convenings, but yet we don't know what happens after the engagement. I think this is very problematic. If I spend my time, I want to make sure that my time is being valued by a company that at least explains: well, we didn't take this into account for these reasons, but your feedback matters and it's being circulated internally at the company level. That's one thing. The other thing that I think would be important here is, when we talk about content moderation -- and I'm so glad, Adam, that you referred to trust and safety -- until I came to the US, I didn't know what trust and safety was. This is a problem, because no matter what activism I do, it's never going to be efficient if I only talk to policy people and tell them, oh, I'm seeing this, this is not okay. Obviously they cannot find a response that's satisfying to me, because they're not the ones who have to deal with the content on a daily basis and who have the means to action the content. Had I understood this, I probably would have done my activism differently, and there are certain things that the companies I interacted with would have understood better if I could speak to them in the language that they understand. So this comes to my second point in terms of what can be done, more on the part of the structures, organizations, and companies, whether they have business models or not: being more transparent also about who you are organizationally. Who are we talking to when we're talking to policy? What should we talk about when we talk to policy people? What should we talk about when we talk to product people, design, responsible tech, whatever other teams are out there? I had to be on the board to find that out, and I don't think that's normal, if we want to be efficient.
And there was a panel earlier today that discussed the concept of the Future of the Internet alliance that the White House is continuing to work on. It sounds like it's coming in a few weeks; we'll see. But it seems like a good time to have an international approach and agreement on what a free Internet is and how we should approach that. I'm just curious: is war a reason to finally get agreement on this? Or have we divided the blocs enough that it's going to be harder to form that future Internet alliance? It does seem like a timely thing to have.
I guess I just struggle to see how America and Europe, given their distinct legal traditions regarding speech, can ever completely see eye to eye as far as the responsibilities of platforms. In the United States, platforms will never have the responsibilities to remove content that might be imposed on them in Europe. And I think that will always remain a stumbling block, so long as we keep our separate, superior speech norms.
I mean, I don't want to re-litigate a panel that already happened that I wasn't here for. But certainly, the US and a number of European countries are all part of the Freedom Online Coalition, which is a group of governments who've committed to promoting a free and open Internet. If we can't see eye to eye on the need for an open, interoperable Internet, we do have a problem, and we won't be able to help civil society in places like Ukraine or anywhere else. And there are other common things that I think this moment is really calling on democracies and open societies to stand up for: for example, committing not to use cyberattacks against each other's citizens, or not to engage in the kinds of behavior that a responsible state does not engage in -- if we want to, again, protect and respect civil society around the world, not only in existing democracies, but in countries where civil society is under threat and where governments are trying to squash it. How do we ensure that democratic countries are fully supporting, both technically and with policy, the ability for civic space to grow everywhere?
I think it's really time to have such alliances, whatever they look like, because people are genuinely lost. When I say people, I actually mean governments, even in the EU, despite the Digital Services Act, which is going to impose new regulation and compliance obligations on platforms, content platforms specifically. I think a lot of people are lost on what it looks like, in the 21st century, to fight the harms while preserving the benefits of the Internet. And I hope to see more of this conversation happening in the future, especially at a time when so many countries around the world are contemplating adopting regulation. At my lab we organize multi-stakeholder workshops where we have companies, civil society organizations, academia as well, and diplomats -- something we did very recently was put diplomats in conversation with tech companies and academics. What the companies told us is that we're going to end up in a world where in this part of the world we're being told to do this, and in that other one we're told to do that, and so on. Ultimately, we're going to have to accept that the Internet doesn't exist anymore if we go that way. So I think that speaks to the urgency of having a road map, or at least principles, of what it is to fight the harms while preserving the benefits.
I was going to say, we're long past the sort of techno-optimism where everyone has the same Internet; we have splinternets. The analyst Ben Thompson calls it the four Internets: the US Internet model, the European Internet regulation model, the Chinese model, and what he calls the Indian model -- and I guess you'd probably say the Russians have now adopted the Chinese model. But I don't think it's hopeless. And I think war may force Americans and Europeans to realize that our approaches are not radically different on all things. Obviously there are some important differences, but differences that may be more meaningful in peacetime than in wartime with respect to global Internet governance.
Right. Yeah. And where we're seeing the action in terms of regulation, at least right now, is the DSA in Europe. There does not seem to be bipartisan agreement on how or whether to reform Section 230 in the US, in our Congress. There's been one bill advanced out of committee so far, the EARN IT Act, with no timeline, if at all, to go to the floor, and there's been a lot of pushback from very diverse groups, from the media to the ACLU to tech companies, raising a lot of concerns about encryption and privacy. So that's the main example I'm seeing of even a very narrow approach to 230 reform in the US, and it's created a very unique group of bedfellows in opposition, making it difficult to pass, it seems, in my view. So just your views: is 230 reform, even in a very narrow approach, even possible in the US Congress now? Or are we just going to be seeing these same companies operating under different rules in different parts of the world?
Well, for Wikipedia, Section 230 is kind of existential. Without Section 230, our community wouldn't be able to actually set rules and enforce them about what actually is knowledge without getting sued out of oblivion every time somebody doesn't like their Wikipedia page. So it is absolutely existential. And it's not to say that there is no possible way that 230 could ever be reformed that wouldn't be harmful, but getting it right is going to be very, very difficult -- doing it in a way that doesn't inflict collateral damage on communities, even putting aside commercial platforms, on communities' right to participate in projects that serve the public interest. And there are so many other things that can be done, regulatory approaches that can be taken, that don't involve Section 230 and that have not been tried, for some reason: privacy law, for example, or more transparency. There's a whole list -- everybody's heard them -- of other ways we can address the reasons why disinformation and harmful content get amplified and targeted and end up being weaponized. So why don't we deal with the measures that are less likely to hurt civic space, less likely to hurt open societies, less likely to hurt people in Ukraine who are trying to use our platforms to share knowledge, rather than go down this path where we're just not sure? There could well be some tweaks, but you really have to do your due diligence around them, and that due diligence has just not been done.
I think whether it's tinkering with 230 or some other vehicle, any kind of speech-affecting legislation just breaks over the rock of partisans wanting different things out of content moderation right now. This is a broad brush, but the right seems to want platforms to be required to host things that they don't currently host, and the left would like to see them required to remove things that they currently don't remove. And to the extent that any piece of legislation is seen as an advantage toward one of those goals, some folks on the other side come up and have an issue with it. Even when you look to bills ostensibly in other spaces -- the antitrust Open App Markets Act -- to the extent that the right thinks that some of the app store anti-competitive-behavior provisions would prevent Parler or Gab from being kicked off of app stores, some in the center-left see that as a potential problem with the bill. So outside of a legislative vehicle that totally kicks the can down the road, like a really broad duty of care to be interpreted by executive agencies or judges, I don't see any substantive way forward right now.
I think also, we've been debating some of these big, meaty, philosophical questions at the heart of global content moderation, and it makes the Section 230 debate look like such small potatoes. I mean, let's be real: Section 230 and its European analog, the e-Commerce Directive, strike an amazing balance that both allows for user-generated content and encourages and incentivizes platforms to do content moderation. The Digital Services Act is really interesting, because a lot of what it does is put in place transparency and due process requirements for platforms around content moderation. And I would be willing to bet that when this is all said and done, most people's problem with content moderation isn't actually transparency or due process; it's the decision itself -- they just don't like the decision that's made. And that's fine. More transparency can be useful, but it's nibbling around the edges. And frankly, this kind of inspires me to want more opportunities for this kind of deep philosophical question about what content moderation rules ought to be on platforms. Should it be a human rights standard? Should it be something else? That's extremely useful, and I think beneficial for users, for society, and for platforms themselves.
If I could -- I completely agree with you, Adam. And I think that those who should have that conversation are the legislators themselves. I mean, I would love to see that. I don't know.
You're more optimistic than I am. But I do think that you earlier made exactly the right point, which is that the companies are not yet fully set up for having these kinds of debates about something before it happens. The Oversight Board is encouraging that conversation, encouraging more thoughtfulness around it, but by and large, most hard content moderation lurches from crisis to crisis. And that's not good either. But you can also imagine a future where that's not necessarily the case. Look at the national security arena: you always hear about national security types doing wargame exercises. Why shouldn't content moderation have wargame exercises? I mean, seriously, where we really debate -- you can look at what's happening in Ukraine: what would happen if Facebook blocked all of its services in Russia? What would be the likely effect, not just on Facebook's services, but on global politics? I don't know that there's much opportunity or venue for that kind of conversation and gaming, and I don't think politicians are the ones who are going to have it -- sorry -- but I do think it needs to happen.
Yeah, well, thank you, Rebecca, for this open brainstorm. I'm asking this because, first of all, we should acknowledge that there is a lot of misunderstanding, or probably a lack of understanding, of what Section 230 is in the first place outside of the United States. It took me three years -- and thank God for the Berkman Klein Center for Internet and Society at Harvard, where I was first a fellow, for educating me. So even before discussing all these principles, we should agree on what we're talking about. And I'm sure there are many European lawmakers who don't know that much about Section 230, who read about it only in newspapers or from scandals. And that is a problem in itself, because there is a philosophy behind Section 230; there's a history.
Well, some of them know intermediary liability. A lot of governments impose strong liability on platforms for what their users do -- China being perhaps the strongest, India placing rather strong liability as well, which is one of the reasons why takedowns are so frequent in India. So there are a lot of governments that are actually tightening liability on platforms around the world, which in itself is a troubling trend, because in countries where liability is strong, you see over-censorship, you see platforms erring on the side of taking stuff down, and you also see abuse of the law by authorities. That is the future when you increase liability, unless you can come up with a way that that won't be the future and really game it through. But I've not seen anything remotely resembling that.
I think we can open it up to audience Q&A, if anyone has a question.
Thank you. Adam and Will have been the descriptive element here, and Julie and Rebecca have been the normative, what should be. You guys are focusing on the practicality of exigent circumstances and the trade-offs that are made, and you'd like more transparency and process, and you'd like us to use international human rights law. But everything is designed in the context of an example. So I would just ask Rebecca and Julie: looking at the example of Senator Warner saying take down Russia Today, or Zelensky saying take down everything that has anything to do with Russia -- in those two examples of content moderation, what do you think they should do in terms of processes, transparency, and human rights? Thank you.
So, what should the companies do? The first thing, when we talk about transparency: remember, three or four years ago, many platforms, most of them American, decided to label state-operated or state-funded media, and many of those decisions targeted Russia-affiliated organizations. And I remember that there was a lot of frustration, including in France, asking: why specifically the Russians? As if that rule was made just for the Russians. So I think explaining that it's not -- that it's based on objective criteria, for instance, how much money the media relies on from the government, or government subsidies, to operate -- these are objective criteria that could be explained to an audience. Then the decision to make the call of censoring (well, it's not really censorship, that could be debated, but at least limiting the reach) of these state-operated media, of a state that is now accused of violating international rules with regards to war and peace, could be better understood if that explanation work had been done by the platforms, or at least if the decision had been based on objective criteria. I'm not sure that such a thing has happened yet. We just know now that Russia Today is censored. But I bet you that many users, especially in two markets that I know well, which are the European market and the French market particularly, where Russia Today's TV channel is quite popular, to be very honest -- I bet you that many users there do not understand why that decision was made, even though it does rely on very objective criteria. That's one example.
Yeah, with Wikipedia, it's the community that decides which sources are acceptable in what contexts, and it often depends on the subject matter. So on COVID-19 pages, the editors who are involved with those pages have banned a lot of different sources as not credible in that context -- sources that wouldn't be banned on Facebook -- and they have very strict rules for what the sources are. So it's very contextual, and very driven by experts around a specific situation. I think you're fishing for: should Congress ban RT? Well, I don't think they can, not in this country, because of the First Amendment; it would be challenged, I would assume. But these situations are very contextual to the platform and to the problem you're trying to solve, which depends on the context in which that information is being shared. RT might be shared in the context of an ad-targeting kind of situation very differently from an online encyclopedia that may be sharing information about tourist sites somewhere and might cite something from RT. So it really depends, and it comes down to transparency, it comes down to having an accountable mechanism. I don't think you can just say, as a blanket matter, here's what all platforms should be doing in all circumstances related to RT or any other source.
If you want to make a final remark --
I mean, if we're in the space of talking about what platforms should do, I don't think that they should remove RT or pull their services entirely from Russia. I don't think either leads to a sustainable rule that can be applied to other situations. At the end of the day, if people can see RT but they can also see plenty of other things, I'm not too worried. Nobody worried about people watching Baghdad Bob back in 2003, or whatever. They can say what they want, and the reality on the ground will be what it is. What matters is people having access to that reality, more than whose propaganda gets pulled.
I know we have a few more questions, but I think we have to wrap it up. I want to thank all of our wonderful panelists for a wonderfully engaging conversation. Thank you.