at the BU School of Law and also a Yale ISP fellow, and I will hand it over to her to introduce our panelists.
Thank you. I'm excited to be here again. This is always an amazing conference; we had a great slate of panels yesterday, and today we have another great lineup, as I'm sure you've seen in the schedule. We have really experienced people with a lot of useful and interesting things to say about content moderation, free expression, and all of that. So I want to very quickly situate the panel a bit so the panelists can think about how to approach this conversation. We want this to be a really open conversation, so if you have any questions or want to bring something up, please feel free to ask using the Q&A function, and we'll try to get to your questions at the end of the panel. We're talking about a really interesting time for content moderation and online platforms. We saw a lot happen last year, with the election, the pandemic, and social movements around the world. And now we're seeing a bit of a tension between the values of free expression and some other, possibly conflicting values: things like concern for individual health and safety online, concern for election integrity, and so on and so forth. So with that, I would like to ask each of our panelists to briefly introduce yourselves and give maybe a one- to two-minute high-level overview of your thoughts on the situation: where are we in 2021 with free expression and content moderation on tech platforms? We don't have an order here; we're not sitting in a row today. So, whoever would like to start.
Glad to start, Tiffany. I'm Steve DelBianco with NetChoice, a trade association that tries to make the internet safe for free expression and free enterprise. As we look at 2021, the way you've teed it up, I think it's still true, more true than ever, that Americans love to share their own news and views without the traditional media filter; that we love to write and read reviews of restaurants, travel, and resorts on places like Yelp; that we love to look at product reviews on places like Amazon or Etsy. We love to buy and sell on platforms where a small craftsperson or a service provider can reach customers; Etsy and Thumbtack, for instance, provide that kind of benefit to all of us. And I think it's more true than ever that posting and contributing on places like GoFundMe and Kickstarter is really popular with Americans. It's also true that all of those platform activities are made possible by Section 230, since it shields platforms from being sued for things that others have said, or sued for taking down things they didn't think were appropriate. I would also add, for 2021, that this country still has the First Amendment, and the First Amendment makes it very difficult for government, particularly Congress, to step in and regulate the editorial content that happens on these platforms. So 2021 will mostly be about working the refs, working the refs on social media and content, mainly because it kind of works, and partly because, thanks to the First Amendment, that's pretty much all the government can do. Now, for those of you who enjoy it, it'll be fun to watch the executives of my industry squirming in congressional hearings. And I can tell you that from the standpoint of our industry, it is frightening to have those attacks on social media moderation from both sides: one side that wants more moderation and the other side that wants less. So sit back and enjoy the show for 2021.
Hey Tiffany, I'm happy to jump in. My name is Nora Benavidez; I'm the director of U.S. Free Expression Programs at PEN America, and it was so great hearing that introduction, albeit from a completely different sector. I don't know about you, but I often time how soon into a debate Section 230 gets introduced. It was about 30 seconds in, so we've ripped the band-aid off. At PEN America we're an organization of writers and allies promoting the freedom to write and freedom of expression. Over the last many years we grew concerned about the way words have been weaponized to sway public opinion, to sow doubt, division, and chaos. And over the last several years we have been investigating what is happening: the ways it has become harder for us to come together with shared knowledge, the very weird ecosystems that disinformation, online abuse, and toxic vitriol online have created, all while local journalism has declined and people often don't quite know where to turn for credible information. As Tiffany laid out in the beginning, it's really been a perfect storm of all of these compounding crises. This last year really seemed to illuminate for so many experts why there is urgency, how there is a very direct link between what we see in the infodemic and what we do to keep our bodies safe. I think the election then showed us that the kinds of misleading narratives and realities people have believed in for years have real-world effects on our democracy. And so we see our role as trying to make sense for people of what's at stake. We work on disinformation, online abuse, and anti-harassment techniques for journalists. I am a lawyer by training, and I run our First Amendment agenda and our national advocacy at PEN America.
And I think in the next year we're going to see, hopefully, a flashpoint where everyday people and experts can actually see that these issues have very real-world consequences, and where the platforms have accountability. I think we can talk a lot here today about removing, reducing, and informing users, and how interdisciplinary solutions are necessary ahead.
I'm happy to jump in, unless you want to start, Joan. Please. Okay, I'm Jamal Greene. I am a professor of law at Columbia Law School. Maybe most relevant to this panel, I'm also a co-chair of the Oversight Board, which people in this audience are probably somewhat familiar with. It's an independent body that was set up by Facebook to be the last appeals process for content decisions on Facebook and Instagram. We are a new institution: it just started last summer, we took our first cases a couple of months ago, and we expect to be announcing our first decisions literally within days. And as many of you are certainly aware, in recent days Facebook referred to the Oversight Board its decision to indefinitely suspend the account of Donald Trump, and so we're in the process of deliberating about that right now. To the question of 2021: in many ways it's certainly continuous with 2020, except that the stakes are maybe a little clearer to people than they may have been in the past. If you are someone who believes that speech can never harm anyone, then you got a wake-up call a few weeks ago. If you are someone who believes that speech should be readily regulated, you may not necessarily want the CEOs of social media platforms to be the ones who do that. So as we think about what the future looks like, certainly from my vantage point in my corner of this conversation, it's building new institutions: institutions that are not necessarily in bed with the tech companies but are also not government institutions, and trying to figure out how to structure those institutions. I think that is the future of content moderation. Whether it's the precise structure of the Oversight Board or something else remains to be seen, but I do think trying to create these kinds of intermediate institutions is going to be incredibly important.
So, thank you very much. My name is Joan Barata. I'm a fellow at the Center for Internet and Society under the Cyber Policy Center at Stanford University. I also work with many international organizations around the world on freedom of expression and media regulation issues in different countries and regions. I'll take a basically legal angle to this discussion. What I see for 2021 is that, alongside this discussion of content moderation, I believe we'll see lots of discussion of the regulation of content moderation: regulation by independent bodies, new independent self-regulatory bodies as was just mentioned, but also regulation via statutory laws and statutory provisions. Of course, what comes to mind in this context is Section 230, but today I would also like to talk a little bit about the Digital Services Act, because I think it's a very interesting initiative, not only from the European perspective, but also because I believe the debate during 2021 about the DSA, which will be really intense, will have a strong influence on American debates and on the way freedom of expression is reconsidered within the online world. This time, because of timing, because of political reasons, and so on, it seems that the European Union will be taking the lead, not only with regard to the United States; I believe the way this conversation takes place in Europe will also have a strong influence on the way new laws in this field are adopted in Southeast Asia, Latin America, and elsewhere. And we have already seen in the past that some laws and regulations adopted in Europe have been completely twisted in authoritarian environments to restrict freedom of expression.
So I think this European experience is something that has to be of concern to us beyond Europe, because this is a conversation that will be really influential beyond the EU's borders, and particularly, if I may say, in the United States. Thank you.
Great, thank you to all of our panelists. As I think everyone in the audience can see, we have a breadth of experience here on many different topics. And this is really crucial for freedom of expression in the US, in the private and public sectors, but also, as Joan was mentioning, internationally: freedom of expression globally is something that is currently, potentially at risk. And there are things that could be done, maybe by our tech sector, and possibly by our US policymakers as well. So I'd like to start with a very basic question. If we start from the standpoint of everyone valuing free expression, and we all sort of believe that that's a good thing to have and to support: what are the greatest threats right now? What threats do you see to this idea that we at least once had, that the internet could be a vector for free expression? What are some threats, whether market-based, government-based, or society-based, that you see coming now? I think it's important to understand these before we can try to figure out solutions. You can just raise your hand, either virtually or physically, and I think I can see from here.
Sure, Joan. Thank you very much, because this looks a little bit like a legal question. What I would say is that, from my point of view, the most important threat is this intention by governments to force platforms to do more, because this has been sort of the rationale, the music we have been hearing over the course of the last years: platforms should do more when it comes to dealing with terrorist content, elections, COVID, the dissemination of disinformation, or hate speech in, let's say, vulnerable environments, and so on and so forth. So I think that states may have the temptation to force platforms, via legislation, to police not only illegal content but also harmful content. In other words, states may be tempted to use platforms to take down content that they couldn't take down by themselves, so they would instrumentalize platforms, delegate to them this function of restricting speech beyond the boundaries of what would be acceptable in terms of international standards. So I think the single most important threat now is the change in the terms of the conversation when it comes to the regulation of social media, because the stress is put on risks, on the idea of risk mitigation, and the assessment of the impact on freedom of expression is very hard to find. You hear some general statements made by companies, 'don't worry, in any case we will respect international standards,' but we still don't know what that means exactly. So I see this as an important threat. We've seen it already in countries like the United Kingdom or Germany, for example, and I think we may also see it in other parts of the world.
Thank you. And Steve?
Thanks. Joan, when you bring up Europe as a comparison, there are two important differences. I'm not a lawyer, but I do understand that in Europe and the rest of the world, the English rule prevails for lawsuits: under the English rule, the loser pays, so aggressive lawsuits brought against platforms for things that were said or taken down don't happen nearly as much in the rest of the world. They happen here. And the second is that the US has a much stricter rule, through the First Amendment, than the freedom of expression principles embraced in other parts of the world. That creates a very particular environment in the US where lawsuit abuse becomes a key problem. The threat, since Tiffany asked us about threats, the key threat is lawsuits: lawsuits over content that was posted by someone else, or lawsuits over content that a platform decided to take down. And that is all under a microscope, because on January the sixth we all watched what happened at the US Capitol, and who among you wouldn't have seen that and been actually frightened? If you were at the controls of Twitter, YouTube, or Facebook, you would have cranked down the dial on how much speech you were going to allow the afternoon of January the sixth. And that's why I think we saw a rather significant reaction. Content moderation decisions are not made in the abstract; they're made in the context of real-world events, in the context of which side of our government is in power, the one that wants more moderation or the one that wants less. Those are the realities, and so is the competition: there are plenty of other social media platforms whose moderation practices are different, perhaps, than yours, and that might be attractive for audiences and advertisers looking to move over there.
So all of these taken together are threats, and I would love to compare more to Europe, but let's keep in mind those fundamental differences of lawsuit abuse, and the First Amendment.
It's hard to pick the most urgent threats because there are a variety of entry points, but there are two that I would put forward. One is the ongoing and, I think, perennial problem of the speed of proliferation of bad content. And I'm making no real distinction, as Joan did brilliantly, between harmful versus illegal content. The spread, and the speed of spread, is so fast that from a pragmatic perspective it is almost impossible to ever really get in front of the problem. Whether we're looking at automated or human solutions, whether in concert or with humans reviewing what AI does, it's really hard to ever get in front of something. And we know very little from the platforms, so the unknown aspects, you know, the impressions a certain piece of content makes on people: we don't have that from Facebook, and that's a problem. The speed is one where it's just really hard to tactically imagine how to implement solutions. So that's one. The other, though, is the potential threat in this moment and in the next few years of overcorrection: that we may in fact reach a point where we are eager to set precedent, to use a legal term, but for those who aren't lawyers, to think about how something like the Oversight Board or another mechanism that can govern would essentially create precedent that has really troubling consequences in the future, whether for other international leaders or others. In this moment so many people have called for it; we know experts, for example, are generally supportive of the ban of Trump by Facebook and Twitter. And yet, what will we see when overcorrection becomes something that actually hampers free expression, instead of really allowing us to think through what threat assessment models we could be implementing, what the very careful, nuanced frameworks are that we need to be asking questions about now?
So I'll jump in, both because all the others have spoken and because Nora mentioned the Oversight Board.
I think that was perfect.
Exactly. Generally speaking, I struggle, as Nora did, to think of what's the overriding threat; so much of this is all interconnected. But if I had to put my finger on a single underlying problem, I would say the efficiency of what I'll call information capitalism, which is this: data is valuable in all kinds of ways, and when you have something that's really valuable, people are going to want to commodify it, opportunists will take advantage of it. And I don't say that in a particularly pejorative way; I say it in the sense that there's a lot of stuff going around, and there's no way to manage it with clean-cut rules. You can only manage it contextually, and yet it's happening at a speed and data scale that makes that essentially impossible. As I think about how you address that kind of problem, I don't have all the answers, for sure. But I do think, to again focus on the Oversight Board at least, the thing the Oversight Board can try to deliver is some degree of trust. Which is to say: there are all kinds of reasons not to trust government to regulate content. There are all kinds of reasons not to trust private companies to regulate content. And what we're trying to do with the Oversight Board, and again, I want to be humble about this ambition, is to create institutions that don't have perverse incentives, knowing that these are always going to be hard questions: to try to construct institutions that will answer them as thoughtfully and as independently as possible.
Tiffany, can I do a quick follow-up? I completely agree with Nora's point that we want to avoid overcorrection. But I do want to be understanding about Jamal's point that we may have to experiment with different approaches; the Oversight Board, as an independent institution, is an experiment worth having. We always fear overcorrection. There are other experiments: the social media platform MeWe doesn't rely on ad revenue to provide its services; it charges $5 a month to be on the platform. So we're going to see experiments that may not rely as heavily on the monetization of data, and we'll see whether people want to pay for what they get for free today. And perhaps an oversight board allows a transparent discussion of an important question like, for political leaders, what are the cultural, political, and legal ramifications of giving them a voice or denying them a voice, and what degree of moderation is necessary? I'd love for us to be less focused on Trump the person and what he says, and more focused on what the rules should be going forward. Platforms that would adhere to the Facebook Oversight Board are going to want to understand what the future rules are when the leaders of a country use the internet as a soapbox.
Really interesting. That's definitely a lot of issues that you all raised. I'm really interested in this idea of context, which a number of you mentioned: this idea that content moderation has to be done in context. And that's been a core complaint, especially among users in the Global South, I've heard, because a lot of content moderation decisions aren't made in the places the content comes from; the content moderators aren't situated there, and the policies are being written in one country and implemented in another country. So if we think about the internet as a global community, and each of these platforms as having potentially global reach: how do we balance these ideals, the US-based First Amendment notion of free expression and the strength we have in our private sector here, versus some countries or regions that don't have the same values? How do we deal with those tensions, and is there a way to move forward that allows us to support free expression on a global level, and not just on a US level or an EU level?
I can try to get started on this, which is, of course, a very difficult challenge. I'll just note that we can think about this through a couple of different lenses. One is: there are lots of platforms, and lots of different balances that different platforms strike. At some level, if you don't want to be on a platform whose policies are being set in Silicon Valley, then that's got to be a choice platforms are able to make, about how global or how particular they want their policies to be. There is a point at which, and this is certainly relevant to the Oversight Board, global human rights standards become relevant. So if we're talking about what counts as incitement, or about hate speech, or about various kinds of misinformation or disinformation, Facebook and most of the platforms we're talking about here have committed themselves to being guided by international human rights principles. And those principles have for three decades now struggled with this question of how you adapt broad principles to local context. There are some mechanisms for doing that: in human rights law we sometimes refer to what's called the margin of appreciation; there are other ways of working where you give some deference to local practices, but not too much deference. I also think it's the case that when we talk about context, the same piece of content might be more harmful in one context than in another, or in one geographic location than another. A word that sounds perfectly benign to an American ear, translated into a different language or culture, might sound very different. So it's going to be important going forward to develop, and continue to develop, local expertise.
That's something a number of platforms are working towards, but they aren't there yet. The Oversight Board relies on local expertise in arriving at its own decisions, and that's something we will be continuing to try to get better at as well.
Yes, thank you. I just wanted to say, with regard to this idea of universality versus fragmentation: on the one hand it's true that platforms are global, that they have a global set of community standards inspired, let's say, by the First Amendment and liberal speech traditions. But on the other hand, we also need to understand and accept that platforms accept the jurisdiction of different states, including authoritarian states, because it's the only way for them to remain active in those states. Think about this very recent example of Turkey imposing certain obligations on platforms, otherwise they would have to leave, including appointing a legal representative in the country, and most of them have accepted. And of course, the way speech is regulated in places like Germany or Thailand, to mention two different examples, is based on the legal standards established at the national level. So platforms have accepted this, and for platforms it's easy to say, 'Well, sorry, we're respecting national legislation. What else can we do?' But respecting national legislation means respecting all kinds of legislation. The second thing I would say in this context, with regard to the Oversight Board, and this is something connected to the conception of the Oversight Board, is that unfortunately the Oversight Board only makes decisions on the basis of community standards; when an issue is connected to the way the law was interpreted and applied in a certain country, the Oversight Board can do nothing. That is a problem. Perhaps this will change in the future, but of course this is a limit to these kinds of bodies: they can only deal with issues at the level of community standards, not at another level.
And last but not least, when it comes to context, I would say platforms still behave in a very discriminatory way. They look a lot into context, into language, and so on in places where they have the resources and the interest to invest in content moderation, whereas in other parts of the world the same effort is not applied. That is obvious, and the fact that Donald Trump has been the first president to be banned, considering the very crazy things other presidents have been saying on social media over the course of the last years, proves this contrast in content moderation, depending on the area of the world.
Context, as Jamal and Joan have addressed it, might be different than the way that Nora and I would talk about it. It isn't just geographical and static cultural differences that create context; it's also the temporal context, what is happening now in a country. Are we two days before an election, an election that's going to change the context of what gets moderated? That's the kind of lens the Hunter Biden laptop story went through for moderation purposes. Another would be if we are in the middle of an insurrection, or a movement that's showing violence: then suddenly the potential to incite violence is dialed way up in that temporal context. And finally, if we're in the middle of distributing vaccines after so long, and we are trying to put needles in arms, then within that temporal context I think that misinformation, not even intentional disinformation but misinformation about vaccines, is certainly protected by the US First Amendment and perhaps global freedom of expression principles. But in the temporal context of vaccine distribution in a pandemic, I think social media companies also have to consider that as context in how they moderate it, and each of them may choose to do it differently. And it's not something that would be subject to lawsuits wherever misinformation spreads, and it's not something the government could crack down on, but it is something that social media platforms are going to try to be responsible for.
You know, I've been sitting here thinking about what thoughtful, useful fragmentation looks like, and what examples in practice of context there are. Humor is a great one, satire: what lands with, literally, a Steve may not land with me, or with Tiffany. And so I think it's really hard to have such a high-level conceptual conversation about context when we're not actually talking about how something will be applied, which is quite meta in a way. But, you know, I think American exceptionalism is baked into almost all of the first-tier platforms, from an ownership perspective, from the way we talk about them in the cacophony of media and the buzz around what platforms are doing. And I think it's actually been a really meaningful several years, I would say the last five to seven, where the platforms have begun to really examine what context looks like: that there may not be fact-checkers with language skills for a community that is literally, desperately in need of understanding what they're seeing. And so part of how PEN America has come to this issue, and the kind of really, I think, impossible question of context, is that we also have to have other types of solutions, and one of those is media literacy. We're never going to appropriately crack the context nut, so we need to have techniques and tools people can use to make sense of what they see online. It's a kind of empowerment, if you will, so that in the absence of mitigating mechanisms that the platforms implement, there are other things they can rely on. And I just think we're never going to reach the point, or at least not anytime soon, where we feel we have appropriately dealt with context. We can't even get there in the United States. So scaling that globally, and for tiny communities, you know, 140 people speaking one dialect: I think it's going to be really, I mean, insurmountable.
And so I think it's also incumbent on us to think of some of these issues as human issues: what are the human ways we need to help bootstrap up a kind of digital intelligence for people? That's part of why we work so much on training people to identify what they see, whether it is learning how to flag misleading content, or learning how to apply a set of techniques to being a more credible source themselves.
If I may, just a very short thing. Yesterday, I think it was yesterday, there was a very interesting article on Lawfare by Jacob Chang, I think that's his name, saying, look: judges deciding content cases like hate speech, depending on the country, take an average of 500 or 1,000 days, and sometimes we still tend to disagree with the way judges have considered context. Now let's translate this into the world of content moderation as it operates, and we will realize that it's absolutely impossible to pretend that a proper assessment, taking context into account, is made. On top of that, there's also a very interesting discussion raised by the Trump case: whether context for platforms means context within the platform, or context also taking into account what happens outside the platform. If it's the second, then things become far more complicated. And if I have to confess, one of the reasons why I'm looking forward to the decision of the Oversight Board in the Trump case is whether it will set the standard that, when analyzing content decisions, it is relevant to look only into what happened inside the platform, or whether it is also important to look outside the platform. In any case, the second option would be really challenging for platforms if applied to every single case.
Can I just jump in very briefly, without, unfortunately for Joan, previewing what we'll say in the Trump case? I thought I'd just make a couple of clarifying points. One is on this point about time. This is something I'm very acutely aware of, because the nature of the Oversight Board is as a deliberative body that talks about content on the internet, which is not a deliberative space; it's a space that operates at a much faster pace than the board does. We are trying to move more quickly than the average judge or the average court would, but it's crucial to our DNA that we're a deliberative body, not a policing body. We're trying to add that into the mix and see how we can make progress by adding that kind of voice. The other point I'll make is on diversity and American exceptionalism. It is certainly, absolutely the case that the first-tier platforms have been American exceptionalist; Facebook is no exception to that. The board is a global institution, so we have members from all over the world. There is no way of getting at the granular context of global content issues: we'll have 40 people at maximum strength, and if we had 10,000 people we still wouldn't be able to do that. But as I've sat on this institution, I have seen already ways in which even one person being from the region can totally transform the way the group thinks about an issue, and just making sure that person is in the room when important decisions are made is really important. The last minor point I'll make is a point of clarification on Joan's point earlier about what the board is bound by: we can't change the national laws, or the interpretation of the national laws, of a particular jurisdiction.
Absolutely. But we do make reference to international human rights law, so it's not just the Facebook community standards but also external sources as well.
Great, that's great context. Thanks, Jamal. So definitely I think we see a lot of difficulties here, right, with thinking about the sort of regulation coming from the law, but also regulation coming from the platforms themselves, based on their own community standards and on outside sources, but done internally. And that's a lot of what I think we're going to see moving forward. We've talked a lot about many problems: the threats to free expression, and also these problems of context and, as Nora mentioned, scale and speed. I'm glad it was such an experienced panel that none of you said AI will fix this, because I think we all know that there's no way to automate ourselves out of this content moderation problem, particularly given the context issues that all of you are mentioning: context in terms of location, in terms of time, on platform and off platform. So these are just thorny issues that will have to be dealt with. But I think we've been seeing a lot of innovative solutions, and the Oversight Board definitely is an innovative type of solution. Recently Twitter also announced Birdwatch, which is a program that will allow users to rate or review different tweets, in terms of whether or not they include misinformation or other types of harmful or bad content; it's not quite clear yet. In a prior life I worked for the Wikimedia Foundation, which is the nonprofit that runs Wikipedia, which has a lot of internal mechanisms for fact checking and for community standards; it's really community-based moderation and regulation. And we also have Mike Nelson from the Carnegie Endowment, who in the Q&A mentioned this idea that Jack Dorsey raised last year of having third parties provide some moderation, and this maybe was related to what was announced recently: this idea of other types of content filters and other types of content moderation.
So we've seen a lot of different proposals, right? We see proposals from Facebook, from Twitter, from YouTube, which just added fact-checking links, for example, from Wikipedia, from all sorts of different platforms. Now what I'm interested in, though, is what happens now. We have these companies creating their own proposals for bettering content moderation on their own platforms, and even creating some sort of oversight or regulation internally for content moderation. At the same time we have countries and governments, like John mentioned, which are trying to mandate their own standards for how content should be moderated, which might not be at all reconcilable with what the companies are doing or what the companies can do. As you mentioned, you might be doing the work of 10,000 judges, right?
How do we deal with this? How do we balance these two pressures that the companies have? And what's the best way forward: do we try to clarify the legal standards for governments, do we try to rely a bit more on companies and what they're doing for their own communities? And what's the path forward for this sort of public-private tension in terms of who makes the decisions on content moderation?
Ultimately it will be based upon adverse decisions that courts make, new laws that a government institutes, executive orders that might shut down an entire service, nationalistic policies that favor domestic social media over international ones. There's a whole collection of things that a government can do to push back, behind the scenes, on American-based content moderation, and they will be successful from time to time. Therefore the platforms will have to make decisions about how much risk they want to take, and how much expense they want to incur on very country-specific moderation, or on storing the data within a country under data localization laws. So these are great questions, Tiffany, and yet those decisions are made by each company in its own way. Jack Dorsey has talked about his ideas, we've heard what Zuckerberg believes, and MeWe has decided to just charge and not have ads. So all of these different experiments are going to be boiling along in 2021. And right in the middle of these experiments, just when you thought you could conduct an all-things-being-equal experiment, stuff's going to happen. Right? There will be incidents that change the temporal context in a way that blows up all of your predictions about the costs and benefits in a crisis. And part of that crisis is fed by our traditional media. The traditional media loves it when internet-based social media struggles with these decisions, because it does make the internet a place that's maybe a little more dangerous for audiences and for advertisers, and it contributes to the long-term goal of bringing audience and eyeballs back to the mainstream media. This is why you see such sensational headlines in traditional media any time social media is having a concern.
If I may, I think that now it could be interesting to mention the kind of proportionate solution that the DSA proposal contains in this specific field. In a way, what the DSA is saying, and I think this is the most interesting part of it, even if it still contains controversial matters, is: you platforms, feel free to moderate content, we encourage you to moderate content beyond the law, but when you do so, you need to be transparent. This is the first time, for example, that clear transparency obligations are set in a piece of legislation, and I think that is interesting. Whether that is realistic or sufficient is something that still needs to be seen. It also says that you need to have appeal mechanisms, and the way the provision is drafted, I think, opens the door for a market of oversight boards, of competing oversight boards, let's say. It also says that you need to mitigate risks, and you need to tell users how you identify risks and how you mitigate them. That is interesting. It is also dangerous in terms of freedom of expression if it's not properly done, but what I see here is a certain tendency towards introducing stricter regulation in the sense that we, the state, are not telling you what you should delete or not delete, or the parameters that you should use, but we are telling you how transparent you need to be, the mechanisms that you need to put in place, and the basic principles you need to apply. And some of the proposals made regarding Section 230 include something similar to that.
And I think this will be the most interesting conversation around the Digital Services Act, and many think that this can be the solution when it comes to regulation: not a targeted regulation in terms of attaining a certain objective, like "you must get rid of hate speech," but a regulation that defines the way platforms should behave when regulating content, although they are still free to decide exactly which principles, which values, let's say, govern their content regulation policies.
Great. Thank you. Anyone else have thoughts on that?
Honestly, I don't really know, and I mean that as my answer: I don't know what the future holds. I think what will be quite important is to maintain a spirit of experimentation and a spirit of humility, as Steve mentioned as well. The challenges a year from now are going to look different than they look right now; some of the solutions people try will be flops, and some will do better. And so I think it's about keeping an open mind and recognizing that these problems are hard. They're not hard because someone out there is being malicious; they're hard because they're hard, and there are perils on all sides. I guess the one thing I'll say substantively is that I do think the future lies in process, as John mentioned, in terms of making sure that there are mechanisms in place to allow people to know what's happening to them or to their content, and to be able to challenge it when they think there's a problem. I suspect that's coming at a broader scale than it exists right now, but again, I'll keep that spirit of humility and say I really don't know.
I'll add one thing, which is really about the spirit of, you know, R&D and innovation: it's so easy to be critical of what the platforms have not done up until this point, and to say that there has been too little, too late.
But I am optimistic, very cautiously optimistic, about Birdwatch, actually. I think that at the base level, process transparency and data transparency are critical. We need that across all platforms, and the processes that they go through should be transparent. I also think the values that are used as the framework for those processes need to be discussed, and of course I think civil and human rights need to be elevated to an extent that they have not been. So often those are sidelined in the way algorithms push content, and that is ultimately, you know, sort of talked about, but really only by a certain set of experts and sectors. So I think the transparency will be critical, and frankly, as Jamal said, we don't quite know. But if the question at some level is what the tension is and what the public or private aspects will look like, something like Birdwatch, which tries to merge in a kind of community moderation model like Wikipedia has done, is exciting. I think there's a reputation question, though: the reputation of the Wikipedia moderators is just so strong and so community-focused, and Tiffany, you probably know even more than me, but Twitter is a really different platform and the culture is different there. So the solutions across platforms, I think, have to take into account what those platform-specific cultures are. And that's also going to be a necessary discussion, but one that ultimately means we have to have nimble solutions depending on what's at stake.
You know, Tiffany, the point about traditional media competing with social media gets an exclamation point when you look at the problems we had in the US with disinformation about the election being stolen. Harvard had just completed a study where they looked at the election disinformation campaign and the amplification of it, and they concluded, quote, that a highly effective disinformation campaign "with potentially profound effects for both participation in and legitimacy of the 2020 election was driven by elites," was "a mass media led process," and "social media played only a secondary role," end quote. They went on to say, quote, "it's especially hard for social media to curb the spread of misinformation when the legitimate mainstream outlets are publishing it," end quote. And they did, on all ends of the political spectrum: the stolen-election news story led on both ends of the media spectrum in the United States, and that sort of amplification is not something social media can do much about. That doesn't mean social media can just let its guard down; we have to be attentive to very disruptive problems and disinformation campaigns that can harm people. But at the same time, it's the mainstream media that bears an awful lot of the responsibility for what we saw.
That's a really provocative statement. Thank you. No, definitely, I think what we saw last year was a torrent of misinformation, and I think there are a lot of people to blame for that; certainly I would agree it's not just social media platforms, since we saw these stories published everywhere. We have just a few more minutes, so I would like to close with maybe a very quick thought from each of you. We do have a new administration, we have a new congressional makeup, and we still have all of these problems. What is one thing you hope for from the US government, let's keep it in the US context, that might come out in the next few years from policymakers on content moderation or on free speech online?
A real fast one: I hope that the Biden administration and everyone in Congress focuses uniformly on getting us out of this COVID crisis, the health and economic effects of it. And in doing so, they can turn to the technology industry, whether it's e-commerce or information sharing, and we'll do all the help we can and try to avoid disinformation that could get in the way of that recovery. But let's get back to normal.
I have two. One is, you know, PEN America put together a set of policy recommendations for the first 100 days of the Biden-Harris administration. One of those is the creation of a disinformation and free expression task force, so we will be pushing for the creation of that in the next however many days it is now; part of it is working with experts, like many of the people here. I also think that we need a mandated media literacy curriculum in the United States, across various age groups. This is one of my soapboxes, but, you know, I saw in the chat a question and comment around the empowerment aspects of these tools. Informative content, educative content for users, will be critical from the platforms, and the government solution, per Tiffany's question, is really, I think, to set people up for what is an issue and a way of life and connection that is not going away.
I would say a couple of very basic things. First, before adopting any law or regulation, think whether it will be better than what you already have, and think of the implications it may have. And the second thing is: don't aspire to solve deep societal issues, societal problems like polarization, by fixing platforms, because I think you need to understand and separate the nature of the problem.
All I wish them is wisdom.
It's a very succinct answer. I also wish them wisdom; I think we all need it. Thank you to all of our panelists. This has been an amazing discussion; I have personally learned a lot from all of you, and I hope everyone in the audience enjoyed this as well. I believe there will be a recording up later, and/or a transcript, I'm not quite sure. But again, thanks for joining us, and we'll see you throughout the rest of the day today. I'll turn it over to the safety net team now. Thank you.
Thanks, Tiffany, and everyone for joining today. I will be handing it over to Shane to introduce our next keynote. Thank you. Thanks, everyone.