05_Case_study_presentations

9:33PM Feb 29, 2024

Speakers: Emily Taylor, William J. Drake, Milton Mueller, Eduardo Diaz, Jonathan Zuck, Derrick L. Cogburn, Lucien Taylor, Kathleen Scoggin

Keywords: disinformation, platforms, people, government, countries, framework, mechanisms, adopted, spread, state, information, international, problem, good, question, democratic, treaty, responses, actors, internet governance

Hi, everyone. Welcome back. I'm Kathleen Scoggin. As I said before, this panel is titled "Case Study Presentations: Examples of Successful Initiatives in Promoting Truth and Transparency Online." Since long before the Internet, people have been spreading false or inaccurate information, both intentionally and unintentionally. But as we heard earlier today, what is different is the speed and global reach this information can attain, coupled with the scale, complexity, and communication abundance. Digital media, and especially social media, enable people to produce and rapidly spread incorrect information through decentralized and distributed networks, where we often don't know where that information is coming from in the first place. In some cases, the motives are malicious: to promote preset beliefs, with potentially harmful societal impact. This new environment seems to introduce a new era in information flows and political communication that demands an analysis of what disinformation truly is and what we should or should not do about it. And is there one definition of disinformation? Sneak peek: the answer is no. The impact of disinformation can be devastating in every aspect of life, from election security to individual mental health to criminal activity. That part is not really what is debated. The question comes in as we attempt to answer: whose responsibility is it to remove this content? Should we remove this content? What content should we remove? There are a few forms of responsibility for disinformation, or for the impacts of disinformation. The first is criminal responsibility: should someone be prosecuted if the outcome of an election is different as a result of misleading information? Or, if you've followed any of the recent US Congressional hearings, you will know that the US Congress seems to want to place the burden on the CEOs of major tech companies. And in AI, who is responsible? If the AI has detrimental impacts, should there be someone? There is also a form of civil liability, which has existing precedent: if someone spreads disinformation about you personally, that is an area we have already addressed to some extent, but it continues to be challenging with issues such as AI. So I'm going to turn it over to Bill Drake to talk more about the specific initiatives that have come out of this question of what to do about disinformation, the different models, and to see what has helped and what has hurt.

Again, there were supposed to be two speakers, but one has pulled out, so I'm left to do this. So what we're going to do is I'll talk a bit, and then we'll open it up to a broader discussion a bit earlier than we would have otherwise. I think that's perfectly fine. It's pretty clear that people here are very participatory and willing to jump in, so I think we can have a good conversation together. What I thought I would do, rather than trying to delve into all the dynamics of disinformation, is talk about it from a particular angle, which is to say Internet governance: the effort to design international collective responses through governance arrangements of different sorts, and how they're doing. The title of this session invites me to talk about successful cases. Unfortunately, I don't know of any. So I am unfortunately not going to be able to come up with really great, encouraging outcomes to focus on, but I can at least talk about some things that have been tried. The point has already been made that the definition of disinformation is inherently problematic, because there are different kinds of definitions floating around, used by different entities. There is no internationally accepted definition of disinformation within, for example, the United Nations system, and yet you have international institutions, and governments within them, trying to adopt international law and policy based on an undefined problem. So that gets to be a little hairy, as you might guess. This is not a new phenomenon. If you go back to the earliest days of electronic cross-border communications, take the first multilateral agreement on international telecommunications; anybody happen to know it? The Treaty of Dresden of 1850, signed by four German states, established the Austro-German Telegraph Union. Its arrangements were later adopted pretty much whole cloth at a broader multilateral level in 1865 by the International Telegraph Union, 20 countries that came together in Paris, which later grew to become the ITU, the International Telecommunication Union. From the origins of this framework back in 1850, they had built in the idea that governments had the right to monitor all transmissions across their borders, to interrupt them, and to terminate them if it suited their purposes. And one of the concerns that was always raised in this context was the spread of information that governments considered to be politically problematic, which in the language of the time would be what we now call disinformation. They called it different things: incitement, problematic information, incorrect information, and so on. In fact, you can go back even further, prior to electronic communication. You had the Carlsbad Decrees, where a group of German states got together and agreed that they would prevent anybody within each of their countries from spreading information across borders into other countries that could incite political unrest or disquiet the rulers of those regions. So you have a long history in international politics of governments viewing the spread of information they consider problematic as something that has to be penalized, controlled, or regulated in some way. And that has continued into the contemporary era.
What's changed, of course, is that disinformation now operates in a completely different, turbocharged kind of environment. You have globalization, you have anonymity, you've got high velocity, you've got rapid technological change with AI and things like that. You've got commercialization and the development of whole industries of people who will generate disinformation, as well as other kinds of cyber attacks, for money, often tied to particular states, and so on. And all this becomes very difficult to discipline through either national or international frameworks. The standard toolkit that governments have tried to use, deterring the spread of disinformation through technical means, deleting it after the fact, demoting it via algorithms, disclosing the sources, delaying distribution, all those things don't work very effectively in the context of turbocharged, commoditized, highly distributed, high-velocity disinformation. So you've got all these efforts by governments to say, well, we have to do something collectively. And you see this at the regional level; you see it at what you could call the plurilateral level, that is, small-n multilateral, particular groups of countries that are not inclusive of the whole international community; you see it at the broad multilateral level; and you see it at the multi-stakeholder level. All kinds of efforts to design international policy frameworks. I thought I'd talk about a few examples of these, since this is a course on Internet governance. As I said earlier today, Internet governance covers both the underlying infrastructure and the use of information and communication over the Internet. If you had effective governance mechanisms around disinformation, these would constitute a form of Internet governance. One of the most widely noted frameworks at the regional level is the European Code of Practice on Disinformation, which the European Union adopted in 2018 and strengthened in 2022. It is a framework intended to force the major platforms in particular to be very transparent, to report about how they're dealing with disinformation and what their procedures are, and to enter into dialogue and collaboration with European governments around these issues. It has not been very effective. Elon Musk basically told them to screw off: I don't want to report this information to you, I'm not participating. And the Europeans were like, come on, now what do we do? Because Twitter is a big source of disinformation. And they haven't quite solved that problem. But now, because we've just had the adoption and enforcement of two big new platform policy frameworks within the European Union, the Digital Services Act and the Digital Markets Act, which this disinformation initiative is tied to, it becomes something that can actually be backed up by enforcement measures. They can financially penalize very large operators of social media platforms for non-conformity with the guidelines and principles set forth in this European framework. So there's the possibility that going forward, we may start to see some actions against particular platforms. Now, here's the interesting point about all this.
And this is going to be true of all the intergovernmental actions: the focus of almost all the activity, all the discussion, is on whom? On platform operators, on private companies that provide the means by which information is disseminated. But who are among the major sources of disinformation? Governments.

And the governments are not in any way penalized or regulated or controlled by this framework. The framework is focused entirely on the players who have the responsibility of establishing the infrastructure through which the stuff flows. So you can obviously see that there's the possibility for some real issues here, and the question is whether this EU approach, which was supposed to be voluntary and kind of self-regulatory but is now going to be backed by the force of the DSA and DMA frameworks, is going to start to have any more bite. But it's very focused on particular players. The Europeans have adopted all kinds of digital policy frameworks in recent years, and many of them are very much geared toward American-based platforms. They're very much all about GAFAM: Google, Apple, Facebook, Amazon, Microsoft. That's been the big obsession of a lot of these European policies. There are actually no European entities covered by the DSA and DMA frameworks as very large operators of search engines or platforms. So this is a bit of an interesting dynamic, and we'll see, once they start trying to do enforcement actions, whether the US government, for example, raises any concerns about the fact that it's all American companies being brought to task. So that's the European framework. It's a framework where you establish guidelines and you say, here's how you ought to act to try to combat disinformation, and if you don't, then we want to enter into discussions with you, and if we find on a repeated basis that you're not doing anything, we can adopt certain penalties. Then there are plurilateral actions, where particular groups of countries have come together. One has been notable: the Global Declaration on Information Integrity Online. This is coordinated by Canada and the Netherlands and has 26 industrialized countries and five developing countries, including several from Latin America: Brazil, Chile, Costa Rica, and the Dominican Republic. They have tried to establish a framework of shared principles. Again, going back to the definition of Internet governance: principles, norms, rules, and so on. A shared set of principles to shape responses to disinformation and to encourage the development of information integrity, which is to say, information that has a closer connection between factual, empirical demonstration of reality and the information being disseminated. And this involves things like: all the parties to this framework agree to respect human rights as a leading consideration, recognizing that there are freedom of expression dimensions to disinformation; adopting measures to establish legislation on information integrity and platform governance; monitoring technology for harms; promoting user awareness and civic education; and strengthening multi-stakeholder cooperation. And, importantly, since these are all democratic governments: abstaining from conducting or sponsoring disinformation campaigns. They made a mutual commitment to each other that they shall not engage in this.
The United States is a party to this, and we'll see after November whether the United States can continue to be an effective participant. But I think you will see other coalitions of the like-minded coming forward to try to adopt frameworks like this, based on sets of principles with varying kinds of enforcement and implementation mechanisms. Then, at the broad multilateral level, there's been a whole variety of initiatives. You may know that right now there's an ongoing negotiation over a cybercrime treaty, stimulated by the Russians and supported by the Chinese. The motion in the General Assembly to negotiate this treaty was adopted on a very divisive 88 to 58 vote, with 34 abstentions. An ad hoc committee was established in 2022, which has now met seven times. They're trying to hammer out this treaty, and they just had a big meeting in New York earlier this month that failed to reach agreement. So they've scheduled another meeting, but they're trying to have something by the General Assembly meeting in the autumn, at which point there would be an international treaty on cybercrime. And guess what: many authoritarian governments are pushing very hard to have the criminalization of the spread of disinformation built into this language. And guess what: there is no definition of what disinformation is. So you can have an international legal mechanism put in place, and once enough countries sign up, it becomes international law that will be cited by governments as creating a mandatory framework. If somebody in another country is spreading information that country X does not like, say a dissident from country X who is living abroad and spreading information that that government considers disinformation under the treaty's mechanisms, then the host country where that person is living must enter into collaboration with country X and potentially turn that person over for prosecution. So here we get to the real core of the matter. When you start to have international mechanisms to combat disinformation, and you have no clear agreement on what disinformation is and no clear agreement about the boundaries between freedom of expression and disinformation, the possibility that authoritarian governments will use any kind of instrumentality you create to go after their opponents is very, very high. And we're seeing this already with other kinds of mechanisms that exist. With Interpol, for example, and its Red Notices, governments can get other governments to go chasing around after citizens they don't like. There have been a whole bunch of discussions in the UN General Assembly around this too. The General Assembly has adopted resolutions on countering disinformation, saying, again, that the platforms bear primary responsibility for doing something. And look at the General Assembly: you've got fifty-something governments that are non-democratic, many of whom have massive disinformation programs, adopting international declarations requiring Facebook and Twitter and so on to stop disinformation. Not their disinformation, but somebody else's. So this is going to be an ongoing issue.
Now, in the context of the UN's Global Digital Compact initiative, which we'll be talking about more on Friday: Secretary-General Guterres has a Tech Envoy whom he appointed as part of this initiative, and they've been pushing the idea of a new UN code of conduct on information integrity for digital platforms. This was proposed last year in a policy brief, and they are now in the process of trying to come out with draft text. They've had an open consultation and took inputs from around the world, but it's been a totally non-transparent process; nobody knows what's going on. And they're going to come out with a text and try to get it adopted by the General Assembly as part of the...

What's it called? I'm blanking. The Summit of the Future, in September. And so this is going to build on some of the previous initiatives, such as the UNESCO platform guidelines for social media that were adopted last year. And again, it's all about creating frameworks whereby social media platforms have the responsibility to block disinformation, even though disinformation is often generated by states or by entities tied to states. And the Secretary-General is hoping to establish what in UN parlance is called a dedicated capacity within the Secretariat, which is to say, a new organizational structure that would have responsibility for overseeing the implementation of this. Just to mention one other thing, non-state responses, and then we'll open it up. There have been various efforts among non-state actors, multistakeholder groups, and so on to do something about disinformation as well. You may know there's an International Panel on the Information Environment, where a group of academics and NGO people have gotten together and are trying to put forward frameworks for ethics and for monitoring the spread of disinformation, though these have no legal force or anything like that. There are other mechanisms that can sometimes deal with this: the Facebook Oversight Board, for example, can have responsibilities around this within a broader framework. And there's the new tech accord to combat deceptive use of AI in the 2024 elections, because four billion people are going to vote this year. At the Munich Security Conference earlier this month, twenty firms got together and announced a shared framework to try to combat the spread of AI-generated disinformation on their platforms, but again with no clear commitments about specific actions, timelines, resources, and so on. They're saying they're going to work together to combat disinformation, and again, here we'll have them making the gatekeeping decisions about what constitutes disinformation and what measures should be taken. So, to summarize Internet governance responses to disinformation: there are all kinds of efforts underway now, because there's this disinformation panic in the world, somewhat justified, to try to create frameworks that will allow governments and other actors to respond. They tend, invariably, to focus on the conduits for disinformation rather than the sources. And there's a real possibility that these mechanisms can be abused in ways that are contrary to human rights and other values. So there are some real questions: how do we go about tackling disinformation while respecting human rights and taking into account changing technology, and how much more complicated does it get with AI, et cetera? So that's the menu we have. And now we can have an open discussion around all these kinds of issues among everybody.

Thank you so much.

Yeah, go for it. Is this one on? Hello, okay. I have a question on this thing coming up about disinformation, where there could be international law. If I'm a dissident from a country, and I get interviewed on my views, and it gets put in a paper newspaper, which eventually goes onto the Internet, can it happen that that country calls this country and says, send that guy back home? No good.

You know, one of the big problems they've had in this negotiation is that it was supposed to be a treaty on cybercrime, but it no longer really is; it's a treaty on crime. Because a lot of the governments involved in this discussion have been trying very aggressively to expand the reach of the instruments being put in place, to go beyond something narrowly constructed. We have existing mechanisms now, like the Budapest Convention of 2001, which focuses on cybercrime, that is to say, crimes committed using technology or against technology, where the technology is the fundamental core factor. But what you've seen in this negotiation is that many of the less democratic or non-democratic governments have said, no, we want to go far beyond that. We want to be able to go after anything we deem to be criminal behavior. So if somebody basically has an email address and does something in some other context, we can say, oh, well, that's cybercrime too. Maybe Eduardo gave an interview to a newspaper, only a little local newspaper with no online version, but maybe he emailed a colleague and sent a copy or shared information about what he was doing. This is all part of the same action, and so therefore we can make a claim under this convention to try to use its mechanisms against him. So of course the democratic countries have been trying to push back a bit on some of this stuff. But I've actually been really puzzled by their negotiating stance, because one of the problems is that states, even democratic states, like to have tools at their disposal. We see this all the time, right, with the National Security Agency in the United States, and intelligence agencies, and law enforcement. All these guys, even if they're working for governments of countries that are nominally quite democratic, want to have a strong capacity to pursue any kind of criminal activity or intelligence risk they deem necessary. So they're often willing to allow a bit more expansive language than you might expect. And so I've been really surprised by some of the negotiating positions and by what has been deemed acceptable text. A lot of really bad ideas have stayed on the table without the Europeans, the Americans, the Japanese, and the other industrialized democracies drawing a line in the sand and saying, we're not going to accept this language. So it's all still out there. By the way, this should not be just me talking at you; this should be a conversation, right? I mean, we want to have some chemistry. So not just questions, but comments, views, whatever. Milton.

Yeah, you were talking almost exclusively about disinformation, and I think you meant a much larger bag of things. There's misinformation, which is just false information. There is a very clear definition of disinformation, at least in the social sciences and communications, in which disinformation is considered to be intentional: you know it's false, and you spread it to achieve a strategic objective. Misinformation and disinformation are our contemporary terms; there are older terms for the same thing. A few years earlier it was called fake news. I actually like the terms propaganda and influence operations, the latter being the more military-doctrine term. But obviously, when you're talking about regulating misinformation, you're talking about the regulation of truth, and I think we discussed this morning why it's not a good idea for governmental agencies to be doing that. Disinformation, though, if it's intentional, really is sort of like a state-based propaganda campaign. We've been studying this in connection with some of the nuclear incidents in Ukraine versus Russia, and we find that both sides are, technically, doing disinformation: they're spreading things they know are not true in order to gain an advantage in some kind of political or military contest.

Yeah, my understanding of the session was that the focus was disinformation, so that's what I was talking about. But yes, misinformation is not necessarily intentional, right. By the way, on propaganda, which you mentioned: talking historically, if you go back to the early 20th century, under the League of Nations and then later the United Nations, all kinds of declarations were adopted about propaganda, and propaganda was almost coterminous with what you would call disinformation today. All kinds of mechanisms were put in place to try to combat it, usually, historically, by governments agreeing to repress the spread from their country into another country. That was done both under telecom law and under the League of Nations. There was a specific treaty, in 1936 I think it was, on the spread of propaganda to incite international conflict. This was when people were worrying about the rise of Nazism and so on. So this has a long pedigree. And the problem, of course, as I say, is that it's all been so turbocharged by the fact that it's no longer just traditional state actors. There's a whole variety of actors that may have fuzzy relations to the state, that are working with them, and so on. And so very often, and you've done work on the attribution issues, you know that the Russians can say, it wasn't us, because it was some guys in Moldova who were paid through a third-party bank account in an offshore place somewhere, and somebody came with a barrel of money to the Moldovans and gave it to them. It's not us; you can't prove it's us. But of course it is them, right? So it gets complicated.

Just to add to that a little bit: the disinformation campaigns that are out there now have certainly moved beyond state actors. We have the state actor examples, the Internet Research Agency being one, and then you have political parties and all kinds of non-state actors engaging in this process as well, in this active, deliberate spread of false information. One of the things that I think is really challenging about this, and Bill, you talked about the supercharging of it, is the micro-targeting. What's happening with these disinformation campaigns is that, because of all the data being collected on people from a wide variety of sources, Cambridge Analytica being one example, micro-profiles are being developed on electorates, on populations. And then very targeted, specific disinformation campaigns can be levied against just a small segment: entities pretending to be LGBTQ-friendly, for example, trying to get LGBTQ people not to participate in something. They make it sound like they're with you, and they encourage you not to do this. Or they can target the African American community and say, don't go out and vote, or go vote on the wrong day. These micro-targeted disinformation efforts are incredibly challenging. And I think Milton's right that there's been lots of study of propaganda, but these non-state actors engaging in this kind of micro-targeted disinformation are a whole new ballgame, and we have to understand how to combat it. To me, that is such an important and pressing problem right now, for this election season.

Yesterday there was a primary election in the state of Michigan in the US, and what was it, 13% of the Democratic vote was uncommitted. That's a protest vote against Biden because of Gaza. And it's been found that there's all kinds of information targeting Arab American and Muslim communities in the United States, telling them that the Biden campaign is supplying the tools for torture, and the IDF is doing X, Y, and Z, specifically as if Biden were sitting in the White House saying, yeah, I want you to go after and target that teenager in Gaza. It makes it sound like there's this very comprehensive, direct connection. And you can see why the Russians would want that; they want Trump back, right? So this is going on now, in real time, for sure: big-time micro-targeting, highly, highly specific. There are Facebook pages from 2016, which I use in my classes, that only appeared in certain people's feeds. If you were a Christian fundamentalist in Kansas who lived in particular congressional districts, you were given a message that this particular event that happened at a local supermarket or whatever was because of blah, blah, blah. And it all got traced back; very often it was being generated by troll farms and others in Russia and elsewhere in Eastern Europe. But there are people doing this all over the world, right? It can be somebody in the Philippines. Didn't they ultimately find out that Q is in the Philippines? If the stuff I've read is correct, they found the guy they believe was the originator of QAnon, and he's some doofus American in Manila. So all this stuff gets picked up and amplified. And this is the other fun thing about it: the state actors that want to spread this stuff don't even have to create it themselves, because there are so many other actors out there creating material that they can simply pick up and amplify. Some idiot puts out QAnon stuff about Democrats and babies, and then the Russians or whoever can pick that up and say, yeah, this is a good one, and circulate it and keep their fingers clean. So it's an incredibly complex situation. Just imagine the UN General Assembly adopting frameworks that are responsive to all those distinctions and all the highly differentiated ways in which information integrity can be violated. No, they're going to adopt very bulky, heavy kinds of mechanisms that basically give states a great deal of leeway to go out and exercise their authority as they see fit.

I think it's also interesting to consider, for example, that the US Department of State recently put out a call for research proposals on freedom of expression in Ukraine, given the use of martial law. And it called into question a lot of things in the research: if you are trying to promote democratic values like freedom of expression and the open exchange of ideas, while also combating an actor that does not think any of those are of value or necessary, how do you engage in warfare and still protect democratic values? That's the question.

So anyway, let's get more people involved. But here's the important point to think about. I think probably most of us are kind of genetically disposed to think international cooperation is good. We want the global community to be responding in some manner to these huge problems. But we don't yet have any examples of international responses that are not problematic, because of the nature of the international polity that would be responsible here. So anyway, who else has some thoughts to share? It doesn't have to be a question. It could be just an observation about disinformation or its international aspects.

I'm actually curious to hear from people about their experience with some of these industry-led efforts. You mentioned the Facebook Oversight Board. Twitter's been an interesting example, because it put things in place and then removed them. I'm just curious what people's experience has been with both of these, because some people feel like dolphins get caught in the tuna net sometimes with these things. I'm wondering whether people's experience with them has been good or bad.

How many people here use X a lot? Formerly known as Twitter. Okay, so for you guys: have you seen more disinformation or problematic information in your feed since the removal of the content moderation stuff? You're shaking your head; you want to say something? And I know Milton disagrees, right, but before we get to that: tell us what you're thinking.

Well, let me say it this way. I have a crazy aunt who has strong political views, and every once in a while I view some of what she posts. Gosh, this is recorded; sorry, Auntie. The challenge with that is, if I even view it, or if I comment against it, that sentiment is then taken, and I get this amplification of similar trash, things that I don't agree with. It just seems like it's entirely there to cause people to react and engage more, but it has absolutely no substance or value. And it seems that more and more vitriol comes toward me rather than actual beneficial content. So the answer is yes; the algorithm is feeding aligned information along with it.

Milton, do you want to say why this is not a problem? Freedom of speech is the answer to everything, is that what's coming?

Well, first of all, I just have to say that my experience is that it hasn't changed that much, right? There are certain things, some of them good, that are not as suppressed as they used to be. But in terms of who I follow and what I see in my stream, whether it's the For You tab or exactly who I'm following, it just does not seem to be as different as some people claim it is. There have been no major changes in the algorithm's tendencies. There have been removals of certain kinds of blocking or moderation, or limitations on moderation, but most of it is still there. It's really not that changed.

But further to Derrick's point about micro-targeting: your experience with your feed may not be indicative of what's going on in the experience of other people in their respective bubbles, right? There are a lot of people who have the perception that, in fact, there's a lot more crap now. So I'm not sure we can generalize. Yeah, Ben, go ahead. And then, yes.

So I'm going to talk about my experience of disinformation on Facebook. We know that about a year or two ago, Twitter was banned in Nigeria. What happened was that Facebook and the others immediately aligned with whatever the government said, so the government became the de facto source of truth. Now, people experience things; they know that, oh, 20 people were harmed in this place. But if you post anything that is contrary to the government's release, Facebook flags it as disinformation or something that is not accurate. Then six months or three weeks later, it turns out to be the truth, and Facebook is not humble enough to come back and say, hey, we're sorry, you were correct. So they start manipulating what is true and what is false, and they tag anything contrary to whatever the government releases as truth as disinformation. So you now have a platform where people don't tell the truth; they just paint pictures of what is acceptable. People have learned to communicate in roundabout ways: I wouldn't say it, but you know it. And I don't know how that is helping, with platforms like this making people censor themselves, not saying what they really want to say, just because they don't want the algorithm to tag them as disinformation. So the whole process of labeling disinformation is totally biased now, because you don't even know what is disinformation or what is true anymore. Thank you.

Just to add to his question. What's the name? John? Jonathan? Okay. Yeah. I think the thing here is that when we are thinking about the algorithms in social media, they are trying to keep our interest and keep us interacting. And the way this works best is by showing something that makes you angry, right? So the thing about liberal freedom of expression is that we are actively being pushed toward content that we disagree with, that makes us angry, because that makes us interact more on social media and stay longer; it keeps more of our time on social media. And I don't know if that's really something that enhances and improves the kind of liberty of expression that we're going for, and that's without even entering into a discussion of liberty of expression as the maximum value that a society should have. But mostly on this theme: I don't think this is really an improvement in liberty of expression, having someone choosing what you will get to see and what you will interact with, being moved toward something that you hate, that you do not like and want to say something against. I don't know. It could be free expression, but I'm kind of in doubt about it.

Yeah. Is that the only mic? Is there more than one mic to pass around?

This is probably just going to be a comment on what Pedro said, and a follow-up on that discussion, maybe not a question. So, Pedro, what you're pointing at is actually responsibilities. I'm a lawyer in Kenya, and our Constitution has rights, and one of them is the freedom of expression. But you cannot argue freedom of expression without responsibility, and one of the responsibilities attached to it is that whatever you speak should not infringe on someone else's rights. And I feel like that's a way of telling you to speak, but at the same time limiting what you have to say, because you might say something that someone else would not like. But in this instance of Internet governance, I feel like that's a very limiting right and freedom, because if the government is not at your will, you really would not be able to speak much. And I don't want to mention names of other countries in Africa where, whatever the government says, you cannot post against it. There are actually countries in Africa where whatever you post on Facebook is watched by the government, and you can actually be jailed just for posting it, regardless of the fact that the Facebook page is just a page with your name. So I don't know how or when we are going to get to the point where we can really say we have freedom of expression without barriers, you know?

So, just in terms of the Internet governance aspect, the international cooperation aspect: I'm going to guess that if I were to ask you, how many of you think something should be done about disinformation, many hands would go up. If I asked how many of you think something should be done by governments, those of you from democratic countries, perhaps, might raise your hands. But then when I say, okay, what about international cooperation, what about international policy frameworks that involve all countries, I suspect you'd get more cautious. So then the question becomes: how do you begin to tackle this problem in a diverse world, where you've got fifty-something countries that are non-democratic, maybe more depending on how you want to count, where speech is heavily penalized? Because disinformation is an inherently transnational problem, right? No one country or set of countries can address disinformation effectively; you need some international framework, presumably, in principle. But in practice, getting one becomes difficult because of the composition of the international polity. So then the question becomes what to do about that. An interesting trend is that you're now seeing more reliance on guidelines rather than traditional treaty-type structures. The UN is promoting this notion of guidelines; the European Union has adopted guidelines. Guidelines are something that nation-states implement at the national level, and may implement differently. They're more permissive than strict treaty language that specifically designates particular speech acts and mechanisms for addressing them. Guidelines are a more normative kind of framework, unless they're backed by something like the DSA and DMA in Europe. But what exactly can work as an instrument remains a big open question. So it would be interesting to hear more from anybody who has thoughts: are you optimistic about the possibility of international cooperation on disinformation? Or, conversely, do many of you feel that nothing should be done, and we should just stick with existing procedures and try to address these things via the companies and platforms? Yes, Javier?

That's okay, thanks. I don't know if there's an international solution to this, but I think a first step could be that democratic nations at least place a similar level of regulatory or legal weight on social media companies as the press carries. At least some level; maybe it's not completely symmetrical, but some level of accountability for content. Maybe democratic countries can figure out how to do this in a way that's okay. I know it's hard, and freedom of expression varies a lot even within democratic countries, but some level of legal accountability for social media, at least similar to the press in that country, is a first step. I don't think it's going to solve the whole situation, but at least it moves in a direction. Because I've been a complete defender of non-regulatory approaches all my life, soft touch and self-regulation, and I don't think it's working. I think something has to be done, at least in democratic countries, to put some level of legal accountability on social media.

I'll let you talk about that, but I think an important caveat is that the press has, as you said, self-regulated to an extent, and it is often not covering a lot of the topics that people are talking about on social media, where the idea of what is true for one group versus another is so incredibly variable. And then you get into conversations about satire: how do you flag that on social media? The words might say one thing and the meaning might be another. How do you even try to answer any of those questions if you're trying to hold platforms to that similar standard? That's not to say they shouldn't be held to a standard at all, but it becomes way more complicated, and you don't have a marker like opinion or satire as you do with the press.

So, Javier, are you suggesting that platforms should be regarded as speakers with editorial rights? Because the US Supreme Court is dealing with this now, right? And some of us may not have the most confidence in this particular Supreme Court to come to the right decision on it. But it's interesting, because at Columbia there's a law professor, and many of you probably know his work, Tim Wu, the guy who coined the term net neutrality and was involved in various other things. He wrote an op-ed the other day supporting the Texas conservative government's position on social media, saying that we need to be able to hold the platforms accountable, and go beyond the Section 230 kind of treatment of them, to make them actually liable for content they choose to make available. So the political alignments around this are interesting. Milton, you've often talked about this, how people on the left and right are sort of coming together around various versions of censorious responses to information.

Yes, it is kind of interesting how what you would think of as the ultra-left Marxists and now the new populist right are both in favor of bringing state power back in. And so it's kind of like the 1930s, where you were given a choice between communism and fascism. Somebody needs to hold the liberal center and say, you know, you don't have to choose between communism and fascism; you can choose a rule-of-law, cooperative, freedom-supporting order in the world.

So this group: you guys are all here for an ICANN meeting, and you're getting immersed in the language of multi-stakeholder cooperation and so on. Does anybody see options here for greater multi-stakeholder responses at the international level to try to combat disinformation? Any thoughts on how that might be done, or whether it's possible? Derrick, and then whoever's in the back. Derrick, let's wait and get... Lucien, I see you. Hey there. Welcome. Hi.

Shall I go first? Go ahead. Yeah. Hello, it's Lucien Taylor from the DNS Research Federation. Yeah, multistakeholderism, okay, let's try. I think there is a role here for standards and research: to empower the individual through standards. You're not going to deal with the hyperscale of misinformation through the social media platforms; they're not going to be able to respond to it in a timely way. But you could empower the individual to register their protest, to say: something that has been spread about me is wrong. And I think there's a role for standards in that approach, to encourage all the platforms to give people a standard way to say, something has happened here that I'm not happy with. I can't take people to court, and I'm not going to lobby my government to act, because governments are very slow at that. But I just want to register my complaint in a standard way, and I can refer to that forever. And that needs to be tested through research and academia. We're seeing a lot of things rolled out by the regulators without proper testing. So I'm just giving a shout-out from our organization: develop the standards and test them first. Thank you.

Great, thanks. Derrick?

So I think one possibility is the kind of cooperation that's happened around cybersecurity training and personal awareness: helping people understand how to combat phishing exploits and so forth. There's been widespread collaboration on raising personal awareness of what you need to do to combat phishing exploits, for example. We need similar types of training for people to understand how to recognize and combat disinformation: higher levels of media literacy and critical thinking around the content we're receiving. So I think campaigns like that, which have been somewhat successful at raising awareness around phishing exploits and so forth, could be helpful in this case.

Yes. Hi, Emily.

Thank you, and I'm really thrilled to be here. So, Emily from the DNS Research Federation. I just thought I'd jump in on your question, Bill, about what multistakeholder responses we are seeing to combat disinformation. On Friday I'm going to be talking a little bit about a study we've done on IGF impact, and one sidebar of that is that the IGF, the Internet Governance Forum, has a role as a sort of decision-shaping forum, in that it seems to be a place where people take tough, difficult issues where you do need many different perspectives to try to, as they say, stumble in the right direction. And there is definitely evidence to show that it has played a really significant role in the online harms debate. Now, whether legislators have ended up in the right place on that is another matter; I'm coming from the UK, whose own Online Safety Act has been very controversial. But it still highlights the role the IGF has played successfully in taking issues such as gender-based violence or disinformation and really talking out the issues, which are complex, and I think particularly complex for democracies to uphold. As Milton says, we seem to have lost the middle ground in our debates; we seem to be faced with binary choices in all sorts of contentious areas. So I just thought I'd highlight the evidence pointing there.

Regarding action to address disinformation: for platforms that have global appeal, which people use to spread disinformation and ideology and all of that, I believe such platforms should also have a point of presence in almost every environment where these things happen. In a continent like Africa, having two or three physical representatives of X is utterly unacceptable. At least in each country they should have an office presence and should respond as quickly as possible to governments and societies, or to their demands, when they see the damage their platforms help influence. That would be a real starting point. Because everything we're talking about right now is essentially about platforms, but most times these platforms are selective. I was shocked when Facebook and company went to the Senate and were apologizing in the US, but I doubt they would respond to any government in any other country in that way, or to some regions of the world in that way. They will probably send some guy with a typical ChatGPT kind of response to those places. And these things have even more amplified effects in those regions than you can ever imagine. There are people who will tell you, I saw it on Facebook, hence it's true, but Facebook doesn't even know those places exist. Yet they will take advertising money from those regions. So for a start, they should be present to respond in all of these places. I don't like what the Nigerian government did, but it called for them to have a physical presence in the country, and they only have it in maybe Lagos or Abuja. It would be fair if they had it almost everywhere, because they earn enough that they could provide a presence in all of these places. So that's the starting point for me.

I think your basic point, about how the lack of presence in the developing world in particular is highly problematic, is right. I mean, you've got all these people sitting around in California making decisions, and they're not even cognizant of how the information is being used against the Rohingya in Myanmar, or whatever it may be, and they don't have people who speak the languages, and so on. Whether forcing them to have more staff in those places is a practical response is another question; whether they could actually manage all of it. But still, with what you're saying, Ben, we're still focusing entirely on the platforms, right? And so this doesn't get us to the fact that so much of the disinformation out there is being generated by, supported by, or redistributed by state actors, or parties that can be affiliated with state actors, and nothing is penalizing or constraining them directly. Instead, we're going to put all the responsibility on Facebook, X, and a couple of other key players. One has to wonder: can that be an effective response?

Okay, so if it's a website, it's easy to track, however it is. But most of what we're talking about relates to the platforms. Now, in some cases, state actors pay a lot of money for advertising, or create a lot of accounts on these platforms, and the platforms can help by rooting them out and saying, hey, no, we don't do this. That way the state actors are powerless. I haven't seen a state actor build their own platform and use it successfully; even when a government tried once, nobody went there, and nobody believed what they were saying. But when they use these other platforms, you legitimize them: oh, this is the Office of the President, and when the Office of the President says something via these platforms, you can't hold the state actor or the government to account, because they already occupy the positions of authority. You can lock your door to shut out the government, but you can't lock your devices against Facebook, because that is our new marketplace; even if you try to ignore it, it still filters through to you, either through your neighbors or through what's found on the platforms. So that's why I said it's the starting point; I'm not saying it's the endpoint. If we start here, gradually we might get to the state actors one way or the other. But state actors do fund a lot of things, and they have big pockets, and that's attractive to the platforms. So that's the way I see it at the moment. If we start with the platforms, gradually we might be able to narrow things down once the platform issues are out of the way. That's what I think.

Well, certainly one would think, in terms of the role of multi-stakeholder cooperation, that the gathering and dissemination of information by multi-stakeholder actors, in some kind of institutionalized mechanism, would be a way of trying to address the state-actor aspect, which you can't effectively address through an intergovernmental process where the state actors are themselves the parties making the decisions. So the question is whether you can stand something up. And like I said, there are these efforts, like the international panel that Phil Howard, whom you guys probably know, and others are doing, and there are various other initiatives out there to try to create mechanisms for at least monitoring, tracking, and reporting on the spread of disinformation. And maybe if we could support that activity more, and do more naming and shaming more effectively, that could start to make some small difference. But it's to be determined. Emily? Yes.

Thank you very much. It's such a fascinating conversation. You were talking about the role of websites in actually being an anchor point for publishing disinformation. Of course, the platforms play an enormously important role in amplifying it, with the way the algorithms work to raise the prominence of really divisive and emotional material. But I think there's a big gap in the research on the role of websites in distributing and publishing disinformation, and also on the funding sources. We did a small study with Phil's team at the Oxford Internet Institute on COVID origin stories and the role of disinformation there, and we were seeing not just state actors but actually a lot of ultra-right evangelical groups in the United States getting a load of traffic, all carrying mainstream advertising on their websites. And to me, yes, we should hold the platforms accountable, because they play a role, but it is not the only role. There is a whole ecosystem of disinformation that we're not really paying enough attention to: the role of the web, who is helping to fund this through advertising, and what the responsibilities of registries and registrars are. Now, that's not all of the problem, but it is part of an ecosystem, and I think to understand it and respond to it properly, we need to open our minds to it. It's not just a platform problem.

Yeah, I think that speaks to an area we haven't really discussed, which is the infrastructure layer. We've talked a lot about the platform layer and what you can do there, but there have also been examples at the infrastructure layer: Cloudflare, for example, making the decision to stop providing services to the Daily Stormer. That was a choice they were allowed to make, based on what they saw the Daily Stormer claiming about Cloudflare as an organization. So that's a whole different level: whether to provide services to them in the first place. The same is true of New Zealand's and Australia's ISP responses to the Christchurch terror attacks. So how can companies assess whether similar interventions are justified? I think that's an interesting layer to think about. And, yeah...

There were some other questions that you were suggesting we could talk about. Yeah.

Yeah, for sure. So one of the questions that I had thought about was: how do you anticipate emerging technologies changing the way that we think about disinformation? For example, AI-generated content that may be wrong, whether it was intentionally made for some political purpose or whether the model itself was given inaccurate information by accident. I don't know if you want to talk about that.

Well, no, I did note in my comments earlier that there was this tech accord agreed at the Munich Security Conference, where you've got twenty big players coming together and saying they're going to try to focus, in particular, on AI-generated content around the elections. Of course, they came out with the statement after there had already been major elections in some key countries, where God knows what disinformation was out there. So a lot of this is companies trying to maintain their reputations and avoid damage by appearing to be doing something, when the actual doing can be quite thin, or prospective, or something that they aspire towards; you don't know how it's going to be implemented. We don't know what's going to happen, and there are four billion people voting this year, right? In each of those countries... I'll just take my own. I am quite certain that there will be quite a lot of disinformation around the American campaign, and whether we're going to have any institutional capacity to deal with that, particularly given that, if I may say, one of our major political parties is explicitly pro-disinformation these days, is kind of problematic, right? You have a real limitation: when the Biden administration tried to put in place some mechanisms to at least begin to build a government capacity to monitor what was happening with disinformation, they got blasted out of the water and had to backtrack, and now there is effectively very little going on. So the doors are all wide open to be pushed on by disinformation actors in the 2024 election. Anyway, we don't need to go further; I could talk about this for hours, but the idea is to try to get you all engaged. The questions, I think, I'll just leave with you, and we can end early, I suppose, since we were supposed to have two speakers and we don't. But just have it in your mind: we do all have this kind of general predilection to believe international cooperation is good, and a general predilection to believe that multistakeholder forms of cooperation are particularly good. What is the scope for international cooperation in this space? What do we expect could really be done? Is the cure going to be worse than the disease? These are all the questions I want to leave you with. Oh, Milton has a closing thought for us. So that's good.

Well, just on the subject of international cooperation on content, there are multistakeholder efforts that have not really been mentioned. Emily briefly mentioned the Christchurch Call. I was involved in that, and we actually withdrew because of this tension between states and the multistakeholder community. Certain states are all in favor of restricting speech, particularly when it suits their interests, but recognizing limitations and accepting multistakeholder input proved to be very difficult for some of them. However, I think we're kind of losing context here, in the sense that, with disinformation, as long as people have been able to communicate using language and technology, they have been lying. They have been manipulating people; they have been trying to persuade people. And one of the things that has happened with digital social media is that it's actually much more transparent, right? If somebody is disseminating a lie, everybody can see it, and you can expose it. Whereas a lot of that stuff in the past was just whispers, right? It was not transparent; it was not as easily objectifiable. Clearly there's more communication and therefore more opportunities to lie and manipulate, but it's not clear to me that the situation is worse now than it was before. It's just kind of globalized and scaled up. And I do believe that people understate the degree to which existing platforms are already doing stuff about this. On Twitter, I notice that the things that get the most likes and most follows are actually positive things, like somebody saying, hey, I just published my first book. And the nasty, critical things that I sometimes say may not get the same level of support. It's not like these things automatically get amplified and the world is going to hell because of it. I think that we are overemphasizing the pathologies of social media and underestimating some of the self-corrective mechanisms and the good things about the way it allows more information to spread more rapidly.

Didn't we actually hear, though, that the algorithm is weighted towards those... sorry,

didn't we actually learn, I think it was from the Facebook whistleblower, that the algorithm was actually weighted towards the really fiery face? You know, the sort of, what do you call it, the one you use when you really hate something, that little fiery face. That was weighted like ten times above a like. But you just said it didn't happen.

It did happen in the early stages. And then they tried to tweak the algorithm, and they discovered it had other bad effects, and other good effects. It was just like, this is not simple, right? This is not "this is all bad" or "this is all good." This is a complex social system with lots of forms of feedback and interaction. And the platforms are trying. First of all, they have pressure on them from advertisers, who don't want to be associated with a lot of really bad stuff. Secondly, they want their users not to be flamed out and not to be turning away, like many people have done with Twitter. So I think they do have certain forms of checks and balances on them. And I agree that the Facebook Oversight Board is a good example of a somewhat multistakeholder initiative. The Global Internet Forum to Counter Terrorism is semi-multistakeholder; there is some civil society input into that, but the power decisions are really made by the private sector platforms. So there are multistakeholder initiatives that are, you know, doing things that are positive in this space.

So, Eduardo, you should have asked Milton to be the speaker on this panel instead of me, because I don't feel like any of these mechanisms is working effectively, and he seems to think everything's self-correcting and the problems are being managed effectively by the existing organizations. You should have gone with this cheery fellow in the front row here.

Well, the idea is that there are different views, right? That's the idea.

How many people feel that the existing mechanisms are effectively self-correcting and that the issue is being taken care of? I note that only Dr. Mueller has raised his hand.

I would argue it's much worse.

What's your benchmark? What are your measurements?

Well, these are things that a lot of people have struggled with, but there are a lot of studies that have tried to set out frameworks and baselines and to show the incidence, spread, and trajectories of disinformation. Are you raising your hand? Yes.

I would actually like to, well, join Milton on that one. Because it's a prototype. It's not perfect, it's not where we would like it to end up, but the Oversight Board is not making things worse; it's making them better. We still have a lot to do, we still have a lot of errors to fix, more participation, more diversity, etc. But it's a nice prototype to work with, maybe with other companies, with some, I don't know, international or national framework yet to be settled. But it's a start. It's better.

Thank you. I will add just by saying that I think it's important to also check our own understandings. We are all here for an ICANN meeting, and we are very cognizant of what we view on our feeds; we are probably evaluating this a hundred times more than most of the world. A lot of the disinformation that is being used to spread harmful narratives is going towards people who are thinking a lot less about where that content came from and exactly what's in it. So I think it's important to keep in mind, even though it's important that we all have these discussions, that we do not necessarily represent the majority of users, and to keep that in mind when forming opinions or making decisions about disinformation. But thank you all, this has been a very engaging discussion, and I think we're going to go to break. Thanks so much.