20220512 Content Moderation

5:21PM May 13, 2022

Speakers: Rob Hirschfeld, Rocky, Claus Strommer

Keywords: people, moderation, free speech, twitter, platform, case, seatbelts, content, espousing, ai, speech, problem, law, point, trump, bots, happening, amplification, talking, building

Hello, I'm Rob Hirschfeld, CEO and co-founder of RackN and your host for the Cloud 2030 podcast. Today's episode is about content moderation in social media. Fundamentally, it's about free speech and what types of speech we control, allow, and amplify. The issues here are really nuanced and challenging, and at the same time absolutely critical for our functioning society to get right. We go into a lot of interlocking issues, talking about what types of speech can be allowed, how we amplify things, what types of feedback loops we are creating, and what the historical precedents are for building safe systems. Overall, a fantastic conversation; I hope you will enjoy our back and forth. So the topic of the day is content moderation: how can we do content moderation? And I can't resist throwing the Twitter question in. I think when we scheduled this, Elon Musk hadn't yet, let me say it this way, the Twitter board hadn't yet capitulated to Elon's offer. But we've been talking about fake news and bots; this has been a pretty robust conversation for us. I think there are two questions. One is, with Twitter and Elon Musk, are we going to have content moderation at all, or content-moderation-free forums? We can talk about that. And then, how do we do it, if we're going to do it? What is content moderation in the coming free speech, public forum utopia that we're building?

Yeah, I mean, the first thing I would worry about is that content, for so many of us, is the same thing as porn, right? We know it when we see it. And to one person, a man without a shirt is porn; to somebody else, it's a woman with a horse in the barn. And there's everything in between. Unfortunately, content is the same. My concern with moderation is that we'll end up with moderation where freedom of speech is actually most important, and we will not have moderation in the places where it doesn't make a difference and it's just people wasting each other's time. Elon Musk, for all the hero worship he gets, would not have been my first choice for somebody taking over Twitter, and I said that pretty loudly when it happened. And of course, the first stupid thing he announced was his willingness to turn back on Trump's account. It's a perfect example of where moderation is actually needed and likely won't occur. And in other forums, people will control moderation from a political-view standpoint, which is what I'm worried about, rather than controlling moderation from an are-you-being-a-dangerous-human standpoint. Right. And the simple fact was that Trump and some of his acolytes were being dangerous humans; they were advocating violence and illegal acts. And to me, that's a pretty clear case of yeah, you should be turned off, as opposed to somebody just espousing alternative opinions on a subject that's open for debate. So I don't know, I could go on and on about free speech in general. I'm a huge advocate of free speech, but I also think that, like the Second Amendment, too many of us have used free speech as a weapon as opposed to the positive tool it always should have been, and those that scream the loudest about cancel culture are usually the ones canceling people.

Invariably, when we talk about content moderation, there is also the matter of abuse, which you kind of touched on there. And when talking about freedoms, it is frequently forgotten that one's freedom is not boundless; your freedom only goes as far as the rights and freedoms of other people. If you abuse your freedom to infringe on the rights and freedoms of other people, then that needs to stop, and that's where moderation comes into place. From a more pragmatic perspective, the content platforms also have a legal obligation to moderate themselves, because otherwise they are held liable for potentially harmful content. Yep.

Well, I have a question about that. Because when does the liability start, and where does the liability end? Whenever you sign up for a social network, you know that you're buying into their control of your data. So is it the minute you upload something that they then have the legal obligation to moderate? Or is it only in the case where they find something egregious that someone else has written that they then kick in their power of moderation? I think that's part of the problem. Is there any law that says, if you put up a social network, you are automatically liable for the content on it?

That depends on the jurisdiction. In most cases, or in most legal cases I've seen, it gravitates towards the former: as soon as something is visible on the platform, or actually even before it becomes visible, the platform itself is responsible for it. So they feel the need to monitor it before it becomes visible.

I think you're hitting a core issue there. Go ahead.

Case in point: cases about digital rights management, copyright, piracy, or mature content. Whenever we've seen court cases about this, it has largely become the case that the platforms have been made responsible for it. And the government or the legal entities have been, I don't want to say requesting because that's too nice, perhaps commanding the platforms to self-moderate or face legal repercussions.

But I heard a really interesting story, I think it was On the Media covering this, and I've heard a lot of takes on these things, that for these commercial platforms, if their content is too one-sided, or it becomes all pornography, or it becomes all one side of a discussion, they actually become echo chambers and the value of the platform goes way, way down. And we're talking about social media platforms here, because if you look at a cable news outlet, they can actually move into being a propaganda outlet without real consequences to their base. Right. That's not a human interaction thing; that's not free speech from that perspective. Right.

Yeah, sorry, I just wanted to ask the question: where is the differentiation, from either the FCC or content law, that says there's a difference between a media outlet and a social media outlet? And therein is part

of the problem. So the differentiation in the US is Section 230, whether you're a publisher or not. A media outlet is a publisher, and currently the social media guys have avoided being labeled as a publisher.

While we have people raising their hands, alright, who was first? I missed it. Mark?

Anyway, yeah, thanks, Claus. So, you know, every time I think about free speech, my first idea is that I've found an obvious problem, and then the second I think about it more, I realize it's really hard to put a wall around that problem and safely resolve it without causing other problems. One of those ideas just popped up, and Claus, I actually want to bring you in on this if it turns out to be appropriate. When you think about racism, and activities that espouse or demonstrate a wanton disregard for race relations, or a point that reinforces the idea of racism: in my mind, any mature business that had an employee or leader espousing any kind of racist view, my assumption is their ability to continue to function as an equity-based enterprise is suspect if they allow that person with racist views to maintain their position. Right? And I say that because, how could you possibly have somebody who espouses racist views and assume that they would make equity-based decisions on pay and hiring practices and job promotions and all that kind of stuff, for everyone across the spectrum? You can't; you would have to assume that they couldn't provide an equity-based answer. And so it seems obvious that if somebody is espousing racist views in society, they are immediately demonstrating the fact that they cannot be a fair, equity-based contributor to society. So again, a very poor definition of the problem I'm trying to get to. But this is also where I think there may be a parallel. And Claus, are you German?

Not directly,

Okay. Are you familiar with the laws in Germany about espousing views that would support Nazism?

Remotely acquainted with it.

Okay, yeah, unfortunately I don't know enough to bring this into the argument; I was hoping Claus might have a little bit more experience with it. But there are very strict laws in Germany, even though Germany, generally speaking, is free speech oriented, very strict rules about the topic of Nazism, about how it can be discussed, et cetera. And so I wonder if there are ways to expand on those kinds of models in other areas. Anyway, I'll get off the floor.

Yeah. Mark or Claus, either one. Yeah.

Yeah. So, Mark, the part that you said about equity is quite on point. And it also brings us to the topic of the paradox of tolerance, or the seeming paradox of tolerance. That's the question of whether we as a society should tolerate intolerance, and the argument I most agree with in that regard is that it is not a paradox to not tolerate intolerance. Because when you look at it in net terms, inaction on intolerance implies that you sanction it, that you approve of it. So by not tolerating intolerance, you actually reduce the net intolerance in the system, which then comes back again to the topic of moderation and equity and fairness. So, ultimately, what we see happening, and it's hard to prove, but it's at the very least apparent, I would say, is that platforms that tout themselves as being supportive of free speech largely tend to attract audiences that have not been tolerated on other platforms. That is not necessarily bad in most cases. It makes sense, at least in the best-case scenario, that an audience that does not have a platform under an authoritarian regime should have the opportunity for free speech. But unfortunately, in reality, what we see is that it gets abused a lot. Right, Joanne?

Okay, so Canadian law is different. We have in the Charter of Rights a guarantee of free speech, but there is a loophole to it: for example, if you willfully promote hatred of any sort, and this is very much to Mark's point about the German laws, and the case that's going to be made against Lufthansa for what they just did, refusing to allow a group of Hasidic Jews to board a plane in Frankfurt, of all places. You can actually go to jail for two years. So not only are you subject to termination of your employment, but you can be fined $10,000 for each instance, and then, if you are convicted of willfully promoting hatred, you can go to jail for two years. There are no ifs, ands, or buts. There are defenses that can be made; for example, if you repeat something that somebody else has said to prove their point was wrong, you're not actually promoting hatred. But in the case of Twitter, and Facebook to a certain extent, if you look at the terms and conditions, because there's case law on this now, they have the legal right to moderate in the event that they are notified that someone is willfully promoting hatred. So I think it's different by geography, but it's also different by whether it's reported to the corporate entity or to the leadership. That's one situation; if it's not reported but it's just out there, they may not have the same legal obligation to moderate, and I think that's where some of this has to really be fine-tuned. And the same would hold true, by the way, not to bring in another topic, but I have this ongoing discussion with some other people around the ethics of AI and the fact that the data models that may be used to create the AI have inherent bias in them, which most of them do. So how do you then go through this sense of moderation, or even mediated dispute, if you have an AI leading or generating, even in the case of a bot on Twitter, the promulgation of hatred, willfully promoting hatred, versus an individual who's doing it? These two things are starting to come together in a very tangible way, because someone can use a bot that actually has a racial profile or some other bias built in; it then goes viral on Twitter, people buy into it, or try to subjugate it, and then what?

It's good you mention the bots, because that actually happened. Someone trained a natural language model on Twitter, and it started spewing hatred.

I'm glad you're bringing up bots, because I think there are layers in this conversation that go beyond one person's speech and into the amplification of somebody's speech. Part of moderation, right, is stopping a person from speaking or taking a person off of a platform, which I think all the platforms view as risky, because somebody could be making a point that is not popular but isn't hate. And I think Joanne's making the right point: hate is different; talking against other people is different than making an unpopular statement, and those are explainable distinctions. But there's a whole other layer to this in social media platforms because of the amplification component. Part of the moderation we're talking about is not, do I stop a person from talking; it's, do I stop that person's voice from being amplified by my platform? And just that one piece, to me, is as important or more important. We were talking about bots: sometimes the platforms amplify by putting things in your feed using AI, trying to target you and capture your attention, which we've learned is potentially toxic, based on the AI's ability to filter toward things that you might want to see to the exclusion of balanced information. And then the other side of it is the bots that literally amplify people's expression. I have some more to say about that, but I'll pause, Mark, since you raised your hand, and also to let other people add in on the amplification side of moderation.

Mark, you're muted.

Yeah, I think the amplification piece can't be ignored, right? Somebody espousing hatred on a soapbox on a corner in San Francisco with a megaphone has a much different impact on people than someone on Twitter. But I would say there is a possibility that we must take into account the position from which someone begins their tirade, or their rant, or their espousal of hatred, as opposed to just the fact that a person did it. Right? Like, to many people on Twitter, Mark Thiele has a ton of followers, but at 19,000, compared to Trump when he was on Twitter, I'm a fly on the back of an elephant. So I could espouse hatred all day long and probably have almost zero impact other than annoying the people that follow me. But when Trump does it, or somebody like Trump, it doesn't have to be Trump, it could be somebody on the left, makes no difference, when somebody with that kind of position of power and followership does it, they run the real risk of hitting that, I don't know, is it 5%, 1%, 20% of followers who are willing to take what somebody says verbatim and actually act on it, as opposed to just accepting that this is somehow part of the new modern dialogue and it's acceptable to make threats against people. Another way to look at this problem: I think it should be obvious that the first time anyone either suggests they themselves would want to harm somebody else, or suggests that other people should harm somebody else, they should be taken off of any platform they're on. That makes sense to me. But is that the same as telling people that they should go protest in front of Alito's home about the right to abortion? Are those two things different? Superficially, you can say, well, going and protesting in front of their home isn't necessarily advocating violence. But it's hard to argue that protesting in front of someone's home couldn't potentially put someone in that home at risk.

You know, I was just reading that it's actually illegal to try to influence judges' opinions, so there are understandable feelings on this. Yep. And it's an interesting component: we actually have a law that says you can't try to convince a justice to change their mind on a pending opinion. That actually seems sort of strange.

Yeah. I used that because it was topical, but it could be at AOC's home, or it could be at Schumer's home, or

whatever. And people being people, right, we had a whole bunch of outrage over people talking to or demonstrating in front of politicians during meals, or not serving politicians because they disagreed with their politics. This isn't the path I was going to go down, but we have these interesting public forum versus business versus right-to-intercede questions. A couple of months ago, a restaurant owner and their staff decided they didn't want to serve somebody, a politician they didn't agree with, and asked them to leave the restaurant. And it's a challenging place, because in a lot of cases that restaurant owner should have a right to say, I don't want to serve you, and yet we also know that if businesses are allowed to make choices like that, they can be anti-democratic. These are very, very challenging conversations. It's the same thing with the public square; Twitter is not a public square, which we should talk about, but it isn't, and so if you silence a person in that square, then you are essentially creating discriminatory systems. But before we go down that path, I want to pull back to something y'all had gone through that I think is really interesting, because, Mark, you were talking about Trump having a huge number of followers. We've created this feedback loop for people who are using the social media application, where in the past we expected our leaders to become more moderate as they became more and more publicly recognizable; their behavior would be moderated. The social media feedback mechanism actually seems to have created more sensationalist behavior: we have representatives who are literally feeding into a loop where they get amplification because they are saying things that would not be acceptable in normal public discourse, but they say them on Twitter or other social media because they can, they then get amplified, and they go through this cycle, right? Part of Trump's appeal was that he could say, oh, covfefe, or whatever he wanted in that channel without consequence, when 20 years ago he wouldn't have stood in front of a podium and made statements like that. And what's interesting is that we've actually bled from that reinforcement behavior into truly public squares. Maybe I'm just old enough now to have the history to think about people standing on podiums and street corners with megaphones.

I could just walk away. Yeah, you could walk away. But what you're describing is very much a chicken-and-egg scenario, right? Because look at January the sixth and what led up to the insurgency: he told people in a public venue. To me, and maybe it's just my perspective, there's not a lot of difference between taking the Twitter following and inciting them to go and do violence or to protest, and standing in a public arena on a street behind a pane of glass with a megaphone and saying exactly the same thing. And to your point, Mark, I agree with you that position matters, but I don't necessarily think it's the determining factor. Because if you, with your 19,000, or somebody with 5,000, says something totally egregious, the viralization of that through your follower base extends out geometrically. So whether he has 20 million people following him or you have 19,000, it doesn't really matter; it's the act of promulgating something derogatory against a particular group or whatever. And from my perspective, I think the only way to reverse that is to either take the laws that govern that space and make them far more finite based on situation, or just blanket them across every potential media platform, whether it's Fox News or Twitter or the real public square; it should be one and the same. I don't know if that works or not. But

I tend to agree, sorry, Claus, just a quick response. I tend to agree that the number of people really shouldn't matter, other than the fact that when you do have 20 million, you obviously have a much higher chance of finding those few people that are nutjobs enough to actually go act on the stupid shit that you say. But generally speaking, I agree that if it applies to somebody with 20 million, there's no reason why it shouldn't apply to somebody with 20.

I'm going to go a little bit back to Rob, to what you were asking about. I am going to disagree with your statement about the popularity of a public persona driving them towards a more centrist position, because there is an exception to that, and it's a very strong exception, and that is populism, which is what we saw happening during the Trump period. And it's not something new, either; we've seen it historically. McCarthyism was very much like that. Nazism was very much like that; the Nazis wrote the handbook on populism, right? It is the same pattern that we saw then and that we see now: an extreme position, a very antisocial position, can gain popularity if it has a sufficient following and it is targeted at a subsection of the population. And unfortunately it is also very dangerous and hard to stop, because it creeps up on you. It's very insidious. You don't know that it's happening until it's too big.

Right? Absolutely.

Do you feel like those platforms are fundamentally based on untruths? Because the things you just identified were basically big lies turned into movements.

That is a very hard philosophical question. Because unfortunately, what happens is that the people who listen to those arguments believe them as well. Even though the arguments might be empirically untrue, in the worldview of the person that follows them, they are true, and it's hard to get a person out of that worldview. There are further arguments as well about the sociology and psychology of how people get to that kind of worldview; we unfortunately don't have enough time to really cover all of that. But yeah, there's a lot of mental gymnastics happening around it, and no amount of empirical evidence will change someone's opinion about it.

It's interesting, because content moderation implies somebody's providing a filter on these systems to limit the spread of them, which fundamentally seems anti-democratic to me. But you all have hands raised, so I'll hold my point

about this. Well, I wonder if this will feed into what you were potentially going to say, or at least follow the theme of it, Rob, but there are a wide range of double standards and dichotomies in American law, and they always happen, right? Things that apply to one don't apply to another. The Constitution is conveniently used and not used, and historically has been treated that way whenever the government deems it appropriate. And that's just a fact. If anybody disagreed with that, we could spend hours finding all kinds of examples where the Constitution has been used, abused, or forgotten when it's convenient to the government. So, freedom of speech: on Twitter, or apparently in public, the president can actually advocate for somebody getting killed. And yet anyone in the US who says, I want to kill the president, isn't just told you can't say that anymore; you're literally threatened with charges and the potential for being thrown in jail just for saying the threat. Right? So right away there's a dichotomy. And there's an assumption that if somebody is saying that, they might mean it, and if they might mean it, we need to protect the president. How is that not true for every other person in the United States? Yes, maybe it's more impactful if somebody kills the president than if they kill Mark Thiele. But the basic fact for why that law is in place is because there is a concern that when somebody says that, they might mean it. So why can't we use that same rule everywhere else?

That's a conversation stopper. No, it's a good question, a question of where a threat of violence goes. But we've definitely had people... you're muted.

The other thing I wanted to mention, which to me is sort of an obvious corollary, is that there are places where we've accepted that our rights, or perceived rights, because it's not always clear what our actual rights are, or what should be rights and what shouldn't, are impeded. I.e., I can't just drive 120 miles an hour on the freeway. Why not? Why shouldn't I be free to drive my car, which is capable of going much faster than that? Well, because I might hurt somebody. How does that not apply to free speech?

It's an incredibly important question. And slippery slope arguments typically are fallacies, they are almost always fallacies, right? And so...

Unenumerated rights.

Yeah, that's a very good point.

It's true, it's true, but there's also freedom of movement, and we've accepted the fact that people have the right to force us to wear helmets and put seatbelts on. And that right is argued to be infringeable because you can put a burden on other people by being hurt. So again, if we can make those kinds of leaps from a personal safety standpoint, then why can't we make that leap from...

AI? Bots? I don't want to interrupt you, but I have a quick point, if that's okay. Cars did not start with seatbelts. They inflicted a lot of harm before we chose to make them default to safe modes. And it's possible that we don't actually understand the harm of social media at the moment, and that we are living in the pre-seatbelt era for cars. Over one to many decades, and I actually think this conversation is part of that, we will reach a point where we understand what a social media seatbelt looks like. Safety media.

maybe that's what it will become.

Airbags and, you know, turn signals and things like that. Well, awesome.

That got everybody excited. There is also the fact that the harm of not wearing seatbelts, the default of doing harm with a vehicle, is much more easily quantifiable than the default of doing harm with speech. And that is a problem with laws in almost all jurisdictions: in many cases, laws need to deal with intent, and proving that someone had the intent to cause harm is very difficult. Going back again to social media and echo chambers, part of the problem, I think, is also the algorithms. We've had discussion in the past about Elon Musk talking about the algorithm, but the social media algorithms are built to amplify extreme positions, because those drive interest, and driving interest is what social media platforms are about. That's how they monetize themselves. And this goes against moderation, which tends to try to cap interest. Furthermore, when talking about moderation, you're talking about how objective it is, how the rules get applied, like a constitution, and who gets to decide whether a statement like "someone should be killed" is allowed or not; there's an amount of bias there as well. Finding an impartial moderator is extremely difficult. Perhaps there should be a global moderation algorithm or AI that is not under any single jurisdiction and can try to produce as impartial a decision as possible, but then we go down a much, much deeper rabbit hole as well. With that, I'm stepping off the soapbox.

I had my hand up, but I'm actually going to lower it; I've been talking too much.

Rocky, Joanne,

Rocky, go ahead.

Well, back to what Rob was saying about seatbelts, and also what Claus was saying about AI: I think what we're seeing is attempts at laws to regulate speech, but they're starting in the child exploitation areas, CSAM. They are actually making lots of mistakes, and maybe making some right moves, but as in the early days of trying to regulate cars, the regulation is going in fits and starts. In some ways, that stuff is more obvious. So until they get that right, I don't think we're going to see regulation of dangerous speech when it comes to ethics and politics, not until after we come up with something that works for something more obvious, such as trafficking and extreme violence, the real stuff that is currently actually hurting moderators. Now, I don't know if you've been following what's going on in Nigeria, but there, the people who have been contracted as content moderators have to view this stuff and take it down, and the PTSD from it is pretty severe, because there are some pretty sick people out there who decide they want to use the platform to advertise. It's going to take a while to get the laws right. And with AI, the large model approach isn't going to work, because that builds in bias, and until they figure out how to keep bias out of large models, AI isn't a solution either. So I think we've got a fairly long way to go before we get there. One of the other things I was going to mention is the fact that once it's on the internet, it's always out there somewhere; either the Internet Archive grabs it, or somebody else grabs it, and it can always be resurrected. Sometimes you want things to be resurrected, such as the Russian news feed that had all the real stories on it, and other times you want it to die a horrible death and never be seen again. Right?

Well, I think part of the issue, to Claus's point about an international body regulating it: what I'm curious about, and what I think is going to be a very big discussion across the world of tech anyway, is if he does open source the algorithms on Twitter, how those are going to be used and abused to communicate and get around the filtering and the moderation. And I think that's probably not so much the cybersecurity domain, but more the overarching umbrella of what it will be for media, for social media, and for out in the public domain of a town square. I mean, the minute those algorithms are released, you know there'll be malfeasance, and the right will assert some point of view and the left will assert some point of view. But really, the only way to do it, of any ilk, is pre-moderation before it actually gets published, and maybe that's the area that needs to be centered on an international basis. You know, it has to go through this set of filters before it's ever published, and I think that's probably the only way to control it. The question is, how does that come to be? Is it the, what, 50 or 60 countries who want a better internet that all signed up to that? Or is it going to be some jurisdictional or regulatory body in each country that will have a member? But if we pre-filter, then maybe we can alleviate a lot of this, because I don't know that you could actually put this down as law applying to a geography. The other thing we haven't really talked about is what about those cases in chat apps, WhatsApp or whatever messaging app. There was a case, I forget the name of it, I know there's a movie about it now, of a couple where she was pushing him to commit suicide. That actually went to trial. So that could become the beginning of case law for whatever we're talking about. But it has to apply uniformly,

or there are opt-in things like that. I have a whole page of notes about democratic speech and the influence of money in this, the seen and unseen influence of money. So I definitely have things that I'd like to come back to.

Just a note to consider if we pick up the topic of global moderation: we've seen historically how well that works with things like the Security Council. It doesn't fucking work at all, because China or Russia or the United States can just make a decision that they don't want it to apply to that particular problem, because they influence that problem, and the economic threat at a minimum is big enough that nothing ever happens. So, freedom of speech, goodness: just like with the Security Council or the Human Rights Council, freedom of speech will be approved or not approved based on how a particular country feels something being said makes them look. And that's certainly counter to free speech. If there was one thing that free speech should be available for, it's so that you can talk about the problems in your own government. And yet that's exactly the direction we're trying to get away from: that the government is perfect, the country is perfect, and you can't question it, and if you do, then you're the problem, not the government. And we all know where that leads.

On that chipper note, let us wrap up. Take care of yourselves. Don't worry about free speech for the day; we'll come back to it. There's a lot of work to do here, a lot of thought. Thank you, everyone.

Wow, what an amazing conversation. There is so much ground to cover yet to come, and we will be having these conversations at the2030.cloud. Please join us. We want to hear your opinions on this; the more voices we have in the conversation, the more rich and robust that discussion can become. Getting speech correct is absolutely essential for us to be successful as a community, and we want you to be part of that. So come in, be part of Cloud 2030, and I will see you there. Thank you for listening to The Cloud 2030 podcast. It is sponsored by RackN, where we are really working to build a community of people who are using and thinking about infrastructure differently, because that's what RackN does. We write software that helps put operators back in control of distributed infrastructure, really thinking about how things should be run, and building software that makes that possible. If this is interesting to you, please try out the software. We would love to get your opinion and hear how you think this could transform infrastructure more broadly, or just keep enjoying the podcast, coming to the discussions, and laying out your thoughts on how you see the future unfolding, all part of building a better infrastructure operations community. Thank you.