Eric told me he's got to leave here at 3:30 to catch a flight, and we have a lot to talk about, so let's get going. All right, I'm going to do a quick round of introductions even though I'm pretty sure most of the people in the room know everybody here. David Morar, next to me, is a senior policy analyst at New America's Open Technology Institute. He focuses on tech privacy and platform accountability issues and many other things, but we're just focusing on that for the moment. Professor Eric Goldman, who's over here, is at the Santa Clara University School of Law, say that twice. He focuses on Internet, IP, and advertising law, and I've been fortunate,
Joly, did I just get cut off? Joly's like stop yelling.
So I've had the good fortune of being on panels with him before, always a good, thoughtful conversation. Steve DelBianco is the president and CEO of NetChoice, where he works with his members and sets and executes NetChoice's agenda, of which this topic is a huge one, and I'm sure we'll be hearing about this. He also focuses on Internet governance as well as online speech and consumer protection. And Susan, I'm going to enunciate your last name, Benesch. So pretty, I'm going to say it again. Benesch is the executive director of the Dangerous Speech Project and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard, where she works on international law, human rights, and freedom of expression online. And I was just saying to Susan, I was in here for another conference. She's going to have to manage that. Yeah. Okay. Gingerly. Very good. Okay. So I'm going to just dive right in with questions if you kids are good with that. And these are the questions for my next panel, so let's see what that looks like. Steve, you've been doing a lot of work in the states these days. Tell us about some of those lawsuits you've got going on.
Thanks, Shane. Appreciate it very much. Congratulations to State of the Net for 20 years.
Tim was lamenting the fact that we're still talking about so many of the same issues that we've been dealing with for almost 20 years. And the states themselves will often step in to fill a vacuum if they perceive that Congress is not stepping up to regulate. But just as likely, state lawmakers, of which there are about 5,000, run roughly 2,000 bills per state per year; that's 100,000 bills in a compressed timeframe of several months. And it only takes about 10 days for a terrible idea to become a law in a state capitol. And all of those bill introductions, as well as the hearings, are fodder for town halls and fundraising at the state level. And they're messaging bills, messenger bills, much like what an earlier panel today talked about with regard to bills in Congress and the hearing that we saw last week. So we do our best to testify; NetChoice naturally shows up with written testimony. We meet privately and publicly with lawmakers about elements of their bills that are unconstitutional and would have tremendous unintended consequences. Just to give you one example, a bill that requires everyone to prove they're an adult means that even adults have to prove that they're an adult. That means that everybody using YouTube, Wikipedia, and Facebook, who doesn't even sign in to use their Twitter account, would have to not only get an account and identify themselves; for every search they do online, they'd have to provide two forms of government ID, enabling the tracking of everything they do by a platform that would rather not even know what your ID looks like. So once we reveal those concerns, 50% of the people we reach in phone campaigns will say, wait a minute, you're saying my lawmakers are going to require me, a 65-year-old guy, to have to prove my age just to use YouTube? I'm not for that. And over 50% of the people we call want to be patched through to their lawmakers in the capitol. But even that doesn't dissuade lawmakers from the irresistible draw of a messaging bill that lets them say they're standing up to big tech and they're protecting the children of Utah. So as a last resort, and it is a last resort, NetChoice has turned to the courts. This morning there was a panel in here talking about the NetChoice cases against Florida and Texas having to do with the First Amendment. But in the states we've had to sue California, Arkansas, Utah, and Ohio, thanks to great work by DWT and a couple of other law firms, and we obtained injunctions in all of them. Just two hours ago, we obtained an injunction in Ohio against their age-gating bill. And we do that as a last resort, because the methods these states are applying are not only a violation of the First Amendment, so they're unconstitutional; they have unintended consequences that will not serve the interests of parents and children. They take the eye off the ball of what we really need to do, which is to go after CSAM and those who purvey CSAM, go after criminals whenever we can, and empower parents to make choices. Finally, Shane, I might just add that one piece of advice we could offer at every hearing is, when a lawmaker says, I think I want that curfew from 10:30 at night until 6:30 in the morning, and you guys, tech, you need to make sure that those kids are not online: with all due respect, Mr. Chairman, parents should go upstairs and take the fucking phone away. There's just no reason to let them have that. And that usually stops the talking for a bit, but they pass the bill anyway.
There's no FCC rule here about you know, Janet Jackson, just in case anybody was wondering, that'll probably stay in the uh....
So those are some really strong points. David's got some thoughts on that; you were writing a few things down?
Well, thank you. Thank you, Shane. And happy 20th to State of the Net. It was very interesting to me to see this perspective from the states, from what's happening at the state level, because there's a clear conversation that is not happening at the federal level, right? The states are the ones that are, you know, running 100,000 bills a year, right? It's a conversation that doesn't have the benefit of a lot of folks from civil society, either, right? It's a conversation that's kind of away from most of the people in this room. And so, to me, it was a very interesting way of looking at the 230 conversation, because, you know, states can say, well, we're going to do things differently. And we clearly had an example in Utah, where they really went, I would say, maybe further than California and Texas in going to a curfew. Yeah. So just an interesting start to the conversation.
I don't know any kid who doesn't know how to get around a curfew. Any thoughts on this from this side of the table? And also the challenge of, are we pushing this onto the states because we've had a hard time with this at the federal level? So I'll add that to the mix.
A hard time... may I? A hard time is a very polite way of putting it. We already have some discourse norms on this panel that I'll be happy to join in on when the time comes.
This is a free for all. There's no time. The time Is now. Go for it.
So first of all, on the federal level, I would just remind you that during oral argument in the Gonzalez case, three of the justices said versions of, I am confused. One said, I'm confused. Another one said, I'm thoroughly confused. And another one said they were totally confused. They're very confused. And Justice Kagan said, well, we are not nine experts on the Internet here, and that was also a beautiful example of understatement. That is the case, obviously, in the courts. It is also the case in Congress, and it has been for a long time. Since in some way we were invited to talk about Section 230, which is old and hoary, now nearly 30 years old, I'd just like to remind us that Section 230 emerged, more than anything else, from a defamation case. And defamation is a harm brought about by, usually, one natural person against another. That is to say, it is a personal harm. Two things: number one, it very rarely implicates children. We, that is to say legislatures and courts as well, are either genuinely worried about children or, in some cases, to put it politely, worried in a useful but disingenuous way. In any case, defamation, the root of Section 230, generally doesn't have anything to do with children, and secondly, it doesn't implicate public order. Now, the sorts of harms that those who would wish to improve things by regulating are concerned about are public, collective-level harms brought about by types of content like disinformation and extremism, and therefore they call for a different kind of thinking, different sorts of solutions. And then I wanted to quickly say one other thing, which is that this panel is about finding the balance, the balance between speech and regulators or would-be regulators. There is one factor and one group of stakeholders who are terribly left out of efforts to find this balance in almost all cases, and those are researchers and research itself. We have heard today already so many examples of bills and regulatory initiatives that purport to protect children but either wouldn't, or would also bring about lots of other harms.
In most cases, design interventions are invented and carried out by tech companies without any real knowledge of what they do, of what sort of effects they have. There has to be space for rigorous research to determine what the real effects are, in order to make sensible policy choices, whether you're trying to protect against defamation or against big collective harms. So that has to be added to this effort to find a balance.
So I want to come back to the research question. And thanks, I think it's a really good point. But Eric, why don't we have you comment on this?
I do want to thank the State of the Net organizers for putting this panel and conversation together; what a great opportunity for us to celebrate 20 years of successful policy discussions. And actually, the question you tee up is even older than State of the Net; this is a 1990s question: what level of government is the appropriate one to regulate the Internet? And really, you could summarize it in four ways. You could say international regulation, national regulation, state or local regulation, and then, what we discussed back in the 1990s, some form of Internet-specific regulation or self-regulation. This is not a new question. This allocation of responsibility between national and state regulation is baked into our Constitution, but the Internet is different from the questions that they were trying to answer back in the 1780s. The question that's on the table today is, can a state run an experiment in this laboratory of experimentation at the state level without affecting any other state? And the short answer now, today, even with all the technology we have, is probably not. The only way to do that would be to impose geographic authentication requirements, which themselves would create many privacy and cost implications, and which are probably not constitutional. So if we don't do that, then the question is, how can a state launch an experiment in this laboratory of experimentation? Federalism says let the local conditions dictate what local solutions are best. None of that makes sense for the Internet. It's apples and oranges. And so we're seeing what it looks like when the states unleash the beast: what it looks like is they pass messaging laws that were never designed to actually be good policy. And then, and I use this metaphor on my blog, the dog catches the car, and then we take it to court. And guess what happens: the dog doesn't fare very well. So they're going to keep yapping, but they're not actually solving a problem. They're only creating them.
Shane, here's a follow-up to that you'll be amazed at. As a consequence of the court injunctions that NetChoice obtained, the states have gone another way, and Professor Goldman calls it the bounty hunter approach. So Utah, having gone back to the drawing board, has decided that this is the way to get around constitutional limits such as the First Amendment, and Susan, none of these bills had anything to do with 230, none of these lawsuits had anything to do with 230, it's all about the First Amendment. So they've decided they're going to copy the Texas abortion law and simply empower the plaintiffs' bar to bring causes of action, okay, with $10,000 minimum damages per violation. So the new bill in Utah is, as Eric says, a bounty hunter bill, and it's already been copied in other states, and I predict it's going to be in several more states before the end of the spring terms of state legislatures. But consider what it does: it enables the plaintiffs' bar to sue any social media company for an adverse mental health outcome arising in whole or in part from their excessive use of the social media company's algorithmically curated service. It presumes that harm was caused, that's in the law, presumes harm was caused, presumes that the use was excessive, unless the platform shows it does these things: it limits any minor, which is anyone under 18, from using social media for more than three hours in any 24-hour period across all platforms and devices. Eric, when you catch that car, how are you going to implement that? I would wonder. It restricts a minor, anybody under 18, from being on social media between the hours of 10:30pm and 6:30am. A parent or guardian has to consent to a minor's use. You have to disable any engagement-driven design for showing content that I'd like to see because my friends like to see it. And, this is the best one, you have to display any feed in the order in which it was posted, rather than using any algorithm. Well, the order in which it was posted is an algorithm, but they're really saying that all feeds need to be reverse chronological, without regard to age-appropriateness, without regard to whether they're friends or pages that they follow, or content that they like to see. So it makes social media completely unusable. And perhaps that's the intent. And it will, in fact, change the entire landscape of the country if those lawsuits at $10,000 per violation are unleashed on any social media site.
Anybody want to, before I move on, thoughts on that one? That one's a pretty big one. That's kind of news to me.
I do want to mention just briefly about these bounty laws, because they're really, I mean, such deeply insidious policies. I remember when I went to Cuba on an educational research trip, and they were talking about how they have these neighborhood watch programs, where there are incentives for neighbors to call out any behavior by their peer neighbors that isn't in keeping with the instructions of the Cuban, Chinese, I'm sorry, the Cuban Communist Party, and that effectively turned everyone into surveillance mechanisms of the state. And I was just creeped out by that, because that's not how we run it in the US. Or maybe we do now; maybe we are all each other's keepers, and we are all responsible for going and effectuating lawsuits that are designed to basically be proxies of a censorial government. I will point out that the main reason why you do the bounty program isn't because it's a good enforcement mechanism. It's a terrible enforcement mechanism; the plaintiffs have all the wrong incentives if the government actually cared. But the reason they do it is because it avoids constitutional review in court; it delays the constitutional review until after lawsuits have been brought. Which means that if no one ever brings a lawsuit, you just have the sword of Damocles hanging over the entire industry, because everyone's concerned that they could be subject to thousands of lawsuits, none of which have yet been filed, which would trigger the ability to challenge it. So whenever you see these bounty laws, you should assume that whoever's doing it knows what they're doing is unconstitutional, and they're doing it solely for the purpose of evading constitutional scrutiny, not because it's in the best interest of their constituents.
So we're going to get out of the Utah cul-de-sac, which is where I feel we are at the moment, and go across the pond over to Europe, which has been doing this from another perspective, which is interesting because they don't have a First Amendment. And there are a lot of people that would like to emulate some of the rules that they've put in place, which we're watching come in, like the Digital Markets Act and the Digital Services Act. But, you know, how are these companies supposed to manage through all of this process and continue to be of interest to a consumer? It gets back to your question, Susan; you were talking about research. I always wonder, do you think the research can front-run it? Can we get research about this that is nimble enough to manage the current challenges? Because, you know, I think about Cambridge Analytica, and by the time we figured out what had gone on there, we didn't have time to self-correct for that. And everybody's very scared in 2024 about how this is going to go forward, with 16 major elections. I think I just put about 18 things on the table there. But let's go back to the research question, I think that one's important: how do we make sure we have enough information flow here that we can come back with some maybe more safe and sane thoughts as we try to decide what to do with this industry, which people seem to really love? They like using it.
That's right, and you did indeed pack an awful lot into one question, including several suggestions about what could go wrong with research. One, it takes time. Yeah, that's true. But that's not a reason to
I know. I love the research. I just wish we'd known that faster, and that's hard sometimes to do while having very good, thorough research. So when you brought it up, that's what I was thinking.
Yep.
But the sooner you start, the better. That's true under the right conditions, with the right rules, the right constraints to protect user privacy, and to publish in peer-reviewed academic journals rather than keep the results secret inside companies, as is virtually always the case when rigorous A/B testing is done on any of this stuff, which is already rare. Cambridge Analytica is mentioned endlessly by companies as an excuse not to allow serious, ethical, rigorous, collaborative research with outsiders. That's pretty curious, since it was the company that egregiously fucked that up and gave private data of users to completely irresponsible outsiders. And so now they have been whining, for how many years has it been since then, that we can't possibly do collaborative research because it doesn't work. No. You screw something up spectacularly, and then use that as an excuse not to do it properly? Where else could you get away with that?
I also can't fail to mention the Coalition for Independent Technology Research, which four of us founded a couple of years ago and which is now up and running vigorously and inviting members, so that we can promote and bring about rigorous research, in collaboration with companies and independent of companies, and also hold researchers to high standards, and in fact get this stuff done, get some useful findings.
So I could talk about the research thing for a long time. Any other comments on the idea? I mean, I'm 100% with you on the idea that we need to have it, we need to have it studied, and we need to have it in regular increments. I'm just very concerned about this current election cycle, so that's probably what's coming through.
No, I'm not worried about it at all.
Okay, that's fine.
On the EU?
I'm just saying a lot of alcohol and books. What?
David had a point.
Yeah, no, thank you. I just wanted to say, I think for the conversation in the US, in order to get to the point where we are able to do, you know, good-quality research with data from companies, we first would have to pass a comprehensive federal privacy law. I think we need to have a baseline of strong protections for users. And then on top of that, there are ways that maybe don't need a legislative structure like the EU has now through the DSA, which allows for researcher access. But in order to get to that, I think we need to have some basic protections for that data before it's shared with researchers, and even outside of it being shared with researchers. But at the same time, the DSA, for all its goods and bads, has definitely started that conversation about researcher access in the EU. And, you know, that Brussels effect is going to make its way over here as well. So, hopefully.
I'm sorry, can I just piggyback on that briefly? I do think it's interesting that the DSA does provide datasets for researchers, and I'd like to see how well that works. That's an open question. Europe routinely passes laws and then doesn't go back and ask whether they accomplished their goals, and this is one where I'm very curious if it will accomplish its goals. And if not, we definitely would not want to invoke it as a model. But to Susan's point, I just want to reiterate the concern that people who style themselves as researchers could very well be threat vectors. There's a variety of different ways that they could be acting not in good faith, and we need to be concerned about that. And of course, things like: once they get data into their possession, what steps are they taking to avoid creating another threat vector? For example, some of the bills that were proposed have said they should follow reasonable security practices. Like, seriously, that's how we're going to solve it, a one-sentence statement without any enforcement mechanism? That's not helpful at all. So in order for us to have, I think, a productive policy discussion about how we can enable researchers, we have to acknowledge them as a threat vector and build a robust infrastructure to contain the potential risks and make sure they're offset by the benefits.

Shane, with respect to the DSA, which we've just been covering, the Digital Services Act in Europe: it is enacted on a continent that doesn't have a First Amendment. Here it says Congress shall make no law abridging the freedom of expression; that doesn't exist in Europe, or anywhere else for that matter. And absent that, Europe has for centuries enacted laws like the French laws prohibiting criticism and parody of the president. Austrian and Finnish laws criminalize blasphemy. Hungarian laws prohibit any pro-LGBTQ+ content if it's accessible to anyone under 18. And Turkey and France ban certain types of offensive humor, and we've seen the dark side of that. So all of those laws would never be able to exist if a First Amendment constitutionally governed, but they do exist, and new ones are coming out all the time. So the DSA simply said, we're going to allow any of these national governments to tell a platform that there's illegal content up there, illegal based on any national law or regulator's ruling, and once informed, our platforms have to take
expeditious steps to remove and limit the sharing of the content, or they would face a fine of up to 6% of their global revenue. That's certainly going to get your attention, and the platforms are all trying to figure out how to handle that. But the good news is the platforms are not liable if, having been given notice of unlawful content, they take it down. Many of NetChoice's members have said, we'll give that a try; draw a nice, bright line between what is illegal and what is legal, because only two forms of speech are illegal in the United States: child sexual abuse material and credible threats of proximate physical harm. Outside of that, everything else is lawful. It's awful, but it's lawful. And it's because of that that our platforms have had to take dramatic steps to get rid of dangerous speech, Susan, what your work talks about. We've had to take dramatic steps to limit lawful but awful content that shows up, because advertisers and users can't stand it. And of course, that's what got us into trouble in Texas and Florida, leading to the case that's in front of the Supreme Court two weeks from today, because Europe is different without a First Amendment. And one other thing that makes Europe different is they use the English rule for lawsuits, as does the rest of the world: the loser pays the other side's costs. So in the defamation suit that gave rise to Section 230, the Wolf of Wall Street suing Prodigy, anywhere else in the world the Wolf of Wall Street would have had to pay all of Prodigy's legal costs if they had lost that case. Well, that's not happening in the United States. In the United States, the plaintiffs' bar can just promise enough money to the potential victims that they can finance a suit, bring the suit, and if they lose, they walk away and file a suit against someone else. So the absence of tort reform in Europe and the absence of things like Section 230 are owing to one thing: the rest of the world uses the English rule of loser pays. So we are unique in our First Amendment, we're unique in our lawsuit culture, so we have to come up with unique solutions.
Staying on solution sets: one other part of this I'm very concerned about is fragmentation. And this being the 20th anniversary of State of the Net, one of the major things that Section 230, love it or not, really did was get the initial foundation of the Internet in place. And now I feel like we're struggling with how to manage that with all the freedoms that we have in the United States, you know, the First Amendment, freedom of speech. But we're also watching more and more countries just decide that they are going to decide how the Internet runs at the whim or the decision of the government, and it's not, in my opinion, for the best of their own citizens. So are we seeing any other solution sets out there that we should be thinking about? I mean, I haven't seen them, but I'm open to ideas.
Law enforcement cooperation, when it comes to fraud, abuse, or CSAM, is a huge step forward. International cooperation and Interpol, if they can work together with adequate resources and cooperation, that kind of multilateral action can really stop some of the cross-border crime problems that we have. And there was a panel earlier today where Maureen Flatley talked about the need to go after the criminals. In fact, Section 230 only says it's the criminals you have to go after; you don't go after the platform that they may have used to put their content out. But why don't we focus on the bad guys for a while and spend some money to go after them? That's an international cooperation element.
All right, well, that also brings up automation. One of the things that we heard on a panel earlier today is how, with cybersecurity and artificial intelligence, automation can definitely cut both ways, but it can help with catching bad code and bad people; it's just going to be faster and hopefully more efficient. But we're going to have to remember to have the balance there. So any other thoughts? I get it on law enforcement, but on the automation front, do we think that we're headed in a hopefully net-positive direction? Maybe we can find a way to make this work. People can go back to the cul-de-sac in Utah where they have certain requirements, but I mean, it's going to be very difficult for them to keep that only in Utah. So as we try to learn how to automate answers through this and see what it is that's concerning to people, will it be the content companies or will it be the laws that we put in place that get to decide what you see? And how are we going to manage that without fragmenting things, and hopefully using automation to a net positive?
LLMs could make a difference at detecting images that are illegal under child sexual abuse material laws, and that would spare humans from having to look at those images in order to determine what they are. I have to believe we're only months away from having that work. The companies are experimenting with the use of large language models and AI, but you can do more; you can also use it to investigate speech patterns where it's so difficult to tell satire from genuine hate speech. Sometimes AI could contribute to an increased understanding, flagging user content that may violate our community standards. Notice it's community standards I'm speaking of, not national laws, right? Because the platforms themselves have adequate incentives to limit things that offend users and chase away advertisers. I guess Twitter, or X, would be the one exception here, because getting rid of advertisers seems to be what he has in mind. And I think that AI could play a huge role there as we get better and better at it.
I do want to chime in on this. From my perspective, I feel like regulators often have schizophrenia: on the one hand, they want immediate resolution of all content, both illegal and lawful but awful, which can only be done by machines; on the other hand, they know that there are going to be mistakes made, and depending on the nature of the potential problem, a potentially wide range of mistakes, and they want humans in the loop. And it's kind of like, you can't have both. Trying to walk that balance sets up failure on all sides, right? You can't be both instant and accurate; you kind of have to pick one. And then if you build systems where we put more humans in the loop, we have to talk about the cost of that; there are reasons why machines sometimes are the only possible cost-effective solution, and there are other times when a human in the loop is essential, you cannot do it the other way. We've seen situations where someone's been hit with a false positive on something like CSAM, where people have been kicked off of the entire Google network; try living your life without using any Google product or service. If you're accused of that falsely, you need to have a mechanism to deal with that. And so it's kind of like, what do you want? You want the magic wand where machines only make perfect outcomes, and since we can't have that, which trade-offs are we willing to accommodate? And those trade-offs are what regulators are supposed to be best at, and that's what they're actually worst at right now; they don't actually talk about trade-offs, who's going to benefit and who's going to lose, and that's the part we ought to be really clear about. So I'm a big fan of machines, I'm a big fan of having humans in the loop, and I know I can't make the decision about which one is the right balance for any particular service.
So I want to chime in on that. Of course you need machines and you need humans. The thing is to figure out which are the things that the machines, in this case the LLMs, can do best, and let them do that, and where are humans most effective and also least harmed by doing the relevant work. It's true, Eric, that you can't have perfect outcomes instantaneously. But you're going to have much better outcomes than we have now, more quickly, by testing, again, and finding the right balance. So of course you can, and companies already are, deploying LLMs to make decisions more quickly and to hire fewer moderators, so that they can, of course, spend less money on them and also damage them less by exposing them to the terrible content. There has to be more systematic oversight, including by outsiders; that is a piece of the puzzle that is also almost never discussed, and it is absolutely vital. Companies are regulating human expression more than any government does, and more than any government ever has, yes, that includes China, on a much, much greater scale. They have much more power over human communication than any state, than any government. And we have virtually no knowledge of how they are carrying that out, whether we're talking about humans doing the carrying out, or software, or LLMs. That's untenable. You should join me in being up at night worrying about this. So a system for regular oversight of companies' speech governance, to include content moderation but not be limited to content moderation, is vital. First of all, coming up with such a system would require the people who come up with it, and the people who critique it and improve it as part of that process, to make some of those decisions that you refer to as trade-offs. And secondly, as it gets tested, one would find a better and better balance between the efforts of the machines and the efforts of humans. Take just one example quickly: we have no idea whether different groups of human beings have similar access to platforms. When men and women post similar content, when binary and non-binary people, when Hindus and Muslims in India, for example, post similar content, is it being taken down at the same rate? The Oversight Board has absolutely no capacity to test that, because they get to look at one decision on one piece of content or one account at a time. So how is it possible that in this time when we claim to care about DEI, we're not even asking questions like that? So we need a standard, a set of standards, for systematic oversight of content moderation and other practices by companies.
The largest company-driven content moderation on the planet is something that you're grateful for every day, and it's spam filtering. Yes. And the spam that is pulled away from our mailboxes is equal to the email that's allowed to get through. And it's not just email; it's other messages, and it's even some of the ads and postings that come across as spam. And the way it got to be effective at that was by being trained by humans: every time we categorize something as spam, that teaches the machine to recognize spam in that cultural context. Also, the companies themselves are beginning to employ artificial intelligence like LLMs to discover ways to block spam. And that is effective, but not as effective as it could be. But I can tell you how it would become ineffective, and that's for somebody to take a cleaver to Section 230. Because stopping spam messages invites lawsuits from the business whose messages are not making their way through; without Section 230(c)(2), the lawsuits begin and then they never end. Because part of what we benefit from is trying to limit the amount of information that is lawful but awful. I'm not saying that every spam message is inherently fraudulent; it's just something that the platforms and the ISPs think their users would prefer to do without, and they know that their advertisers don't want to show up next to that. So some of the content suppression, you can study it. But it's extremely challenging, Susan, to study that, since the machine learning that goes on for spam is changing every nanosecond, based on the experience of new forms of spam and on what users are categorizing as spam.
Just one sentence. It's great that you bring up this example, which generally doesn't make it into policy conversations, in part because almost everybody approximately agrees on what is spam and hates it. So think of the contrast between that and other forms of content that various different stakeholders would like to restrict. But it is, for that reason, a very interesting case.
I just wanted to add: at the beginning, Steve talked about the four levels and said international, national, state, and then self-regulation. There's one more in between, and that is multi-stakeholder collaboration. It's not just industry self-regulation; it's having civil society at the table with a voice, with some level of power to talk things through, right? It's industry allowing, like Susan said, a little bit more access, within the confines of a specific structure. And there are examples of industry and civil society collaborations that have been very successful in specific areas, and there are obviously examples where it's not been successful. But I think there are ways we can be productive and actually try to solve some issues in this space without leading to, you know, regulation that goes the way of what's been happening in parts of the states, where whatever you do then goes up to the Supreme Court to deal with. So I think it's important to realize that we're not in this binary of, if we don't regulate then it's up to only the companies to do something. There is another way forward, and that way forward is to bring in civil society, to have these voices at the table that then also have the knowledge from industry, right, that says, hey, here's generally how our algorithms work. You can have it at the level of the industry, you can have it at the company level, and, you know, the Facebook or the Meta Oversight Board is an interesting example, but it's not built for that, right? It's built for a sort of judge-and-jury perspective on specific pieces. But there are ways to do this, and I think we can be hopeful even though there's a lot of harm on the Internet.
You know, Shane, for 20 years we've worked together at the Internet Governance Forum, at ICANN, and at other multistakeholder bodies where civil society, the business community, and governments occupy the same space. ICANN might be the exception, because it is very much non-governmentally controlled. But at the Internet Governance Forum, run by the United Nations, it seems that no good deed goes unpunished: no matter how much we describe what is technically desirable and capable, no matter how much you describe the need for diverse communications, it is ultimately up to each of those governments to preserve for itself the power to decide what it thinks is legal or illegal. And if insulting the president of Turkey is illegal, there's not a whole lot we can do about it, and they shut off the Internet to enforce that. So I agree that it's important, but we should focus on things that we can get governments to agree on, which is where crime fighting comes in, right? Focus on things we can get them to agree on. Because when we hammer them about the head and shoulders with free expression, what they hear is us threatening their power by fomenting more criticism, more organization of protests, and they don't take kindly to that once the door is closed.
And this is the WSIS+20 year. The Internet Governance Forum has a remit from the United Nations, and there are always those who would like to shut it down because they want it to be more of a one-country, one-vote way of managing things. So it's a big year for a lot of things. So Susan, going back, you mentioned, it sounds like the Oversight Board is too much of a court for you; you'd like a little more information flow, let's say. And what's going around in my head is a combination of that and, like you mentioned, the lack of a privacy piece, any sort of federal privacy law here. Do you have something you would recommend, where maybe with disaggregated data the information could be shared, but there would be a comfort level for the companies to feel like they can bring the information forward into the academic arena? I always feel like the challenge is, they're putting a platform out there, and it's people putting their own information into it, some of whom may be crazy. But, you know, it's the understanding that somewhere along the line there will be a level of trust and legality that will allow the information to hopefully put us in a better place. So I'd love your thoughts on that. Yeah,
Thank you. First of all, I didn't mean to say that the Oversight Board is useless or isn't accomplishing anything, rather just that it has a remit, you know, it has certain capacities, and the kind of oversight I'm describing is simply not, in spite of its name, part of its gig. As for getting companies to release data in a responsible, privacy-protecting way, we have other models. After all, there are researchers, including but not limited to academics, who work with, for example, medical data to do epidemiological studies, which are absolutely vital in saving enormous numbers of human lives. That research also takes time, but eventually has huge benefit. There are ethics boards at all accredited academic institutions of higher education, to which researchers now have to submit their experimental designs. Anybody who's a professor at a university in the United States, and in many, many other countries, can't go forward with research that they expect to publish in a respectable academic journal without first submitting their experimental design to such a board and getting it approved in advance, and then they must follow the standards for ethical, responsible research. There are also independent boards for researchers who are not attached to academic institutions. In other words, we're not talking about inventing a brand new wheel here. We're talking about bringing something from other spheres of knowledge and activity into the online world, into the digital world. In another sense, the digital world is still kind of naked where other industries are clothed. For example, just about every other industry whose goods and/or services have significant capacity to benefit or harm the public is subject to some kind of regular oversight by people whose standard job, whose profession, it is to do that oversight. Bookkeeping, for example: we know what is proper bookkeeping and what is improper bookkeeping. You can't make cars without somebody crash-testing them and someone making sure that the crash tests are correct. You don't run a slaughterhouse, make food, make drugs, et cetera, et cetera, without some systematic form of professional oversight. It shouldn't be so controversial to at least aspire to setting that up. Of course there are very serious challenges to overcome, like privacy, but it's not at all insurmountable, and the potential benefit is enormous.
I really want in on those datasets, so I appreciate that. But I also love that we can do snap polls; when it comes to, you know, things that we want, we can find out right away, but things that need to be responsible, for some reason, take way too much time. So it's 3:17. We're getting you out of here at 3:30. All right, we've got 13 minutes for questions, and I have a feeling this front row right here would love to start. Do you want to start?
Right. I'm on the normal, cold calling.
Okay, well, good, because you've got it. That makes it much easier,
right? Yes. Thank you all so much. This is an excellent panel. I wanted to go back to the bounty hunter aspects. Can you expand more on how that gets around the constitutional hurdle? Like, is it that, you know, NetChoice would have a harder time bringing a preemptive strike? I'd love to hear more.
I'll just start, and Eric, clean up my mess, because I'm not a lawyer. But the NetChoice lawsuits have thus far been able to do a facial challenge: we challenge the words that are on the face of the law as a violation of the First Amendment, Congress shall make no law abridging the freedom of expression. And then we show the ways in which the state bill compels us to say things we don't want to say, or restricts us from saying things, and restricts citizens from seeing what they want to see. In Utah there was a user lawsuit, not just a NetChoice lawsuit. So those are brought against the statute because it's the government doing it. As you know, the First Amendment says Congress shall make no law, but it allows this place to kick you out for using bad language, sorry, it allows a restaurant to kick you out if they don't like the way you're dressed or the way you're speaking. So private parties are not affected by First Amendment restrictions. So I'll turn it over to Eric. NetChoice can't likely sue Utah over the bounty hunter bill, any more than a lot of concerned women's groups could sue Texas over their abortion bounty hunter bill; the Supreme Court took a look at it and allowed it to stand. But Eric, absent a constitutional challenge, if the law were used to sue a company, does the company make constitutional arguments in its defense?
Yeah, the reason why it can't be challenged facially is because there's been no actual harm yet, because the lawsuits haven't materialized. And so the courts can say there could be harm, we don't know, so we'll wait and see. It's not that they're saying there never could be constitutional scrutiny; they're saying it's not timely yet, we don't have enough facts to make the constitutional analysis. To me, that's illogical: if we know they've created a claim, and we know that the companies or individuals will seek to minimize their risk under that claim, then we know that the existence of the claim will create enough change of behavior that it should be evaluated under a constitutional standard, whether or not that's permissible. But that's not what our courts have said yet. Now, I hope that somebody will fix that. That, to me, is a hole that has been abused by regulators who know that they can play this game. Once a case is brought by somebody to bring an enforcement action, then it will be timely for a challenge, and it possibly can be done on a facial basis, but at that point it more likely would only be, quote, as applied to that particular circumstance. And so that will leave it open for other people to bring lawsuits and have parallel constitutional challenges taking place for each of the different types of claims that are being brought. This is not the way to make law, and this is not the way to establish the constitutionality of law. And again, I remind you, anyone using this is probably doing so in bad faith.
A question over here? Oh my, thank you.
If you can introduce yourself. Yes, sure. My name is Lorelei Kelly. I lead the Modernizing Congress portfolio at Georgetown, and I have been working with the reform committee in the House for the last six years. So my question: the lawful but awful chickens are coming home to roost right now in Congress, and members are experiencing 300% more death threats. One of the reasons these really good members are retiring is they're just done with it; it's not worth it. I do a lot of field research in member offices, and there's just a constitutional wasteland in between these old, archaic models of the right to petition, which now means you protest on the Mall or you hire a really expensive lobbyist. It's either purchased access or protest. Congress has been so derelict in its ability to handle the modern world or have any situational awareness until recently; it took a pandemic to get Congress to get on Zoom and to create a bunch of electronic workflow. My hopeful path forward with this is just that I think there's probably a different way for Congress to legislate on this, because they're going to have to start using this technology in their own workflow to recreate norms which have been obliterated, and they're seeing the absolute downside of what they've created. The monster Frankenstein baby with an AR-15, literally, is what freedom of speech looks like right now, with an institution that does not have a continuity of government plan, which is stuck in this conversation about January 6, so it's not moving on to all of these other issues. I would just argue there's a new crop of lawmakers right now that understand this crisis in a very personal way. That might be really an important way to jumpstart something productive. Does that make sense?
You're speaking about federal lawmakers, a new crop in the US Congress? Yeah. You're right. Well, nobody's
organizing it like this; it still comes in the same old way. People advise Congress, it's the most-advised body on the planet, but it's always read this white paper, not use this new technology in a new way that makes sense for you. And there are hearings every week, and they're nerdy, they're House Admin and House Clerk, but they're rebuilding the information architecture of democracy right now.
When you're saying they're doing it, Lorelei, is it with a personal experience of using the tools?
Some of them. It's definitely age-dependent. The average age of members is what, 60, maybe a little younger in the House. But you've got this crop of a younger generation who are brand ambassadors for this conversation. It's not strange to them. And
some of them will also see their own social media, and they may notice that, yeah, they get a lot of these threatening messages, maybe not even directed at them, but directed at
others. Yeah, I was at a meeting on continuity just two weeks ago with Democrats and Republicans across the board.
So they're all seeing it. And there's a mistaken notion that the social media firms have some algorithm that assesses the content of a threat and rates it on how enraging it might be, and if it rates highly, let's share it a lot. And you have to understand, and Susan, I hope your research will help confirm this, here's what actually happens: the sharing algorithms are based on things that your friends, people that you have followed and liked, are looking at and sharing. And so it's based on association. Oh wait, that's another First Amendment protection: Congress shall make no law abridging the freedom of association. So people associate with their crowds and pages on Facebook and X and YouTube, and that is why they see more stuff that maybe takes them down a rabbit hole. But the companies do not rate content for its enragement potential; they simply share content that is engaging with the people you have engaged with. Right. So given that our lawmakers, hopefully, Lorelei, are going to see that the crowds that they are engaged with, the people that follow them, are sharing things that are really inappropriate, I don't know what they're going to do about that in a country that is governed by the First Amendment. Do you have any suggestions?
I think what's obvious to members of Congress is that we're letting the Second Amendment kill the First Amendment. The through line of violence as speech is the problem right now. And I mean, I'm from rural America; I saw the people circling in the parking lot. If you have family members who are election administrators or volunteers, they're all having to learn duck-and-cover and tourniquets for the 2024 election. Nobody wants to talk about it, because it's ugly and unfortunate, and we let it happen. But I do think that it's a way into this crisis of, what do we do with this First Amendment? I mean, part of it is that the first branch of government is probably the largest publishing entity in the world right now; all of the memory of democracy is in it, all of the agencies. Every member of Congress is a publisher, but it's not organized in a way that is competitive on behalf of public goods and norms. And I think that is what's changing; it can actually be a player for the good stuff. And I don't hear anybody talk about this anymore, I think because everyone's given up on it. But it's really a possible pivot right now. So I'm just saying that that's hopeful and deserves to be talked
about. So actually, I think it's a great metaphor for what we've been discussing today. Because on the one hand, there may be people who are gaming the system, playing in this lawful but awful space, in ways that we would want to have some kind of brakes put on, and others are just committing acts that are clearly illegal, and there may be inadequate enforcement of the law to punish those people who are making the death threats. Literally, I think we don't have a law enforcement infrastructure that could handle all of the people who are doing things that are clearly illegal, not just free speech, but literally crossing that line. And so what do we do about pouring adequate resources into the enforcement effort? What do we do about insulating the members from this lawful but awful torrent? And what do we do to change the discourse among our politicians? Because some of our politicians are encouraging that kind of behavior and are not punishing their colleagues for engaging in that behavior, and unsurprisingly, the American people take their signals from their leaders. So this is a super complicated problem. It's a metaphor for everything we're discussing; there will be no single fix. But what I heard from you is maybe we need more criminal enforcement on this behavior, and I would support that.
I was also reading The Economist this past week; it has an article about how people are actually getting back into smaller groups, more like WhatsApp chats, some iteration of that, or Signal. So it's going to be harder to manage the conversation the way people want the platforms to manage it, because those conversations aren't taking place in the open. I realize that WhatsApp is also a Meta product, but, you know, the dynamic has changed, and they're getting away from the bigger conversations back into the groups, and then you just have circular intelligence, which is very scary. All right, we have time for one more question, and then I'm going to kick Eric out. Over here, can we get the mic, sir? Right around the other side of this column here. You two have made friends. Good job. Okay.
Hi, I just have a comment, actually, to follow up on what you just said, spending a lot of time on the Hill. One of the major problems we have with policymakers and their staffs is that they have no subject matter expertise at all, whether on the technology issues, or the First Amendment issues, or the legal and privacy issues. And I think if there's one thing we really need to do, it's to invest in sending well-informed young people to work on the Hill and staff these offices, because most of the ideas that they're putting forth are coming from outside groups that, you know, have vested self-interests and of course are looking to further their own. So
Can I just make a slight point? As a former staffer on the Hill, as probably 80% of this room is, if not more: you're expecting a lot of people to be subject matter experts on everything. I mean, you go in and you're expected to cover agriculture, defense, appropriations, and so on. So just understand that everyone's trying, and I get your point; your point is we need to have a better level of information in all areas. But it's not because they're not informed, it's that they're dealing with a lot. I spend
a lot of time on the Hill, and I just want to clarify, at least on the committees of jurisdiction: when I debriefed the Senate Judiciary staffers last week after the hearing, it was mind-boggling, because we're
asking them to do a tremendous amount, and they're not very well paid for it. Um, you had a comment? I just
wanted to say there are great programs that are trying to deal with that. Within New America, there's a program called TechCongress, which is setting up offices, both Democratic and Republican, with great people that have a lot of that expertise. And there are other programs that are trying to do that as well. So, you know, I think it's also a challenge, like you said, of a lot of those people being put into a situation where they have to be experts on a lot of things, and maybe, you know, bringing specific experts in would be a way to handle that. But, and
Shane, just to point out the obvious: one of the best ways for tech staffers to get technical expertise is to come to State of the Net, to have it here and to be a part of celebrating 20 years.
C-SPAN is covering all of this and it'll keep going, and you can also watch it on the State of the Net Internet education channel. So Eric has to go, which means that we have to stop talking out loud, but you all are welcome to come talk to the panelists. Please help me thank our panelists for a wonderful discussion.