SOTN2023-11 Section 230: How Will Lawmakers Seek To Reform It?
12:08AM Mar 17, 2023
Speakers:
Matt Perault
Rebecca Kern
Joel Thayer
Yaël Eisenstat
Billy Easley
Keywords:
congress
platforms
policy
law
company
section
act
issues
reform
transparency
point
liability
line
question
case
tech
tools
reddit
lawmakers
conversation
...add another Section 230 panel, so you guys are in for a treat. This one is how lawmakers will seek to reform it, or not; we'll find out. I'm Rebecca Kern, a tech policy reporter at Politico. I have a head cold, so I apologize for my voice. I'm going to have everyone go down the line and introduce themselves briefly, and then we'll get into it.
Joel Thayer, president of the Digital Progress Institute.
Hi, I'm Yaël Eisenstat. I'm a vice president at the Anti-Defamation League, where I head the Center for Technology and Society.
Matt Perault, the director of the Center on Technology Policy at UNC Chapel Hill.
Hello, I'm Billy Easley. I'm the head of US public policy at Reddit.
Okay, great. So as we all know, the Section 230 debate has largely fallen along partisan lines in Congress. But today we're going to try to break past those party-line differences and discuss potential solutions for Congress to tackle at a bigger level. I just want to go down the line and let everyone share their biggest goals on 230 reform, or, if you take the opposite stance, why you don't want reform in Congress this session. Whoever wants to go first.
Okay, well, going down the line. I think it's better just to roadmap the issue to get a better framework for the discussion today. The best way to do that is to, one, outline what the issues related to Section 230 are, and two, describe how the concerns with Section 230 are actually more bipartisan than most folks like to admit. It's not necessarily the case that you either think it absolutely should be revoked or you think Section 230 is the best thing since fried rice; those aren't the dichotomies we actually deal with. So I'm going to go a little bit into the bipartisanship that exists on both sides, and also show how some incremental reforms have worked and actually have gotten through Congress. To say that something cannot be done is not factually accurate, just by virtue of the history of how legislators have dealt with these issues. So the issues are really twofold as I see it. One is the question in (c)(1), which you heard a lot about in the first panel: the judge-made immunity under (c)(1), which kind of opened the floodgates for these tech platforms to be immune from virtually every type of civil action, which in some parts of the country even includes contract claims. The second is the censorship question you see in (c)(2). So that's one thing I can see legislators trying to grapple with and trying to understand how best to reform. The second thing we should realize is that both sides see a problem here. We've already seen an example of that: last Congress we saw the introduction of the EARN IT Act, where you have Senators Grassley and Graham pairing up with Senators Feinstein and Durbin to understand the implications that Section 230 immunity, or so-called immunity, may have for the prevention of sex trafficking and so on. The last point I'd really like to bring home is that incremental approaches have worked on the legislative side. We saw that in SESTA/FOSTA, where, although some would disagree about whether it went far enough or would have blown up the Internet, I think one or two things are actually true. There were some good things that came out of SESTA/FOSTA, particularly as it relates to limiting the immunity for actual harms to victims of sex trafficking. I think the takedown of Backpage has been part of it, and also the Doe v. Twitter case, which is another example of how legislators can help fix some of the questions that we have. And to make a clear point of this: the Internet didn't break just because SESTA/FOSTA existed. So the last point is really, if I'd like to see anything, I'd like to see a targeted and incremental approach to any sort of reform. I think that's how you have to get it done. I think someone said this earlier on a panel: pick a goal. And frankly, that's ultimately how it's going to work out, where members of Congress on both sides of the aisle see concentrated power over our information as an issue. I think that's fairly bipartisan. So in my view, you need strong bipartisan support, and incremental change will most likely be the winner at the end of the day.
All right, just going down the line. Thank you for that framing; it was great, and I don't have to go into some of the framing points I wanted to make because I agree with much of what was just said. For me, the conversation about Section 230 is intentionally being pitted into binaries, and I just want to unpack a few of those binaries; that's my framing for the conversation. I really would like us to get to the point where this conversation can go beyond whether or not a platform should be responsible for third-party content, and really into: what is platform behavior, and how is that different from actual liability for third-party content? Some of these binaries are: you're either going to kill the Internet, or you're going to kill all the smaller companies, or if you do anything to touch 230, every single company is just going to be bogged down in never-ending lawsuits. I think some of that is maybe true if we're talking about killing Section 230. But I think most people in this room understand, and I'm not going to speak on behalf of others, but I for one am certainly not talking about killing Section 230, or keeping it as is. I want to talk about how we update the law to fit the reality of what the Internet is right now. Some of that is about figuring out what a company's behavior is. I'll get into this later in the panel with some examples, but if a company's behavior is actually enabling violations of civil rights laws, how is that not something we should consider? That is very different from the defamation questions or the questions of third-party speech. Also, sorry, the Democrats-versus-Republicans framing, where Democrats want to take more content down and Republicans want to keep more content up, is really just focusing on content moderation, right? I would like to focus the conversation on the behavior and conduct of the actual companies versus content moderation. What I would like to see Congress consider and really talk about is: where should Section 230 protections stop? Where are the lines? And I don't just mean the word "algorithms," because that's become this whole thing of "but algorithms run the Internet." Are the lines in targeted advertising, especially if a company's targeting tools can actually help you engage in civil rights violations? Is the line where a company auto-generates pages for terrorist content and actually creates those pages itself? Is the line when a company's tools actually connect a perpetrator of a crime with a victim? I'm not saying we have all the answers, but that's the conversation I want to see happen instead of a conversation about content moderation. And I'll just say two last points; I'm sure any of us here could talk for a full hour about this first question. We hear so much that it's just too hard: too hard to define what all that looks like, or too hard to address these issues because companies will get buried in lawsuits. We heard the Supreme Court start to quibble over what's an algorithm. A lot of that conversation is actually meant to make us think it's too hard, and that therefore we should remain with the status quo. And I think there are some very specific examples I hope to go into on this panel that will show where Congress could play a role in updating these rules.
I'm Matt Perault. It's a pleasure to be here today. Thank you so much to Tim and others for organizing this wonderful event. Before I get into some specific ways to think about 230 reform, maybe I could just start with an observation. The observation is that in conversations like this one, which I consider to be expert conversations with people who know the Internet, the substance that gets proposed as potential options for reforming Section 230 typically looks very different from the substance that has actually been proposed in Congress. So if you look at the reform options that have been proposed, I think there's a dichotomy between what you'll probably hear today, or at your networking lunch, from people who work in the field, and what we're seeing on the Hill. In our preparation conversation for this panel I made that statement, and I thought of it, I guess, in the typical sense, as a way to be critical of the knowledge that exists on Capitol Hill and in the administration around formulating solutions to this issue. As I thought about it more, I think it's actually more a critique of us in this community, the expert community, for failing to come up with solutions that are responsive to more than just the substance of the issue. I think all of us have thoughts about how to be responsive to the substance of the issues; that's the driving force for what our recommendations might be. But we also need to be responsive to the politics. We're looking, I think, for solutions that will create a better Internet, solutions that are responsive both to the substance of the issue and to the political concerns that people have. The failure to reform this area is probably due in part to inadequate solutions on both of those considerations. So now I'll give you a list of five things I came up with that, it seems, are inadequate to the political concerns, because they've gotten zero traction. The first is reforming federal criminal law. As Berin said in his comments, I think, federal criminal law is an exemption to Section 230: you can initiate a case against a provider under federal criminal law, and a platform cannot use Section 230 as a defense in that case. That means for many of the concerns, or at least some of the concerns, that we have about speech on platforms, 230 is not a defense. So if we had a federal criminal law on voter suppression, for instance, and we don't actually have one, Section 230 would not be a defense. The law governing the use of communications tools to riot was passed in the late 1960s and has not been updated since then; that is a federal criminal law, and if you file a case using it as the basis, Section 230 is not a defense. So Congress has options available to it: look at things we think are problematic and pass federal criminal law in this area. 230 wouldn't be a defense, and that would shift intermediary liability for platforms and also enable us to go after perpetrators more effectively. The second option is to focus on the differentiation between an interactive computer service and an information content provider; an information content provider does not have 230 protections.
The language in the statute is developing content in whole or in part. So what does developing content "in part" mean? There has been very limited jurisprudence on this question; I think this is a little bit of what the Court was going back and forth on last week. More robust development of that line would be very helpful. There are others in this audience who understand more than I do about the types of policy tools that would be available for setting that line; various things like advisory opinions or policy statements by the FTC, for instance, are ways to explore what the criteria might look like for delineating between those two categories. A third is looking at product tools that would actually identify perpetrators. Platforms can build tools to enable law enforcement officials to go after perpetrators of illegal content more easily; that could mean reporting mechanisms to state attorneys general, for instance, or reporting mechanisms to nonprofits who might help individuals redress harms in different ways. 230 is not implicated in that; that is platforms facilitating individual liability more effectively. A fourth, which has been talked about in lots of contexts, is research or data access, so that we actually understand more about the harms that exist on platforms. And a fifth is an idea that I supported in a paper I wrote: some version of the PACT Act. I think a lot of people in the first panel were talking about that as a viable model, perhaps with tweaks at the edges. I included this recommendation in the paper, and I think it was actually an example of a bias I had because of my prior employer. I was on the policy team at Facebook for several years, and the stuff in the PACT Act seems kind of unobjectionable from the perspective of a large company like Facebook: regular transparency reports, someone on staff to process complaints about illegal activity. For a large platform that's very routine; you have huge legal teams, huge policy teams, huge communications teams that can handle that type of volume and process it. In the wake of this paper, I got a lot of feedback from companies like Reddit, for instance, so I'm excited to hear what Billy will say about this, that the PACT Act is really biased towards large platforms, that smaller platforms would struggle with the kinds of requirements in it, because they increase compliance costs, and for smaller platforms increased compliance costs make it more difficult to compete. So, Billy.
Great, wonderful, what a segue. So what I really wish Congress would do, since 230 has become this political football, is: let's put the football down for a second, take a few steps back, and just ask a few questions. What are the specific problems we're trying to solve as a policymaking body? What are the specific harms we're trying to mitigate? And what populations are we trying to protect? Once we have nailed down those framing devices, let's try to figure out the best way to solve that, the best way to directly respond to those issues. And spoiler alert: in the opinion of the Reddit policy team, and of us as a company as a whole, there is a slew of policy options Congress has that more effectively answer some of the questions that have been brought up in this panel so far. When it comes to issues like, as we've heard Joel mention, the concerns about conservative censorship, or what we've heard about transparency and access to research data, there are other ways to deal with these issues beyond using the blunt instrument of 230. The other thing I want to note, because I don't want to hold us up and there's so much more for us to talk about here, is that whenever we limit laws that make it easier for people to engage in speech, that make it easier for users to create communities or find communities, that serve them content or connect them with other people in ways that make them feel welcome and included, there's a blast radius: there's going to be less speech in other areas. We saw this with SESTA/FOSTA, and we've seen it in other proposals as well. I just really wish Congress would keep in mind what the impact will be on the Internet generally and on users specifically. That's the sort of thing we tried to highlight when we submitted our brief to the Supreme Court regarding Gonzalez v. Google, by empowering our users to speak about how their specific communities would be impacted if Section 230 were limited. But with that, Rebecca, go ahead; I know you have some fine questions for us.
Well, I mean, you guys are doing a great job. I guess I wanted to talk about SESTA/FOSTA, because it is the one time Congress has taken the scalpel and cut back some of 230's protections, with the noble intent of stopping sex trafficking, and it had some unintended consequences of potentially putting sex workers in dangerous situations. So I want to use that as a jumping-off point. What lessons can we learn in this community of experts to inform Congress not to make those same mistakes? Because, as you were saying, every proposal has its unintended consequences. Is there a way to educate Congress, or is there a specific issue that you think could gain bipartisan traction and maybe avoid the negative consequences you're imagining?
Two things. First, I'm going to break down the lessons from SESTA/FOSTA that I think Congress can learn, and then other issues Congress could focus on that are related to CDA 230. With SESTA/FOSTA, you know, what does it mean to engage in a venture related to child sexual exploitation? Courts seem to have a hard time figuring that out; this law has been too vague to be enforced appropriately in a lot of ways. And, no offense to Joel, love you, guy, but the GAO report demonstrated some of the failures of SESTA/FOSTA to be used effectively in the way that lawmakers intended. I think the key thing is, if you're going to write a law like this, let's make sure that everyone on down, whether it's policymakers, judges, prosecutors, and of course companies themselves, understands exactly what the impact will be and how to stay clear of potential actions that could expose them to liability. With Reddit specifically, there were sex worker communities where people were simply talking about how they had been impacted through sex work, not engaging in or asking for sexual favors or whatever. We had to shut those communities down, because there was no way for us to know exactly where the line was. So I think it's really important, if Congress decides to engage in this sort of thing, that there is a clear line between liability and protected speech.
Can I respond to that? So, Billy, the love is shared. But, and first, I hope we get beyond SESTA/FOSTA today, I think to limit this to "hey, Congress messes up when they write laws," I mean, and judges will interpret, because that's what they do, that's what got us into this issue in the first place. Whether you're talking about the judge-made immunity or the vagueness that exists in Section 230, those are always going to be a problem. So to say simply, and I hope I'm not misrepresenting what you're saying, but this sort of harkens back to a more libertarian view, which is "let's not touch it, because bad things can happen." I think that's absurd. Even from a political standpoint, that's just not the reality. If you really want to do this, I do believe you can take targeted approaches, you can do something incremental. Something that is bipartisan is this concern about the significant concentration of information within a select few corporations that kind of control how we conduct our discourse. If you're really worried about small businesses, and Billy, I'm speaking to you here, then maybe there is an opportunity for Congress to outline exactly what type of company you're talking about. And I agree with you, maybe there should be some clarifications in SESTA/FOSTA, but to say it was categorically bad is not true. I do think there was a really good outcome when Twitter was held accountable for not taking down what was basically revenge porn involving children. That was a good result that ultimately allowed the case to move forward and didn't let Section 230 be an immunity. Again, I agree with you, Billy, that we should articulate the goal. However, I don't think we should simply pooh-pooh the entire idea because there are a few courts that may have some bad interpretations.
Yeah, can I pick up on that? Actually, I too do not want this whole conversation to be about SESTA/FOSTA. That said,
You do like SESTA/FOSTA.
That said, listen, part of that we can't solve in this conversation, and part of it is just how inflexible our lawmaking and policymaking regimes are to begin with, which is not what this panel is about. But one of the lessons is that we probably need a more flexible way not just to consider lawmaking, but to consider how to update laws as we get more and more data about how enforcement is going. That aside, we at ADL, as I said before, focus a lot more on platform behavior. Yes, our mandate, especially for the Center for Technology and Society, is to look at the proliferation of hate, extremism, and harassment online. But one of the other lessons is: what are the things we can look at that, again, focus more on the conduct and behavior of the company, versus on whether or not they are engaging in the right kinds of content moderation? And I'm going to give you one specific example, which I alluded to before. Many of you in the room are probably familiar with this. Facebook, also where I used to work, so I think Matt and I have a bit of a different view on this, Facebook advertising in and of itself: they continue to use Section 230 as their excuse to never be held accountable for any of their own targeting tools. Now, with Facebook's targeting tools, let's look at the cases about discrimination in housing and jobs. Yes, you can say that the advertiser should be held responsible if they are going to purposely exclude a group based on their race, their age, their gender from seeing housing ads or job opportunity ads, which we know is a civil rights violation. But would you also argue that Facebook, whose own targeting tools, which they have designed and monetized, are what is allowing that advertiser to discriminate based on age, race, and gender, should also be immune under 230? When many people engage in that conversation, they say, oh, but that case was settled. The case being settled does not mean the question has been settled, because those cases are still ongoing right now, and Facebook continues to assert Section 230. Just because they settled doesn't mean they don't still say Section 230 should immunize them, not just for the fact that their targeting tools allow this kind of discrimination, but even when the advertiser themselves is not necessarily checking the box to discriminate; Facebook's targeting tool sometimes is doing it itself. And I'll just point out that there is a person in the back of the room who is actually prosecuting these cases, so if you don't believe me, please go speak with Peter later. These are legitimate things that Facebook continues to do. And I get it, Section 230 affects Reddit, it affects smaller platforms, but Reddit is not engaged in selling targeting tools that allow you to discriminate in violation of civil rights laws. So those are the things that we want to focus on more.
I agree entirely with Yaël's first point about the inflexibility of the current policy process. I think SESTA/FOSTA is an example of two failures. One is substantive: it was designed to protect sex workers, and the sex worker community is now protesting for its repeal. That, I think, suggests that maybe the law isn't performing as its proponents hoped it would. But I actually think that's maybe the less important concern. The most important concern is the one that Yaël referred to, which is that we have, for some reason, a model in our public policy system where we pass a law on day one and just hope it's going to perform really well in practice for the next 20, 30, 40, or 50 years. I think Section 230 has performed unbelievably well, but it was passed in 1996. That it has held up as well as it has, I think, suggests that it was a strong law at its creation. But the world is different, and that probably suggests it could be tweaked, at least at the edges, if not dramatically shifted, to reflect the world we exist in today. We don't have a policy regime, for whatever reason, that enables us to experiment with public policy, to incorporate learnings about how public policy tools perform in practice, and then improve the policy regime over time. There are a lot of people who are critical, for whatever reason, of the idea of experimentation in tech policy. That doesn't quite make sense to me, because we believe in experimentation in other fields, including fields that I think are more important and carry more life-and-death risk than tech policy. We would not have vaccines today, we wouldn't have a COVID vaccine, for instance, if we did not permit medical trials, because medical trials enable us to test whether a medicine works and is effective across a range of different potential harms, and then to tweak the medicine in light of what we've learned in the course of a trial. The absence of that kind of mentality in tech policy does a few different things. One, it means we have to try really hard on day one to get it right, because if we don't get it right, it's going to be on the books for 40, 50, 60 years. I think it's going to be incredibly hard to repeal SESTA/FOSTA because of the politics of it, even though I think a lot of people believe it was a bad law. It also means there is a high political bar to passing any reform in the tech sector, because we know that anything we do is going to exist on the books for a long period of time, and that means we need a certain amount of political capital, I think an immense amount, to get anything done. If we had more of an experimental model, where we pass a law that might exist for 12 months or 18 months or two years, we enable researchers to gather data on how that law is performing in practice, and we have some mechanism for incorporating that data into the policy process at the end of the trial period so we can improve policy in the long run, I think that would be ideal. Even in rooms like this, filled with people who are brilliant in this field, I think it's possible that most of us don't really know how this stuff is going to perform in practice. We have our hunches, informed by a lot of different things.
We have our beliefs, based on our own experience, about how something is going to perform, but we don't really know. An experimental model in tech policy would enable us to test our assumptions, and then to improve policy based on what we see in reality.
You know, I'm glad you mentioned the fact that Reddit has a different sort of model when it comes to advertising. We mostly target users through their interests: if you follow the r/soccer subreddit, or r/cats like I do, you're probably going to get products related to that. So that's a different notion of platform behavior. And something, I'm not going to ask you this now, maybe over drinks sometime, but I am interested in why CDA 230 has to be the method through which we needle out these insidious or problematic platform behaviors. Sometimes we might assume it's the only way to do it, but I wonder if there are other ways we can experiment and deal with those sorts of issues that don't use this particular statute, because of how it impacts our users. And the other thing I'll note quickly: Matt, I've read your policy proposals, and I think they're very interesting. The problem is, whenever I look at my bill-tracking list of what's going on at the state and federal level, it does not reflect that sort of "we're going to stop and see and do a lot of experimentation based on data" approach. That's just not how it tends to work. There are a couple of bills, but that's the, that's the...
Issue: it tends to work poorly, right? We don't have that model. We have models in all other parts of life where we test and use those tests to inform our policy regime; other countries have those models, and there are regulatory sandboxes in jurisdictions all throughout the world. Why can't we have that in tech policy? I agree entirely that we don't, and I agree entirely that proposals that many of the folks in this room have introduced and thought of as the right proposals in this field don't get political traction, because they're not responsive to the politics. But I think the issue is that we have a framework set up where we have to have certainty on day one that a thing is going to perform in practice as we say it will, and that introduces an unbelievably high standard for lawmaking that I think doesn't work in a world that's moving as quickly as the tech policy world is.
Do I have to wait for drinks to answer Billy's question, or can I answer it here? Because it actually takes us into the next topic. I, both before I joined ADL and currently, have never argued that Section 230 is the end-all, be-all, the one thing that will fix the Internet. It is one piece of a larger puzzle. And that's another argument that some people, and I'm not saying you at all; when I say some people, I mean my former employer, like to use to shut down the conversation. At the end of the day, this is still the only industry that benefits from a protection no other industry benefits from, because we wanted technology to flourish, and because now technology is supposedly too hard for us all to understand. So no matter what, including all the other laws I would love to talk about, Section 230 still matters: it has been so overly broadly interpreted. To me it's not actually about whether we should completely amend how it is written; it is about how it has been interpreted to date, to be used to throw out cases before a victim has any opportunity to even get to discovery, to even start to learn, to your point about being able to learn to reform our entire process, whether or not that platform did play a role in facilitating that harm. That is one of the things I cannot understand: as a society, one of our principles is that a victim is supposed to be able to face the person or entity that helped commit a crime and have their day in court, and 230 has been so broadly interpreted that that has become a huge barrier. But there are all sorts of other things. One of the reasons we have been fighting so hard for some of the transparency laws, like AB 587 and others, is because part of the conversation we're having today is based on assumptions, because we have no true insight into how these things actually work inside these companies. To this day we continue to rely on companies to self-report, and that is what we are using to write our laws. We don't believe in transparency just for transparency's sake; we want to see true transparency into how platforms' own tools are being used and how their policies are being enforced. There are some other ideas of transparency out there that we fundamentally disagree with. For us, it's about getting insight into what these companies' own rules are and how they're enforcing their own policies, so that we can write better laws and enact more data-driven policies. So, to your point, Section 230 is not the end-all, be-all, only thing. But it is the law that has enabled platforms... I know when I worked at Facebook, I tried so hard to get a voter suppression rule passed within the political advertising system, which was easy; we had it all worked out, how it would be done. They said no. Anyone who knows my story knows that's why I left. They flat-out said no, and I believe it's because there was no incentive for them to do the right thing, because they don't have to. So while Section 230 is not the one thing that fixes the Internet, I will continue to say we can't just push it aside for all these other laws we hope to get passed, because it still enables truly bad behavior by some companies, and that is something we need to address. And yeah.
I just want to add, because I think you're absolutely right: there are tweaks you can make to Section 230 that make perfect sense, things that are currently getting debated. Earlier, in a previous panel, I believe as Matt was describing, there's this question of distributor liability versus publisher liability; that is something that should actually be articulated in some way. And I see your agreement; I know, I see you there. It is a legitimate concern to address, not only from a legislative perspective, but also by being very clear about what publisher liability actually means, what the words mean in the text, because it speaks directly to this point. This is precisely why (c)(1) has a problem: frankly, almost all civil liability can basically be thrown away, because, oh well, we can't discern whether they're acting as a publisher in one instance, and then, by the same token, later down in their own brief, the actual word "distributor" doesn't really matter. So these conflicting notions are something that maybe Congress should take up and discuss more intentionally. The other thing: you talked about (c)(2). That's where you get to one of the censorship issues; there are legitimate claims that have been thrown out. And I know this is the moment where everyone's going to groan, but if you truly want to look at how the statute has been interpreted to allow for this type of moderation, courts have read it in such an overbroad way that platforms can basically take down whatever they want. If you're a Republican, you're really concerned about COVID vaccine content; if you're an LGBT person, you're really worried about all the stuff they take down from that perspective. They are not liable for doing any of that, because they're actually, quote unquote, encouraged to do so. Well, okay, then a reasonable fix is to put a public accommodation requirement onto that, so that they at least don't get those immunities if they violate some form of public accommodation, and that could include political affiliation, gender identity, sexual orientation, all the traditional things you would get from public accommodation. It makes perfect sense. And for anyone who worries that's too broad: look, you guys seem to be very big fans when the court actually rules your way, so maybe try your hand there, because ultimately I think the courts will get that right if they have a framework. Right now, they do not have that framework, so maybe it's up to Congress to give them one.
Well, so this week Congress will have a hearing; the Senate Judiciary Committee is going to have a Section 230 hearing in the wake of the Gonzalez v. Google case, which we had the previous panel discuss. What we expect will be discussed is the EARN IT Act, potentially the SAFE TECH Act. I just wanted to see if there are provisions of those bills or of current legislation that you think could thread the needle on some of the issues we're discussing right now. I mean, EARN IT has not been reintroduced; it will be, though, and it is Blumenthal's subcommittee.
I mean, on EARN IT, my view of this has always been that this is an issue at the enforcement level more than any sort of 230 question. As a company, actually, I think the only bill we've endorsed is the END Child Exploitation Act, which was introduced by Senator Marsha Blackburn, and which would allow companies like ours to hold evidence related to CSAM cases for 180 days instead of 90 days, so that we could assist the relevant authorities with that information when investigations go on for a long time. There have been times where, by law, we were not able to keep that evidence long enough to assist them. There are also other bills out there that deal with the enforcement and prosecutorial side of this, like Senator Wyden's Invest in Child Safety Act, which provides billions of dollars of funding to NCMEC and the Department of Justice and creates an office in the White House to coordinate efforts agency-wide against CSAM. So quite frankly, I think that's more effective than what's in EARN IT.
And I'll turn to Yaël for that.
I mean, we actually were quoted in the release for the SAFE TECH Act, so I feel like I should at least bring that one up, shouldn't I? As with many of these laws as written, we agree with the spirit of it; the problem is when you get down to how it will really be enforced, in the actual details. Just one of the things in SAFE TECH is: can we start differentiating between what is information and what is speech? That gets back into the content moderation side, which I said is actually not my prime focus, and I realize it opens up a whole Pandora's box. But we do think some of the Act needs to be refined, and the wording needs to get more specific. For example, they talk about paid advertising. My issue with paid advertising falling under the same rules is not when it's just paid advertising; it's when it's paid advertising that gets to benefit from targeting tools that are part of, I hate to throw out the term surveillance capitalism, but the surveillance business model. Should the company that is not just showing you an ad, but is actually allowing that advertiser to target you based on more than just the fact that you like cats, based on age, race, gender, because you liked this particular thing, we're showing you this, which might be cats, but it might also be, as the Facebook whistleblower's papers showed with that story about "Carol's Journey to QAnon," you liked this person, and within two days we're going to start showing you and pushing you into QAnon groups. That's not just advertising. So the SAFE TECH Act tries to address some of this. There are definitely ways it will need to be refined, and the problem is going to be how it is enforced. But I think we too easily throw out every single idea on the Hill because, to Matt's point, the bar is that it has to 100,000 percent not have any trade-offs, otherwise it breaks the Internet. I mean, every industry before tried some of that as well, let's be frank: the tobacco industry, the automobile industry, the oil and gas industry, the food industry also did not want to be regulated and claimed it would completely destroy them. And guess what, they all still exist. So I don't think there's a perfect bill out there, the one bill that's going to fix the Internet, whatever that means. But the biggest problem is that this becomes a partisan football, and then we can't get past any of the partisan debate about it. So I'm not really hopeful that Congress is going to get anywhere, but there are some really important principles being debated right now on the Hill that I hope have a chance to be debated fully.
One thing that does have bipartisan support is the recognition that there are these market concentrations, and those concentrations are real in relation to information and content distribution. So I don't necessarily share the view that it's impossible to get there, and I'm not saying you're saying that; I just think I have a little more optimism than skepticism. There are going to be trade-offs, and to Billy's point, there are unintended consequences that do happen to small businesses that we want to protect, and we want to ensure they don't get affected. I fully agree with Billy on this: articulate your goal, figure out what your goal is, and then try to execute from there, as opposed to taking a very ephemeral problem and trying to, as some of the panelists said, create a law that may last 20 or 30 years. So step one, articulate the goal; step two, execute it in a way that's consistent with the base of the harm you're trying to quell. Again, I do think there are elements of bipartisanship here; you certainly saw that last Congress, whether it was the EARN IT Act or anything else. You are seeing an element of: we are skeptical of what these companies are doing, we are worried they're doing things they ought not be doing. So I think you might see more along the lines of what Yaël was talking about earlier, which is transparency, because some of these larger companies have operated in almost complete opacity. So maybe it's time, maybe that's the starting point: start with a transparency bill, and then move forward with other types of action, more informed. But I mean,
Who, me or Matt? To quote one of my favorite reality shows: sometimes you've got to be here for the right reasons, as they say on The Bachelor. And I worry that most of the time when lawmakers work on these 230 bills, they're not doing it for the right reasons; it's not reflective of actual issues online. Now, sometimes it is, I want to be very clear about that. I think there are some good-faith lawmakers who want to deal with these issues, and to prove I'm not just casting aspersions: I think transparency is actually a really good starting point for law, and the...
The PACT Act, or do you have anything in mind?
If I had to, and I guess, like, a caveat: this is not an endorsement by the company, and I'm sure, doubly sure, I'll get a phone call in 30 seconds... the Platform Accountability and Transparency Act. I believe it's Senator Coons's, right? I think there are actually some really good pieces of that bill. There are some issues I have with it, but the starting point, where they're going with it, in terms of researchers needing access to data so lawmakers can effectively target the issues they're concerned about and identify potential harms, I think that's a perfectly fine premise for policymakers to start on.
Yeah. And to switch from Congress to the states, we've seen states taking a different approach, with very partisan bills that are now laws in Texas and Florida, which are before the Supreme Court and would basically force carriage of speech for these platforms. And then New York passed a law basically going after hate speech, with that as the ultimate goal, but there have been First Amendment claims that it's violative as well. Joel, in our prep call you were talking about how these are colliding laws, such that platforms that are global will basically not be able to follow both and operate, at least not nationwide. So could you talk about some of the pitfalls these are creating at the state level?
Yeah, I think there's a lot there. Anyone who's been around tech policy knows about the patchwork effect around privacy, and the complications that creates in terms of deciding what regulation you should follow, what actually creates the liability, and what statute we're operating under. And then the big issue, really, is that these states are trying to find workarounds around Section 230, which in some cases is problematic, though there are some things I actually do support. I'm clearly a fan of the Fifth Circuit; that's not a DPI perspective, that's my personal one, but I think the Fifth Circuit actually had it right. But again, what it really speaks to is this question of, okay, how do we want to govern these particular issues? I think a federal strategy probably makes the most sense, and as long as we don't have that, we have essentially a vacuum, and you're going to have states trying to fill that vacuum in very strange ways, and in a lot of ways very contorted ways for any practitioner who's trying to figure out how to advise a client on when you're violating a law in Texas versus when you're violating a law in Florida or New York. So, going back to the political incentives for Congress to act: that is a huge political incentive. You match that with what's happening at the Supreme Court and what's happening in the states, and I think you're actually going to find a lot of inroads. I think it was Rahm Emanuel who said, never waste a good crisis. And this is one crisis that potentially could lead to a lot of bipartisan inroads for Section 230, to clarify some of these issues.
Yeah, picking up on the state strategy, and we obviously have a state strategy as well, which I mentioned earlier: the Internet has no boundaries or borders, so obviously a patchwork of state laws is going to be extra complicated for these companies. One could argue, and I'm not saying this is our strategy, but one could argue that one of the upsides of a crazy patchwork of state laws is that it gets the companies to actually come to the table and figure out what Congress could and should do. One other thing, sorry, I want to add one point before this ends, something I forgot to mention earlier: right now, part of the problem with 230 is that it's been so broadly interpreted that anybody who uses technology in any way whatsoever is held to the same set of rules. To Rebecca's point earlier, is Wikipedia the same as Facebook, and should they absolutely, 100 percent be regulated the exact same way? Or are we at a point in 2023, which we were not at in 1996, where different companies actually engage differently? I'll give a Reddit example. I read Reddit's amicus brief for Gonzalez, and I saw the Reddit concern that if you start to take recommendation engines into consideration, does that mean the Reddit user who upvotes or downvotes is going to be held liable? Yes, I understand there are people out there with terrible ideas of killing 230 altogether. But to me, trying to equate a Reddit upvote from a user with a Facebook targeting tool that actually allows you to discriminate against people shouldn't even be in the same conversation. So part of the 230 debate that is not talked about enough is: what are just pipes through which information flows, and what are companies with profound amounts of power to actually cause real harm? And how do we start differentiating between some of that as well?
Yeah, I'm just going to note that precisely that fact, that there are different algorithms and tools being put in the same bucket, is exactly why we submitted our brief in Gonzalez v. Google, because some of the advocates on the other side seemed to strongly imply that they believe those tools should all be viewed through the same lens, which doesn't make a lot of sense with regard to platform behavior and how users interact with those tools. One thing I want to note about the state patchwork: obviously this is terrible for companies, you all know that, but it's also really bad for users in a lot of ways. We've had users who have been impacted through the Texas social media law. I'm so glad I can talk about this now, because for a while we couldn't. There was a user on r/StarTrek who was banned because he insulted Wesley Crusher. I'm not a Star Trek fan, so I don't know this stuff, but apparently that was a no-no, and the mod banned them. And then the user used the Texas social media law to sue the moderator. We were able to assist with that case in some ways, but that was one of the highlights for us of: oh, this is really bad. It's not an abstract concept: when we have these sorts of laws that essentially negate (c)(2), or ban any ability for moderators to create a safer space or moderate a space effectively, then what happens is moderators can be sued for frivolous, stupid decisions. And that means you're going to have less community engagement and fewer moderators willing to put themselves out there. So that's what I'm really worried about from a state perspective. And also, just to be honest, Matt, I appreciate the experimentation point, but when you experiment, you break some eggs, and there are some negative consequences that come with that. I do wish that some policymakers were willing to be upfront about that. I know you would be, but...
That's the point: you can't have public policy that doesn't break eggs. You can pretend that it's cost-free, and I feel like a lot of our policy conversation pretends that it's cost-free, but you cannot have cost-free reform. So the question is about coming to terms with, reconciling yourself to, what those costs are, and then being prepared as a society to bear them. We learn that through experimentation; we don't learn it through jumping off the bridge and hoping for the best.
Great. Well, we've hit our hour. I think we've touched on everything we needed to, and thank you guys for such a great panel.