You're listening to podcasts from the Congressional Internet Caucus Academy.
We're going to go ahead and get started. Welcome to today's event. My name is Abigail Ashcraft. I work in Congressman Michael McCaul's office; he is the co-chair of the House side of the Congressional Internet Caucus. I want to welcome you to today's event, called "On the Horizon: The Future of Tech Policy in the 119th Congress." I want to note that this event is hosted by the Congressional Internet Caucus Academy, in conjunction with the Congressional Internet Caucus and its co-chairs. As I mentioned, on the House side it's Congressman Michael McCaul, and on the Senate side it is Senate Majority Leader John Thune. We've been doing these luncheon briefings regularly, with a brief pause during the pandemic, since 1996, when the Internet Caucus was founded. I want to highlight two upcoming events that we have in March and April, both in Rayburn. On March 13, we will focus on the future of online safety for kids and legislative changes on the horizon. And on April 23, we'll discuss regulating freedom of expression and navigating the future of online speech. We hope you'll come back and join us for these conversations; they're very timely and important. Today, we have a panel of experts who are on the front lines of these tech policy issues, and our moderator today is Kellee Wicker, who is the director of the Science and Technology Innovation Program at the Wilson Center. So from here, I'll let her take it.
Thanks so much, Abigail, and welcome, everyone, to this afternoon's event. I'm going to briefly introduce our panelists, but I'll keep it real short so we can get into the good part. To my right, we have Sarah Collins, the director of government affairs at Public Knowledge. Next to her is Evan Swarztrauber, who is the senior fellow in broadband and telecom policy at the Foundation for American Innovation. And on the end is Amy Bos, the director of state and federal affairs at NetChoice. Abigail kind of hat-tipped it with the March announcement, so let's just dive straight in with the first question. There are a lot of issues still hanging over from the last cycle, including the Kids Online Safety Act. Do we expect any movement on KOSA or on other issues that are still outstanding from last Congress? I'll toss it out for anybody. Amy?

Sure, I can jump in. Hopefully everyone can hear me okay.
Yes, kids' online safety is expected to be another big issue again this year. We saw the Kids Online Safety Act introduced last year. It's been a multi-year effort, and it's undergone a number of changes. It passed the Senate last year pretty overwhelmingly, but ran into roadblocks in the House, particularly from House leadership, and from Freedom Caucus members as well. A lot of those concerns stem from censorship concerns and privacy concerns. So, a little bit of background on the Kids Online Safety Act, and I'm simplifying here, but essentially it creates a duty of care. It would require online companies to protect minors from harmful content. So what does that look like? The bill is incredibly vague on how that is done, but the privacy concerns stem from, you know, if we need to know who is and who is not a minor, what does that look like? Does it mean showing some form of ID to access the internet? Does it mean sharing a birth certificate, things like that? So it has raised a number of privacy concerns. And when you hold online companies liable for harmful content, which I think is a vague term in and of itself, what you're going to see is companies being cautious. They're going to likely over-moderate, over-censor certain lawful speech, and that's been a huge problem, like I said, particularly with House leadership. So whether it's KOSA, or whether there's an opening for bills that have garnered bipartisan support and don't raise constitutional issues, I think the Invest in Kids Safety Act is one of them, and I think there might be an opening for House leadership to do something on that. Whether it's KOSA, I think that's going to be tough, especially if the bill doesn't receive a major overhaul.
Evan or Sarah, thoughts?
Sure. Well, just today, I believe, KOSMA was introduced by Senators Cruz and Schatz, which is a bill very plainly banning children, I believe under the age of 13, from accessing social media. As one might imagine, defining what a social media app is or is not is quite difficult, first of all. And second, I think it's just a very reactionary policy to remove children from what is essentially a very vibrant part of everyday life for all of us. It would be like removing children from other generally accessible public spaces. That doesn't mean that there isn't a role for Congress to play in building a better, more functional internet for all users. However, I think we're going to continue to see these sorts of blanket bans, which have also happened in the states, as well as other sorts of reactions to the harms. You're seeing parents and teens come forward to their senators and representatives.
Yeah, I definitely agree that kids' safety is going to be a huge focus this Congress, and I welcome these efforts. I think that parents are speaking loudly and clearly around the country that they do not believe the status quo is acceptable. If you look at polling, parents say that current parental controls, whether on search engines, app stores, individual apps, et cetera, are just not getting it done. There are practical reasons why relying on individuals to navigate this ecosystem doesn't work, and parents are also saying it is significantly harder to parent children than it was two decades ago because of the internet. That does not mean that you need to ban all kids from using the internet, but I think that what you're seeing in Congress is a reaction to what the people they represent want to see. They want to see action on children's safety. You're seeing states take the lead, because Congress has not passed any significant legislation on this topic since the 1990s, and you have also seen challenges in the courts. So I think Congress is going to react not just to state bills; there's a state bill just in Utah, an approach that would age-verify at the app store and operating system layer, which could potentially be an answer to concerns around censorship and content-specific regulations. But you're also going to see Congress react to what happens in the courts with bills that deal with obscene and indecent content online, and bills that deal specifically with social media. NetChoice has very capably fought a lot of these bills, with a very successful track record in the courts. If the courts essentially say that you can't do content-specific regulation, there may be momentum in Congress for particular approaches that target specific layers of the internet stack, whether that's the app store or the operating system.
So I would look to see whether Representative James in the House and Senator Lee in the Senate reintroduce the App Store Accountability Act; that could be one to look out for this year. I'd also watch whether Speaker Johnson can resolve some of the concerns in his caucus around the Kids Online Safety Act. But this issue is not going away, because parents are demanding action.
All right, so we're here to talk about tech policy writ large, and we talked yesterday about just the difficulty of that; there's so much ground to cover. So what we're going to do now is move into talking about some big thematic buckets of different types of policy, or different challenges that arise because of tech. I encourage our panelists to take that in whatever direction they want, whichever tech they want to talk about, and also to remain in conversation with one another if you'd like to. So let's start by talking a little bit about some of the movement we've seen just this month. A lot of what we saw happen on tech governance with the last administration took place through executive orders, which are now being reversed; some of them are being replaced with new executive orders. But I'm interested to know, from your perspective, what are some key issues that could use a congressional hand in putting some legislation on the table, and what do we expect Congress to tackle? I'm thinking primarily of AI governance, but really, there's a lot of ground to cover here as well.
Well, sure. I am going to pick up on a thread on AI, because I think that's a very interesting place right now, especially when it comes to governance. While all of the Biden AI EOs have been rescinded, they covered quite a lot of different ground. Some of it was the AI Bill of Rights, which had a very strong civil rights component, but some of it was dealing with AI safety, which both Republicans and Democrats have said they want to be more engaged on. There were working groups; there are a lot of questions of governance and American leadership in this space. So for me, it is very unclear how the administration and Congress are going to work together to make sure America stays a leader in this space, while also remaining competitive and allowing for innovation. Because, frankly, I haven't heard a very strong vision yet of what this Congress exactly wants to do here in this space.
Yeah, I would add, I think we got a little preview. The House and Senate have done some great work on this, right, establishing task forces. There was a bipartisan report released by Congressmen Obernolte and Lieu in December of last year, which I would encourage folks to take a look at. It really provides a bit of a road map for where I think the 119th Congress is headed. One particular issue it mentions is using existing law to see how far it can go to alleviate some of the harms that could come from AI, but also filling in the gaps. And I think one of the biggest gaps we continually hear about is that of deepfakes. I think that is very ripe for congressional action. I know Senator Ted Cruz just recently reintroduced the TAKE IT DOWN Act, and there are multiple proposals; there's the DEFIANCE Act as well. But the idea is giving individuals recourse to make sure that non-consensual intimate images are not shared without their consent, and that they have a path of recourse.
I totally agree that that is a worthwhile effort, and that's the low-hanging fruit, right? We could all agree that individuals should be able to take action when something as heinous as revenge porn through deepfakes is on the internet. I think with the new administration, there's going to be less focus on trying to regulate the algorithmic outputs of chatbots. They're not going to be trying to legislate to prevent a chatbot from engaging in misinformation; I just don't think that's going to be a priority. So where should Congress speak? I think that export controls, in order to maintain US dominance, are a really important issue this year, and if Congress wants to see consistency in the approach, they have to act. We're in this moment after DeepSeek where there's going to be a lot of concern about what appears to be China's ability to catch up in the AI race, and there might be companies calling this approach into question. From where I sit, this news from DeepSeek means we should double down. We should double down on efforts to prevent US technology from either being stolen or having its terms of use violated by Chinese entities trying to improperly catch up to the United States in an unfair and illegal way. We should provide BIS, that is, the Bureau of Industry and Security under the Commerce Department, with the resources they need to effectively enforce export controls.
And at a time when we're looking at cutting federal budgets and agencies, which I think is absolutely appropriate given the state of the national debt, hopefully Congress has room to say, okay, while we're cutting in some areas, there are other areas where, if we're facing a geopolitical threat with AI, we need to provide federal agencies the tools to enforce the export controls that we have on the books. That's not just in terms of chips, but also in terms of the export of models, and making sure that American IP is not essentially being handed over to our adversaries.
You teed me up so perfectly, Evan, because the next thing I wanted to talk about was geopolitical competition. We're seeing more competition; we talk about China endlessly in DC, but we're also competing with our friends and allies, and with neutral countries, on things like AI talent, chips, access to compute, and critical minerals. I'm very interested to know what aspects of American competitiveness we might see Congress take up in this session. What are some openings where policy could really make a big difference on competitiveness? And this is for anyone again.
Happy to jump in again. I think there's bipartisan agreement that we need more energy for AI. There are obviously environmental concerns that need to be addressed, but permitting reform has been discussed for how many years in DC, and we haven't seen significant action. If you want a project like Stargate to be successful, you need to make it easier to build. And if we're going to have the energy outputs to fuel AI research and development without boiling the ocean, to use an expression, we need to invest more in nuclear energy and geothermal. We need to essentially have an abundance agenda, to use a buzzword, for AI, in concert with some of the other actions I mentioned before about ensuring that foreign adversaries are not improperly benefiting from our technology. So I think Congress should absolutely look at making it easier to build, not just energy infrastructure, but the buildings themselves, so that we can continue to research. And I don't buy the argument that DeepSeek means compute is not important. I think it's going to continue to be important, and we should double down on that approach.
Agreed. I echo everything that Evan said. If DeepSeek showed us anything this week, it's that it's game on, right? We knew China was close behind America; we're not talking years behind, we're talking months behind. And I would just add, I think what we're seeing with DeepSeek shows that, generally speaking, if you can create an AI model more efficiently, for a better price, that's generally a good thing. But we've heard consistently from the president that we need to make sure that America continues to be the leader here, and that will definitely include all-of-the-above energy solutions.
If I may take a slight left turn here, I would like to caution Congress on taking too protectionist of an approach. There's a real concern that if Congress decides to double down on only allowing certain companies to build frontier models, and on decreasing access to open source models and research, we could actually very much stifle the growth that we've seen in our own country. Frankly, at Public Knowledge, we think a competitive AI space, whether it's the compute arena, the foundation model arena, or even the data collection and training arena, means you need to have lots of different people with different access points and different points of view creating these things, to create a robust economy. What I don't want to see is that, in the name of competing against China, we entrench OpenAI, Google, and maybe one other as our AI providers for life.
I would agree with that, and I would also be very skeptical when companies ask for policies in the name of being competitive with China, when those exact same companies are either looking the other way on chip smuggling or allowing their models to be exploited in China. If you want the federal government to be a team player in terms of US competitiveness, making it easier to build data centers, not regulating certain aspects of AI that you find harmful, that's fine. But I completely agree that the answer cannot be to just do whatever the largest companies say needs to be done to entrench their position. And if you want the government to be a team player, that means you have to be on the right side of this technological Cold War that we're in. That's not a hyperbolic statement; I think it's pretty obvious at this point that we're in a technological Cold War. So if you are a company that is asking the US government to help you win that competition, then you need to be taking steps internally at your company to not help our adversaries.
Let's stay on the DeepSeek thread for just one more minute, because I know it's on everyone's mind a lot. Actually, you know, people were already talking about DeepSeek last month, when this model came out and it felt like they finally caught up. You've all raised some misconceptions that you feel you hear a lot about what DeepSeek means for American policy, and some of these misconceptions have spread because they're pithy, they're short, they're easy to digest. What is the one digestible thing you would like people to take away about what DeepSeek means for American tech policy? That's a really hard one, I know; I didn't tell you I was going to ask you that one.

I think I kind of touched on it a bit, but DeepSeek does not mean that America's AI strategy needs to be radically altered. It does not mean that chips are not important. It does not mean that trying to prevent adversaries from accessing our technology is not important. I just don't see a need for a radical rethink here. I think it's important to take it seriously and essentially use it to further invigorate what we've already been discussing: infrastructure, investment, research and development. But the Chinese government would love it if our reaction to this news was to question everything we've been doing, go into a tailspin, and get confused about our national strategy. I think that would be exactly what they would want us to do. So I don't think that a temporary 15 percent tumble in Nvidia's stock means we need to drastically rethink our AI strategy as a country.
Yeah, I'll just agree that I think there's been a little bit of an overreaction there, and I think most folks have acknowledged that. But again, it reinforces that China is right behind us. I also think we should probably be a little skeptical of some of those numbers that we're seeing come out.
Yeah, and finally, plus one to all of that. But also, I think for Congress in particular, being focused on building both the incentive structures and the enforcement structures to create the AI ecosystem that we want to see, one that is diverse, that is built with consumer protections in mind, that is useful for businesses, all of that is going to be first and foremost. If we're just responding to China constantly, that is not the way you build a new ecosystem that could be truly economically revolutionary.
Let's talk about national security more broadly. We're talking a lot about how competitiveness feeds into security, but I'm thinking of literal attacks, things like the Salt Typhoon cyberattacks. We're seeing a big rise in both state-led cyberattacks and attacks by non-state actors. Do you have any opinions about how policymakers might be able to address cyber warfare, and warfare in the digital plane, with any policies this year?
I'll admit this isn't my strongest area, but I think it's a wake-up call, just in terms of the legacy systems that we have, particularly in the federal government. There are potentially things that Congress could do to get agencies to respond more quickly to GAO recommendations; the Government Accountability Office has been telling agencies what they need to do on cyber for a long time, and for various reasons, agencies are slow to adopt some of those recommendations. It's just not going to work in this era if we have agencies running systems from decades ago, because that's where a lot of these really damaging attacks tend to happen: when systems, whether it's hospital systems or others, are running outdated software that hasn't been updated. So I think that's something where maybe Congress can really force the federal apparatus to modernize in a way that helps protect our most important infrastructure.

So, no more Windows 98.
Yeah. I mean, I think Congress needs to take a close look at the procurement process, right? There's an over-reliance on single vendors. There's a bill called SAMOSA that has been introduced the past couple of Congresses that would address that, and I think it's a great first step. A bonus would be that it saves the federal government a ton of money. So yeah, I would encourage Hill staffers to take a look at that bill. I think it's a great first step where we can really hone in on the practices of the federal government there.
To focus a little bit on Salt Typhoon in particular, which was an attack on mostly, I think maybe completely, our telecom systems: it's about making sure the FCC is empowered to have requirements, both cybersecurity requirements and hardware requirements, like the ones we've seen from the ban on Huawei equipment being used by telecom providers. But I will say that authority would be much easier to have if broadband providers, ISPs just generally, were considered Title II carriers. All of that is laid out in the Title II statute. I know that the Trump administration is walking back from that; we will see how the Title II hearings progress. But if we cannot regulate ISPs and this sort of infrastructure this way, we're going to have to figure out how to make sure that the FCC can play its role as the premier regulator for our telecommunications systems without that authority.
I mean, without making this an annual panel, to spare you all that horrific possibility, I do think that, given that these are mixed-use networks, some of those Title II authorities on the telephone side of the network can certainly be used. I think Chairman Carr has been very clear and very forceful in his statements on Salt Typhoon, so I would not be surprised to see the FCC taking action in this area.

I'm going to do a hard pivot. Let's talk about the cloud of issues that we've been talking about for decades already: privacy, identity, data ownership, intellectual property. I think there are a lot of different directions we can take this, so feel free to raise one and then come back and raise another if you'd like to go in multiple directions. But are we going to see any movement on these really key issues for Americans?
Privacy? Yes, we've been talking about privacy for years and years. I'm actually a little hopeful for the 119th Congress. I mean, they got really close last year; it passed out of the House Energy and Commerce Committee overwhelmingly. Of course, it goes back to those two issues, the preemption issue and the private right of action issue; they can't quite seem to get over that hurdle. But even when we're talking about AI, establishing a comprehensive national privacy law would really lay the base and alleviate a lot of the concerns that come with AI. So again, especially with the new leadership, Senator Cruz on the Commerce Committee and Congressman Guthrie in the House, they've talked extensively about the need for a national privacy law. It would be a shame if all that work goes to waste. But given the narrow majorities here, threading that needle will be very tricky.
Oh, I am going to be slightly more pessimistic about privacy. I started my tech policy career as a privacy advocate, and I'm actually deeply concerned, because ADPPA, which was the 117th Congress's bipartisan privacy bill, and APRA shared a lot of the same DNA. They were data-minimization first; APRA had some sections that were literally cut and paste from ADPPA. I'm very concerned by Cruz's and Guthrie's talk of looking at Texas or Kentucky as a model. I find those state laws incredibly weak, mostly notice and consent. And while it's very heartening to see the Texas AG do something about location data with that law, I would much rather see them focus on Maryland or Connecticut as a model, rather than go back to notice and choice, which I think we've all agreed is not a functional privacy model.
Evan, thoughts?

Without opining one way or another on the right approach, I think it's going to be a heavy lift to convince Democrats to support a bill that doesn't have some teeth. I think we need a national privacy law, but just politically, I'm not sure that you're going to see Democrats vote for a bill that codifies the status quo. It doesn't necessarily need to be a private right of action; that's very controversial, and a lot of Democrats who represent California would probably have a problem with that. But a bill that just says everything everyone's doing today is fine? I'm not sure that Democrats are going to vote for that.
I will just add, obviously, this is an area where states are stepping in as well, trying to fill that void, and I would hope that creates some sense of urgency among lawmakers. I think most everyone would agree that if we have 50 different state laws, that creates a lot of complexity for businesses, particularly small businesses, trying to comply. So states will definitely be charging ahead. The pen is in Congress's hand, and hopefully it will continue to be a priority this year.
Let's think about deepfakes as a privacy or an identity issue. Amy, I think you're the one who already brought up that you see that as a gap that Congress will definitely want to fill. Is there a specific bill you're seeing that you think might make movement, or a specific aspect of this? We've had a lot of success with the civil side of deepfakes, but where can we go with that?
Yeah, again, I see something like Senator Cruz's bill as probably having the most consensus around it. It would essentially establish a notice-and-takedown regime. He's going to prioritize that, at least in the Senate, and I think if it passes overwhelmingly in the Senate, you're going to see the House take it up.

I will say, not to overlook DEFIANCE and SHIELD, which take much more of a non-consensual intimate imagery approach, rather than just the deepfake side, and which could be a very useful set of tools. The SHIELD Act, I believe, is the one that creates the exception to Section 230 to make it slightly easier for plaintiffs to bring cases, and DEFIANCE adds some civil penalties around the creation of deepfake pornography. So those could also be a possibility. I believe DEFIANCE in particular has AOC as well as Durbin sort of leading the charge in the House and the Senate. Finally, the big caution here is that addressing the name, image, and likeness question, the commercialization of people's likenesses, like creating a fake Beyoncé to promote your product or a fake Tom Hanks to sell insurance, is a very different problem than non-consensual intimate images and some of the other heinous sorts of emotional damage you might see. I would urge Congress to make sure to separate the two. I think it would be very difficult to create a holistic approach that addresses both of those problems, because they have very different equities and, frankly, different actors involved.
I would agree, and there is a separate legislative effort called the NO FAKES Act that deals with that commercial issue. Because today, the right of publicity is not a federal right; it's a state-by-state right. We've talked a bit on this panel already about how having 50 state approaches for some things in the country is obviously appropriate, and I believe in federalism and states' rights, for sure. But when it comes to something like deepfakes, it could be beneficial to have a federal approach that essentially protects the name, image, and likeness rights of individuals vis-à-vis commercial use. That's an important distinction, because we don't want to stifle First Amendment-protected speech, like documentaries, journalism, et cetera; there could be really creative uses of AI in those fields. But what we don't want to see, or at least I don't want to see, and I do support that bill, the NO FAKES Act, is creators being denied commercial opportunities by unauthorized uses of their likeness. We don't want to see people losing opportunities to make music, to make movies, to act, because someone can just rip their likeness and recreate it. So I think that strikes a good balance, where you're not doing wholesale regulation of algorithms, you're not overly regulating AI systems themselves, you're just saying that people have a right to their image and likeness for commercial purposes, while carving out parody, satire, all those traditionally First Amendment-protected applications of speech.
Let me pick your brain on a related topic here. What about when synthetic media is used to essentially expand on a chatbot, when we're talking about a person who doesn't exist being used for commercial purposes or for engagement with people online? Is this a non-issue, or is there an issue that we need to be thinking about legislating here?
I would encourage Congress and every federal agency and every state government to look at what laws they already have on the books that could apply to AI. I think there's sometimes a misperception that for any application of AI, no existing law applies. Housing discrimination is illegal; if someone uses an algorithm to discriminate in housing, that is still illegal. So with something like a fully synthetic person, I don't know what specific challenge that might cause; maybe fraud, and if it's fraud, we have fraud laws on the books. And then, if for some reason we find out through enforcement actions that the courts are saying anything AI is unregulated, then fine, maybe we need to step in. But I just think it would be premature to assume that we always need a new law here. Maybe we need to comb through the very long and already very robust Federal Register to see what might already apply to some of these challenges, which may seem novel but maybe aren't novel once we take the AI piece out of it and look at the underlying conduct.
Yeah, I'll plus one to that. For chatbots, I think the big thing, and this is far more for companies, is that there seems to be a move to try to make these chatbots into something different, that they're not quite our agents because they can generate responses. I think that is a very foolish approach, right? If you're going to be using these chatbots, if you're going to create sort of synthetic avatars to represent your company and interact with customers, then it's the same as if you hired somebody to do that work. I also want to caution, and I think the laws that we have in existence do cover this, that we shouldn't view this use as something so wholly new that it's either completely unregulated or that we don't have an understanding of what it is. We know how businesses are supposed to treat customers. They can't make false statements; they can't make promises and go back on them. That's all very well established. Just because we add AI doesn't mean that changes, nor should it.

Yeah, absolutely agree, 100 percent.
AI is a tool, right? So we have these laws already on the books that I think really cover the vast majority, and again, where we need to fill in gaps, Congress should fill in the gaps, but we have hundreds of laws on the books, so I think that gets us pretty far. Great
reminder, let's talk a little bit about the future of work, which has used to mean a different thing, and now means this. There's a lot of professions that are now exposed to automation that were not ready. They did not see themselves being automated anytime soon. There's a lot of question about whose responsibility it is to forge a path for those professions. What is the role that Congress can take here in helping people move into the AI age of work?
Listen, I'm very gung-ho about AI. I'm very tech-positive on this. I think it's going to increase efficiency in a lot of areas; hopefully you're all using it in some sort of capacity. But the reality is, and I think we saw this in the Obernolte and Lieu report too, that there is a very real risk that it could displace workers. And I think that's where you have the federal government coming in, whether it's workforce development programs or retraining programs. That's where you're going to see Congress stepping in. They don't want to slow down this technology, but we want to make sure that the workforce is prepared, and if that means retraining, I think that's an area where they could pass legislation.
So, I'm an attorney, I'm trained as an attorney, and basically since I started working as an attorney, I have been hearing: document review software is coming for your job, the AI chatbot is coming for your job, all of this. And while it has made attorney work more efficient, I don't know that any attorneys have lost their jobs. The reason I use this anecdote is because I think there's a lot of AI software being pitched, especially to company owners: you can cut your coding staff in half, or by 75 percent; don't worry, we have AI software that'll generate a contract for you. And we're seeing, in reality, it doesn't work that seamlessly. It doesn't work in that function. So the other lens of this is sort of the AI hype train, and not just buying into it. Congress is going to have a role for what retraining looks like, maybe even what UBI looks like if the wild AI prognostications actually do come true. But for right now, I think a far more important use of their time is to oversee regulators to go after people who are lying about what their AI products can do, for both businesses and consumers.

I don't necessarily think AI is the same as every technological revolution before it, but we have seen panic around job displacement with other iterations of technology. That displacement does happen, I'm not saying it doesn't, but overall the economy tends to grow in jobs, and we don't necessarily see the massive unemployment that is forecast for whatever the new technology is. So I'm optimistic that you're going to see a lot of productivity growth. And you see polling coming out of some companies where they're not projecting big layoffs even with widespread adoption; they're instead looking to productivity growth among the workers they already have. So I would wait and see before we declare the end of the American worker. But of course, it is very important to invest in reskilling and training, and we have seen some of the negative societal impacts of the United States, over a long period of time, neglecting the manufacturing job loss that came from offshoring. So I think it's important to invest in that now, but not to panic, because there are economic reasons why companies want to overhype AI, whether it's what you just mentioned about streamlining everything, or whether it's extinction, because saying that your technology is powerful enough to make humans extinct could be very good for your stock price. So just be very skeptical of the hype train.
Sarah, I'm reminded by your story about lawyers that back in 2013 we were told we would not need radiologists anymore because of computer vision, and that truck drivers would be automated away. Radiologists still make a bajillion dollars, and truck drivers still exist. So let's not buy into the hype. On the flip side, a lot of what this new administration has brought is talk of how we can streamline, specifically government, but also just broadly, as you mentioned: how can we help people start using AI properly for the right things? Is there something that should be coming from the federal government on helping Americans start to use this technology appropriately? Or is that kind of a private market solution?
I don't really see the federal government stepping into that area. I think you're right that it will mostly take place in the free market. I mean, you could look at something similar to online safety proposals, which require digital literacy courses; something along those lines of training people to use AI, possibly. But I think this is mostly going to be the market figuring this out.
Yeah, I would agree. But certainly maybe we could see our education system helping prepare kids for interacting with AI, and a lot of it's going to be humans developing the skills to use AI helpfully, whether that's prompt training, things like that, or things that I'll admit I'm way behind on, right? Because I'm a millennial, and this stuff takes time to learn, and it takes away from your day job. But I think the education system could certainly be helpful in addition to the free market. I would agree that I don't know that we need a federal bill about how to teach people to use AI.
I saved one of the best for last, or at least one of the most contentious. Let's talk about freedom of expression versus protection from online harms. We're at a moment now where we're seeing companies like Meta walk away from content moderation, and people had a lot of knee-jerk reactions to that. But there are also a lot of questions: should a private company even be the one moderating free speech online? Where do we take this?
So I was disappointed by Meta's rollback, and frankly, I'm incredibly disappointed by tech companies seeming to do a 180, now that we have a new administration, about what their business plans are and how they want to treat their users. But I hate this framing of "it's either free expression or content moderation," because if I am getting yelled at on the internet about being a woman, with slurs thrown at me, death threats, rape threats, all of that, how am I expressing myself? I have been harassed offline. So this framing that we've accepted from extremists, that if I can't be the most vile version of myself on any platform I so choose, my free expression rights are being violated, is wrong. Now, I wholly agree that there are very tricky issues when it comes to the government regulating what content moderation looks like. And frankly, our preferred approach is to make sure there's a vibrant enough internet that people can choose the communities they interact with and the moderation that they're getting. But the idea that it's either free expression or content moderation is wrong.
Look, I welcome some of these changes. I don't think that platforms need to host speech that is just racist and sexist harassment. I think the issue with content moderation was always around political speech. It was always about relying on very fallible news outlets to decide what speech is correct, and then taking that recommendation and taking things down. And the core problem was the confluence of government and technology power to decide what is true and false, and who can speak on the internet, with platforms that are already very dominant and consolidated and control a huge percentage of the bits and bytes that flow across the world. Having a system where tech companies were relying not just on biased news sources but also on the Biden administration to tell them what was true and false regarding the origin of COVID-19, for example, that was the problem. If there is a backlash to that that takes things too far, I understand some of those externalities, but we need to get to a point where you can discuss content moderation on things that everyone agrees are terrible without saying, "and also, we need you to decide what is true and false and what political speech is acceptable." And I think the frustration with companies changing their minds only proves that we should have done something about transparency years ago, right? Congress should have stepped in and said, let's have at least some transparency around how tech platforms moderate, so that we can have an informed discussion instead of simply trying to work the refs, with whichever party is in power having companies do things more favorable to its view on content moderation. If we want consistency, let's start with transparency, because right now, none of us have any real insight into how any of these algorithms work or how any of the content is organized on the internet.
So let's do that, and then we can maybe figure out if there's a further step that needs to be taken legislatively.
So I have a lot of thoughts. You know, I think that decision has been really welcome news, particularly among Republicans, right? You've heard a lot of concern about conservative speech being censored. I think getting rid of those third-party fact-checkers and going more toward an X model of community notes has been really well received. Listen, if you listen to Zuckerberg, they made a ton of mistakes, they got it wrong, and they're really returning, I would argue, back to their roots, back to when Facebook was first launched: a light-handed approach. That still means they can enforce their terms of service; absolutely, they will do that. But you heard Zuckerberg talking about, as Evan mentioned, the Biden administration coming in and saying, hey, this speech needs to be removed, and companies feeling that pressure from the government to remove that information. Very bluntly, the government needs to get out of the speech business. Content moderation is not going to be perfect, but you run into very serious constitutional concerns when you have the administration calling up companies and demanding takedowns of speech. So that is hugely problematic. There is a bill in Congress; Rand Paul introduced a bill earlier this year that would penalize federal government workers for coercing social media companies. And to be honest, I think that should be a bipartisan bill. It's mostly been a Republican effort, but both sides should not want the government coming in and telling companies what speech is allowed and what speech is not.
Sorry, I find some of this wild given the remarks Commissioner Carr has made about broadcasters and reviewing licenses, determining if they were fair enough to President Trump. There have been huge moves in media to remove channels or personalities that are not liked by Republicans, and we have heard from journalists themselves that they are not pursuing stories because of this Republican administration. So the idea that we've allowed Republicans to create this notion that Biden is the great censor-in-chief is just wild to me.

I would argue that's exactly why Democrats should be on board with this, right? They should be concerned just as much with efforts coming out of the Trump administration as with the Biden administration. This really should be a bipartisan effort.

Do you think that Rand Paul is concerned about Commissioner Carr's statements to broadcasters? Like, that's what's animating that bill?
I'm going to table that. I want to ask a question kind of moving away from the political side of it and thinking about: is there a threshold of impact that speech can have that would merit action? So let's say, piling on to a 14-year-old girl; can we prove it leads her to suicide? I don't know. Is that a threshold of impact? Is there enough chatter about some kind of public action being taken against, let's say, a city government, or a rally being planned that clearly has some kind of violence involved? Is there speech that has a threshold of impact where we would say, we are preemptively taking this down? Is there ever a role for government, in your opinion?
I just don't think so, in the United States with the First Amendment. There are obviously trade-offs that we all agree to with the First Amendment, and part of that trade-off is that vile speech is legal. In Europe, they have a different view. I don't think the federal government can ever strike the right balance and say, we know where that threshold is. I just don't think they can. And part of the problem is the federal government is not this neutral actor with no interest in what speech goes up and comes down. It's a political force, and the number one incentive for politicians is to remain in power. So there's just no way; I don't think that balance can be struck. And I think part of the problem here is centralization. I'm going to shout out my colleague Luke Hoag in the audience here; he's written about middleware. So much of the frustration around content moderation, whether it was when the platforms were catering to Democrats or whether, allegedly, now they're catering to Republicans, the frustration is that it's centralized control, and it's up to the whims, largely, of CEOs, depending on how they feel on a given day, and they can change it on a dime. That's the power; that's the network effect. If you want more choice and more competition, you have to look to third parties and middleware that can decide: let's have a filter that is friendly to, you know, Christians, or a filter that is friendly to a different religion, or a school can say, this is our filter, so that parents know their kids can interact with social media in a way that is healthy for their education. You're not going to get to a place where everyone agrees with the decisions of a handful of people who control a vast amount of the information on this earth.
So if the companies aren't going to open up voluntarily and allow third parties, potentially, there is a role for Congress to come in and say, Look, we need you to at least grant some API access to third parties and researchers so that they can decide to build filters on top of your platform. That's a potential option here, because there's just no way we're all going to agree on what these platforms are doing, and it's just going to be a constant issue otherwise.
So I totally agree: I don't think the government should be able to up-or-down specific posts or specific speech, and I don't think there's necessarily a role for government in setting the threshold. But what I will say is, I don't feel like a lot of tech policy learned much from Gamergate, and I think it's a very instructive point. The pre-censorious internet, if we're going to call it that, was not a particularly welcoming place, and most of the platforms did not feel a responsibility toward their users to provide a platform they could feel safe on. So I think there is going to have to be a combination of congressional oversight, whether it's to increase access to different APIs so people can set their own filters, or to increase competition generally in this space so there are more places to choose from and it's harder for networks to lock you in. But I do think there are structural things we can do to make the internet more diverse, less controlled by four or five people, and a more usable place for people.
Sarah, you touched on something I was going to ask about: a lot of what you see, and the way things are elevated on different platforms, is governed by algorithms we have no understanding of, no real sense of what we are being shown and why. And you mentioned that a vibrant internet means people can choose. Can people actually choose? A lot of us stayed on Twitter but moved to Bluesky for a different kind of conversation. It hit a million users, and that was a big deal; how many does Twitter have? Do we actually have a choice? And does that change the freedom of speech question? This is like we're really in the weeds over here.
I think there are certainly options out there. The question is, do those options have a fighting chance, and are there certain choke points over layers of the internet stack that create an illusion of choice? If you have a lot of different apps, but they are all forced to comply with the rules of the Google and Apple app stores, which also take 30 percent of every transaction, is that choice? So I think that's a question for Congress and regulators to look at: are there certain choke points here where there is market power, where the only way to enhance competition is to take action? I would look to bills like the Open App Markets Act, and I know I'm going to lose my panelists to the right on that one, but that's something that I think is definitely a worthwhile conversation: is there an illusion of choice, or is there real choice, and can there be action to create real choice?
Well, on that existential note, let's do our last question, which is an easy one. We covered a lot of ground, but obviously there was a lot more we could have covered. What did we miss? What was something you thought we should have talked about that we didn't get a chance to? Let's end on one of those for each of you. Or, if there's not something, what is the thing you really want to make sure we all walk away thinking about?
I might just mention a topic we haven't covered, and that is antitrust. Do I think Congress is going to act in this area this year? No, probably not. But there are several high-profile antitrust cases playing out. I think you're going to see the administration enforcing antitrust law, no question, but I think it's going to look a little bit more traditional. We've gotten away from the traditional consumer welfare standard and gone off a little bit on the deep end, so I think you're going to see a return to more traditional antitrust enforcement from the administration. I don't see them dropping the cases, but yeah, it will be an interesting area to watch, for sure.
I guess I'll take your second option, something to take away from the conversation. In tech policy, China is thrown around in almost every conversation because it's powerful, and it's a legitimate concern, right? I do believe we are in this technological competition; it's a technological Cold War, and it's very serious. I don't think that means every single argument should just be "well, because China," and then that ends the conversation. So when you are meeting with stakeholders and companies, and they are telling you you should do something or not do something because it will help or hurt the US vis-à-vis China, that's fine. What I would say to you is: ask yourself, and look up, what that company is doing in China, and then figure out whether it is a credible messenger on that issue.
One thing we didn't talk about is broadband as infrastructure. This is the Internet Caucus; all of this is built on the ability to connect to each other. So I will just say that affordable, accessible, expansive broadband policy is going to be the underpinning for all of this, and that is something we definitely need to focus on, especially the affordability, so that everyone is able to access all of this amazing innovation.
And we're five minutes early; I don't know how that happened. But I think with that, if it's all right, we will go ahead and wrap and leave you with five extra minutes in your day. Thanks so much for coming, and please feel free to reach out, I know I speak for myself but probably also for all of you, if you have further questions about these topics. Thank you.
There is no better indicator of a good panel performance than finishing early.