Hi, I'm Ashley Gould, a tech policy reporter at Axios. I'm here with John Beazer, a senior advisor for the Democrats on the Senate Commerce Committee, and we're going to have a conversation about AI and regulation, and where we see things going. I'll get started by bringing up something that was recently in the news: a New York Times article from just a couple of days ago that basically described Congress as hopelessly inadequate on AI, not knowing anything, starting from square one when it comes to grappling with these questions of regulation. I know John has some thoughts about that, so I'm going to let him respond. But first of all, give us a little background about yourself and what you do on the committee.
Is the microphone on? Oh, yes, it is now. Yeah, that's one of those articles I'm sure we all get at work: everybody sends you a link, you've got to read this. My takeaway on that article was that it basically said Congress is too stupid and lazy to do anything about AI. I might be a little sensitive, but that's how I took it, and I would dispute it. My role: I am a senior advisor on the Commerce Committee, on the majority staff. The staff is basically organized along subcommittee lines, so I'm associated with the consumer protection subcommittee, and I get loaned out to telecom and space and science on occasion. So that's where I am. My seat at the table comes from a long background in technology, several dozen startups and a few large tech companies, so my role is to provide the technology perspective as we consider legislation. So yeah, I dispute that article. We're not exactly burning up the track and beating Europe to the punch, but we're not doing nothing either. There were a few things in that article, like the point that Congress only had one person with a master's degree in artificial intelligence. And actually that's pretty good. I did the math: if one master's in AI per 535 members were the national average, we would have over half a million people with master's degrees in artificial intelligence. So I'm going to speculate we're actually a little bit ahead there. An interesting way to look at it. My bet is Google is going to lure him away. That was a joke. I would also point out that, first of all, I don't think you necessarily want the government moving as fast as industry. We're slow and deliberate on purpose, and I think that's what everyone wants. We don't really get a whole lot of chances to do it over, or to go out and compete in the marketplace of ideas and see which laws people want to follow. It's a really different game from industry.
I would also point out that we did pass bipartisan legislation a couple of years ago, in the National Defense Authorization bill. It was a huge amount of funding: it authorized $6 billion for research into AI. And I would point out that we are pivoting now, at least on the Commerce Committee. We have been very focused on supporting research through NIST, NSF, and other bodies, but AI is now, as we all know, rapidly entering the mainstream. So for my role in consumer protection, it's definitely time for us to step up. That's where we are. I'm not saying we couldn't go faster, but I don't think we're moving too slowly either. It's a difficult, complex space, and we need to get it right.
So in these beginning stages, as you start to grapple with regulation, how are you thinking about it, and how to approach it in Congress? And how much should Congress be taking into account the NIST framework and other guidance from the White House?
Yeah, okay, I can say a couple of things about that. One is that I have my own framework for how I'm looking at it. As I've been thinking about this issue, I've gone through three stages of understanding. The first stage is pretty simple: AI is a powerful tool, it can be used for good, it can be used for evil, and we're just finding out what the contours of that are. So it is a candidate for regulation, there's no doubt about that, and we need to figure out how to do it. It's difficult. The second level of understanding is that we are grappling with a lot of regulatory issues, mostly involving enormous, billion-plus-user, multinational, data-driven businesses that are, in many of their own words, impossible to regulate. When you think about some of the harms that are occurring and what you might try to do to prevent them, a lot of the time the industry comes back to us and says, we'd love to do that if we could, but we're dealing with extraordinarily complex, huge businesses that are capable of great harm. And it occurs to me that AI is a tool we might be able to use to monitor, say, compliance, or content moderation. Then finally, I think AI itself is one of those difficult regulatory challenges, and it may well be that adversarial AI is how we go about it, meaning if you're worried about a bot, maybe you need another bot to watch that bot, and yet another one to watch both of them. I don't know for certain that this is where we're going, but this is how I'm looking at the problem. And then, as to the framework that NIST put out and the AI Bill of Rights that the Office of Science and Technology Policy put out: from my perspective, I'm seeing these as proto-legislation, and I'm not even sure that's a legitimate way to describe them.
But I think for tech legislation, things happen so fast, and it's so likely that things will happen that nobody anticipated. To make a bill into a law that is effective and can be implemented, you have to be very, very specific. But when you're very specific with technology, the technology keeps changing, so by the time you get the law on the books and everything is settled, the landscape is different. So I've always advocated for what I think of as a catch-all, and a lot of the time that takes the form of a duty of care. It's: whatever else you do, try not to kill your customers. That should be obvious. But when there's a lot of money on the table, sometimes businesses realize that they're accidentally doing harm, but it's actually very profitable, so maybe they look the other way. Now, a lot of people have a real problem with a duty of care because it's very vague, and there are two sides to that: it's hard to comply with, because you don't really know what it requires, and it's hard to litigate, for the same reason. But I see this sort of thing as a necessity for situations where something really obviously bad happens. You don't stumble into that kind of thing by mistake, like, we had no idea this was a problem. No: when you make a conscious decision that you don't care about your users, I think that should be against the law.
One more quick question. We've all been following the privacy debate for many years, and we still don't have comprehensive privacy legislation. What's the worry that any sort of AI regulation would go the way of privacy and just drag on for years and years with no real action?
That's totally going to happen. No, I think with privacy, let's put it this way: there have been two major sticking points for several years, and they are partisan, a clear divide of one side versus the other, which is a really difficult thing to get through Congress right now. The reason I don't want to talk about this too much is that we've been in really serious negotiations for over a year and a half, and when people negotiate in public, it makes it a lot harder for them to back down. So I'm very reluctant to take any kind of stand in public. But I think you can see there's progress. It's very disappointing that we've gotten only as far as we have, but we are still making progress, so I'm optimistic there. In terms of AI, I just don't really know where the partisan lines are yet. In fact, I'd like to get a pitch out: anyone with an interest in this area, whether it's civil society, academia, business, whatever, please reach out to us. Maybe you all already know the answer and I don't, but I don't see a strong partisan divide yet, so I don't really know where that's going to come down. We have a new ranking member, Ted Cruz, whom I would almost describe as the polar opposite of Senator Cantwell: Ted is very aggressive, and Cantwell is very analytical. But we already have a bill headed to markup. It's relatively simple: if a device is capable of listening to you, it should let you know that. A very, very simple bill, but it's proof of concept, and we're getting it into markup.
So there's hope. I'm sure those partisan lines will make themselves known.

It's possible.

Well, thank you very much, and we'll talk to you soon on the panel here.

Thank you. Thank you.