Thank you all for joining us today for this discussion about the current state of AI governance. My name is Oma Seddiq. I'm a tech policy reporter for Bloomberg Government, and I'm going to have each of the panelists quickly introduce themselves.
Awesome, I'll just go ahead and go first by virtue of seating order. I'm Travis Hall. I am the Acting Associate Administrator for the Office of Policy Analysis and Development at the National Telecommunications and Information Administration, at the Department of Commerce. Sorry, you asked for a short introduction; my title is very, very long. I basically run our policy shop for NTIA within the department. Alan, our Assistant Secretary, spoke earlier, and I'm looking forward to the panel. Thank you.
Hi, everyone. I'm Miranda Bogen. I'm the director of the AI Governance Lab at the Center for Democracy and Technology. CDT is a 30-year-old civil society organization, and I've been doing a lot of work on AI: figuring out what comes next in the AI governance conversation, how we implement some of these ideas that are floating around, and how we tackle some of the emerging questions coming up around more advanced AI models.
Hi there, my name is Evangelos Razis. I am the AI and data privacy lead for Workday here in the United States. Workday is an enterprise cloud software company providing enterprise solutions, both in HR and financial management, to 70% of the Fortune 50 and 10,000 organizations around the world, with about 65 million users under contract.
Hi, I'm Adam Thierer. I'm a senior fellow at the R Street Institute here in Washington. I've spent 32 years doing tech policy for a variety of different academic institutions and think tanks. But my real claim to fame is I've attended every single State of the Net conference. Tim gives me a little Chairman sticker to put on my badge; I'm not the chairman of anything, but I've been to all of them. I think at the first one we had, like, Alexander Graham Bell as a keynoter and a tech demonstration from Marconi or something like that. But yeah, glad to be here again.
Okay, great. So to start off with some context: almost a year ago, this AI frenzy really hit Washington, and we've seen so much go on since then. The Biden administration has taken action on AI, Congress is moving to take action on AI, and beyond the federal level, states are moving to try to regulate this technology, as is the rest of the world. So I guess my question to you all is: why are we seeing governments try to respond to AI, and why now?
I'll go ahead and take a stab first, and just simply say that this is not actually a new topic. It's not a new workstream; it's a new name for it. We have seen iterations of these conversations for over a decade and a half; under the Obama administration it was big data. We've been talking about algorithmic bias and discrimination for a very long time. A lot of these topics, in terms of algorithms, the collection and use of data, and the impact on people's lives, are things we have been engaged with, in various stages, over quite a long time. One of the things that I think has captured the imagination in the current conversations around AI is that it takes some of those policy conversations that felt a little bit disparate (some of the privacy conversations, some of the intellectual property conversations, where people were having different siloed discussions) and focuses them on a single piece of technology, a single focal point of both potential excitement and potential risk and harm. And then, about a year and a half ago, a series of products came out that really were cool and fun and exciting and accessible. You could play with them, you could make images in DALL-E, you could write lullabies to your daughter about basically any topic you could think of. And it also really brought to the fore some of the potential risks of these technologies in ways that people in policy circles and tech circles, in little tiny silos, had been talking about for a longer time, and it made a bit of a lightning strike with the public. As Alan Davidson, our Assistant Secretary, said this morning, we actually struck that lightning a little bit ourselves, because we had been working on a project on AI accountability policy, looking at how to do audits, how to do certifications, things like that, and had been doing that for about eight months. It takes a long time to clear documents through the federal government, and we put out the request for comment right when interest was piqued. We brought in over 1,400 comments, where usually our comments are, you know, 50 to 200 or so, because everyone was so keenly interested, so keenly aware.
Overall, I think that public awareness, that attention from governments, is in general a good thing. People are coming together, having some of these hard conversations, building, I hope, on some of these past conversations that we've had. But I do think that many different issue areas coming together around a single piece of technology, in a single moment when people are really interested, excited, and concerned, really helped draw this attention to the fore.
Yeah, I think in those early years there was a pocket of policymakers, civil society groups, and researchers who understood that the infrastructure being built, data and machine learning, was going to power all sorts of experiences that would affect people, but not in a way that people were experiencing directly and could attribute to the technology. I think that was one of the things that changed: there's now a language to this technology, there's an interface, and that just makes it much more present. And so in that way, AI has sort of become a lens through which people are projecting a lot of different policy concerns, some of which have to do with the technology itself, and some of which are macro issues where the technology is just a fast-moving train to jump on, trying to use the energy that has come up around AI to tackle an issue that maybe they've been asking to have tackled for quite some time, but the challenges in DC have meant we haven't tackled it. So it's an interesting moment, because I think it does give us an opportunity to revisit some conversations we've had, about privacy, for instance. We still don't have federal data privacy legislation, but privacy might end up in some kind of AI-related vehicle, for better or worse. Copyright, labor, civil rights, all these areas where people feel like there are gaps: AI can make those issues worse, but it could also be a vehicle to try to get some creative solutions, or at least to try to get the attention on those issues that people haven't felt they've gotten.
So the question is, why now? And I think we're still sort of asking ourselves that question, because, very much as Travis and Miranda just pointed out, these issues have been around for quite some time; as a company, Workday has been using AI for about a decade. So I think what we are seeing that is new is the trust gap. Because these issues have come to the forefront of public debate, there's been some serious concern among policymakers: how do you address those risks while also harnessing the real, significant benefits of AI when it's developed and deployed responsibly? We actually have some numbers now behind that trust gap. Workday published some research last month, and we found that in the workplace, that trust gap is very real. About four in five employees said that their employers have not issued responsible AI guidelines for how it's used in the workplace. And similarly, only about 52% of workers say that they trust AI being used responsibly in the workplace. So I think policymakers are responding to that real gap in trust. And, candidly, as a company we're here to help them address that, while also trying to realize the great potential benefits of this tech.
Well, just to link everything that's already been said, which I agree with: as Travis pointed out, in some ways there's nothing new under the sun; we've had debates about a lot of these issues for a long time. It's just, as Miranda said, there's a new gloss, with everything being viewed through the lens of AI, with an added layer of concern, as we just heard. And I think that's what really makes this debate interesting: it's the confluence of many different debates into one. Everybody is now interested in legislating on issues that we've long been debating, through the prism of AI regulation, and not just at the federal level, but state, local, and even international, so there's a multi-layered kind of interest. And what you hear at every single congressional hearing, and all of the different sessions on AI around this town and in state houses, is "we can't allow what happened to the Internet to happen to AI." And I'm always like, well, what does that exactly mean? Because we got a lot right about the Internet. America has become a global leader on digital technology and e-commerce thanks to a really enlightened policy vision, in my view. But now there's a sort of reassessment of the wisdom of that, and a question, to get back to the theme of our talk, about this governance moment we have for AI that we maybe didn't take as seriously for the Internet. I remember, because I'm such an old dinosaur, being at the table when we were writing the early drafts of the Telecom Act, and the Internet was just not found in the Telecom Act. So we had this interesting, sort of accidental, fortuitous moment where the governance of this new thing was very light touch, a very simple approach compared to traditional analog-age media. But now we're thinking about reversing that and going a very different direction, with a more heavy-handed, upfront, preemptive approach to algorithmic and computational governance regulation.
Adam, you brought up a good point: a lot of the conversation today around AI is, you know, we don't want to repeat the mistakes of the past when it comes to the Internet and social media. But I guess my question is, is that a correct comparison to draw? Because all of you just mentioned how this technology, even though it is a technology, encompasses so much more and touches on everything. So when you're thinking about AI regulation, are you actually regulating the technology? Or are you filling in the gaps, or addressing these longtime debates that AI is just bringing up to the surface?
I certainly have something to say about that. I recently wrote a study, an AI legislative overview, looking at all the different proposals that were just in front of Congress, and I'm now doing one on the states. And the answer is a little bit of both. I mean, there's an effort to actually try to regulate AI within individual policy silos (privacy, child safety, security, and so on and so forth) and then in various other technocratic areas: very specific AI in employment and medicine and whatever. Those are all interesting debates. But the different debate here is the macro debate we're having right now, which is diverting a lot of time and attention, about regulating AI qua AI, as a general-purpose technology. And that's very, very different, unique, and it's sucking a lot of the oxygen out of the committee rooms; people are talking about existential risks of AI and sort of high-level macro regulation of AI. In the piece I did, that legislative overview of AI, I said this is why I don't think anything's gonna move through Congress: because it's so tied up with the big picture that we're not breaking it down into its smaller components and having the more concrete policy debate about the individual policy silos, issues that have, as Travis and Miranda said, been on the table for a long, long time. I think you can get traction on a lot of those things. But now AI has come along and derailed a lot of that. I mean, we should have had the privacy bill last year, and now this session we can't even get it introduced. Where's the driverless car bill? That had widespread bipartisan support, and it's been derailed by high-level "let's regulate AI," you know, AI governance. And I think that's just a non-starter; I don't think anything's gonna happen because of it.
I think the thing that people realized from the approaches we've taken to past technology, which were a little bit more "let's wait and see what the technology will do, it could provide a lot of benefit," is that we saw a lot of the externalities that came from that. And I think what people are realizing with AI is that a lot of those externalities are predictable. There might be some that are a little bit more speculative, that maybe we don't want to tackle before we understand what the shape of those risks is. But there are a lot of impacts that technology has had, specifically on marginalized communities, where it's very clear that without a thoughtful approach, or the right incentives for the people building the technology to prioritize those externalities, they will too easily be left behind. And so I think the question is, where's that intervention point? In some cases, there might be intervention points at the level of AI itself, like datasets or documentation, where some set of recommendations, rules, or guidance plays a role no matter where the AI system is being integrated. In other cases, the correct intervention point might be more at the implementation layer, and then you get back into the vertical areas of the approach that Adam was mentioning. But without some of those foundational interventions, maybe you don't have enough information in the context of deployment to know what AI system has been used, or for regulators to have the information that they need in order to take action. So I think we need to figure out the appropriate division: what are the things that can be tackled in the AI-qua-AI circumstance, and where are there specifications that need to come up in different contexts? There have been some proposals, for example, for some kind of risk-related designation, where agencies would then determine which types of implementations of AI fall into which types of risk. The EU has taken such an approach, for example: in general, there are some expectations on these systems, but in certain contexts there are heightened requirements because of the implications of deploying the technology in that context. So navigating that, figuring out how to tease it apart, and not letting everything be bundled into AI as an umbrella for everything will be really important to finding those right levers and tools.
Yeah, I think you sort of hit it on the head, which is taking a risk-based approach, focusing on the context, focusing on the potential impacts to individuals. The way we think about it, and we've seen this emerge not only in the European Union but also at the state level and in at least some bills here in Congress, is focusing on consequential decisions, on consequential impacts. Those are the areas where we know there's some agreement across the aisle: things like hiring decisions, decisions to terminate or set pay or promote; credit ratings or the offering or extension of credit; or healthcare. These are the core essential elements of human life that we all agree are the high-impact areas. So rather than try to have an all-encompassing AI approach that would regulate every potential use of AI, instead focus on those high-risk potential uses of the technology.
And just to provide some observations on the conversation: this is a more nuanced conversation now. Unlike Adam, I've only been coming to these since, like, 2015. But the conversation about these issues has developed, in terms of policymakers thinking about them and having constituents calling them about them. We are in a place where, to Miranda's point, we can recognize what the potential externalities are; we have real case studies of what they are in analogous cases, not necessarily the Internet, but other uses of algorithms that aren't necessarily intelligent. And I also think there's, again, a little bit of that lightning-in-a-bottle moment of it capturing people's imaginations, and of some of the benefits and the harms making intuitive sense: people can actually grasp how these technologies will, in fact, affect their lives, where it took a lot longer for the Internet to be taken up, for it to actually start impacting people's lives. It had to be deployed; I mean, we're still deploying it. Whereas I think there's more of a sense of immediacy to the need to think about these issues. But I also do think, in a positive light, that the policy conversation is a lot more nuanced and mature in terms of how to think about it. Some of the difficulties, of course, are that artificial intelligence is a broad umbrella term for a family of technologies that are actually quite different, encompassing a broad set of uses that are actually quite different. And, to our earlier conversation, it can be a little bit of a Christmas tree of pet issues and things that people are concerned about overall that aren't necessarily issues of artificial intelligence. That's where that nuanced and subtle distinction can be quite difficult, because the technology is also actively changing and shifting, and you can get technical experts in the room who disagree in fundamental ways about what the technology can do. It's not that either one of them is necessarily wrong; it's simply that it hasn't been proved out yet. So the train is further ahead than it was before, but we are riding it as some of the tracks are being built.
So, Miranda and Evangelos, you both brought up what's going on in the EU with the AI Act. Obviously, in Congress there's not really an equivalent just yet, and we don't know if there will ever be one. But there have been a lot of comparisons, at least, to what the White House has done on AI, considering the sweeping executive order that came out last fall. Would you say that was the United States setting its foundational approach to AI policy? Is that what the executive order mainly serves to do? And if I could ask at least one of you to briefly tell us what this executive order does, too.
Well, the EU was working on the AI Act for years before this kind of big moment, and that type of legislation takes a lot of work. The US had been focused on other big efforts like privacy legislation, and is now starting to think about what a holistic approach would look like, but that does take some time. And it was very clear in that moment that action was needed. I think the executive order really tackles a lot of different angles of this problem. It looks at some of the foundational issues that have been coming up for a long time, like automated decisions, government uses of AI, enforcement of existing laws when it comes to AI, privacy-enhancing technologies, and security components, and it also looped in some of the newer considerations around longer-term risks, national security implications, geopolitical dynamics, things like that. And it got a lot of credit, I think, for including so many different perspectives in that document. I think it was one of the longest executive orders that has ever existed, if not the longest; I don't know if anyone was able to actually prove that. But it goes to show the importance of this topic and the number of different angles that people were prioritizing getting into that vehicle. And I think we're seeing a lot of movement coming out of it. The timelines were pretty short; a lot of the agencies have already started undertaking the requirements they were asked to, and not everything has come to fruition yet, but we are seeing some movement. So there's still a lot to be seen about whether it will have the intended effect. Executive orders are somewhat fragile: if we have a change of administration, there could be some backtracking, and it's not going to be the most durable way to regulate these things. But it did provide, at the very least, momentum for agencies. And I think for industry players (I spent some time in industry, and I'm curious what you think, Evangelos) having big, loud voices saying that the people building the technology have to take a certain set of steps really does give more license for internal folks to commit money and spend time on those things. The question is, how will we make sure that momentum continues? How do you also make sure that attention is not overly narrow, focused only on the most advanced types of AI? Because you need that same level of care and diligence among the people building technology even for simpler systems, because they can have as significant impacts on people as some of the more advanced ones might have in the future; it's just a little bit more difficult to imagine what that will look like. And so I think that was one of the places where the EO raised some questions: what is the scope of focus, and how do we make sure the administration's broader focus on automated decisions and the impact on consumers and people doesn't get lost in this new moment?
I think comparing the executive order and the EU AI Act is a bit like comparing apples and oranges. The EU AI Act has been, as Miranda mentioned, in process since at least 2019. The executive order is a landmark executive order, but it is just that: an executive order. It is something where we've rolled up our sleeves to assist the administration wherever we can. And I'd just point out two elements that I think are going to have a lasting impact on the AI governance landscape. First is pretty new, which is the AI Safety Institute Consortium, which Workday is a founding member of. I think there's a general recognition now about the lack of standards, which are needed to underpin effective, scalable AI governance, and the AI Safety Institute is going to help fill that gap. By pooling resources, expertise, and talent, we're going to be able to help address what the Under Secretary of Commerce described as a new measurement science, which is exciting. So we're pretty happy to be part of that. And second, in terms of potential impact, is OMB's draft AI memorandum, which will govern how the federal government acquires, uses, and deploys AI. It's pretty ambitious. If you read it, it's clearly been informed by some of the commercially oriented AI proposals that we've seen in other jurisdictions. And, on the whole, it landed in a pretty good place: it focuses, again, on high risks to individuals, it deploys impact assessments, it relies on AI governance programs, and it includes a shout-out to the NIST AI Risk Management Framework. So if that is fully implemented, it is going to have a significant impact not just on how the US government acquires software and pursues IT modernization, but on the broader AI governance landscape as a whole.
So I'll just say thank you to my co-panelists for summarizing the AI executive order; I think that probably was my job, but I appreciate it. I'll just give a quick shout-out that there are also a number of slow burns (hot deadlines, but slow burns) such as our report on dual-use foundation models with widely available model weights. We are the President's principal advisor on telecommunications and information policy. We provide advice; we are not a regulator. We're not going to go out and immediately put out rules; we cannot. But we are putting out a report on this that will then feed into policy, into future conversations, future legislation, things like that. And there are a number of such reports and activities where the seeds are being planted. We have 270 days from when the executive order was signed in order to write this, and part of my brain is thinking about that right now. But there are going to be a number of things that come out of the executive order with a longer-term, lasting effect, things that will not be done and immediately completed on day 270. I do agree that a regulation as sweeping as a European Union regulation, as opposed to a directive, is a different beast than an executive order. But we are all very proud of having put out such a powerful, well-thought-through, and ultimately impactful policy document with multiple moving parts, where the entirety of the administration is currently working on this. It is not just a single agency or a single department, although the Department of Commerce has gotten a lot of that work; call out to my colleagues at NIST. Even though they are very different beasts, the AI executive order is going to be extraordinarily impactful, for the reasons Evangelos noted, but also for some of the smaller bits and pieces included in it that are going to have a longer-term effect.
I just want to say a brief word about the politics of the executive order, because I think this is important and sometimes overlooked. There are a lot of sensible things in this executive order, but it's so big; it takes an everything-and-the-kitchen-sink kind of approach to addressing all things algorithmic. And I think that's well intentioned, but the reality is it overshoots by quite a bit. I mean, the stuff in there about the Defense Production Act, some of the stuff about empowering the Federal Trade Commission, the Department of Labor, other things: these are things that are meant for Congress to do, these are legislative functions, and I understand the executive order is a response to congressional dysfunction and inaction. But the reality is that some of these things are overreach that ultimately, as Miranda said, could potentially be eliminated immediately by the next president. And I think that's even more likely now, because I think the executive order sort of supercharged the political discussion about AI on the Hill among Republicans, and now there's a lot of pushback about what's in the order, or just the order itself and how big it is. Now, will that mean Congress gets its act together and does its job? I don't know. But I think it's now a different discussion because of the executive order.
I do think, if I can add, it was important that the executive order noted all of the agencies that should be acting under their existing authorities to enforce the laws that already exist for the technologies that are already impacting those areas. The implementation of these systems in context requires that domain expertise, and probably some technical insight that legislation might not be able to capture precisely, whatever Congress puts together, if they end up putting something together. And so agencies are going to play a really important role here, and we hope that they will be proactive in thinking about how technologies of any kind, AI included, are in their wheelhouse already, and what action they can take.
So I'm hearing a little bit of "maybe the executive order was too broad in scope," and some of the political issues around the executive order. But what are some of the issues potentially arising now that the implementation process is beginning? What are some of the issues with the substance of the order, or where did the administration maybe miss the mark?
I think one of the big questions that underpins all of AI governance, and this bore out in different parts of the EO, is how do you define risk, how do you measure and detect it, and then what is required once you measure and detect it? In certain cases they've deputized NIST to come up with that, and that's really important, because the measurements that people are coming up with to determine what impact an AI system might have, what capabilities it might have, whether it passes a line of some kind, will determine the effectiveness of any interventions. And I'll be honest, the state of that measurement science right now is very nascent, very poor. It's kind of throwing things at the wall at the moment. So that's going to be really important to get right, and I think NIST is a good place to do it; they have that expertise. But we don't yet have a reliable way to set a benchmark and say this is probably within the bounds of what's acceptable and this surpasses them. Another example of that was with the foundation models with widely available model weights. The executive order sets a threshold of compute as the way to determine when risk might manifest, and they note that that's sort of a placeholder. But it was one that did raise a lot of questions: was that the right way to set that limit? Does how many parameters a model has and how much compute was involved in training it really tell us much about its risk? It's not necessarily clear; that's the hypothesis at the moment. But I think coming up with better measures for when risks are going to be unacceptable, where we want regulation to come into play, where we want rules to kick in, is going to be the crux of effective AI governance: knowing when something poses a risk that requires action.
At the risk of misstating something Travis has said, I think there's a lot being done in a very short period of time, and getting the right mix, regardless of the context within the executive order, is key to hitting the mark when it comes to the stated policy goal. I mentioned earlier the OMB AI memo: government procurement is really complicated. That's what I've learned preparing our comments for this. And it is, again, on the whole a very positive proposal, but getting the details right matters. So it's not missing the mark; I think, on the whole, the executive order does hit the mark. But making sure that the time is being put in so that you can listen to stakeholders, take on feedback, and amend these proposals as needed, so that what we do get is a very high-quality product, is certainly one aspect of the executive order we're keeping a keen eye on.
Just a brief word about the NIST process, because I think the one thing everybody generally agrees on in this debate is the important role that NIST has played in setting a multistakeholder framework and standards for AI governance issues writ large: security and many other issues in terms of AI ethics. But now, with the executive order and with some legislative proposals, there's an effort to put some potential added enforcement teeth there, and to utilize the NIST framework, which has thus far been multistakeholder, voluntary, and bottom-up, in more of a regulatory way. And that's not traditionally what NIST has done. NIST is not a computational super-regulator; we don't have a federal Internet department sitting in the Department of Commerce. So there's this subtle but important shift in the governance discussion now about taking that NIST role and supercharging it, putting it on steroids, whatever your preferred mixed metaphor is, and giving it some added enforcement. I think that's going to be the debate we're going to have this year in a big way. One other thing I need to mention, which the executive order, Congress, and everybody is missing, is just the absolute explosion of state and local efforts at AI regulation and governance. I think BSA did a study last October showing a 440% year-over-year increase in state and local legislative proposals. And the number of municipal proposals, including ones that have passed, is like nothing I've ever seen in the world of tech policy. The EO doesn't do a lot about that, and Congress hasn't done anything; all the bills that are pending, as far as I can tell, none of them talk about preempting the states. This is a very different governance framework for AI and computation than we had for the Internet, where we had very clear proposals and policies, basically from the Clinton administration and Congress, that this was going to be a national, global resource: the Framework for Global Electronic Commerce. Go back and look at it: 1997, White House, the single best governance document for any technology I've ever seen in my life. And it made it very, very clear that this was not going to be something we should meddle with in that way at the state and local level. So I see us going in a very different direction, and a dangerous one, on AI over the course of this year, especially with Congress being unable to act, apparently.
And just to put a little bit of a coda on that, maybe veering away from the state and local conversation, because it's certainly something that we're looking at and thinking about, but not something that we're actively, directly working on: I do want to point out that the executive order put in a number of steam vents. There are a number of places where, instead of making a final decision, which it could have done, it put in place reports or notices of comment. I just want to point out that OMB doesn't always put its guidance for procurement out for comment; particularly on something as high-level and as important as this, that's not a regular thing, and the executive order recognizes the need for that kind of engagement, that kind of thought. Yes, the deadlines are tight. I personally would love longer deadlines, but I think that's part of the urgency of the moment. And then, in terms of some of the definitions as well, agencies are given some discretion over interpretation and over future refinement of some of those definitions. So I do think that, while this is a policy document that is attempting to plant some flags in the sand, there is also a recognition within the administration that some of those flags might at some point need to be moved, and that there are mechanisms and ways to do so.
And I'm hearing that even though this is one of the most significant steps so far in the AI policy landscape at the federal level, there's still a need for Congress to step in. But Congress hasn't just yet, in terms of actually planting the flag the way that the White House has. So tell us more about how Congress fits into this conversation, and what you would like to see from lawmakers on the Hill.
Well, I don't think it's a secret that this is an election year, which is going to make legislating a little bit more difficult. Look, I will give folks on the Hill a lot of credit: there have been a significant number of very thoughtful, bipartisan, bicameral attempts to really grapple with these technologies, learn the possibilities, both good and bad, and figure out a policy way forward. So whether it's the Insight Forums or the AI working groups that have been put together in the House, Congress sometimes gets a bad rap, and I don't think that's earned. In terms of what we'd like to see, within the realm of what's possible in an election year, we'd love to see a commonsense, if targeted, approach, like the one taken by the Federal AI Risk Management Act. This is a bill that would require the federal government, and companies selling to the federal government, to adopt the NIST AI Risk Management Framework. It is not making NIST a super-regulator, but it is an important step forward in terms of articulating what our basic expectations are around AI governance, and it would really put the federal government in a place where it is leading in terms of what good governance looks like.
On voluntary approaches: there's so much fluidity at the moment that practitioners are going to be interpreting a lot of what's coming out of NIST or other agencies. But at some level, and at some point, voluntary approaches are not going to be enough to change how things are prioritized in practice, how money is prioritized to be spent. I think we need to set some of those baseline rules of the road that do change that behavior. The voluntary commitments coming out of the White House, for instance, are just that; you can use commitment language and, you know, truthful statements as a way to back into getting companies to take action that they wouldn't otherwise take. But to make sure that developers of AI really are integrating sufficient risk management practices into the development of AI, and changing the way they make decisions around what should be deployed and under what conditions, I think we need clear rules. And as we heard on the panel, that might not be the role of the executive to set; they're doing what they can, knowing that the attention is on them. But ultimately, having something stronger that will be durable, that will change those incentives, is going to be necessary. The trick will be: what are those rules focused on, where do they land? There is a lot of bipartisan action at the moment on these high-level concepts, and once we get into the details, the trick will be whether we can find some alignment where people recognize that those rules will actually make circumstances better for everyone, because they create certainty, they increase trust, and they set the parameters so that the conversation moves from spiraling to a little bit deeper set of questions: what are the details, what are our values, what do we actually want to show the world about how the US approaches these new challenges?
And I think one of the reasons why it's bipartisan right now is because, for companies that are developing and deploying these technologies, there's some baseline of agreement, in addition to the NIST framework, about what good governance looks like, although that framework is certainly a good baseline. Again, emerging technical standards, as I will always say, are still nascent. But when it comes to putting in place a governance program that maps, measures, manages, and governs the risks of AI; carrying out an impact assessment pre-deployment or pre-sale, so you understand the potential impacts on protected classes or the risk of algorithmic discrimination; and being able to provide some baseline of notice to folks who would be impacted by these consequential decisions, I think there's a lot more agreement around those basic building blocks than we sometimes give people credit for.
Just a brief comment. I've already given away my punchline, which is that we need to break AI policy into its smaller components if we're going to get anything done. But if we're gonna go high level, obviously, some of the things Evangelos just said about a soft endorsement from Congress for the NIST framework, utilizing it and extending it where possible: great, I think everybody's behind that. I think the trickier thing is the one I've already mentioned about a national framework, and what we're going to do about state and local activities. That's going to be extraordinarily tricky. It's hard enough to preempt on things like privacy and cybersecurity and other issues like that, and it's going to be even more difficult on AI policy. But it's a conversation many people in Congress are thinking about; they don't know where to go with it, and I think it's needed. What's also needed, and we're not going to get it (in fact, we'll get the exact opposite), is some sort of Section 230 for AI. And we're at the point now where we might not even have Section 230 for the Internet anymore. We're sort of one unanimous consent request away from Mr. Hawley on the floor saying let's get rid of it. So I think there's a real danger that we backpedal on that front and open up the liability nightmare that we prevented for the Internet. For me, Section 230 was the secret sauce that powered Internet commerce and freedom. But we run the real risk now of backpedaling on that and undermining the computational revolution if we don't get it right for AI.
I do think the analogies are somewhat different. I think the liability conversation will be really important. But if we think of AI systems only as language generators, which is the analogy that is in the news these days, it can end up misdirecting policy conversations; so many AI systems are actually effectuating outcomes directly and are plugged into systems that actors would otherwise have some responsibility for. So making sure that we're thinking about that goes back to the definition of AI. Certainly, I think generative AI is not the focus we ought to have when we're thinking about these interventions, and what is in scope of that conversation will determine what might make most sense from a liability standpoint.
One of the most helpful documents the administration put out last year was a joint statement by the FTC, the DOJ, the EEOC, and the CFPB (a lot of acronyms), all underscoring that there is no AI loophole in existing civil rights and consumer protection laws. That element of certainty is particularly helpful to the marketplace, and it serves as a baseline for understanding what the expectations are when you're developing and deploying these tools. I'd also give a shout-out to work that has built on top of that: we worked together with the Future of Privacy Forum, and again, a lot of this is pretty concrete, on a document around workplace assessment technologies, where we and a bunch of other AI vendors got together and mapped out what this looks like in practice. Because statements like those from the enforcers are particularly helpful, but you have to find ways to instantiate them in practice.
For Travis, I would ask: how closely is the administration engaging with the Hill on these issues, and what would the administration like to see from the Hill in terms of at least the first steps with AI regulation?
So in terms of engagement, we regularly provide technical assistance to the Hill on a variety of bills. The administration has been very heavily engaged in conversations with the Hill, and there are certainly lots of open lines of communication on this, and it being bipartisan is certainly a very helpful thing. In terms of what the administration wants, I'll say two things. One, I think the executive order did a good job of laying out where we have existing authority and relying on that existing authority, similar to DOJ, EEOC, FTC, and CFPB putting out the statement about their existing authority and what is and is not a loophole, mainly it being not. But there are other areas where the executive order is putting out reports, guidance, things like that, that at some point in time would probably need to have authorities enhanced in order to make them fully actualized. And the second point I would simply make is that the President has put out his FY25 budget, and there are a lot of activities the administration has in there that are focused on helping to fund this type of work. So the President has put out his vision for what the budget is and should be, and certainly that's another area where there's active engagement.
For example, NIST has been given a lot of authority and not necessarily the money to go with it.
So there were a couple of legislative proposals mentioned, but I want to ask: what bills out there right now are you all really paying attention to? One was mentioned, and maybe I'll mention another: the CREATE AI Act has been thrown around a lot, and there are some proposals out there to ban the use of deepfakes in federal elections. What are you all really paying attention to, and where do you see some movement, potentially?
I think the election stuff, obviously, is getting a lot of interest and potential traction, and that could be an example of breaking the issues down a little bit to address them, although I am concerned about potential overreach there, from a First Amendment perspective, on some of those issues. At the high level, it's difficult. The CREATE AI Act has got a lot of support. A lot of people in industry are interested in the Thune-Klobuchar bipartisan proposal, which builds on the NIST framework, does look into high-risk systems, and tries to narrow the focus a little bit. I think that's an interesting approach; I wrote a paper about it. And then there are a variety of other things that would just take a look at expanded government capacity. And one thing we always fail to mention is how much AI is already regulated; I spent a lot of time writing about this in individual contexts. Our federal government is massive: we have 2.2 million civilian employees working at 436 different federal departments. The idea that nobody's taking a look at or already regulating AI is ludicrous. I just finished a law review article on FDA regulation of machine learning and algorithmic systems. The EEOC is very active on this, and we're gonna hear from Commissioner Sonderling about that later today. And of course the FTC's actions, like the letter that went out there. So part of what Congress can do here is just an inventory: what are we already doing on AI policy, and how do we maybe support or improve that? That could be the most important short-term takeaway, but it's just not as sexy as saying "I've got a big grand bill to do all things AI," and I think we get caught up in that debate. I think that's really unfortunate, when the smaller micro debates are actually more important and tractable and could actually get action on Capitol Hill.
I'm gonna cheat a little bit and give you a state bill, which is AB 331 in California. It was introduced by Rebecca Bauer-Kahan last year, and I think we're probably going to see some momentum around it again this year. It does a lot of what I already mentioned: it focuses on consequential decisions, leverages impact assessments, and requires companies developing and deploying these high-risk tools to implement a governance program built on the NIST AI Risk Management Framework. We've seen California, clearly with privacy, willing to take a lead here, and in terms of what we realistically expect, if there's a bill that passes and really moves the ball forward this year, it's going to come out of California. Short of that, however, we've seen AB 331 also influence bills in other states, be it Washington State, be it bills in Virginia and Vermont and New York. So I think we're gonna see 2024 really be a year of the states. And yeah, AB 331 by Rebecca Bauer-Kahan is what's on our watch list.
We're watching a lot of different components that are moving, and I think some of them will end up in vehicles that might try to move on their own. But what I'm most interested in are the attempts to fill the gaps in existing regulation and legislation. Take employment and Title VII of the Civil Rights Act, for instance: it's been around a long time and does have enforcement power, and at the same time there are challenges for individuals who might be experiencing discrimination when it comes to AI, because they lack visibility. It's challenging to bring the counter-evidence that there might be an alternative the employer ought to have used, et cetera. So I think the things that focus on those building blocks matter: what sort of documentation do we need systems to have so that regulators, individuals, or their advocates know that an AI system was implicated, such that they can leverage the existing structures? What sort of third-party visibility or oversight might be needed to ensure that risk management is happening to a degree that sufficiently addresses the risk? So we're following all those things and trying to give advice where we can, to make sure that the proposals will have the intended effect and will create that environment of trust. For example, auditing and impact assessments have come up quite often as concepts with general consensus that there might need to be something like them, but what do they look like? Right now there isn't even a market for the type of expertise that third parties would need to do an audit effectively, but that might be really critical for some of these higher-risk systems. How do we make sure that the legislation is pointing to the standards that will need to underpin it, whether it's self-assessment or third-party assessment? And then how do you create the conditions where there are actors sufficiently independent to actually bring the accountability that legislation is going to want to impose in particular contexts?
And, fortunately, there's an agency within the Department of Commerce that's coming out with a report shortly on that very subject.
I want to take a step back here and maybe speak to the overall approach mentioned earlier: that risk-based approach, not a one-size-fits-all approach. You hear administration officials and members on the Hill saying they're trying to strike this balance: we want to promote AI for its many benefits, we want to stay competitive with China, but at the same time we want to protect Americans from a lot of these risks and concerns and harms. But how do you ensure that all these voices are being heard, given all the different stakes of AI regulation? How do you strike that balance? Because there are concerns, of course, about industry playing too much of a role in regulation, or maybe playing an outsized role, and then there are concerns about being too heavy-handed and stifling innovation. So how would you suggest policymakers strike that balance?
I mean, anywhere that the details are being talked about, multistakeholder conversation is really important. I'll talk about NIST as an example. The AI Safety Institute just announced this consortium, which is a variety of organizations, industry, civil society, and academia, ostensibly coming together to talk through some of these issues. CDT is a member of the consortium, which we're really glad about, and there are a handful of other civil society organizations that we work with that are a part of it. But just from a proportions perspective, the amount of staff, time, and resources that different components of that multistakeholder group will have to spend on that process will, I think, have a natural effect. And so it means that policymakers need to make even more of an effort to include the voices of communities who wouldn't otherwise have space in the room, because otherwise it's very easy for them to be drowned out, just due to the nature of the resources that folks have to spend. At the same time, people building the technology, who are close to it, have interesting insights that are important to consider; we do want to make sure that policies are tackling the problems they want to tackle, and sometimes there can be a mismatch when the technology is moving very quickly. But I think that has to happen out in the open. We saw with the Insight Forums on the Hill, for instance, that they did involve a whole lot of different folks, but also weren't as open as they could have been to let more people contribute their expertise and share their experiences. And how do we make sure that the voices of individuals who are consistently not considered are weighed in the same breath as the more macro issues that policymakers are considering? It's their job to consider all of those.
Industry is not a monolith, so I'll give you an example. As I mentioned earlier, we're an enterprise B2B software company, which means that we have very large customers who are not going to use AI tools they don't trust, full stop. And so that has created a very important market pressure for us, and I think some others, to take steps to institute safeguards on the AI development side, so that we can set up our customers to succeed and comply with either existing law or laws that are coming online, so that that trust gap, again, is addressed. Because, again, large organizations are not going to adopt AI at scale if they don't trust it. And so, as I mentioned, we adopted the NIST framework, and we were actually the first company to put out a case study on our adoption of the NIST framework. We use impact assessments, we have an AI governance program, and we enable testing of tools for algorithmic discrimination. So these steps are already out there. Why do I bring this up in the context of what policymakers can do? Because they can find ways to codify this, to take advantage of these paths that have already been laid out by many of the leading companies, because, again, no one is going to adopt AI if they don't trust it. So this is not a call for self-regulation; I don't think that is going to work in this case. But you can find workable regulation that puts safeguards in place while also moving the ball forward and enabling innovation.
Understanding we're short on time, I'll keep it very short. Making sure that we are reaching out to as diverse a group of the community as we possibly can, to get as many stakeholders in the room and talking to us as possible, particularly people who are directly affected by the technology, is an evergreen challenge and mandate that we take very seriously.
Just to bring those themes together: all roads seem to lead back to some sort of agile, iterative, multistakeholder approach to technological governance, to keep pace with the evolution of this fast-paced technology. And that has really important ramifications for our broader economy and our geopolitical security as we square off against China and other nations on this front. Right now, because we got policies right in this country for the Internet, 18 of the 25 largest technology companies in the world by market capitalization are American based, and right around 50 of the biggest employers in the information technology world are US based. That was a direct result of us taking a more flexible, agile approach to the Internet and e-commerce, which is the exact opposite of the approach the Europeans took. I always challenge my students and crowds whenever I'm talking about these issues: name me leading information technology companies based in Europe. Now, there are some, but it's really, really hard. And that was not an accident; that was a real result of bad policy. So we don't want to take the approach to AI that the Europeans are taking; we need a different US approach that's more flexible and iterative, and yes, quite messy. The thing that people don't like about multistakeholderism is that it's messy. It's iterative. It's agile. It goes with the flow as things develop. I understand that people want an overarching, upfront, preemptive, precautionary solution, but that's not the right solution if you care about our national competitiveness and our geopolitical security.
Well, I want to thank you all for joining us today for this panel. And thank you to our panelists, too.