SOTN2023-08 What’s Next in AI Governance? Frameworks, Regulations, & Impact Assessments
7:58AM Mar 16, 2023
Speakers:
Ashley Gold
Elham Tabassi
Bertram Lee Jr.
Evangelos Razis
Michael Richards
Keywords:
ai
people
approach
organizations
technology
impact assessments
law
talking
important
state
standards
understanding
context
nist
eu
discriminating
terms
governance
community
chamber
Hello, everyone, we have a great panel here to talk about AI governance more broadly beyond Congress, I'm gonna let my panelists introduce themselves, maybe give a line or two on how you use AI in your job or how it applies to your job currently, and then we can go from there. Start with you.
Thank you. Elham Tabassi; I lead the trustworthy and responsible AI program at NIST. I have been with the National Institute of Standards and Technology since 1999, working on various machine learning and computer vision programs and applications, including biometrics, and we just recently released the AI RMF, which I think we'll be talking about today. Thanks for having me.
My name is Bertram Lee, Senior Policy Counsel for Data, Decision Making, and Artificial Intelligence with the Future of Privacy Forum. I'm looking at AI regulation across the globe, at both the state and global level. And yeah, that's my daily use of AI.
Evangelos Razis with Workday. We are an enterprise software-as-a-service company, providing enterprise financial management, human resources, and planning software to over half the Fortune 500. I am the US AI policy engagement lead.
My name is Michael Richards. I'm the policy director at the Chamber Technology Engagement Center within the US Chamber of Commerce. My day-to-day job is to work with our AI working group of 300-plus members, as well as look at state and local, as well as federal, policy.
Thank you. I think we will start with the NIST framework. It kind of came out just in time, a little bit ago, when everybody was really talking about generative AI and how it applies to their lives. I think some people were surprised to learn that NIST had actually been working on this for a while. So can we talk about the framework, why NIST put this together, how it came about, and how you see it fitting into this very current conversation about AI?
So, your question about why NIST is working on this. NIST has a very broad portfolio of research and a long-standing tradition of cultivating trust in technology. We do that by advancing measurement science and advancing technical standards that make technology more robust, secure, private, in other words, more trustworthy. And that's exactly what we're doing with AI through our foundational research. But also, as I said, NIST has a very broad portfolio of research, so our scientists are also using AI for scientific discovery in their research. So we get to see how AI is being used, and also work on foundational concepts of what constitutes trust and what trustworthy AI looks like. About the AI RMF, the AI Risk Management Framework, directed by congressional mandate: the AI RMF is a voluntary framework for managing the risk of AI systems in a flexible, structured, and measurable way. Flexible to, first and foremost, allow for innovation to happen, but also to allow different organizations with different levels of resources to be able to adopt it and use it. Structured in the sense that it starts by trying to provide a unified lexicon and vocabulary about what an AI system is, what the risk of AI systems is, and how the risk of AI systems differs from the risk of any other information or data systems, and with that it lays out why another framework is needed for managing the risk of AI systems. And it's measurable, and that's really near and dear to our heart, because if you cannot measure it, you cannot improve it. So if you really want to improve the trustworthiness of AI systems, any approach for risk management, any approach for understanding trustworthiness, should also provide metrology for how to measure trustworthiness. It adopts a rights-preserving approach and provides and outlines processes for understanding and measuring traditional measures such as validity, reliability, and accuracy, but also, importantly, acknowledges sociotechnical characteristics such as privacy, interpretability, transparency, security, and bias, and outlines processes for understanding and managing them. These characteristics are tied to human behavior, and they cannot be summarized into a single number and measured with a single threshold. It was released on January 26. Very quickly, to answer your question about generative AI: of course, the release of ChatGPT in November captured a lot of attention across the media and everywhere, but a lot of us in the AI field were aware of these generative AI systems, and we were thinking about this. The way the AI RMF defines AI systems, that definition is definitely applicable to generative AI; that was by design, for something we had our eyes on. And the second thing is that a risk-based approach is a very powerful approach, and if it's done correctly, it's also flexible enough to allow for all of these different innovations that happen. Granted, understanding the risk profile of a generative AI such as ChatGPT is going to be a lot more complicated and complex than the risk profile of a deep neural net, which is in turn more complicated and complex than a logistic regression. But in the end, what's important is to have a structured way of understanding risk, a good taxonomy of risks, and methods and measures for measuring them. If I have time, I just want to say a few words about how the AI RMF was developed, in an open, transparent, collaborative process.
It started with a request for information, rounds of workshops, and rounds of drafts for public comment. In the span of 18 months, we heard from more than 200 organizations and received more than 600 sets of comments. The outreach and engagement ranged from the tech community to civil society to legal scholars. On our team, we consulted with psychologists and sociologists, and we have cognitive scientists on our team, because, again, AI systems are all about context, and risk management cannot be done without understanding the context and without a sociotechnical lens and approach.
Thank you so much. I was going to ask you about generative AI and the framework, so thank you for addressing that. My next question is for you three. AI is obviously rapidly advancing. How do you perceive the state of governance today in your organizations in this AI landscape? How are you perceiving how you're using AI and thinking about AI right now? And we can start with you.
So, I think what we do at the Future of Privacy Forum is really find the nuance and really just kind of describe where the law and technology meet. And where there is not clarity, we try to provide best practices and guide firms and organizations, with the help of civil society, academics, and other organizations, to be able to build forward: how to make these technologies safer, how to build them in context with data privacy protection principles, how to ensure that civil and human rights are part of the conversation, and how to make sure that they're workable. And I think from that perspective, what we're seeing right now in the governance space is that, with the EU AI Act kind of moving forward, we're also seeing states really lead the way. There are a number of proposals, in Connecticut, California, the New York AI law, I think there's one in Oregon, if I'm not mistaken, don't quote me on that. But there are a number of proposals that are coming out that are asking organizations that use AI to be more transparent. And I think transparency is going to be the next big wave within AI regulation. Can you cite your work? Can you show your work? Are you willing to have somebody open it up and say, this is how we did testing, this is how we did risk management, this is how we do bias testing? These are all things that are so incredibly important, because the scale of these decisions is what I think scares regulators, policymakers, and people so much. The ability of AI to make thousands and thousands, if not millions, of decisions that are really consequential to the lives of people every single day is the thing that has a lot of people really wanting to regulate the space. And honestly, the organizations that we talk to want regulation as well; they just want to make sure that regulation is in line with how they're using the tool, in line with risk, and in line with their legal liability as it stands right now. And so all of those things are like a massive dance right now, but it's moving forward in a particular direction.
So I think John put it very well earlier: these are very complicated issues, and I think folks are figuring out what is the art of the possible at the same time that, clearly, the AI technology is moving at a certain pace, and how can policymakers act in a way that builds trust? Really, that's the foundational framework through which Workday views a lot of these AI policy conversations. How do we advance trust? How do we enable responsible innovation? We've seen quite a bit of interest from policymakers, as Bertram mentioned, at the state level, obviously here in DC, and then internationally. In terms of what specifically is within that realm of the possible, it's accounting for the still-maturing state of AI governance. Clearly, we have the NIST framework, which is really an important step forward in terms of how organizations operationalize trustworthy AI, but technical standards are very much still in development. And so, in terms of what organizations look at, at a very practical level, when it comes to implementing trustworthy AI, a lot of that is still being written; a lot of that foundational work is happening now. And so we try to be constructive partners with policymakers, recognizing the need to do something, recognizing the need to move the ball forward. For us, I think we arrive at AI impact assessments, which I know is sort of a key subject of the panel, differentiating between the different roles that companies play in the marketplace, and then, of course, taking a risk-based approach, which makes sure that we are focused on the use cases that present consequential risk to users and also aligns us with the EU AI Act, which is certainly a key priority for us.
Yeah, so, us at the US Chamber. First, PwC put out a study a few years ago that showed that AI has the potential for a $13 trillion increase in GDP globally. On top of that, we recently did some polling, which showed small businesses planning a 27% increase in their use of AI within the next year. So within the business community, this is going to happen; people are utilizing artificial intelligence. And we need to make sure that the use of AI is going to be trustworthy and developed in a way which mitigates potential issues around it, so that society in general, all of us, can have trust in its utilization. We've taken that, first and foremost, as our approach to this: how do we go through that process, working with our member companies to develop trustworthy AI, be it providing comments back on the AI Risk Management Framework or on the other things happening within the federal government and state governments as well. But on top of just saying things, the US Chamber actually decided to do more than that. A little over a year ago, we put together the US Chamber Commission on AI, with two former members of Congress, John Delaney from Maryland and Mike Ferguson from New Jersey, as well as a group of people from academia, the business community, and groups such as the Brookings Institution, to go around the United States and globally to actually look at this issue as far as recommendations, regulations, and ways in which we approach it. And I'm happy to share with everyone on this panel, as well as outside, that those recommendations are actually going to be coming out this upcoming Thursday. We look forward to sharing that with everybody and looking for a bipartisan approach to developing a way forward around this.
So this will be the Chamber's sort of principles and guidelines on AI for businesses?
So, this is actually not the Chamber's; I can't say that. The Chamber funded this as a thought leadership process, but this is more about having a bipartisan way of approaching things. We saw this as being so important that it was necessary to bring everybody together to allow this to come out.
I have a question for Bertram. So, with what you've seen come out so far with this framework and the AI principles from the White House, what's your perception of how well they're addressing diversity and inclusion? Any worries about how AI can disproportionately discriminate against people of color and women? Are we addressing these things in a good way so far, and where does more work need to be done?
So I would say that there are a couple of things that folks need to keep in mind. One, the laws that are on the books right now apply to AI as well, right? So whether you're talking about the context of civil rights laws, Title VII, Title VI, whether you're talking about the Fair Housing Act, whether you're talking about ECOA, FCRA, the ADEA, the ADA, right, these are just a smattering of them, there are more. These all apply currently to organizations, to firms, right now. What there isn't really, though, is a great way of thinking about these tools and how you test these tools in a way that's equitable. So, for example, different communities have different contexts about how they want their data used and how that data should be tested. Also, what is the correct test? Are we talking about the four-fifths rule? Are we doing statistical significance? Right? These are things that regulators need to coalesce around. And there also needs to be a way to think about what is a privacy-protective way to test for disparate impact across a litany of communities, and how do we ensure that that data is accurate? And how do we make sure that that data actually coalesces around these communities, so that we can do this in a way that is consistent, that is protective of rights, but that is also actually doable for companies and firms across the board? That's a conversation that I think is missing right now, and I think civil society, us at the Future of Privacy Forum, and folks within industry are all trying to have these conversations around where to move those things. And then, right now, there is a real heavy emphasis on bias, but there is not as much about testing and about data. And I think those conversations are just as important, right? Because in order to figure out how AI is being biased, how AI is potentially discriminating, you need to be able to have the right data set, whether that's synthetic or created. What do you do? How do people sign up to be part of those datasets? How do people make sure that they're not? All of those things are complicated conversations that we're looking forward to having moving forward.
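[Editor's note: as a rough illustration of the four-fifths rule Bertram mentions, the sketch below shows the kind of selection-rate comparison it implies. The group labels, counts, and flagging logic are hypothetical, not a description of any particular regulator's or organization's testing methodology.]

```python
# Illustrative sketch of the "four-fifths rule" for adverse impact screening.
# Groups, counts, and the 0.8 flag are hypothetical examples for explanation only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    A ratio below 0.8 (four-fifths) is commonly treated as a signal of
    potential adverse impact warranting further review.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical applicant pools and selections for an AI-screened hiring step.
    applicants = {"group_a": 200, "group_b": 150}
    selected = {"group_a": 60, "group_b": 30}

    rates = {g: selection_rate(selected[g], applicants[g]) for g in applicants}
    ratios = four_fifths_check(rates)

    for group, ratio in ratios.items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

This only captures the arithmetic; as the panel notes, choosing the comparison groups, the data, and the statistical test is the harder, unsettled part.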
And is your organization developing best practices, or are you talking to lawmakers, or NIST specifically, about things you would watch out for, providing guidance there?
Absolutely. I mean, I think we talk to organizations a lot about best practices; we're developing those around HR and AI, working with a few folks in here, as well as working with Evangelos on those. And so what we're trying to do is figure out, what are the best ways to have organizations engage in best practices around AI? And also, what are some of the best recommendations that we can have around these tools and principles? That is going to manifest itself in a lot of different ways. But, you know, coming from civil society, I'm going to lean toward that direction, because those are the people I've worked with, but also folks in industry, and policymakers as well.
Yeah, I mean, it reminds me of the Section 230 debate and other instances where you're not really sure how far the law goes in impacting people until you see a lawsuit or you see something unfold in court. So are you looking to get ahead of that and decide how our current laws are applied to AI before you're, you know, just dealing with a lawsuit, and that's kind of your only example to look at?
Well, I think it depends on who you ask, right? If you ask the civil rights groups whether the law applies, they're going to extend it to what they envision as not only the letter of the law but the spirit of the law as well, right? If you ask firms, depending on whether the firm is in a high-risk area or not, they're going to kind of limit that to what the courts have actually said. But there's a lot of space in between where there's agreement. And that's what I think we're finding more and more: folks want to do the right thing. Folks want to engage in these processes, be active stakeholders in them, and make sure that their products are not discriminating against particular communities or large swaths of communities, because the future of America is one that is a multiracial coalition. And if you're discriminating against your potential future users, your potential future clients, that's bad business practice. And the firms that I've talked to about this completely agree with that notion and want to be as inclusive as humanly possible.
Absolutely. So for Evangelos: this framework is voluntary; it's not compulsory by law to follow it. I know Workday has been really involved in sort of crafting it, and you guys are excited to adopt the guidelines, and other sort of mid-sized companies are excited to adopt these guidelines, but they're voluntary. So how do you get the bigger companies excited to look at the NIST framework and say, hey, maybe we should be following some of these principles? How do you get the whole sector to adopt, at least in spirit, some of these guidelines when it's not in law?
Yeah, I would say let's start with looking at NIST's greatest hits, almost like a good band. You know, we start with the Cybersecurity Framework and the Privacy Framework; these have influenced best-in-class governance programs in both of those respective fields. They're very widely respected, in part because NIST runs such an open and transparent process, and because of its technical expertise. And so when you talk to a lot of folks who understand these governance issues, you never usually have to spell out what NIST means; they're familiar with the organization and its products. And so, look, I think part of it is being able to sell the utility of this: it is a very practical how-to guide for organizations. For us, it has influenced and will continue to influence how we think about AI governance, and I think in many ways the utility of it means it's not exactly a difficult sell. So part of it is just education, being able to speak to others about why this is such a valuable tool, how it advances trustworthiness in AI, and then, like any good stakeholder process, inviting folks to the table, allowing them to share their best practices and how they're using it. And the ambition here is, like the Cybersecurity Framework, seeing the AI framework become a common language in which we're thinking about AI. Because, Bertram, as you mentioned so eloquently, there are a lot of discussions right now on AI, and there's a desire to come together around common concerns, to craft common ways forward. And we certainly see the NIST framework as that potential common language, because it's already been done in other fields.
Michael, do you have any thoughts on that, given that you're in touch with companies often?
So, I mean, on top of the large companies, I think, as Evangelos was saying, they're already looking to utilize this; they're already going through the process. I think a lot of other large companies that I've talked to are doing the same thing, to be completely honest with you. I think where the effort really needs to go is the small and medium-sized businesses, as well as the C-suite executives, because that's really where you're going to change hearts and minds. Small businesses don't have the resources, to be completely frank with you; they don't have a legal department to look at this, see how they're utilizing it, track it, and look at where they are in their lifecycle to really go through it. But there needs to be this understanding and lexicon, as we were just talking about, around these issues, and the right questions to ask to go through the process, which is something the RMF does a very, very good job of providing. I also think there needs to be the opportunity for continued dialogue around it, and that's what NIST is doing right now, be it the Playbook as well as the upcoming profiles. This is 1.0, is the best way I like to say it; it is not the end of this. It's a living document; it's going to continue to be discussed and worked on. And for us at the US Chamber, we look forward to continuing to work with them.
I want to talk about Europe for a little bit. The EU AI Act is soon going to go into effect in Europe. So, from a high level, how is everyone comparing the US approach so far with what Europe is doing? What are the major differences and similarities? And we can start with you.
Okay, thank you for all of the kind words about the AI RMF; that's really good. And yes, we're committed to working on the profiles and implementation. What are these profiles I'm talking about? The AI RMF by design is a framework, right? It's technology and use case agnostic and, of course, law and regulation agnostic, because we are non-regulatory. At the same time, as I said, and I think you'd all agree, AI is all about context. So risk management for an AI system that looks at a scan of a brain image and makes some prediction or diagnosis will be different from how we use AI in hiring or how we use AI in other things. So profiles are going to be verticals, or tailored sets of guidance, of the AI RMF for particular use cases. And that, again, goes back to some of the flexibility that we said we built in. You asked about the voluntary nature of this, but the RMF already brought the community to a shared understanding of what we mean by risks and provided that common lexicon and language, which I think is the first step: whatever else we want to do, first we want to know what it is we're talking about. About the EU: their approach is also a risk-based approach, but there are similarities and differences between the two approaches. At the Trade and Technology Council, one of the deliverables that came out on December 5 was a joint AI roadmap, which is focused on understanding the similarities, understanding the gaps, and figuring out what can be done on the two sides of the Atlantic to bridge the gaps to the extent possible. With the RMF, we have also provided some crosswalks; one of those crosswalks talks about how the trustworthiness characteristics in the AI RMF map to, or relate to, the concepts in the EU AI Act but also the OECD AI Recommendation, so we try to leverage all of the work that has been done. You said that it's going to come into effect really soon; really soon is a little bit of a subjective term, it's still some way ahead. My personal belief is that regardless of the regulatory and policy landscape, having good, solid, scientifically valid, technically sound international standards that talk about risks, risk management, and trustworthiness can be a good backbone and a common ground for all of these regulations and policy discussions. And that is our job, that is NIST's job, to bring those types of technical contributions to the table for the conversation.
Happy to weigh in. I think a lot of what Elham said is so important, because it enables technical interoperability between the emerging approach here in the US and the AI Act. You know, Workday has been watching the AI Act very closely and engaging, again, trying to be constructive and building bridges between what is happening in Europe and, obviously, the emerging AI governance approach here in the US. The importance of standards here, I think, is worth digging into a little bit, and this is what differentiates us from the Europeans. The Europeans can legislate, and generally when they say they're going to legislate, there is legislation; there will be an AI Act, we know that for certain. And so they will then turn to their standards development organizations and ask them to fill in the blanks. That is generally not how the US has approached technical standards; there is a more market-driven, bottom-up approach to standards development. And so there's a lot of interest, once the AI Act is enacted into law, in getting those standards in place at a time when a lot of this is being built from the ground up. I think this has been a challenge, as we view it in the US, in terms of, again, what is the art of the possible in advancing AI governance in a way that recognizes that those technical standards are still in development. And it's one of the reasons why we're so much in favor of impact assessments: they don't rely on technical standards, even though, as inputs, good standards will always be helpful to good organizational practices. So this is certainly a space that we're watching. We think there's a lot of potential in the AI Act to become a global benchmark, and I mean that positively. And certainly we're watching this space very closely and eagerly following the good work coming out of the TTC.
Either of you want to address Europe?
I think one of the things that gets missed in the conversation between the US and Europe is that, while there are differences in how we think about bias, and in terminology, we use the term marginalized communities in the US, in the EU they use the term vulnerable communities, and there are different standards by which they constitute those, if you look at the totality of the civil rights laws that I basically blurted out previously, and also the European Convention on Human Rights and the UN human rights conventions, they map almost the same. Not a perfect one-to-one, but mostly they map on pretty similarly when it comes to protected classes. And so that also manifests itself in how the EU AI Act thinks about high-risk activity. Again, not perfect, but it's almost one-to-one with what you currently cannot do within the context of the US when it comes to what activities we think of as protected. Here, they're in different laws, they may be in different standards, they're not necessarily all considered, quote unquote, high-risk activity; they're covered by a myriad of different laws, right? But they're very similar. And so I think there's not as much of a delta between how we think about this, and I think that's something that we miss in this conversation as well.
Yeah, but one thing I would add to this: the US Chamber has a lot of concerns regarding the EU AI Act in its current state, but the one thing it did get right was being risk-based. Whereas something that we're seeing right now in the United States is the ADPPA, the American Data Privacy and Protection Act; specifically, Section 207 is really not a risk-based approach. They put consequential risk in there, but that is a term left for the FTC to define, not actually Congress; we would ask Congress to define that term. That, as well as the requirement to submit impact assessments through that provision, really makes it a mandate for the FTC rather than a vehicle for an actual risk-based approach.
So, impact assessments, that's addressed in the ADPPA as you just said. What exactly, in very simple terms, is an impact assessment, and how does it apply to an AI application? Anybody who wants to answer, go ahead.
I'll take a shot at how to define it. An impact assessment is a tool to help organizations identify, document, and, this is an important one, document, and address risks associated with a technology. Notice I didn't use the word AI in that definition, because impact assessments are already used in other contexts, like data protection or privacy. Under GDPR, for example, you're required to carry out a data protection impact assessment. Under several state laws, you're required to carry out a privacy impact assessment of some form, although the requirements differ depending on the state. So it is a very valuable tool to help organizations take a holistic view of the technology before it goes into the marketplace or before it is used. We think it's a really valuable approach to addressing some of the risks around AI, and a necessary step forward as a lot of the technical standards are being developed. I will give a shout-out: we released a paper along with Access Partnership on the role of impact assessments, and for those who may have seen our little card, there's a QR code where you can get access to that and learn more.
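[Editor's note: as a rough illustration of the kind of identify-document-address exercise described above, here is a minimal, hypothetical sketch of fields an AI impact assessment record might capture. The field names and example values are illustrative only; they do not reproduce any statutory, GDPR DPIA, or NIST AI RMF template.]

```python
# Hypothetical sketch of what an AI impact assessment might document.
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    system_name: str                  # the AI system or use case being assessed
    intended_use: str                 # what the system is for, and in what context
    affected_groups: list[str]        # communities or stakeholders who may be impacted
    data_sources: list[str]           # training and input data, with provenance notes
    identified_risks: list[str]       # e.g., bias, privacy, security, reliability concerns
    mitigations: list[str]            # steps taken to address each identified risk
    testing_performed: list[str]      # bias testing, accuracy/validity checks, etc.
    residual_risk_rationale: str      # why remaining risk is considered acceptable (or not)
    reviewers: list[str] = field(default_factory=list)  # who signed off, for accountability

if __name__ == "__main__":
    # Toy example: documenting a hypothetical resume-screening tool before deployment.
    assessment = AIImpactAssessment(
        system_name="Example resume screener",
        intended_use="Rank applicants for recruiter review; not an automated rejection",
        affected_groups=["job applicants", "recruiters"],
        data_sources=["historical hiring outcomes (2018-2022)"],
        identified_risks=["potential disparate impact on protected classes"],
        mitigations=["four-fifths-rule monitoring", "human review of all rejections"],
        testing_performed=["selection-rate comparison across demographic groups"],
        residual_risk_rationale="Impact ratios within threshold; quarterly re-testing planned",
        reviewers=["privacy counsel", "ML engineering lead"],
    )
    print(assessment.system_name, "-", len(assessment.identified_risks), "risk(s) documented")
```

The point, as the panelists note, is the holistic, documented review before a system reaches the marketplace, not any particular schema.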
And when it comes to impact assessments, just to address two things at once: one, I think 207 actually works as a public accommodations provision. We currently have that in law as well; businesses already have to comply with public accommodations laws, and as it stands right now, 207 is more of a digital public accommodations law. That is, I think, important to keep in context, because it reminds businesses and firms that you have to include everyone in how you design tools, and make sure that you're not discriminating, or at least attempt not to discriminate, and impact assessments are a way to codify that: here's what we're trying to do, here's how we're thinking about it, here's how we're trying to at least address some of these issues. And I think 207 in the ADPPA does that. And I think where impact assessments start is that we're developing technical standards right now, we're working toward that, but there isn't necessarily a common language that we have quite yet. As we develop that, let's at least try; it's the same context of, let us try to get these things right, let us try to be as inclusive as we possibly can. As was stated earlier, this is an administration priority: be as inclusive in the economy as humanly possible; it makes all of us stronger. That's the context of technology allowing us to buy into the idea that all of us collectively, when we work together and we all have access to these tools, can build a better and greater society, build a better and greater Internet, build better and greater tools. Right? That's all 207 is trying to do in that context, and I think the impact assessment piece is so critical for that. In my previous role, when we were working on that section, that was the whole concept behind it. And I think there's pushback against it, and some of it's valid, but I think the broad pushback is not really about the public accommodations piece or the broader context of allowing more people inside of the technology, allowing more people to participate in the digital economy; it's about how we do it. So as we're working forward, let's make sure we keep in mind that we want everyone to be able to participate in this new digital economy. And that's, I think, a really key point for not only policymakers here: it's true in the EU, it's true in South America, it's becoming true in the APAC region, it's going to be true in Africa. The whole world understands that concept, and let's not be the one who lags behind as America.
So, very quickly, I can underscore something really important that Evangelos, and I heard it from you several times, pointed out: the importance of assessments, but also the importance of having a good understanding of what to measure and how to measure it, the metrics, methodologies, and test data that can do that. And I think that is something that, particularly from the sociotechnical angle, is a challenge for the whole tech, science, and research community. I usually say that we, as an agency, have 120 years of history of running benchmarks and evaluations; we have perfected the art and science of technical evaluations and technology evaluations. But when it comes to measuring technology from the viewpoint of, is it working for everybody? Are AI systems benefiting all people in an equitable, responsible, fair way? There are major technical challenges there. We have a lot of really good scientists and research communities working on that, and I think that's something really important.
So let me ask you about a real-world example right now. Microsoft released Bing's ChatGPT-style chat, not to the full public but to a limited set of Bing users on a rolling basis. And their point of view going into that was, hey, maybe this isn't fully ready yet, but we really want to see how people are using it, the sort of feedback they're getting, some of the flaws people are pointing out; that helps us learn, that helps us make the product better. Now, on first impression, I wouldn't think that's a traditional, you know, impact assessment exercise; that seems a little different to me. Is that an example of that? Is that a little more public than they should be? How do you feel about that approach, compared to what you guys just described, which is obviously a very methodical sort of rollout?
So I won't comment on Microsoft or Microsoft technology, or I might end up complaining about Teams. But I will say this: this goes a little bit back toward the risk-based approach and why I think ChatGPT has interested so many in AI governance, which is, you know, we're talking about a use-case-specific approach to focusing resources and attention by organizations, by government, and by policymakers on specific uses of a tool. And I think some of the questions that have arisen around ChatGPT have been about broader, non-use-case-specific applications of AI.
I think, is that because people are sort of just throwing stuff at it and seeing how it responds, and not really in any uniform way?
You know, that is a good question. I myself am still trying to understand the implications of it. But yeah, I think Elham put it well, which is, you can still use the tools that we're talking about now to try to understand and address the risks associated with any AI technology, including specific use cases of generative AI.
So, the AI RMF puts a lot of emphasis on test, evaluation, verification, and validation. It also talks about risk management being a shared responsibility across the whole lifecycle of AI, and in the guidance that it provides, and in the Playbook, it talks specifically about the roles of designers, developers, deployers, and testers along all of these lines. Again, evaluation and measurement are extremely important; that is exactly where the rubber hits the road on understanding risks and doing the risk management. It's important, when this type of testing is being done, that the impacted communities are identified so that the magnitude of the impact can also be measured. Evangelos talked about participatory design, making sure that there is inclusive outreach to the people who get tested and who are involved in whatever role AI actually plays. So the importance of doing the right verification and validation before putting these types of products out cannot be overemphasized. When they are out, they are out with all of their risks, right? So this idea that we just put it out and test it as people are using it is not a very good risk-based approach.
Yeah, I don't think, you know, sort of letting people use it and seeing what happens is exactly what you guys have in mind as the best practice. We can end here on China. A lot of lawmakers seem to be motivated by this idea that we need to regulate AI with our American principles to keep up with China. So is that a good motivator? Is that how lawmakers should be thinking about this? And how are you thinking about the global competition aspect here, especially with China? And we can start with you.
Sure. Obviously, China has its 2030 plan, and they have stated that, be it standardization, be it AI regulation, it's everything they want to be part of, and they want to lead by 2030. So obviously, it's something that our policymakers are looking at and have to address. I think this would be another great opportunity for me to plug the AI Commission's report coming out this Thursday, which actually has a national security piece to it, and I think that would provide lawmakers something to go off of as far as starting principles. Thank you.
I think your question speaks to getting the balance right, which is, how do we address these issues of public concern in a meaningful way while also enabling responsible innovation and enabling companies that want to do the right thing, or are doing the right thing, to do so? And certainly that question draws it into starker light. I also think it speaks to the importance of having consistent policymaking. There are certainly concerns, as we've seen in privacy, around states acting; I think there are some concerns around a patchwork and around inconsistent requirements between different states as well as here in DC. And so having, again, a kind of common language, a common way of thinking about AI, and addressing this issue of public trust without unduly burdening responsible innovation.
I think I would just say that the strength of what we do is when we include more people inside of the process and ensure that our tools work for as many people as possible. If we're talking about the global AI context, making sure our AI actually does work for the globe: the globe is not full of just one kind of person; the globe is full of a multiracial coalition of humans who are trying to engage and participate in the digital economy in varying contexts. Ensuring that our AI works for the world is critical. And doing so means protecting human rights, protecting civil rights, doing testing, thinking about fairness, and thinking about how we frame up our responsible risk management practices. Those are all critical in order to keep us at the highest level of competition, because that's how Silicon Valley started: it started in the Bay because of the diverse array of people who were around there, and that helped fuel and funnel the digital economy and the technologies we know right now.
Do you have anything to add? I'll just take it back to the RMF and say that the ultimate goal, the bigger goal of the AI RMF, as I said, is to cultivate trust in technology, to operationalize values in technology, to design and build technologies that are reflective of the values of our society. And we don't want this to be an afterthought; we want proactive thinking about how to build technologies that are reflective of our values and work for all people, again, in an equitable, responsible, fair way, as part of design and development.
Thank you. Well, thanks to all of you for a great and fascinating conversation. With that, that's our time, so thanks for tuning in.