Let me say that this is going to be an absolute pleasure, Navrina. Welcome.
Thank you so much, Renee.
So I'm going to jump right into it. So you're the CEO of Credo AI, and you're also its founder. If you were to describe yourself within the context of artificial intelligence, how would you define yourself?
Oh, absolutely. So we provide oversight and accountability of AI systems. And what that means is, as you can imagine, with artificial intelligence there are not only new kinds of risks, but also existing risks. So Credo AI helps you figure out where those risks are, and helps you mitigate them at scale.
So we've heard so much about risk, and we know with AI there are extraordinary rewards, extraordinary promise. But there are risks, and there are pitfalls. If you were to outline some of those risks and the kinds of responses that you deploy against them, what would you say those risks are? And what would you say we need to do to ensure we detect, mitigate, and, of course, manage those risks?
Absolutely. But before I answer that, just a show of hands: how many of you know AI? Heard of AI? That's good. How many of you have heard of responsible AI? Wow, that's amazing. How many of you are excited about generative AI? Okay, okay. How many of you have played around with GPT? Oh, wow, quite a few. Okay.

So the reason I asked is, it's interesting: for the past three, three and a half years I've spent time talking to audiences like this, and responsible AI is still an emerging field, very young. And there's a lot of action happening at a global scale. We have over 1,000 principles launched across the globe by different nations around how you define, how you use, how you procure artificial intelligence in a responsible manner. And just in the past, I would say, two weeks, everything we knew about responsible AI is getting disrupted at scale because of generative AI. So the reason I wanted to get a sense of who knows what: take a generative AI example, Copilot. It's an interesting tool that can be used by coders to basically write code. So if you're an engineer like myself — oh my God, how fun is that? You can go from zero to one like that. One to a hundred is a very different question. But with Copilot, you're seeing issues like copyright. As you can imagine, Copilot has been trained on a corpus of data which comes from GitHub, where different developers have contributed code. So if I'm a company using Copilot, now suddenly I have to start paying attention to the IP and copyright risks that I'm exposing my organization to.

Second example: we work a lot with financial services organizations. Right now, first and foremost, they're not using generative AI, because for high-impact, high-value use cases like fraud, anti-money-laundering, and risk scoring, traditional AI systems have been used — gradient boosting, linear regression, etc. And even in those traditional systems, there's a lot of concern around, for example, fairness. If you are using a risk-scoring system based on machine learning, you're looking at credit card transactions, and based on those transactions you might be looking at some proxy variables which might carry protected attributes. So in that case, is there a chance that your AI system could be providing different outcomes based on, say, your gender? You know, a couple of years ago we had the Apple Card example, which I'm sure many of you read about: it wasn't providing the right level of credit for women, because women generally haven't been the ones managing finances within a household, and as a result women didn't get the credit limits that men were getting. So there are issues of risk within fairness, within security, within sustainability, within privacy in these AI systems. At Credo AI we are tracking nine right now, but as you can imagine, with generative AI those risks are just increasing day in, day out.
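To make the fairness concern described here concrete, below is a minimal, purely illustrative sketch — synthetic data and a toy model, not Credo AI's tooling — of how one might check whether a risk-scoring model that leans on a gender-correlated proxy variable ends up approving the two groups at different rates:

```python
# Minimal sketch (illustrative only, not Credo AI's product): measuring whether a
# toy credit risk-scoring model produces different approval rates across a
# protected attribute such as gender, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: gender is the protected attribute; "spend_pattern" stands in
# for a proxy variable (e.g., categories of credit card transactions) that happens
# to be correlated with gender in this synthetic population.
n = 10_000
gender = rng.choice(["woman", "man"], size=n)
spend_pattern = rng.normal(loc=np.where(gender == "woman", -0.3, 0.3), scale=1.0)

# A toy "model": approve when a score built from the proxy exceeds a threshold.
score = 0.8 * spend_pattern + rng.normal(scale=0.5, size=n)
approved = score > 0.0

# Demographic parity difference: the gap in approval rates between groups.
rate_women = approved[gender == "woman"].mean()
rate_men = approved[gender == "man"].mean()
print(f"approval rate (women): {rate_women:.2%}")
print(f"approval rate (men):   {rate_men:.2%}")
print(f"demographic parity difference: {rate_men - rate_women:+.2%}")
```

In this sketch the model never sees gender directly, yet the proxy variable alone produces a measurable approval-rate gap — the pattern at issue in the Apple Card example.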
Thank you. And, you know, one of the big questions for me, as someone who's an AI ethicist and someone who teaches data, AI, and tech ethics, is this definition. How would you define responsible AI?
Yeah, great question. And this is something we've spent, you know, really three years thinking about: who do we want to be in this ecosystem? Because again, there are terms like responsible, trustworthy, ethical AI, and it was a very mindful choice. We decided not to use the word ethics, and the reason is that it's a very difficult and complex task to define what the ethical values of each and every individual, let alone organization, are. So we at Credo AI focus extensively on responsible use, responsible procurement, and responsible development of artificial intelligence. And the reason for that is that we are trying to keep accountability central to the work that we are doing. Who is going to be accountable, who is going to be responsible for the outcomes of these systems, for the procurement of these systems, for the design of these systems? So for us, that's a core focus at Credo AI.

It's interesting that you introduce the word accountability, because we speak so much about systems being accountable and transparent and explainable and fair and auditable, all of which create major challenges in the AI space. But there's something we seem not to be getting right. Because no matter how many frameworks there are, no matter how many policies there are and how much emphasis is placed on this work, things are not happening in the way that we hope they should happen — when it comes to reducing risk, when it comes to dealing with discrimination, when it comes to dealing with those historic data sets
that continue to present some unique challenges for the world of AI and, of course, data science. So in your interpretation, what are we not getting right? Or what do we need to do better when it comes to accountability and transparency, to being responsible and trustworthy within the space of AI?
Renee, you're being kind — you said we are not getting some things right. I think we're getting a lot of things wrong right now. So let me give you examples. First and foremost, as I mentioned, globally there are over 1,000 frameworks, and there's emerging regulation, with Europe leading the charge with the EU AI Act. There's a lot of work happening in the United States. I'm a member of the National AI Advisory Committee, which has been tasked to advise President Biden on upcoming regulations and policies. So there's an extensive amount of work happening within the United States, at the state and federal level as well.

So as you can imagine, with the increase in regulatory focus, with the consumer pressure that we are getting — and if you recall, just last week, or the week before last, I'm losing track in generative AI time, Google in one day lost 100 billion dollars in market cap because of one demo mistake with their Bard chatbot — when you have this consumer pressure, when you have this regulatory pressure, the question really becomes: what does good look like? And to answer your question, what we are not getting right right now is that we don't know what good looks like. And the reason we don't know what good looks like is because it's super contextual. Take the same AI application — and I think I've overused this example, so pardon me, those of you who've heard me speak previously: a facial recognition app that unlocks your phone, versus the same facial recognition giving you access to a building with proprietary information, versus the same facial recognition being used in surveillance. As you can imagine, all three use cases have very, very different risk profiles. One is low risk, and the other, surveillance, is super high risk. So when you think about that spectrum of risk, how do you put in the guardrails? What do you measure at the dataset, model, and system level, versus what kinds of policies do you have at the organizational level? All of that drastically changes. And so we at Credo AI are really focused on operationalizing not just the organizational governance but the testing of the systems, so that we can marry the two to bring accountability.
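As an illustration of that context dependence — a hypothetical sketch, not Credo AI's actual product or policy engine — the three facial recognition deployments mentioned above could be encoded as different risk tiers, each carrying different required guardrails:

```python
# Hypothetical sketch: the same facial recognition capability mapped to different
# risk tiers and required checks depending on the deployment context.
from dataclasses import dataclass

@dataclass
class Guardrails:
    risk_tier: str
    required_checks: list[str]

# Illustrative policy table keyed by use-case context (names are made up).
USE_CASE_POLICY = {
    "phone_unlock": Guardrails(
        risk_tier="low",
        required_checks=["false accept/reject rates measured on device"],
    ),
    "building_access": Guardrails(
        risk_tier="medium",
        required_checks=[
            "false accept/reject rates",
            "performance disparity across demographic groups",
            "access logging and human override",
        ],
    ),
    "public_surveillance": Guardrails(
        risk_tier="high",
        required_checks=[
            "legal basis and proportionality review",
            "performance disparity across demographic groups",
            "impact assessment signed off by an accountable owner",
            "redress pathway for impacted individuals",
        ],
    ),
}

def guardrails_for(use_case: str) -> Guardrails:
    """Look up the guardrails a given deployment context must satisfy."""
    return USE_CASE_POLICY[use_case]

print(guardrails_for("public_surveillance").risk_tier)  # -> "high"
```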
And just within the realm of governance — international governance and global policy — we've seen that the EU AI Act has been pretty specific when it comes to what's high risk, saying that facial recognition technology is definitely too high risk, or that deploying algorithms in human resource management or talent acquisition, or in anything that deals with education and training, healthcare, transportation, is all very risky. But somehow in the US we seem not quite able to get really specific about our risks. Because, as you said, what does good look like? What does responsible look like? There are always these questions, and I hear it a lot from my students: who's leading when it comes to governance and policy and compliance and oversight? So we have the US, we have the EU — what's the state of play there? And what can we learn from each other?
Yeah, absolutely. And, you know, I do want to give credit to the United States. I think we are not the regulators of the world; we want to enable innovation. We have great policies, but those policies are at different levels. At the federal level, as I mentioned — I was just talking to Nick, who was here — great work by OSTP in coming up with the Blueprint for an AI Bill of Rights. But as you know, that's not enforceable; those are just general principles and guidelines. But then you look at what New York State is doing, what New York City is doing: they've come up with specific regulations, for example, for hiring algorithms. If you're an enterprise in New York City buying third-party HR vendor tools, there is a requirement, through Local Law 144, to be able to provide bias audits by April 15 of this year. Obviously, there's been a lot of discussion and debate, and good lobbying and bad lobbying, around what an AEDT really means — that is, an automated employment decision tool. And I think there have been a lot of healthy discussions, but it's getting us to the point of: what does that context look like, and how do we come up with regulation?

Having said that, just with a show of hands before we rattle off a lot of different things: how many of you have heard of Europe's Artificial Intelligence Act? Oh, awesome. I love this audience. This is great. So if you think about the history of Europe, they have been great at regulating, with GDPR and other things. With the EU AI Act we are seeing a similar trend, especially for general-purpose artificial intelligence systems, where they are looking at risk profiling. Is that the United States' approach? No. However, as we are looking at policies and state and local regulation, we are coming to the realization that context, and in some cases risk-based profiles, might be useful. Another example is insurance: Colorado's SB 21-169 is being extensively debated right now with the National Association of Insurance Commissioners. And the whole idea behind SB 21-169 is really pushing for transparency and impact assessments of AI systems, especially when they are used for personal insurance — life insurance, property insurance, etc.

So to answer your question, I think there's good work happening among our allies and in the United States. And there's actually some great work happening in the Global South. We've been working with the Singapore government for a couple of years now on their Model AI Governance Framework; same thing with Canada's impact assessment. We are seeing this emergence of, you know, the need for ensuring AI is in service of humanity. And I think if we can keep that objective central to our conversations, a lot of good can happen.
Definitely. And within that context of AI for humanity, you've introduced impact assessments, which are critical, and of course the ability to audit these algorithms — because algorithms, you know, have a way in which they can misbehave. And we have seen that, particularly for communities of color, for indigenous communities, for women; we've seen ableism; we've seen all sorts of challenges when it comes to the ways in which we need to do this for humanity. Now, we know last year the White House released the Blueprint for an AI Bill of Rights, and everyone was excited about that. But then when we drill deep into it, we realize, you know, something is missing. And for a lot of ethicists, that missing part has been enforcement, as well as: how do companies really adopt this? How do you create your own frameworks around it? So within that realm of enforcement, when will we get to the place where there's a certain comfort level that not only is there oversight and compliance, and we're working on the regulations, but we can actually see some real-time enforcement?
You know, Renee, that's such a fantastic question, because of one of the things that I'm seeing in our work at Credo AI — and I do want to talk a little bit about it so that all of you get context. We are not a consulting organization; we actually provide you software, AI governance tools, that sit on top of your machine learning infrastructure. And we are able to bring in context, whether that context is coming from your company policies, or whether it is coming from regulations or standards like the NIST AI RMF, and we codify that to be able to test your technical systems — your data sets, your models — but also your processes. And the reason I'm giving that context is that one of the big things we are recognizing is that the way organizations, whether in the commercial sector or government, end up using Credo AI is by creating governance artifacts. These governance artifacts could be a transparency report; they could be an impact assessment; they could be model cards or AI solution cards. But one of the things that we are still trying to, I would say, do with our customer base is enabling impacted communities to get visibility into these governance artifacts, whenever the companies are ready to share, and to be able to incorporate that feedback. So I did want to highlight and underscore: making sure impacted communities have a pathway for redress, a pathway for feedback into these technical systems, is super critical.

Having said that, you know, I think we need to be doing more, and this goes back to your question around enforcement. By the way, for those who raised their hands on responsible AI and staying up to date: I'm sure you're seeing the work the FTC has been doing, the work the CFPB has been doing, and, you know, Commissioner Keith Sonderling has been very actively talking about AI algorithms and the disparate impact that these systems have on hiring and employment and disability. I think great work is happening right now on the enforcement side as well, to ensure that how US citizens get impacted by these systems is managed from day one. And that's why, by the way — I'm an advocate for technology; I am an engineer by training; I spent the past 20 years building AI products and tools at companies like Microsoft and Qualcomm — but I also understand, when there's a technology like AI with that scale of impact, and at the speed of innovation that's happening, how critical it is to start putting in place some guardrails, whether regulatory guardrails, self assessments, or some other proactive action through enforcement.
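As a purely hypothetical illustration of what "codifying context into a governance artifact" could look like — this is not Credo AI's actual schema or API — a transparency report or impact assessment might be represented as a structured record tying each policy or regulatory requirement to the evidence produced when the system was tested:

```python
# Hypothetical sketch of a governance artifact: which requirements a system was
# evaluated against, the evidence produced, and the current status of each check.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    source: str        # e.g. "company policy", "NYC Local Law 144", "NIST AI RMF"
    description: str
    evidence: str      # link to or summary of the test/assessment result
    status: str        # "pass", "fail", or "needs review"

@dataclass
class GovernanceArtifact:
    system_name: str
    artifact_type: str  # e.g. "transparency report", "impact assessment", "model card"
    requirements: list[Requirement] = field(default_factory=list)

# Example record (all values are made up for illustration).
report = GovernanceArtifact(
    system_name="resume-screening-model-v3",
    artifact_type="impact assessment",
    requirements=[
        Requirement(
            source="NYC Local Law 144",
            description="Annual bias audit of the automated employment decision tool",
            evidence="selection-rate ratios by sex and race/ethnicity, 2023 audit",
            status="needs review",
        ),
    ],
)
```

Artifacts like this are also what an impacted community or external reviewer could be given visibility into, per the redress and feedback pathway described above.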
Well, you know, as a criminologist, I must touch on the concepts of justice and fairness. And these are things that we are still struggling with within the realm of AI. So we know that diversity, equity, and inclusion are critical to the ways in which we design, develop, deploy our algorithms and, of course, adopt them. But we are also seeing this question about whether or not we can do innovation ethically, whether or not we can do innovation inclusively. What are your thoughts about inclusive innovation?
Super critical. I would say this is where, as an engineer, I've seen a change, a shift. In my early years working as an engineer, I was focused on mobile technologies. We were building cell-site modems — so if you have a cell phone, you're communicating with a base station, and I was the hardware engineer building those base station chips. Very, very technical. In that case, you didn't need a multistakeholder perspective; you needed an engineering perspective to make sure that the systems worked. However, now when you look at artificial intelligence systems, they are drastically different. And we are seeing that, by the way, at Credo AI in our sales cycle and in our implementation cycle: it is no longer just about a data scientist and a machine learning engineer showing up together, building a system, and putting it out into the world. What ends up happening then is you get racist bots, or bots that don't perform right. So the question now is: how do you bring in the policy perspective? I have our Director of Policy, Evi Fuelle, right there, who brings in great perspective from DC and from Brussels to really inform how we as technologists should be thinking about operationalizing all the great work that's happening in this ecosystem. We are working with compliance people, because, as you can imagine, the compliance jobs are getting much harder — you cannot just know the set of rules and regulations and try to be compliant with them, because the rules and regulations don't exist yet. So how do you still ensure that you are, you know, within the guardrails, trying to do your best? And we are seeing a lot of, I will say, interesting perspective from philosophers; we are seeing designers participate in AI. So I would say that in artificial intelligence we are truly seeing this coming together of multistakeholder perspectives, because it is not a technical problem anymore. It's a sociotechnical problem that we are trying to solve with artificial intelligence, and that needs those multiple stakeholders.
And given the fact that it is a sociotechnical experience and experiment that we are seeing, one of the most critical things would be the impact of AI on humanity and, of course, on the future of society. One of the other things about AI is that all we think we know about it, we may not actually know — and that's the big challenge. What do you see as the future of AI and the ways in which we need to deploy these trustworthy toolkits and responsible AI approaches? What does that future look like, and what does the future of responsible AI look like?
Oh, that's a heavy question. The reason it's a heavy question is because I can see, literally, two alternative realities we could go into — and yes, we are living in a simulation, if you will. The two alternative realities are: if we don't take action right now, if we do not think about guardrails, if we bring in the mindset that I know is called the Silicon Valley mindset of move fast and break things, there's a reality that we will hit very hard, very quickly, if we don't put guardrails around AI. And by the way, I'm based in the Bay Area, in Silicon Valley — I'm very proud of it. But one of the things that we are embracing is: move fast, but with intention. And the question then is, what does moving fast with intention look like? Moving fast with intention really looks like bringing in governance that can keep pace with the development.

And I keep bringing up generative AI because I think it has done a fantastic job of democratizing access to artificial intelligence. The very fact that my mom is on ChatGPT and, you know, using it to ask, "Who is Navrina Singh?" — and then it says, by the way, she died two years ago, and my mom's like, what? No, I just spoke to her yesterday. By the way, this really happened. So, you know, there's this concept of hallucination within these systems, and these systems are not always accurate. But the point is, if my mom, who's almost turning 80, is using ChatGPT, that really means you can see the power of this technology. And when you see that power, which of these two pathways we go down is up to us. And right now is the moment to bring governance and oversight into this.
Fantastic. So I think we may have time for a question or two.
Fantastic, great presentation. The company you're talking about is kind of doing auditing, reporting, looking into what algorithms are doing in these systems. Where does the accountability happen?
Yeah, great question. And I'm sorry, I'm going to explain something before I answer that. So, auditing — I'm very careful about the word auditing. We don't do auditing, because an audit generally happens against a certain set of standards. So right now, if any company says that they're auditing your AI, that's false; they can't audit your AI. That's one thing. Yes, they can review it, they can assess it, etc.

So now that I've explained that, let me explain where the accountability happens. The way most companies right now use our system is they will do these assessments based on their company policies, or they will do it if there is an existing law — like, for example, New York City Local Law 144 — or if there's an emerging standard like the NIST AI RMF; you can also start assessing readiness against the NIST AI RMF. Once they've done the assessments and reviews, which means you've tested your AI systems, your data sets, your models, and your organizational processes, the accountability happens when the final sign-off happens. So when these governance artifacts are created, one of the things we are doing is there's a review by the team, and there's an attestation within the company to say: hey, you know what, I as the chief data officer have signed off that this system is okay to be deployed in production, and within the context of risk, you know, we are willing to take on the liability of those risks. So there's this document that basically gets attested to — that's where the accountability comes in. Obviously, when auditing standards emerge, you can bring in a third-party auditor, or maybe your audit team separately, and in that case you can also do third-party assessments.
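A minimal, hypothetical sketch of that sign-off step — illustrative only, not Credo AI's actual workflow or data model — might look like an immutable attestation record naming the accountable person and the risks being accepted:

```python
# Hypothetical sketch: an attestation record in which a named, accountable person
# signs off that a reviewed system may go to production and accepts residual risks.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Attestation:
    system_name: str
    signed_by: str                  # the accountable individual
    role: str                       # e.g. "Chief Data Officer"
    signed_on: date
    accepted_risks: tuple[str, ...]
    approved_for_production: bool

# Example record (all values are made up for illustration).
attestation = Attestation(
    system_name="credit-risk-scoring-v2",
    signed_by="J. Doe",
    role="Chief Data Officer",
    signed_on=date(2023, 4, 1),
    accepted_risks=("residual fairness gap below agreed threshold after mitigation",),
    approved_for_production=True,
)
```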
Thank you. Do you have a question here?
Yes, I do. A question about your comment about moving fast with intention. What concerns me, generally, in this era of
belt tightening among the companies we all know and love, in terms of technical innovation, is that the teams that are getting hit the hardest are the policy teams and the ethical assessors. So I was wondering if that is being discussed at all, or anywhere, because I see that as a real red flag, and as moving backward, not forward, in terms of figuring out where we're going next, particularly with regulation on the horizon.

Great question. And I would say that for the companies that are letting go of their ethical and policy teams, you should see that as a red flag, and those are the brands that you should not be trusting. What we are seeing within our practice is that the companies that are what we call AI-first, ethics-forward companies are actually doubling down on this effort. So I'm sure you're thinking of examples of some of the known names that we've read about in the media. I won't name them here, but too bad for them — that just means that's not going to be a brand that you can trust in the future. What we are seeing is, you know, it's not just about winning anymore with AI; it's how you win with AI. And the companies that are putting a lot of focus on how they win — they're bringing in the policy focus, they're bringing in multiple stakeholders, they're really creating an enduring brand. So that as they bring in more AI, they are going to not only compete with that AI, but they're also going to retain their customers longer, retain their employees longer. They're going to be the enduring brands that emerge based on some of the decisions you're seeing right now.
Thank you so much. I think that's our time. Thank you for that fantastic question. Thank you.