Yeah. Hi, I'm Richard Whitt, GLIA Foundation, longtime technology policy attorney. I was at Twilio most recently; before that I spent 11 years at Google, and before that MCI Communications. I currently live in Silicon Valley and work with a lot of small startups there, a lot of them working on AI. So hopefully I bring a little bit of a perspective from the technology and market side, what I see happening there, and then bring that into some of the public policy debates we're having around AI today.

So Sam Altman, back in 2023, about, I guess, ten months ago now, at its Developers Conference, made the statement that AI is about empowerment and agency. If he is correct, my question here is: how do we get from here to there? Are there some important steps in between that so far have not been spelled out? And, for this crowd particularly, what are some of the policy implications we should be thinking about?

So obviously, last year was the year of generative AI, right, large language models. One thing I want to note is that there are other things out there percolating that may not yet have permeated into the Beltway. We've got small language models, which are basically geared toward the individual or small communities, as opposed to the large language models we hear so much about. We've got multimodal models, which go beyond language to things like imagery, which is very interesting. You've got large action models, which are what's going to power some of the robotics. You've got predictive AI, which is about analyzing trends and thinking about what might happen in the future. Perhaps the most interesting is cognitive AI, which is AI that doesn't just do what the LLM does, which is sort of fill in the blanks based on a large corpus of data; it actually reasons like a human brain. We're probably not there this year, maybe, but the advances are so profound that in the next several years we might actually see that happening sooner than we imagined.

We know the companies that are involved here. We've also been hearing, even this morning, about the global public policy response, whether it's the EU AI Act, the White House executive order, NIST and the AI Risk Management Framework, the OECD, Bletchley Park. They're all more or less taking the same approach, which is to say: let's figure out where those guardrails are, that phrase that just keeps popping up. I'm not sure who trademarked it, but they are a smart person. It's about transparency, explainability, where's the bias happening, and most importantly, where are those risky use cases? Right? We don't want AI taking over power plants. What are the risky use cases out there that we want to make sure are not part of the marketplace?

So that's where things have been. Here's my humble proposal for where things are going, and you're already seeing signs of it in the marketplace: the rise of the so-called personal digital agent. Again, Sam Altman, in an interview, talked about what he called personal agents. In November, Bill Gates wrote a whole piece about this. Sundar Pichai, from my former company Google, has talked about it, you know, acting as an agent. This notion of being an agent for our lives: not just answering questions, but increasingly taking actions and making decisions on our behalf.
The web platform companies, again, are out in front. Each one of them that already has its own LLM also has its own personal digital agent, with more on the way. But smaller startups, some of them that I'm working with in the Valley, as it turns out, also have some cool things happening. At this point, the public policy response is yet to be determined.

So as I mentioned, the policy agenda of today with AI is around what I call AI accountability: regulating their behaviors in the marketplace. As I say, the guardrails against harms; it's "make them hurt me less." That is sort of the overriding principle, and an extremely important one; I'm not trying to slight it. But I think there's a complementary agenda we should be keeping in mind as well, which is really building the merge lanes for the actual benefits. If we're all worried about the bad stuff, how do we make sure the policy is there, potentially, to help us build out AIs that actually work on our behalf?

And the test case, I'll suggest, is the notion of a personal digital agent, what I call the authentic agent. What I mean by that: if you go back to basic agency law, the notion of a principal and an agent, there are two things an agent does. One is it makes decisions and takes actions in the world. The other is it does so on behalf of somebody else. Now, of course, this is mostly in human relationships; there are lots of examples of that, professions out there where people do this for you. So anyway, those are the two prongs, what I call the two dimensions of agency.

As it turns out, OpenAI's paper back in December (I've got the link in the next slide, I think; I encourage you to take a look at it) talks about what they call agenticness. It's a very interesting piece. It talks about measuring the capability of these systems at essentially solving problems and achieving goals in environments that are increasingly complex, with limited supervision from human beings. They use the term agenticness to describe that. So, okay, I'll take up the challenge. That's the first prong, the first dimension, but nowhere do they really talk about the second dimension. So my, again, humble proposal is: let's talk about the other piece, which is the degree to which these agents are actually acting under authorization, representing me, my interests, and my ends, in terms of an actual relationship between me and these bots.

So the agentic dimension that OpenAI talks about in the paper: there are various elements of that in their various practices for keeping these systems safe, talking about default behaviors, how do you attribute actions. But they do say at one point, in effect: we're focusing on mitigating risks and allocating harms. Again, this sounds very much like the AI accountability agenda that we're all more or less talking about today. But this notion of, quote, user alignment, they're going to leave for another day. And, again, my point is: yes, okay, but I think we should not leave that for another day. That conversation should be happening right now.

So the agential dimension, the relationship idea, is around this "on behalf of." How do you determine whether a bot that's doing something in the world, on my behalf, is actually doing so with my authorization? There are a number of ideas here around me as the principal: what are my expectations? What's the base of knowledge the agent has about me and my intentions?
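To make that "on behalf of" question concrete, here is a minimal sketch, in Python, of what an explicit, inspectable delegation record between a principal and a personal agent might look like. Everything here, the DelegationGrant name, the fields, the permits check, is a hypothetical illustration under my own assumptions, not an existing standard or anyone's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a record of what a principal has explicitly
# authorized a personal digital agent to do on their behalf.
# None of these names come from an existing standard.

@dataclass(frozen=True)
class DelegationGrant:
    principal_id: str                # the human the agent acts for
    agent_id: str                    # the specific agent instance
    allowed_actions: frozenset[str]  # e.g. {"schedule_meeting"}
    spending_limit_usd: float = 0.0  # hard cap on financial actions
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=30)
    )
    revoked: bool = False

    def permits(self, action: str, cost_usd: float = 0.0) -> bool:
        """An action is authorized only if it is in scope, within budget,
        unexpired, and the grant has not been revoked."""
        return (
            not self.revoked
            and datetime.now(timezone.utc) < self.expires_at
            and action in self.allowed_actions
            and cost_usd <= self.spending_limit_usd
        )

grant = DelegationGrant(
    principal_id="alice",
    agent_id="pda-001",
    allowed_actions=frozenset({"schedule_meeting"}),
)
assert grant.permits("schedule_meeting")
assert not grant.permits("transfer_funds", cost_usd=500.0)
```

The design point is simply that authorization becomes something the principal can read, scope, and revoke, rather than an implicit side effect of a click-through.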
What does the consent look like? Are we going back to the GDPR days of "I want to see the free cat videos, click consent, move on"? That may work in that context; frankly, I don't think it does. But to the extent it does, it does not, in my view, work for AI, because it's not just about getting my personal data anymore. Now it's actually making decisions for me, and getting to know me at a very deep level. That requires, I would submit, a deeper relationship. So these are just some ideas about the different indicia of a relationship between me and these personal agents.

Here's one slide that tries to put it together. The personal digital agent is in the middle. On the right side is what OpenAI is talking about, right, its capabilities as an agent in the environment with other systems. The left side, the agential part, is what I'm actually most interested in: how do we ensure this is a human-centered approach, not an agent-based or AI-systems-based approach?

So how do we get there? Two quick ideas. This was touted as the AI interop conversation, and it is; I'll talk about it quickly in just a few slides here. Interoperability is something we're very familiar with. It goes back decades, through various kinds of industrial-era analogs, up to and including the Internet and the web. What I'm proposing here is that we think about these capabilities in the world, right, as they become more powerful: how do we make sure they're not stuck in vertical silos, that in fact these systems can interoperate, can talk to each other?

Digital interoperability is more or less the web. We've known about that for a while. There are aspects of the web that are still not completely interoperable, and there are many advocates out there saying it should be. Cory Doctorow just wrote a book on this within the last few months; he talks about the interoperators, the things they do that drive, you know, more competition, lower costs, more choice for end users, as these heterogeneous networks connect and talk to each other at certain levels.

In AI, though, we're moving it up a level. It's not just about a connection and sort of a one-time transmission. This is real-time systems having an actual conversation, right? My AI talking to your AI. Maybe we're having a conversation, maybe we're negotiating something, maybe I'm challenging something your AI system did, maybe we want to reach an agreement about something: any kind of interaction or transaction where these AIs are acting on our behalf. We need AI interop there. And that ecosystem brings in the competition, the innovation, the consumer choice that I think we're looking for increasingly as the AI market starts to gradually mature.

As it turns out, I asked ChatGPT this question back in December: is AI interoperability valuable? And apparently the answer is yes. Without it, you don't have integration, you're restricting collaboration, you're limiting the benefits, no connection in decision making, no innovation; so interoperability gets you all those things. At least that was its answer at the time.

Vertical interop and horizontal interop are the two ways of thinking about this that come from the world of competition law, and I'm most interested in vertical interop as a way to look at the situations where we want these agents to be talking with the platforms. I'll speed through these pretty quickly, because I know I've only got a few more minutes.
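To give a flavor of what that agent-to-agent interop might look like in practice, here is a minimal sketch of a shared message envelope two independent agents could exchange. The schema, the field names, and the "agent://" addressing are all invented for illustration; no existing protocol is implied. The point is only that once both sides agree on a common envelope, the models and platforms behind them can differ:

```python
import json
import uuid
from dataclasses import dataclass, asdict

# Hypothetical sketch of a neutral message envelope for agent-to-agent
# conversations: proposals, counters, challenges, agreements.

@dataclass
class AgentMessage:
    sender: str           # e.g. "agent://me/pda" (invented addressing)
    recipient: str        # e.g. "agent://vendor/sales-bot"
    intent: str           # "propose", "counter", "accept", "reject", "challenge"
    body: dict            # intent-specific payload
    conversation_id: str  # threads multi-turn negotiations together
    message_id: str = ""

    def to_wire(self) -> str:
        """Serialize to a neutral wire format (JSON here) so heterogeneous
        agents can parse it without sharing an implementation."""
        if not self.message_id:
            self.message_id = str(uuid.uuid4())
        return json.dumps(asdict(self))

# One turn of a negotiation: my agent proposes a meeting time to yours.
offer = AgentMessage(
    sender="agent://me/pda",
    recipient="agent://you/pda",
    intent="propose",
    body={"item": "meeting", "time": "2024-03-01T10:00Z"},
    conversation_id="negotiation-42",
)
print(offer.to_wire())
```

This is the "moving it up a level" idea in miniature: not a one-time transmission, but a typed, threaded exchange that either side's agent, running on any platform, can continue.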
Filling the AI interoperability gap: there are a lot of questions to ask here. The so-called LCIM model is one that was developed in the healthcare industry and is now used in the IoT space. It looks at the various elements, from the conceptual down to the technical, of how these systems connect. Again, it's very complicated, but we need that in order for the systems to actually work together. The technical questions include:
Can we use current standards, or do we need new ones? Are there open APIs that are an adequate substitute? On the governance side, who takes the lead here? We heard from Alan Davidson this morning; maybe this is something for the Department of Commerce, maybe NIST, maybe it's the IEEE or the IETF and others. And then on the policy side, is this an effective remedy for market concentration concerns? Right? If I want to move my bot to another LLM or another web platform, should I be able to do that, or am I stuck with sort of the one?

The other piece of this, the agential relationship, comes through trusted intermediation, if I want somebody there who is actually acting on my behalf. Bruce Schneier, who is a well-known cybersecurity expert, has written about "beware of the double agents": an agent that you think is acting for you, but really it's not; it's acting for an underlying platform, maybe actually for an advertiser or somebody else. We want to get away from that. The notion here is that we need trusted intermediaries to come in and take that role.

The IEEE has endorsed this in its approach going back to 2016. This is the global standards body for many of the software standards we see in the market today. They've endorsed the idea of having personal sovereignty over our digital agents, including having some sort of trusted service underlying all of that, so that the bot, plus you, plus this intermediary are acting in concert. The World Economic Forum has also endorsed this, in a report that came out a couple of years ago: the data stewards, data fiduciaries, and data collectives they talk about there. And this is one form of digital fiduciaries and data trusts, going back to the common law. Fiduciaries are doctors, lawyers; these are important in situations where somebody has expertise, is providing a service, and has asymmetric power over me. My humble view is we should have a similar thing in the data context, not as a requirement, but as a voluntary, almost professional class of fiduciaries who come in and help ordinary people with these obligations online.

We should want both of these to work together, right? Agency has both dimensions, and ideally, as you have more power, you have more responsibility, going back to one of my old childhood friends, Spider-Man. So this is the attempt to put that together in a slide. You've got Clippy at the bottom; for those of you old enough to remember Clippy, he was really sort of a minor annoyance for the most part. But he was extremely minor, right? He did nothing, and so we had no concerns about Clippy's agency in the world. As we move further up, it's a world of assistants, avatars, and eventually autonomous agents. They're getting more and more power, and that's fine, but that power should not come at the expense of the agency we have, the control we have. So again, moving from the user context, there may be a duty of care, maybe even a duty of loyalty beyond that: these agents working for us, and not for somebody else.

There are many viable use cases for this in terms of the authentic PDA; I won't go into them now, because time is short. This slide puts it all together in one layered stack, for people who are familiar with, or even fans of, modularity. The idea is that all the different layers in the middle have to work together, for the individual on the one side and the institution on the other side.
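That Clippy-to-autonomous-agent spectrum, paired with the "more power, more responsibility" idea, can be expressed as a simple lookup: as capability rises, the duty owed to the user rises with it. The levels and duty assignments below are purely illustrative, my own sketch of the talk's argument, not drawn from any statute or standard:

```python
from enum import IntEnum

# Hypothetical sketch pairing agent capability with the duty owed to the
# principal: as autonomy increases, so should the obligation.

class AgentLevel(IntEnum):
    CLIPPY = 0      # minor annoyance, no real agency
    ASSISTANT = 1   # answers questions on request
    AVATAR = 2      # represents the user in limited interactions
    AUTONOMOUS = 3  # makes decisions and takes actions unprompted

DUTY_BY_LEVEL = {
    AgentLevel.CLIPPY: None,                   # does nothing, owes nothing
    AgentLevel.ASSISTANT: "duty of care",      # act competently
    AgentLevel.AVATAR: "duty of care",
    AgentLevel.AUTONOMOUS: "duty of loyalty",  # act for the user, not the platform
}

def required_duty(level: AgentLevel) -> str | None:
    """Return the duty an agent at this capability level should owe its principal."""
    return DUTY_BY_LEVEL[level]

assert required_duty(AgentLevel.AUTONOMOUS) == "duty of loyalty"
```

The mapping is the whole argument in two lines: care for assistants, loyalty for agents that act on their own.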
And then finally, some next steps. We need open standards, perhaps through the trusted intermediaries; yes, and there's some work going on there. Actually, I'm helping out some folks in Silicon Valley who want to become part of a voluntary, opt-in approach. Corporate policies, maybe, but those probably only get you so far. Public policy: maybe some nudging by some of the regulators or policymakers in DC, including using things like procurement as a weapon, or a tool. And then a straw person around a code of rights and duties; I've got some ideas around that. So with that, thank you all very much.