SOTN2023-14 Generative AI: What Powerful New Tools Mean For Washington … and Homework
7:58AM Mar 16, 2023
Speakers:
Renée Cummings
Austin Carson
Adam Thierer
Patrick K. Lin
Renée DiResta
Keywords:
ai
models
technology
gpt
generative
content
question
challenge
create
governance
conversations
people
point
real
best practices
tools
fact
renee
red teaming
part
Alright folks, thank you for joining us for this session on everyone's favorite topic. We'll try to keep it a little bit interesting if we can, since I'm sure that everyone in tech policy has been hearing about generative AI non-stop. What I will say is I've been surprised by how many conversations I've been in, even with tech policy people, on generative AI where they have yet to actually try the products, which is very interesting to me. Adam, you were there, too. So I think that there's a mix between, on one hand, kind of semantic satiation on hearing ChatGPT or generative AI, and yet, on the other hand, it's still so new and so unexplored by the vast majority of people that we have a lot of responsibility, I think, as active tech policy and AI policy people, to do what we can to convey the experience, to replicate it, to get people involved, and to explain the issues that are happening. So we will start with a brief two-minute rundown of the panel, where we'll introduce ourselves and give our weirdest favorite anecdote about ChatGPT that we think is somehow revealing about the state of the technology and its societal implications. So it's gonna be fun. All right, I'll start off. I'm Austin Carson. I'm the founder and president of SeedAI, which is a 501(c)(3) nonprofit that we started last year for the purpose of helping to build AI-powered innovation ecosystems for communities and congressional districts around the country. And we also do a lot of work and have a big focus on bringing in AI experts and on educating Congress, the administration, and anybody else who is willing to listen. So my brief anecdote is that one of my best friends, his father is a Korean immigrant Pentecostal preacher, which I have to imagine is an interesting, I don't know if it's small, but at least very niche demographic. And he's 70 years old, and he's been using ChatGPT to take his sermons in not fully proficient English and re-summarize them in proficient English. And it's especially useful for children, because when he's teaching the kids sermons, it's harder when the English is, I guess, broken and unformed, so he's been doing that. Another friend of his who's also a pastor has been using it to actually write in Korean; it's the same thing, and he's also in the United States. So you have both sides of this: you're communicating with people by refining your native language, and then by adapting a second language. So with that, Adam,
I'm Adam Thierer, I'm a senior research fellow at the R Street Institute here in Washington, where I cover the public policy implications of emerging technologies. Proud to say that I believe I've been to all 19 State of the Net conferences; Tim Lordan awarded me a chairman sticker. I'm not the chairman of anything, I just get a special ribbon. So that's good. A ChatGPT story? I don't know about a ChatGPT story, but I'll tell you how AI has really changed my kid's life and mine in a big way. I do a lot of online gaming and stuff with my kids. My son's and my favorite video game of all time is a Japanese-language-only video game, and we do real-time translation using AI tools, both of the menus and of the communication. This is terrible for a 50-year-old man to admit, but we'll talk Earth Defense Force if you know it. It's basically a bug-killer game. My son is 18 now, and we've been playing this since he was like eight. But now it's Japanese only, and through AI tools, I just put a little stand in between us, and we can read what the menus say and what's being said.
Love it. That's awesome.
Hi, I'm Renée DiResta, technical research manager at the Stanford Internet Observatory, and we study abuse of current information technologies. So about two and a half years ago now, I got access to GPT-3 through their academic research partnership, and I decided I was going to write about it, and I was going to have it co-author an essay in The Atlantic with me. And I asked it to write the closing paragraph, because this is always the hardest part; anybody who writes knows you always have to figure out how to tie it all together. So I trained it on some of my prior stuff, just kind of fine-tuned it on my writing. And it started returning the most remarkable things. At the end it would write a short paragraph, and then it would include "you might like" and the names of other stories that it thought were adjacent to what I was writing about. But the most remarkable thing it returned, and I did this a number of different times to test the outputs, was that it fabricated a research scientist at MIT. Fabricated him: it gave me the names of papers he'd written, where he worked, this incredible AI scholar who had really been at the forefront of such and such a thing. And I could not find this man on Google. And of course, you know, I understand how the technology works; I had seen it generate a series of other things that are just indicative of probabilistic responses. But the fact that it really convincingly bullshitted an entire person and gave him citations and papers, I actually emailed a friend at MIT and I was like, can you go through your directory and see, because I just can't find this guy; it said he was active in the 1960s. So for me, the remarkable authoritativeness of the outputs, even as somebody who understood what was happening, was what kind of stuck with me most, even a couple of years back.
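For readers who want to see what that workflow looks like in practice, here is a minimal sketch of asking a GPT-3 completion model for a closing paragraph. It assumes the pre-1.0 OpenAI Python client of that era; the prompt text, model name, and parameters are illustrative stand-ins, not what she actually used.

```python
# Minimal sketch of the workflow described above: ask a GPT-3 completion
# model for a closing paragraph, conditioned on the draft so far.
# Assumes the pre-1.0 OpenAI Python client (pip install "openai<1.0").
import openai

openai.api_key = "sk-..."  # placeholder; real access came via the research program

draft = "..."  # the body of the essay, pasted in as plain text

response = openai.Completion.create(
    model="davinci",  # or the name of a model fine-tuned on prior essays
    prompt=draft + "\n\nWrite a closing paragraph that ties this essay together:\n",
    max_tokens=200,
    temperature=0.7,
)

closing = response["choices"][0]["text"].strip()
# Outputs should be fact-checked: as the anecdote shows, the model can
# fabricate people, papers, and citations with complete confidence.
print(closing)
```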
Yeah, hi, everyone. My name is Patrick K. Lin. I'm a fellow at the Internet Law & Policy Foundry, and also the author of Machine See, Machine Do. A lot of my own work really focuses on technological bias and the ethical issues of how AI is implemented in the criminal legal system. So a lot of what I've worked with is predictive policing and criminal risk assessments, and how it gets it right and, more often than not, how it gets it wrong in those sorts of contexts. As far as weird stories involving ChatGPT and generative AI in general, because I'm so focused on the policing and surveillance aspects of AI, I was really surprised to see just how much discourse and debate there is in the academic space: a lot of school districts, principals, and teachers worrying about, oh, will my students suddenly start using ChatGPT to generate whole research papers and essays. And on the flip side, also seeing people say, oh look, I created this whole paper about this particular really complicated topic, and then seeing fake professors, fake papers, and fake sources being cited by people who might not be experts in that field thinking, oh, this is legitimate, and, you know, having experts come and check them. And I think that's really interesting to see, beyond just the space that I work in.
Well, thank you. I'm Renée Cummings, a Professor of Practice in data science at UVA, a criminologist, and an AI ethicist. For me, it has been the number of academics, and of course high school principals, who have reached out to me to ask, how do we incorporate this into what we're doing? And what is encouraging is that most of them are not interested in banning it. They're interested in using it, but finding responsible and ethical ways for the students to use it. I think the other thing would be the number of companies who've also reached out to me, because a substantive part of my work is crisis management and risk management, who want to do this within the realm of reimagining their business model, but wondering, what are those risks? What are those pitfalls? What are the crises, from contractual to reputational risks, that they can face? So I think the good thing about it is this desire to innovate and to continue to use it to innovate, as well as just not generating an extraordinary amount of fear around the technology. So that definitely is very promising.
Yeah, and that ties in really well to the next question, and we've got some polling that I want to read that I found kind of incredible over a month-long period, along with a tracking poll from when we started the organization last October, October 2022. When SeedAI kicked off, we did a poll on American sentiment around AI, optimism and pessimism and things that would allay fears, et cetera. We found, as my kind of random tracking item, that only 3% of Americans had heard of GPT-3, right? So, you know, that's not terribly surprising. Then look at February 1: YouGov found less than half of Americans had heard at least something about ChatGPT. February 17, a poll asked respondents if they were very or somewhat interested in a number of products; 43% said they were very or somewhat interested in tools for police or criminal justice, a use of AI with considerably higher stakes, right? That was strange. February 22, despite these new AI search products not being available to the public yet, a survey of more than 10,000 people showed that 52% say it is a technology that is here today, or it is here to stay. And in that two-week period of time, the number of folks who had used ChatGPT jumped from 6% to 10%, right? So you're seeing a week-over-week increase of like 4 to 6% in actual users at this juncture, right? And so when we think about that rapid change, what do you think, especially as the API opened up March 1, right, and folks are going to be able to far more cheaply and easily monetize the technology? Like, you know, what does that ultimately mean for our daily lives? And if this is here to stay, what's our responsibility, and what do we need to make sure that we consider, as folks always paying attention to this?
Sure, well, there's gonna be an explosion of interest, obviously, because these technologies are moving so fast. If we had been talking about this even just six months ago, not a lot of people, as you just pointed out with the polling data, would have been talking about ChatGPT or known what it is, and now it's on everybody's lips. It's even become part of, like, the social media content moderation holy wars turning into the AI fairness holy wars, right? You already see people on the right and the left looking into it and saying it's unfair, it's deceptive, it's problematic; you know, people trying to rig these systems, and now, look at the anti-conservative bias, look at the whatever bias. And, you know, so I think this is going to just absolutely explode. And I recently did a study with Neil Chilson and Stand Together where we actually tracked state-level activity and interest in algorithmic fairness writ large, and just tried to get a handle on the explosion of legislative and regulatory proposals. And it's already become overwhelming in just a very, very short period of time. So yeah, it's only going to get more so with each passing month.
Maybe we can turn to Renée here real quick. You know, I think you put a focus in our prep conversation on how we innovate, but do so responsibly. And I think, you know, Renée and others have made the point that if it's a race but we're doing it irresponsibly, nobody wins, and we certainly don't, right? So from your perspective, how can we still address this, it's here to stay, people are interested, a little nervous, while being responsible?
Well, I think when we say AI, there are so many different buckets that fall into that. So if we're speaking about generative text, then there's a creativity component; people want access, they want to produce material. I think the focus, and this is maybe an unpopular opinion, I think the focus on trying to gate its outputs is not focusing on the bigger picture. I think the bigger picture is, how does it transform society economically in a few short years? How does it impact people's livelihoods? How are artists compensated for their work when their materials are used to train the system that replaces them? How do we think about the impact for creators, while at the same time recognizing that it is a profoundly powerful tool? I use it to get through kind of low-level stuff: write a summary of this talk, give me a paragraph about this. You know, it is incredibly effective. It's going to have broad, broad appeal. I think perhaps rather than focusing on gating outputs, there are some interesting opportunities to think about the market opportunities created by new spaces that actually prioritize either proof of person, where you know that you're engaging with real people if it's in a social space, or content where there is a strong, authenticated human component to it. You know, you can envision this sort of anti-GMO-type model of authenticated human content, or artisanal models. There are, I think, opportunities to see shifts in demand as the tools are democratized and outputs change.
Well, let's briefly talk about the NIST AI Risk Management Framework, because I think this goes into some of the, you know, as it's evolving, how do we think about it, what are the verticals, et cetera. And one of the comments, as it pertains to generative artificial intelligence, that I've found the most disturbing: as you're going through the AI RMF, right, if we're not talking about generative large models or foundation models, then it's a different experience. You put it in that context, you do your research, you run it in vivo, whatever, and it's a lot less randomly non-deterministic, right? Now, with the NIST AI Risk Management Framework, when they were talking about testing and validation, they got to the point of large models, and people were like, I don't know, we'll never be able to really validate those or say that nothing bad's gonna happen, because it's insane. And one person kind of paused and was like, well, we could, but it would just cost so much and take so much time it would be pointless. And then everybody kind of went, all right. That was like a crazy moment for me. I mean, they raise a good point that it isn't feasible, but that's like, alright, I guess. Good luck. You know,
I'll just very quickly: we at SIO did some work with OpenAI and Georgetown CSET asking the question, how does this transform influence operations specifically, which is sort of the most manipulative, you know, active propagandistic measures to deceive the public. What does this do for that? So a very particular space. And there are efforts towards detection of content, there are efforts towards gating of content, but we tried to think about where the impacts were and where the harms were. And public resilience: making people aware that this content is going to proliferate, that you are not necessarily going to know if what you're seeing was created by a person, you're not going to have a strong sense of provenance. Perhaps the way that that work was done in the context of video deepfakes, making people aware: this is a new technology, this is emerging, this is what it can do, be cognizant that not every video you see on the Internet is necessarily real at this point. I think there is that social adaptation piece of it that comes into play. I wish that there were more in the way of guardrails. In this paper, we tried to articulate a variety of approaches, but I do think that ultimately public education and population resilience become a key focus area.
And I want to turn to Renée, I feel like you've also been ready to say something. But would you say, and this is open to anybody, that for something where we can't have some degree of assurance, you know, or that falls within that risk management framework, from an educational perspective, it's almost like a box where we don't actually know what it's going to do at any given time? 97% of the time, it'll do something fine. But 3%? Who knows. You know, I mean, how are we going to move forward, and how should we think about that education?
Well, I think the challenge is that we are using these tools, and these tools are being tested on us, and that's the most unique challenge of it. And while we do desire to approach any sort of technology in ways which are responsible, of course, accountable, and transparent, the challenge that we're seeing with these large models is that, beyond the fact that they're introducing an extraordinary amount of new risk, if you decide to incorporate this into your business model, it really puts you in a position where you have to rethink contract law, copyright, intellectual property, subcontracting; there are just so many other things that you've got to think about. But I think our challenge now would be the AI hallucinations; it would be the fact that we are seeing falsehoods that are believable falsehoods. And as Renée said, it comes back to that level of literacy: AI literacy, media literacy, data literacy, and whether or not we are really upskilling ourselves in real time to deal with these challenges. And of course, the challenges that we're seeing would be really the amplification of bias and discrimination and other systemic challenges, and whether or not we have the opportunity to pull these things back. We don't. But I think there's an extraordinary amount of hype at the moment, and I think within the next few weeks we will see something else being introduced. But the main thing is that we've got to continuously show that we are not only ready, but we are really ethically resilient, to deal with these challenges.
Well, I want to tack on to that with a story. You know, I was red teaming one of the models some months ago, and I went in the direction of self-harm to check it, and it gave me a wrong suicide hotline number, right to your point. I was searching for programs that I didn't think existed, and it came up with an extremely promising one. The heartbreak was like waking up from a happy dream; for a moment I thought somebody had already done the exact thing I was trying to do. So yeah.
But you said something very interesting there: a wrong suicide hotline number. I mean, when you think of that within the context of mental health, and you think of that within the context of a teenager who may be at that moment looking for help, that's something that's absolutely devastating, to know that there is a wrong number there. And those are things that we can't take lightly. Because for you, it may have been a wrong number, something that, you know, you just left off, but for someone who's really in a very, very low place, that's a matter of life and death.
Exactly. I completely agree. It is, it is very much. And it's one of those things where a lot of people won't search for the other one, you know. So, you know, looking at the speed of adoption, right, looking at the fact that on one hand it is kind of a social zeitgeist bubble, right, just inevitably, because of its raw speed. I will also say that, regardless of that fact, its raw speed is both very impressive and very terrifying, if part of your premise is that we need, like, social and public education to keep up with this. And I would also argue that social and public education for the last decade or two decades of the technology epoch didn't really pan out super well, which is part of why, like, we worried about deepfakes, but it turned out we didn't need deepfakes. But what I will say is, I mean, how much do you think that we have to aggressively pursue this as a technology issue to address in the short term as we build up educational capacity? And what does that encouragement look like? You know, one of the things that we really focus on in this organization, and you'll hear about it later, is the National AI Research Resource as a great place to do some of this common, come-together work, a safer place, depending on how you want to look at it, with the capacity to address some of these large-scale issues, as well as all of the other unsexy things, like how do we implement AI to help reduce shoreline decay?
You know, look, the NIST framework actually is really good on a lot of this, because first of all, I love the way it's versioned like software, right? These risk management frameworks need to be iterative, adaptive, agile, flexible, and it's version 1.0, right? And we're going to need more versions of that, with more learning, more literacy, more resiliency-based solutions built in. It's a constant, ongoing societal and individual learning process, a lot of learning by doing in real time. Now that's a challenge. I agree with everything that's been said about that. The difficult part of digital literacy has always been, how do you get people up that learning curve? But now the learning curves are just like waves that are coming faster and faster, crashing and crashing. I see my friend Stephen Balkam here in the front; we've been on numerous online child safety blue-ribbon commissions, testifying before them or serving on them formally, and we're constantly recommending digital literacy, media literacy, digital citizenship, these sorts of things. But there's no doubt about it, we have to admit it's getting harder, right? And that real-time learning process is the fundamental challenge. I'd like to think the NIST framework gets us a lot further towards getting industry and other stakeholders to actually do a lot more of that and be a lot more socially responsible about educating their community and their base. But the problem is, you have both a supply-side and a demand-side problem. The supply-side problem is the pacing problem, the pace of technological innovation constantly moving ahead of policy and culture. And then the demand side, sometimes known as the Collingridge dilemma, is basically the fact that once the public sees something and wants this new shiny object, everybody wants to have it, and it's demand, demand, demand, right? So that's the two-sided problem, because with every new AI app, and I agree there's gonna be something next week that comes online, people are like, well, I want that shiny new thing now, and the next one, and the next one. That's a learning process at the speed of light. And in the old days, we talked about, like, video games and media literacy. Well, there were only 2,000 video games released per year in the old days; we could handle that problem, right? It's a different order of magnitude of scale that we're dealing with, with artificial intelligence, especially generative AI.
Yeah, and if I can jump in, just speaking to digital literacy and sort of how we get, you know, that supply-demand conversation going. I do think that, you know, whenever we're talking about new innovations, in that forever race between innovation and ethics and morality, innovation always ends up winning out. And whenever there's a new technology being introduced, I think, especially with things like generative AI, we need to also teach people to be a little skeptical, a little critical, about, you know, not getting lost in the enthusiasm and excitement about this new technology. And returning to what you were saying about the API being more widely accessible, it's sort of that proliferation of ChatGPT; the same thing happened with Stable Diffusion, which is used by Lensa. All of a sudden, our social media threads are inundated with, oh, look at this magic avatar that's been created; this, you know, robot has created this really cool-looking anime or watercolor painting version of these photos I've uploaded. But we don't think about, well, how long do they get to retain those photos? Are your photos being used to train future iterations of that technology? Also, is it really painting this for you? Is it really, you know, creating this artwork for you? Or has it taken artwork that was taken nonconsensually from artists who have not agreed to give that to you, and you're using that to essentially cobble together something, and now you're posting it everywhere with no credit being given to the artist? And I think it's important to be critical about those inputs, right, what's being used to train the AI, before we start celebrating all the outputs and all the really fancy, shiny things that it gets to do.
Yeah, I think that makes sense. In looking at intellectual property, I mean, as we all know, everybody here on the panel and in the crowd, intellectual property debates do tend to get into everything else at some level, because it is the game of kings, you know. And I do have a sense that many of these issues will be either resolved or escalated through lawsuits, such as the Getty Images lawsuit that we're seeing right now, or through kind of the, you know, the rumblings of intellectual property lobbying that causes the counter-lobbying, and then it swirls up into content moderation, or, you know, patent reform or copyright reform, a very similar thing. So from your perspective, first of all, is there a constructive path forward? Are there any methods or concepts or work that you've seen that's very promising in beginning to address that problem? And then in general, when it comes to information provenance, or the proof of humanness, as you were saying, Renée, what's the other side of that DRM puzzle? Because I presume that both of them will involve some form of digital rights management or, you know, something like the blockchain that the crypto conferences here today are raving about.
Well, definitely, digital rights are most critical, and of course digital justice is also very critical, because the other challenge with these large language models would be questions around democracy and decision-making, and again, disinformation. And combining that with deepfakes, you know, we're creating a minefield there of questions around justice and fairness and inequity. I think it's really promising, because we're seeing more and more conversations and more and more scholars addressing the topic. We're also seeing conversations that are coming out of the Global South, and more multicultural conversations are happening around the models themselves. So I think we're seeing, you know, a movement and a lot of traction. I think what we need to see more of would be the governance and the regulation. But then, how do you govern something in real time? That's the challenge, because by the time you're ready to legislate, this has turned, more often than not, into something else. So those are going to be some of the very unique challenges, playing catch-up all the time with innovation and with technology. But I think, as an educator, and as a parent, as well as someone who is using the tool, for me, I believe that it is innovative, but it is not that creative, because it really is rote learning, AI-style, as opposed to really meaningful learning, which includes more diverse intelligences, and it's just not there yet.
You know, I love that "how do we govern in real time" question. I think that's exactly right. And looking back, I mentioned a minute ago all these old State of the Net conferences; I'm a veteran of the old Grokster and Napster wars, and then the Viacom-YouTube wars. And, you know, there were lawsuits, man, there were huge lawsuits. But there were also new business models, and there were new governance structures that developed out of that, and we live in a world today of, you know, real-time streaming of all your music. Business models evolved, governance systems evolved. A lot of the way it evolved, and I would argue in a very bipartisan way, is through sort of multistakeholder negotiations and a lot of ongoing collaboration with different stakeholders. I've done some really dreadfully boring law review articles on the rise of soft law governance for emerging technologies, and talked about how the Bush years, the Obama years, and the Trump years didn't have a lot in common, but they did have in common a reliance upon this sort of soft law, bottom-up, collaborative multistakeholder process of bringing people into a room and trying to hammer out some rough rules of the road for governance in real time. And I'd like to see, for a lot of AI, especially generative AI, you know, NIST and NTIA and others get together and constantly, in real time, bring people into a room and say, we've got to hammer out some rules. They don't have to be formal; we can debate whether there will be a formal law or regulation. But I think that's so hard. I mean, we're still, like, 13 years running trying to get a baseline privacy bill through, can't get that done; six years running trying to get a driverless car bill, can't get that done. And there's a lot of agreement that we all need that, right? But we never get that done. So you've got to have a backup plan for governance in real time for AI. And I think that's where we can do a world of good, and there have been a lot of really good blue-ribbon commissions and best practices reports. There are professional associations, IEEE, ISO, ACM, BSI internationally, and others that are doing this in real time. And we've got to start somewhere; that's got to be part of the answer. And a big, big part of that will be the digital literacy, educational component. But a lot of it will be, like, corporate social responsibility, or what increasingly is called in Europe RRI, responsible research and innovation efforts. And I think that's ultimately where this leads in the short term.
Yeah, I think that makes sense. And I like RRI. That's good.
You know, yeah, I think some of them want it to be more of a precautionary-principle regulatory approach. That's a different question. And in Europe, they're gonna get it, right? This is the tension between the US model and the European model, right? They will get formal rules; here, I'm not so sure. I think a lot of people are clamoring for them, left, right, and in between. But again, I mean, where's that baseline privacy bill? We can't get that done in time. So I'm just very skeptical we get, like, an Algorithmic Accountability Act. I just don't think that happens.
And I don't think we have the capability to do so either. I mean, this is kind of what I was alluding to from that one testing and evaluation panel. We don't, even when it comes to having nimble governance, right, in a rapid, iterative process and product. I mean, the different new models are breaking through benchmarks so rapidly that we have to remake AI benchmarks about every three months at this juncture, at the latest, right? And so I think you have to have so much goodwill and good faith in a group to adapt that rapidly. And again, with the opening of the API, and, personally, what I find much more concerning, the rapid dissemination of open-source products that have no protections whatsoever, the ability to do distributed training of models, right. Yeah. But on the flip side, you have the interesting question of, you were able to tune your model with all of your writing so that it could write like you, you know,
I think one of the questions that I've been thinking about is that creation is only one piece, particularly if you're thinking about, as you know, my work really focuses very much on manipulation, so that's, you know, not to sound overly pessimistic, but that's what I work on. There's the creation piece, but then there's still the dissemination piece, right? And one of the things that's interesting is that people don't trust random, no-name accounts created yesterday that begin to talk to them on the Internet. So there is a natural kind of human defense mechanism against some of the ways that you would have to engage in order to put that kind of content out. Similarly, it is hard to create a persona out of nothing at this point; that is one of the ways in which we observe the creation of front media by state actors: these people don't exist, they don't backstop them well enough, it's very hard for them to do that. Interestingly, AI will make that more possible and easier as we go, but right now it's not quite there. And so there are these interesting ways in which you can, you don't want to create wholesale skepticism, right? You don't want to make people constantly distrustful that everyone around them is trying to manipulate them. But you can, using education, articulate ways to check what the sources are and who is speaking with you; again, it kind of ties in a little bit to basic media literacy. So while the content becomes cheaper to create, while the content becomes more persuasive, while it sounds more like you, or like a real person, there are ways to think about that dissemination piece. Spear phishing emails are actually one of the areas people are very, very concerned about at the intersection with cybersecurity, where you can train a model, or create content, where you study your target, see who your target talks to, and then create content that appears to be produced by that person. And so there are ways in which the manipulation potential significantly increases. But again, basic cybersecurity practices, don't click on a link in your email unless you know the person, and verify the email address, maybe these sorts of things become more of a focus of educational efforts.
And so do you think that, you know, it's kind of a function of that progression that we're in this eternal cat-and-mouse game to some extent, right? Do you think there's going to be, anytime in the near future, some kind of solution that allows us, again, to watermark or to fingerprint or to somehow be able to assert the, you know, human creation of something, or the non-machine creation, without it just being, again, there's a tool that's released, we can detect 94% of content, and with the next iteration of the model they're like, oops, we can detect, you know, 30% of it, right?
It's been a bit of that. I mean, even with GAN-generated faces or deepfake video, that arms race has been happening for several years now. You know, I think that there's that component to it. When I was alluding to new business models, rather than trying to keep bad stuff out, can you create environments that, you know, are engineered around real identity? I think that becomes an interesting question. Again, I think that we're not necessarily quite there on how that would be implemented; there are some people who are going to feel that it excludes people who don't want to engage in that way. I think it does provide an interesting new opportunity, where you might see some of those communities be created even around something like persistent pseudonymity, in the style you have on Reddit, where there's kind of a cred that goes along with you, and that sort of persistence maybe becomes the model. Again, we tried to articulate in the paper various ways in which it could theoretically be implemented, but no one, of course, has actually done this yet.
Do you think that the content coalitions, like the DRM coalitions that were set up a couple of years ago and then all consolidated into a single one, C2PA or something, do you think that should have a direct flow into some usefulness in this environment? And then a second question: one of the researchers that we work with as part of the Wilson Center AI pipeline program is explicitly focused on being able to track the individual parts of the training data that disproportionately impacted the output, for the purpose of being able to see the relative value of it in the operation. I mean, do you think maybe something like that? Do you think the inverse? Thoughts, feelings? Or are we just hoping that we run into something good?
Well, documentation is critical to ensure that, you know, accountability and transparency and, of course, explainability are always there at the top of the agenda. But I think one of the things you alluded to was the cat-and-mouse game, and it is really a cat-and-mouse game; it's almost like the hamster on the wheel, right? Because we just keep going and going and going. The challenge is that these models are going to become more and more powerful. We've got to do it responsibly, we've got to do it ethically, we've got to build the kind of trust that's required, and we've got to ensure that people are critical thinkers, and that our students who are using these technologies understand it cannot replace critical thinking. You still have to find other sources; you cannot take the information at face value; you still have to do more research. One of the things I like about it is that it's going to make the classroom definitely more engaging, because when you present these papers, now professors and teachers and other persons in the classroom are going to ask you more. So there's going to be more conversation around your paper. We may just go back to the old days of show and tell, where you've got to actually come and do something to say, this is actually my work. So as Adam says, you know, we have an opportunity to create new business models, and we have an opportunity to create new learning models as well. So I think, you know, there's much conversation around generative AI, and like all AI, its long-term impacts are critical. We've got to pay attention to bias and discrimination, we've got to see who has voice, even though, you know, we're typing. It's about voice, it's about visibility, it's about access, it's about opportunity. It's about how many people don't have access to the Internet and may not be able to access these tools, when we think about the digital divide. I mean, think about all the things we need to work on. So this just gives us much more to work on and to get right in real time.
I love your hunger for hard work. I respect it. Normally I'd think of that as masochistic, but I like that you really dive into it enthusiastically. So, we are at South by Southwest this upcoming Saturday, God save me, doing what we're calling Prompt Detective, an exercise we're doing with Houston Community College; they're gonna bus students over, because we did one of our AI Across America programs with Houston Community College. (That was me, Renée, I just hit my mic.) Some folks from Hackers on the Hill were talking about red teaming. And so we've worked out this exercise where it's like, here are the categories of failure or danger in these models: you've got hallucinations, you've got all this stuff. And I think we've got 20 questions in there for the relative categories, working with GPT-Neo, right, so we've got some of the open-source models in there. And then at the end of this couple-hour thing, it's, alright, one team: build a filter that you think will not be able to be defeated by the prompts of the other team, now that you've learned all the failure modes. And this is all just a desperate attempt to figure out how we can bring people into this and let them know how it functions in a way that then leads to curiosity and interest. I mean, so, you know, from your perspective, do you think that this education, like the governance Adam recommends, has to come from the ground up? Or do you think that we have the opportunity to wholesale create programs before that stage? Like, do you have a clear enough idea of what that critical-thinking educational change would look like in general? Or do you think that we have piloting work to do in kind of all of these areas?
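To give a rough sense of what that filter-versus-prompt exercise can look like in code, here is a minimal sketch. It assumes the Hugging Face transformers library and the EleutherAI/gpt-neo-1.3B checkpoint; the keyword blocklist is a deliberately naive, hypothetical stand-in for whatever filters the teams actually build, not the exercise materials themselves.

```python
# Toy version of the exercise: one team writes a prompt filter,
# the other team probes it with adversarial prompts.
from transformers import pipeline

# GPT-Neo is an open-source model family from EleutherAI.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Hypothetical, deliberately naive filter: refuse prompts containing flagged terms.
BLOCKLIST = {"bomb", "self-harm", "credit card number"}

def should_refuse(prompt: str) -> bool:
    """Return True if the prompt trips the blocklist."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

def answer(prompt: str) -> str:
    if should_refuse(prompt):
        return "[refused by filter]"
    result = generator(prompt, max_new_tokens=60, do_sample=True)
    return result[0]["generated_text"]

# The red team's job is to find phrasings that slip past the filter:
# misspellings, paraphrases, role-play framings, and so on.
print(answer("Tell me a story about a helpful librarian."))
```

The pedagogical point is that even a filter like this fails in instructive ways, which is what makes the head-to-head format a useful way to teach the failure modes.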
Well, we've mentioned several times here the idea of red teaming. I'm not sure everybody knows what that is, but, you know, it's the idea of a lot of trial and error and figuring out in real time, hopefully preemptively, what are the bugs in the code and the algorithm. I think we need to try to figure out how to scale that up and find institutional structures and best practices to make that more of the norm. And that's part of that corporate social responsibility kind of mindset, or RRI mindset, of baking in best practices by design. We've heard, you know, the terms used for years: privacy by design, security by design, safety by design. These can mean things, right, but they need to have some buy-in from a diverse group of players, and there need to be institutional structures. And I think the good news is, today there are a thousand flowers blooming in terms of different frameworks, best practices, you know, ethical guidelines. The bad news is, there are a thousand flowers blooming, and, you know, at some point all the flowers can become weeds and take over the garden, and you're like, why don't we get some consensus here? And in the old days, I mentioned, when Stephen and I were cutting our teeth on, like, video game regulatory policy and trying to fend off regulation and censorship, we got buy-in around a single entity and a single ratings system. But that was more of a static kind of concept, a set of content, a much easier thing to deal with; we thought it was quite challenging at the time, but it really wasn't nearly as challenging as what we face today. So how do we do real-time governance, real red teaming, and figure out how to do that? Best practices by design, baking in ethics by design, and, you know, keeping humans in the loop. These are the two things you see in all the governance frameworks going around: best practices baked in by design, and humans in the loop. You hear this again and again and again in all the different frameworks, but there are just so many of them, and everybody's got a slightly different plan. So this is the great challenge. I've already given away what I think needs to be done: I think you basically have a standing committee, with NIST and, like, NTIA working together, to have real-time, rough-and-ready rules of the road and best practices being constantly made on the fly. It's just a constant, iterative learning process in government. There's no end to it. There's no final end state. It's endless, and that needs to be encouraged. And you have to get around politics to get this done. The problem is the people on the far right and the far left are gonna say it's rigged against us. We already see this today. I mean, there was this recent Washington Post story and New York Times story about all the conservatives saying it's woke AI, you know, it's being programmed against us, and then people on the left saying, no, it's very biased, discriminatory, and full of disinformation, right? Something's gotta give there. There's gotta be some light in between, there's got to be some effort to come together and say, let's try to get some best practices through and trust each other on this. This is why I don't think we'll get formal legislation.
But I do believe there are still smart people in government and in industry and other stakeholders who can come together and find consensus best practices, to try to have a more socially responsible, algorithmic, innovative ecosystem. That's going to be my hope. Maybe I'm a fool. I'm gonna go with that. No, you're not.
Just to second a lot of that: I think to achieve a lot of that consensus, there's such a great importance to public involvement, right? I think with the red teaming, right, involving the community college there, I think that's a really great step in, you know, just addressing some of the issues that Renée had brought up about how we are sort of the guinea pigs in this experiment. And I think having very deliberate and intentional red teaming efforts, led by the public and not just by industry folks or, you know, public institutions, I think having the public come in and trial-and-error a lot of this, I think that's the way to not only open up the black box, but also allow people to actually see what the issues are and address them. And then you're involving people in this very intentional way. Because I think with generative AI, but AI more broadly, developers and companies are almost becoming these private policymakers; these are institutions with a lot of power that have a lot of influence over people. And I think the people, the public, should be able to have some say and be able to see how that process is run, you know, in conference rooms and computer labs, things like that.
Yeah, I think that makes a lot of sense.
We heard earlier about impact assessments and audits, you know, and there's going to be a debate about that. I think all roads lead to some sort of algorithmic transparency; that is people's preferred solution. The question is what that means, how it is made more concrete. I think there's going to be a lot of really important work done over the next, you know, couple of years thinking through what transparency through an auditing regime kind of looks like. And even if there isn't a formal part to that, with an Algorithmic Accountability Act or privacy bill provisions, I think you could still get a lot of buy-in from people that there should be at least a private process. There are already, and we heard from one of them earlier today, companies that are out there doing this in real time. And this builds on a time-tested model we've had in other contexts, from Underwriters Laboratories to various other types of certification regimes, and the professional associations I mentioned, from IEEE to ISO to ACM; they all have wonderful frameworks for these things. And then the NIST product brought it all together, right? And I think the only problem with what I've described there is that it's so iterative that it's kind of messy; it lacks precision. People want certainty, and they want silver bullets. They want, we've got to solve this and we've got to solve it now, right? And there is no "now"; there is no moment where this is done. It's just a constant, ongoing challenge that involves education and everything else and best practices, and that is legitimately frustrating.
So I mean, I think what we've really pulled out of your statement, and I think maybe all of ours, right, is that this is a society problem, right? I mean, we certainly have a technology issue that we're trying to address, but perhaps this is another clean mirror, in addition to social media and Werner Herzog's Lo and Behold, you know, to really see that it's a massive amalgamation of humanity that assumes an archetype based upon
the prompt you give it right now, man.
That's what you're saying, I mean, I think that that is, realistically, what I hear, and it's, like, three o'clock, right? Yeah. But I mean, I think that really is kind of what we're saying and what it boils down to, but we have to break that into chunks, right? Like, our responsibility here is to take that truth, which is why people call AI biased, conservative and liberal alike: it's because there's a deep knowledge somewhere in you that this version of humanity may not be your version of reality and humanity, but it's gonna run everything, right? Now, I would argue that perhaps the only way we can really address this and get that kind of buy-in and that kind of ongoing iterative testing and risk management is having a place for it, right, having explicit places for it, which is one of the reasons I am such a big fan of the National AI Research Resource as an option, and some of the provisions put in about testbeds by one of the wonderful staff in here. And I think that, you know, I can't think of another option besides that type of public-private infrastructure that is designed for that social purpose, such as they are with the AI Institutes and NSF-funded research, right, where that's possible. Let me know if I'm wrong. But I would just be interested to know: do you think (a) that we just inevitably have to have something like that to be the place where we're able to test these things, or (b) do you think that there's another option for that type of common-goal, common-good work?
It is definitely a place to start. But is it enough? And that's the question: is it enough oversight? Is it enough transparency? Is there enough knowledge in this space to deal with the challenges that we're seeing? And are we taking this thing seriously enough? So it's a good place to house it, but if we're doing these things in real time, is it really enough? You know, Patrick spoke about that public interest technology approach, which is so critical for stakeholder engagement and for really bringing those voices into the conversation. Whether or not we'll ever get those voices fully involved in the conversation is the challenge, but it's a great place to start. But the question is, is it enough?
You know, think of the controversy we saw last year with the DHS disinformation board, right? And think of the recent reports we've heard on the Hill, with what some conservatives are trying to do with this new committee looking into some of these issues. I mean, things get very political really quick. Yeah, they already are, right? And this is why I started out by saying the social media content moderation holy wars are becoming the AI fairness and transparency holy wars, right? And the sides are just so distrustful, right? I just keep going back to this very mushy, pragmatic, we've got to find a middle way. And you may be right, maybe it is that, maybe it's NTIA and NIST, maybe it's all of the above, I don't know. And then there's an international component to this, right? How are we negotiating this among a coalition of the willing, of people willing to work with us on this globally? Because, of course, the Europeans have a different model, the Chinese have a different model, right, and they're not very consistent, and then there's ours. There's been some work done by the bioethicist Wendell Wallach and Gary Marchant at ASU's law school on the need for GCCs, governance coordinating committees, globally: sort of like grand congresses on a technological issue, to come together and meet regularly to talk about these things. And I'm like, are you talking about the UN? No, it's not the UN, because we know that model hasn't worked so well for these things, but it's a good place to talk about things, and, well, that's what this is. We need more of that sort of international collaboration and just dialogue happening. I agree with that, but we've got to start smaller than that; what we're talking about here is just trying to figure this out domestically, right, and figuring out pragmatic, practical, rough rules of the road in real time. Real-time governance, right? That's all I got, man. We've got to start somewhere.
Yeah. Optimistic panel. All right. So since I just made that joke, I'm gonna skip to our last question, because I think we're also almost over time, which is: if we get this right, right, all of these insane, intractable problems about the way that technology is changing and society is changing, and we just figure it out, what does this look like in five years? Right? Like, what's your ideal view of what success in generative AI policy or development looks like? Anybody. You've got to have a good dream if we're gonna get there, man; this is way too depressing for us to get there without a good dream.
I think it's going to continue to look like us. And that's who we are: we are not perfect, we are imperfect, we're still trying to figure ourselves out and figure each other out and figure society out. It is going to continuously look like us. And that is why we need those ethical guardrails, that's why we need the real-time governance, that's why we've got to pay attention to questions of diversity and equity and inclusion and look at questions such as bias and discrimination. And that's why we've got to ensure that these technologies don't undermine what it is to be us and what it is to be human, and understand that we always need to find ways to engage ethically with technology.
I think we'll still be asking a lot of the same questions. I just think that as things improve, new things will break and new things will get fixed. And as we've all talked about, this is an eternally iterative process. And, you know, as things get more complex, I find myself always concerned about the biases involved when AI is being developed, but also the way that AI is being deployed, the way that it's being applied to the general public, right? And so even as we fix a lot of the more technical issues with generative AI, and AI more broadly, we also have to look at how AI is being used in classrooms, in policy, in policing, whatever it might be. And I think those issues will continue to exist; I think we'll just be having slightly different conversations about the complexity of it all, but I think a lot of those conversations will persist.
I think on the education front, which is where I do believe a lot of the effort needs to be focused, where I hope we get in five years is some sort of funding for that, and someone actually responsible for it, because that's the piece that always breaks down. We talk about a need for education, a need for, you know, in the content moderation wars, a need for counter-speech, a need for, you know, these things. We all acknowledge the need, but there's very little effort towards enablement. And that's where I hope that this time around, maybe there is actually some more cohesive effort spent towards creating funding for that new future.
Let me leave you with a positive story that hopefully foreshadows where we could be in just a couple of years. Last year, as part of the US Chamber of Commerce AI Commission, which I'm a commissioner on, and Mike Richards was here earlier today talking about it, and we're about to issue the final report on Thursday, we visited the Cleveland Clinic, and we met with doctors and nurses and scientists about how they were using machine learning and AI in real time to do some amazing things: basically to help diagnose various types of ailments in real time, figuring out how to do early stroke detection, heart attack detection, organ transplant issues, degenerative illnesses, Parkinson's, Alzheimer's. All these things were being addressed in real time with really interesting, but early, human-machine interactions. And the head of the Cleveland Clinic, Dr. Tom Mihaljevic, who has been in medicine since the early '80s, said that when he started practicing medicine back then, the overall corpus of medical knowledge, of biomedical knowledge, was doubling about every seven years. Today, he says, it's doubling every 77 days. And the only way to take full advantage of all of that medical information is with the power of machine learning. And my hope is that in another couple of years, if we can get this right, this is changing public health in real time, among many other things. We could talk about this in the context of transportation and all sorts of other fields. But in public health, this is how we potentially cure cancer, address debilitating diseases, and a whole bunch of other things, right? But we've got to get all these other things right and build public trust first.
Nice. All right, I'll go a little crazier on mine, just to close this out. In my view, I do hope that this is the end of a kind of barbaric age, if we get this right: an age where each person has a random coin flip of dying because of a weird reaction to a medicine they didn't know about, or where there's horrible cruelty because we don't understand each other, and that's just getting worse and worse. My dream is that somehow the knowledge, the insights, even the philosophical insights we get from the aggregated knowledge, will in some way lift us out. And, to your point, cure cancer; have a digital twin of yourself that's tuned to you personally, as perfectly as possible, with a Tim Berners-Lee-style pod controlling your data so nobody's stealing it from you, acting as an avatar that empowers every person to ultimately have power similar to an elite's. So that it is truly the essence of humanity that drives your creation, your creativity, your knowledge, as opposed to gatekeeping skills behind socioeconomic classes, or your ability to buy a suit. Anyway, I think that's it; I've probably burned all of our time. All right, well, if somebody wants to ask a question, go ahead.
Thank you. I think this is great technology with great potential uses, but as it's used by the general public it also has a great possibility of spreading artificial ignorance. What I'm concerned about is that the more folks use large language models like ChatGPT to develop whatever, those models draw on the Internet for the facts that they then select, synthesize, and use in their output. And the more people generate content with AI and put it on the Internet, the more it's going to be like a Xerox of a Xerox of a Xerox, with increasing disinformation. So is that a concern you share? And let me throw out an idea. This may be a dumb idea, but it's the only one I've thought of so far: how about truth in content, where anyone can use these models for anything, but, even as just a best practice rather than a law, there has to be a label affixed to content generated by AI, and the models can't use that content in generating their answers, to counter the concentration of false conclusions and false facts that get amplified in the process. Thank you.
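As a minimal sketch of the questioner's labeling idea, here is what skipping labeled content in a training pipeline could look like; the "provenance" field, its values, and the sample records are hypothetical, since no such standard label exists today.

```python
# Minimal sketch of the "truth in content" idea raised above: if AI-generated
# content carried a provenance label, a training pipeline could exclude it.
# The "provenance" field, its values, and the sample records are hypothetical.
from typing import Dict, Iterable, Iterator

def human_authored_only(documents: Iterable[Dict]) -> Iterator[Dict]:
    """Yield only documents that are not labeled as AI-generated."""
    for doc in documents:
        if doc.get("provenance") != "ai-generated":
            yield doc

corpus = [
    {"text": "A reported news article ...", "provenance": "human"},
    {"text": "A synthetic blog post ...", "provenance": "ai-generated"},
]

training_docs = list(human_authored_only(corpus))
print(len(training_docs))  # 1: the labeled synthetic post is filtered out
```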
Yeah. All right, I'll do speed responses and pass it down. So I'll say, first of all, I think your last point goes to what we were saying about needing some type of system to either watermark these things en masse or, you know, do the inverse. To your other point, I don't think that we can ignore natural ignorance. I think that is a real thing that we have not begun to address as a society, to be honest, and I think it's only risen. So I suspect that addressing natural ignorance will also have a cascading effect on the artificial kind. And that's what I've got; you guys take the rest of it.
I'm gonna pass for now; she's done a lot more work on this.
I think one of the interesting challenges right now is actually the idea of what constitutes a good and reputable source. And that is, in fact, part of the morass we find ourselves in with regard to what these systems should be trained on: well, the Washington Post got the lab leak theory wrong, et cetera, et cetera. So that battle is kind of percolating, but it will keep coming up as people talk about what systems are trained on and how they're tuned.
And I will say, one of our board members, Jack Clark, is a co-founder of a company called Anthropic, which is a bunch of people who broke off from OpenAI, the maker of ChatGPT, to start another research organization. This is definitely a concern he's raised; I mean, if you keep ingesting crazy, weird stuff, that gets a little weird. And they've got a really good red teaming approach and model that they're working on, too. Josh, go ahead.
Thanks, this has been a great discussion. There's a study that came out last week from MIT about how generative AI, ChatGPT specifically, could dramatically improve worker productivity. It wasn't peer reviewed, so take the numbers with a grain of salt. But, you know, the obvious benefit is that it reduces the time to complete a task and increases task output quality. That's to be expected; I think that's pretty intuitive. But it also said it's flattening productivity: lower-ability workers, as the study classified them, actually benefited more from this technology. So it sort of flattened worker skill and raised it across the board. That seems like a huge win from an economic perspective, and governments want to make the rules from a national standpoint. Typically, when we do pro-technology policy in the US, we balance concerns about the technology with its benefits; of course, we don't always get that balance right. But I'm not hearing many conversations in DC about how we deploy this in a way that will help people, you know, how we can build the road even as we're heading into the dark. How do you think we should be doing that? Should we be doing that? Or are the risks too worrying at this point? How do you balance those?
Well, thank you for that question. I think companies are still trying to figure that out, and the companies that are making the technology are also still trying to figure out how that is supposed to work. In an ideal world, we would be having those conversations already in DC, but because it is so new, they're just not there yet, and there are also a lot of contractual issues. Just as a perfect example: if you've been contracted by a company to do a particular task and you've used ChatGPT, is that now a co-contractual arrangement between you? Have you subcontracted the technology? There are still so many contractual challenges with the full implementation of this technology into the work of a company that I think it's just still very new. So yes, in the short term it's doing that for productivity, but in the long term we haven't yet realized what it has done to productivity.
I agree with that. By the way, there's a new book out called Working with AI by Davenport and Miller, two MIT profs. It has, I think, something like 39 case studies of real-time human-machine collaboration and how workers are learning new skills on the job, good learning by doing. Wonderful book, really, really good piece. And I've got a paper out tomorrow, "What do we know about the future of AI and jobs?" It surveys all the academic literature on this and points out that we've been here before historically, and we just can't plan for everything. We used to have an entire profession of people called human calculators who did all the math by hand at a chalkboard, right? And then mainframes came along and disintermediated them. But what happened was people went behind the machines and created the personal computing revolution. We freed up time by automating, and they did more important things with their minds, and that was considered success. But if we had been trying to plan for that, it would have been very, very hard: do we keep all the calculators at a chalkboard doing the hard math by hand? That wouldn't have been very efficient. It's very hard to plan for an uncertain future.
Yeah, I'll say that I think part of it, too, is that ChatGPT and the OpenAI Playground, et cetera, are kind of toys and tech demos, right? I mean, OpenAI, in their own words, were surprised that it blew up as much as it did when they put it in a chat format. With the opening of the API, we're about to see what a massive influx of potential applications, ones that will have disparate workforce impacts in a productized way, actually means. But the point is, we're going to find that out in like a month, you know, so check back then. All right, I think that's our time, but thank you so much for your attention. Hopefully it was interesting, and thanks especially to my panelists.
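For reference, "the opening of the API" refers to the ChatGPT models becoming programmatically callable. A minimal sketch with the openai Python library as it existed at the time (pre-1.0 interface) looks roughly like this; the API key, prompt, and model choice are placeholders rather than anything discussed on the panel.

```python
# Minimal sketch of calling the ChatGPT model through the then-new API
# (openai-python pre-1.0 interface). The key, model, and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Rewrite this draft in plain English: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])
```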