Good afternoon. Whoa, good afternoon. Hey, there we go. Okay, y'all need a break? Fair enough. Fair enough. I appreciate the chance to come chat with you, and I'd love to have this be interactive to some extent, because I'm in the process of learning a lot about these issues as we go. There are some areas where I feel a lot more comfortable and others where I don't, so I'll just talk about a couple of them up front.

I'm a former prosecutor, and also a recovering lawyer, generally speaking, I guess you would say, so the fraud piece has been enormously interesting to me. I'm on the Judiciary Committee, and this is some of the stuff we've dealt with, though, as you can imagine, we're spending more time on Hunter Biden than on a lot of the issues I'd rather be focused on. But fraud is a key piece, and you've probably heard about a couple of examples that have popped up already. The deepfakes were mentioned in the intro. The fake Biden commercial: how many of you have heard about that? Oh, really? That few? Okay. Well, somebody ran an ad that used an AI-generated version of Joe Biden's voice to basically tell people, don't come out and vote, because it doesn't matter right now; you can wait until November.

Now, the interesting part about that was the framing: these primaries don't really matter, we know it's going to be Biden against Trump ultimately. But the scary thing for many of us was that there are primary races, and there will certainly be general election races, where the message of "don't come out and vote" could be very impactful. For example, in a turnout election. And this used to happen: geez, I guess the first time I ran was about 20 years ago, in 2002, and we actually had a fraud issue pop up, not at AI scale, but people saying "don't come out and vote." That was because we had a Democrat running against a Republican in a deep blue state, which is Maryland.
So turnout is a key factor, because of Black community turnout in particular. This is something I've been very interested in for some time, because it can have a major impact on races, especially the ones we've got coming up in November 2024: not only the presidential race, but some of the key battleground states and some of the main Senate races too.

The added level AI gives this is the artificial voice, which was certainly a big factor. The first time I heard it, I didn't know it wasn't Joe Biden's voice. It sounds just like him. The other thing is that they now have interactive technology for other types of fraud too. Not just election fraud, but "send us money" fraud that is much more effective, much more far-reaching, and therefore much more dangerous from an impact standpoint.

I'll give you an example. In this case they didn't actually cross the line and start committing fraud, because they weren't misusing the money. I heard this on NPR about three months ago. These guys were raising money, I think it was for firefighters, and what they did was create AI systems that would have interactive conversations with whoever picked up the phone and answered it. They were targeting seniors, because seniors are more likely to be vulnerable to these kinds of things. And since I'm a senior, I feel comfortable saying that. But the key part was, this was a machine, and they could run machines all at once. So instead of just one guy making fraud calls, they had a whole room that could generate these calls and make them nationwide. And the machine could interact: it would make jokes with the person, it would respond to what the target was saying, and the like. It was very effective. And as I said, it wasn't considered fraud, because they were actually taking the money and giving it to the firefighters.
Now, the twist was that they were only giving a small percentage of the money to the firefighters and keeping the rest to keep building out their AI operation so they could keep making these calls. But the bottom line is, you can easily see how this could be used for a purely fraudulent purpose, and the reach would be very broad. And the other piece is, you don't have to be in the United States to generate these calls. That makes it very difficult for law enforcement to reach, and even more difficult for the people who answer the phone to detect that these are fraudulent calls.

So those are just two quick examples of things that have been very concerning, in my view, about AI. On both of those, law enforcement is falling behind already; we are really struggling on the law enforcement front to figure out how to detect them. And more importantly, even before this, the main barrier protecting the public from those kinds of frauds was educating individuals: you know, if they call and ask you for money, make sure you can ask them questions and figure out who they are. But there are ways they've figured out now that really circumvent that and make it very difficult to teach people how to protect themselves.

Another big front on this issue, and one we did do something on in the Judiciary Committee, is copyright. The issue there is the training data they feed into the models. In order to teach an AI model, for example, to generate a song, they'll input, in some instances, hundreds or maybe thousands of different songs that have already been recorded by people, and they'll use those to train the model. The model can then generate new songs based on that. Now, here's the concern for the people who made the songs initially; let's say it's Bruce Springsteen or somebody.
Bruce Springsteen is concerned that... well, actually, Bruce is all right, because he's got millions and millions of dollars, I guess. But regular folks who are writers, people who make their living this way, are worried, because they're not necessarily getting paid for the use of their songs to train the models. And this is true for books too. I had a group of publishers come in. They train the models by putting in thousands of written works: books, articles, whatever they're training it to do, or maybe all of the above. The challenge now is figuring out whether the people who wrote the original materials retain their copyright, so that they should get compensation for the use of what they generated, the artistic output they created. And there's a back-and-forth going on about that. My personal view up front, just to be candid about it, is yes, they ought to get compensated for the use of what they generated.

But then there are logistical and technical challenges to that too. For example, let's say you've got an AI model, and this model generates a new song: "Love Song to Glenn Ivey," we'll call it, just for the sake of argument. And say it used, I don't know, 5,000 songs in order to train it to do that. Even if you agree that the people whose songs were used to train the model should be compensated, and that fair use doesn't settle the question, there's a real logistical challenge in going back and figuring out how to get payment to all 5,000 of them. Now, they already do a variation of that on a smaller scale for music copyright. You've heard scenarios where they've sampled music, pulled out a snippet of a song. With "Blurred Lines," I remember, Robin Thicke allegedly borrowed from Marvin Gaye, and they had to go to court about it. So that has happened already. But if you multiply that by a thousand, how easy is it to track?
And then there's the other question I posed when I was talking to some copyright experts in LA a couple of weeks ago: what do you do with the next level? So model one got trained on those thousands of songs, but then you have models one, two, three, four, and five, all of which were trained on different songs, and they then train off each other and create the 2.0 level. When that generates a song, how do you trace all the little snippets that went into it, so you can go back and give the writers their dollars? And then the other piece is, who's actually going to do that? Who's going to track it at that level, so you get your 0.05 cents out of the snippet of the song that was used to train model three, three layers ago?

So there's a lot of concern, as you can imagine, because once these models get trained, they can generate things pretty quickly. I just saw one with Taylor Swift. Oh, this was at one of the previous speeches. They said, write a song in the style of Taylor Swift, who just broke up with her boyfriend or something, and it generated the song. Have you guys seen ChatGPT do that sort of thing? Only one or two of you? Okay. Well, just to test it out... and you know, I thought I was the old fogey on the tech here.
So one of my kids was saying, "Well, Glenn," or "Dad" (they're going to start calling me Glenn soon, because my youngest is 23), he goes, you should check this out. The reason he brought it up was because I was looking for a speechwriter, and he said, you don't need a speechwriter, you can just use ChatGPT. Right. Cover your ears there. All right. So I said, let's take a look. So he pulls it up, and I said, do a speech in the language of Martin Luther King. (And while you're spinning it around... there you go. Oh, that's... yes, Robin Thicke, Pharrell Williams, yes. I will now pull up ChatGPT; we might as well test it live.) And it came up with the Martin Luther King speech, and I recognized snippets of it, because I've read a lot of his speeches and, frankly, used chunks of them in speeches and sermons. The "I have a dream" phrase was in there, for example.

So then I said, okay, what's it going to do if I say, generate a speech in the language of Glenn Ivey? It struggled with that a little bit more. It took almost two seconds instead of half a second or whatever for Dr. King, but it came up with a speech. And I promptly read it and said to my son, I'm much better than that. But it was kind of eerie how close it was. I could see snippets of things that I had said in speeches or op-eds or whatever, from who knows how long ago, that popped up in this version.

So from that standpoint, it's very powerful technology. That's number one. Number two, we just talked about the copyright issue. Number three is the jobs issue. Now, I know I was joking about the speechwriter, kind of, but before I got back into office there was a stretch where I was in private practice, and at the end I had my own law firm. And I would hire young students to do sort of first-draft research for me and things like that.
Because in the old days, when I started practicing law back in 1986 (and yes, I'm going back that far), that's how they trained you. They'd bring you into a litigation shop, and it's a pyramid: you're at the bottom, and let's say there are 20 new lawyers at the bottom and two or three partners at the top. You'd get, "Glenn, go do this research and write this memo for me," or, if you got lucky, "write the first draft of this motion." Now, you can use this stuff. You've got to check it, but as we've seen, it will write the first draft for you, and it'll do it in seconds. Sometimes it comes up with fake cases, so you still have to actually do the research. One guy, an actual lawyer, had ChatGPT generate a brief for him and filed it without reviewing it, because he assumed it would be accurate. He got blown up by the judge, because it had cited fake cases. So clearly there are challenges there, because unlike the copyright piece we talked about, where they're picking what's going to be used to train the model, here it just went out on the Internet, pulled stuff in, and grabbed troubling information, some of which was inaccurate.

And that goes to one other factor I wanted to chat with you about, and that's the bias issue in AI models. And I'll just check the time. I'm out of time. So much for the Q&A. Well, look, it's been great to have a chance to speak to you about this. These are really challenging issues. The one thing I will say before I wrap up is, if you're talking with your members of Congress about this issue: one of the things members of Congress are not good at is being humble. And we didn't get to this part, but there are some aspects of AI, in fact major aspects of AI, where we have no idea what we're talking about yet.
And in Congress, there's a movement away from relying on agencies. You just had somebody up here from an agency, where they have the expertise, where they have the scientists and the engineers and the like on staff. We're moving away from that and trying to pull control over those factors and those decisions back to Congress. Here's the problem: most of us are lawyers. There are very few engineers, very few scientists, and certainly very few computer scientists elected to Congress who actually know what this stuff is about.

And, oh, I forgot this part: even when we're not doing hearings on Hunter Biden, the actual hearings we do on substantive matters usually last two hours and frequently don't feature people providing any kind of new information that's useful or insightful on the issue at hand. So you've got members who don't know what they're talking about, witnesses who aren't speaking to the issue, and relatively small support staffs of our own to walk us through it, because of how we're trained to hire these days. It used to be legislative staff and lawyers; now it's communications folks, and God bless us for that. But for people who are actually going to be making decisions based on scientific issues like these, we're woefully understaffed.

So to the extent you have a chance to convey something to your member of Congress, your elected officials, please tell them to be careful about this. And you'll have to be careful, because you're speaking to people who aren't humble; you'll have to watch how you say it to them, so you don't just piss them off and have them shut down on you. But this is an important time. A very important time. So I apologize for running over, and I appreciate the chance to come and speak to you. Hopefully we can do a Q&A at a future date, because I'd love to hear your thoughts. But thanks so much. Have a great conference, and keep up the great work.