Well, we've mentioned the idea of red teaming several times here, and I'm not sure everybody knows what that is. But the idea is a lot of trial and error, figuring out in real time, hopefully preemptively, what the bugs are in the code and the algorithms. I think we need to figure out how to scale that up and find institutional structures and best practices to make it more of the norm. And that's part of that corporate social responsibility mindset, or RRI mindset, of baking in best practices by design. We've heard the terms used for years: privacy by design, security by design, safety by design. These can mean things, right, but they need to have some buy-in from a diverse group of players, and there need to be institutional structures. I think the good news is that today there are a thousand flowers blooming in terms of different frameworks, best practices, ethical guidelines. The bad news is, there are a thousand flowers blooming. At some point, all the flowers can become weeds and take over the garden, and you're like, why don't we get some consensus here?

In the old days, as I mentioned, when Stephen and I were cutting our teeth in video game regulatory policy, trying to fend off regulation and censorship, we got buy-in around a single entity and a single rating system. But that was more of a static kind of concept, a fixed set of content, a much easier thing to deal with. We thought it was quite challenging at the time, but it really wasn't nearly as challenging as what we face today. So how do we do real-time governance, real red teaming, and figure out how to do best practices by design, baking in ethics by design, and keeping humans in the loop? These are the two things you see all the governance frameworks converging around: best practices baked in by design, and humans in the loop.
You hear this again and again and again in all the different frameworks, but there are just so many of them, and everybody's got a slightly different plan. So this is the great challenge. I've already given away what I think needs to be done: I think you basically have a standing committee, with NIST and NTIA working together, to have real-time, rough-and-ready rules of the road and best practices being constantly made on the fly. It's just a constant iterative learning process in government. There's no end to it, no final end state; it's endless, and that needs to be encouraged. And you have to get around politics to get this done. The problem is that people on the far right and the far left are going to say it's rigged against us. We already see this today. There were recent Washington Post and New York Times stories about conservatives saying it's woke AI, it's being programmed against us, and then people on the left saying, no, it's biased, discriminatory, full of disinformation. Right? Something's got to give there. There's got to be some light in between, some effort to come together and say, let's try to get some best practices through and trust each other on this. This is why I don't think we'll get formal legislation. But I do believe there are still smart people in government and in industry, and other shared stakeholders, who can come together and find consensus best practices, to have a more socially responsible, innovative algorithmic ecosystem. That's going to be my hope. Maybe I'm a fool, but I'm going to go with that.

No, you're not.