Yeah. Okay. So I can say a couple of things about that. One is that I have my own framework for how I'm looking at it. As I've been thinking about this issue, I've gone through three stages of understanding. The first stage is pretty simple: AI is a powerful tool, it can be used for good, it can be used for evil, and we're just finding out what the contours of that are. So it is a candidate for regulation, there's no doubt about that, and we need to figure out how to do it. It's difficult.

The second level of understanding is that we are grappling with a lot of regulatory issues, mostly involving enormous, billion-plus-user, multinational, data-driven businesses that are, in many of their own words, impossible to regulate. When you really think about some of the harms that are occurring and what you might try to do to prevent them, a lot of times the industry comes back to us and says, well, we'd love to do that if we could, but we're really dealing with extraordinarily complex, huge businesses that are capable of great harm. But it occurs to me that AI is a tool we might be able to use to monitor, say, compliance or content moderation.

And then finally, I think AI itself is one of those difficult regulatory challenges, and it may well be that adversarial AI is how we go about that, meaning if you're worried about a bot, maybe you need another bot to watch that bot, and yet another one to watch both of them. I don't really know if this is where we're going, but this is how I'm looking at the problem.

Then, with respect to the framework that NIST put out and the bill of rights that the Office of Science and Technology Policy put out, from my perspective I'm seeing these as proto-legislation, and I'm not even sure that's a legitimate way to describe it. But with tech legislation, things happen so fast, and it's so likely that things will happen that nobody anticipated. To make a bill into a law that is effective and can be implemented, you have to be very, very specific. But when you're very, very specific with technology, it keeps changing, and so by the time you get the law on the books and everything settled, the technology is different, right?

So I've always advocated what I think of as a catch-all, and a lot of times that takes the form of a duty of care. It's like, whatever else you do, try not to kill your customers. Right? It should be obvious. But I think when there's a lot of money on the table, sometimes we see that businesses realize that they're accidentally doing harm, but it's actually very profitable, so maybe they're going to look the other way. A lot of people have a real problem with a duty of care because it's very vague, and there are two sides to that: it's hard to comply with because you don't really know what it is, and it's hard to litigate because you really don't know what it is. But I see this sort of thing as a necessity for situations where something really, obviously bad happens. You don't stumble into this kind of thing by mistake, like "we had no idea this was a problem." No, when you make a conscious decision that you don't care about your users, I think that should be against the law.