Does this lead to policy priorities? I think there are a few that are going to quickly come to the fore as our regulations become more specific, all serving the overarching goal of more trustworthy and transparent AI. We've recognized that non-security issues are critical to that trustworthy and transparent deployment.

So just as, over the past 10 years or so, we've recognized that security reporting and information sharing are good for the ecosystem, which is why we've provided them with legal protections, we need to recognize the same thing is necessary for the non-security side of the house when it comes to AI. That includes, first, getting AI operators similar protections to share threats, techniques, algorithms, and vulnerabilities for non-security problems.

Second, not criminalizing ethical AI hackers, right? That is, individuals who are investigating AI for non-security reasons. There's currently a petition before the Copyright Office to do this under Section 1201.

And then lastly, clarifying security versus non-security legal obligations. Nomenclature ends up being important in this realm. A lot of people refer to these things just generally as vulnerabilities, but we have a lot of laws right now that place legal obligations around security vulnerabilities and are not really intended to sweep in non-security flaws. So that distinction is important.