In most cases, when you ask it the same question over and over, you're going to get different responses. That's the nature of the technology. You also can't control for error at the moment. So if you're thinking about using generative AI in a newsroom process this high-risk, like writing fresh stories, I would recommend you think extremely carefully about whether it fits your risk tolerance. We know these systems make errors, and when they do, it risks our core value, which is credibility. If you attended the opening session, you heard about that as well: credibility is the currency of journalism. So using high-risk AI technologies is very much use at your own risk. For most newsrooms, I would recommend that you don't use them for full production purposes, but certainly for experimentation, you can do that.

Let's go over some AI use by large news organizations. This is a predictable form of AI: when we put something in, we know generally what is going to come out. I will point out something very important: at the end of these automated stories, we include a disclosure. AP is very upfront about disclosing exactly which vendor we're using to accomplish this and how the data was provided, and this product is available to AP members and customers.

A different use of AI, also dating to 2014, is comment moderation on digital platforms. This is a screenshot from the Coral Project.