So I want to chime in on that. Of course you need machines and you need humans. The thing is to figure out which things the machines, in this case the LLMs, can do best, and let them do that, and where humans are most effective and also least harmed by doing the relevant work. It's true, Eric, that you can't have perfect outcomes instantaneously. But you're going to have much better outcomes than we have now, more quickly, by testing again and again and finding the right balance. So of course you can, and companies already are, deploying LLMs to make decisions more quickly and to hire fewer moderators, so that they spend less money on them and also damage them less by exposing them to the terrible content.

There has to be more systematic oversight, including by outsiders. That is a piece of the puzzle that is almost never discussed, and it is absolutely vital. Companies are regulating human expression more than any government does, and more than any government ever has. Yes, that includes China. On a much, much greater scale, they have much more power over human communication than any state, than any government. And we have virtually no knowledge of how they are carrying that out, whether we're talking about humans doing the carrying out, or software, or LLMs. That's untenable. You should join me in being up at night worrying about this.

So a system for regular oversight of companies' speech governance, to include content moderation but not be limited to it, is vital. First of all, coming up with such a system would require the people who devise it, and the people who critique and improve it as part of that process, to make some of those decisions, some of those trade-offs, that you refer to. And secondly, as the system gets tested, one would find a better and better balance between the efforts of the machines and the efforts of humans.

Take just one example, quickly. We have no idea whether different groups of human beings have similar access to platforms. When men and women post similar content, when binary and non-binary people, or Hindus and Muslims in India, for example, post similar content, is it being taken down at the same rate? The Oversight Board has absolutely no capacity to test that, because it gets to look at one decision on one piece of content or one account at a time. So how is it possible that at a time when we claim to care about DEI, we're not even asking questions like that? So we need a set of standards for systematic oversight of content moderation, and of other practices by these platforms.
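The parity audit described above is statistically simple once the data exists: given matched samples of posts from two groups, compare their takedown rates with a two-proportion z-test. Here is a minimal sketch; the group labels and counts are hypothetical placeholders, not real platform data.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical audit numbers: takedowns among matched posts from two groups.
# These counts are illustrative placeholders, not real platform data.
takedowns_a, posts_a = 420, 10_000   # group A's matched posts
takedowns_b, posts_b = 560, 10_000   # group B's matched posts

p_a = takedowns_a / posts_a
p_b = takedowns_b / posts_b

# Pooled proportion under the null hypothesis of equal takedown rates.
p_pool = (takedowns_a + takedowns_b) / (posts_a + posts_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / posts_a + 1 / posts_b))

z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"takedown rate A: {p_a:.2%}, B: {p_b:.2%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

The hard part of such an audit is not the statistics but access: assembling comparable samples of content across groups, which is exactly what a single-case review body like the Oversight Board cannot do and what systematic outside oversight would require.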