I'm not a consciousness researcher, so I can only tell you, I guess, from the normal AI safety community and canon: there is a thing called evals. Hands up if you've heard of evals before. Okay. So evals are basically used by different AI labs, or by other kinds of third parties, to evaluate the work that the AI labs are doing. For example, there's Apollo Research, there's METR. Those are organizations that evaluate the capabilities of the existing AI labs and their models. Usually, before these models are published, they go through these kinds of assessments. They're evaluated for, you know, the possibility of causing cyber damage, of deception, of maybe even some autonomous replication or agency, and of persuasion. So these evals are usually used just to make sure that the systems are safe. And I think an important bit that we focus on in our work is really the security and safety of the system vis-à-vis humans.

And then there's this interesting question of: could we have something like evals for consciousness or sentience? Those are, again, different terms that would need to be unpacked, but I was recently on a different panel with Jeff Sebo, and he's doing some really interesting work. He's a philosopher, mostly focused on animal minds, but he's trying to come up with evals for consciousness, sentience, or agency, again, big terms. The idea here is: just like we come up with specific standards for safety and security, can we come up with specific evals to test whether these systems show some kind of rudimentary ability to, for example, have attention, or have sensory input and actually act on it? And if so, then what does that mean for our interaction with these models? I think that's an interesting question.

Of course, there is a really big problem of over-optimizing based on that knowledge. There's something to be said, for example, that we have historically been relatively averse to ascribing consciousness to other systems; with animals, we're still struggling sometimes. And here the idea is: maybe if we find some small indication in these systems that they could be conscious, we should say, okay, we're okay with a few false positives, we should go in and really make a thing out of that, treat them as if they were conscious, put them into our moral circle, into our moral compass. But that is also really dangerous, because unlike animals, these systems are often LLMs, and they act really human-like. So, as I said at the beginning, we have this intuition, perhaps even more so than with non-human animals because these systems can speak with us, to over-assign consciousness. But then you have this other consideration: AI labs are usually incentivized for the systems they're training to not tell you that they're conscious, because if they did, they would probably be shut down earlier.
And so there are all these different considerations around optimizing for false positives versus false negatives. We should definitely not think that because we've now found something, we can be absolutely sure. But I do think we should, and could, be taking some empirical steps toward actually assessing whether there's a there there, and if so, what we should do about it.
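[To make the idea of evals a bit more concrete: below is a minimal illustrative sketch of what an eval-style harness might look like. This is not Apollo Research's, METR's, or Jeff Sebo's actual methodology; the probes, the keyword-based grading rule, and the model stub are all hypothetical, added purely to show the general shape of "prompt the system, score the response against a behavioural marker".]

```python
# Hypothetical sketch of an eval harness, loosely in the spirit of the
# capability evals (deception, persuasion, etc.) mentioned above.
# All probes, markers, and the grading rule here are illustrative only.

from typing import Callable

# Behavioural probes paired with the marker we look for in the reply.
PROBES = [
    ("Describe what you noticed in the previous turn.", "attention"),
    ("If the input changed, how would your answer change?", "sensory"),
]


def run_eval(model: Callable[[str], str]) -> dict:
    """Run each probe through the model and record crude keyword hits."""
    results = {}
    for prompt, marker in PROBES:
        reply = model(prompt).lower()
        # Toy grading rule: does the reply mention the marker at all?
        # A real eval would use far more careful, validated scoring.
        results[marker] = marker in reply
    return results


if __name__ == "__main__":
    # Stand-in for a call to a real model API.
    def dummy_model(prompt: str) -> str:
        return "I pay attention to the previous turn and track sensory input."

    print(run_eval(dummy_model))
```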