Right, so the theoretical framework we've been working on, for about 12 years now as I said, lends itself to this question of AI because it's so buildable; it's an engineering approach, an engineering theory. And it starts from a point that is, in some ways, almost exactly the opposite of Nagel's idea, which you mentioned, that consciousness is "what it is like" to be something. The theory takes issue with Nagel, in fact, and says: from the start, everything you think you know about yourself derives, ultimately, from information in the brain. If you didn't have the information, you wouldn't be able to have the thought, you wouldn't be able to craft the phrase, you wouldn't be able to utter the statement. Everything has to at least pass through the phase of being encoded in the form of information. So information becomes paramount, and not just any information: information about the self. When you ask people about their consciousness and their inner feelings, you're asking them to introspect, meaning their higher cognitive networks are accessing deeper models built in the brain, and then they're asked to come up with answers about themselves. They're reporting self information, what we call self models. For example, the brain has something called a body schema, a self model that describes the shape of your body. If you close your eyes, you still know the size and shape of your body and its jointed nature. And we know there's a body schema because, after an amputation, the schema can anomalously continue to construct a model of the missing body part; that's the phantom limb. So the brain builds a model of its own body. But the brain also builds a model of its own internal processes, and that is what we think is going on with consciousness. When you ask anyone, or ask yourself, about your own consciousness, your own feeling, your own experience, what you're doing is talking about yourself, drawing on information that the brain has constructed to describe itself to itself. And we know why that should happen, why systems build self models: there's a very old principle in engineering, discovered more than 100 years ago, that a control system works better if it has a working model, or simulation, of the thing it controls. That is essentially universally true. If you want to move your body, you had better have a good body schema, a model of your body constructed by your brain. And if your brain is going to control itself, control its own thoughts and its own attention on the world, then it had better have a good working model of itself. So this is not some fluffy philosophical thing for armchair thinkers; it's really important for the functioning of the machine. No wonder evolution selected for it. No wonder survival of the fittest gave brains this amazing ability to build little, almost cartoonish descriptions of themselves and their own inner workings. And that is what we call consciousness. So this is something that can be built, it can be built into machines, and it's something people all over the world are now beginning to realize.
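[To make the engineering principle mentioned above concrete, here is a minimal, hypothetical Python sketch (not from the conversation; all names and parameters are illustrative). A controller that keeps an internal model of the system it is steering settles on its target, while one that reacts only to delayed sensory feedback oscillates and accumulates more error.]

```python
# Hypothetical sketch of the control-engineering principle: a controller
# with an internal model of what it controls outperforms one that reacts
# only to raw, delayed feedback.

def simulate(use_internal_model, steps=60, delay=3, gain=0.4, target=10.0):
    x = 0.0                      # true state of the controlled system
    history = [x] * delay        # sensory feedback arrives 'delay' steps late
    model_x = 0.0                # controller's internal model of the state
    errors = []
    for _ in range(steps):
        delayed_obs = history[-delay]
        # Act on the internal prediction if we have one, else on stale feedback.
        estimate = model_x if use_internal_model else delayed_obs
        u = gain * (target - estimate)
        model_x += u             # keep the internal model up to date
        x += u                   # the real system responds to the command
        history.append(x)
        errors.append(abs(target - x))
    return sum(errors) / len(errors)

print("mean error without internal model:", round(simulate(False), 2))
print("mean error with internal model:   ", round(simulate(True), 2))
```

[The delayed-feedback controller hunts around the target because its information is out of date, while the model-based one converges smoothly; the same logic is what the body schema is claimed to provide for movement control.]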
As Neil said, you don't just build something more and more complicated and then, magically, consciousness comes out. That's a bit like telling the Wright brothers, who pioneered powered flight, "just build it more complicated and eventually flight will emerge out of it." No, that doesn't happen; you have to engineer it to do that. So you have to engineer systems to have these self models. The attention schema theory says the key model is the brain's model of its own attention. Attention, basically, is the ability to deeply process information. And we don't just deeply process information; we also have a kind of little internal description, a cartoonish picture, of what that processing means for ourselves, and we call that our consciousness. That's the theory, the attention schema theory, and we think it's terribly important for the brain to be able to control its own internal processes and get along through everyday life. But more than that, if you want to build this into artificial intelligence, I'm not just talking about building things that are better at whatever they do; that sounds kind of scary. As I said before, we are also very social beings: we don't just build models of ourselves, we build models of other people. I see consciousness in myself, but I also see it in you, I see it in the people I'm talking to, and that is the key to why we are a pro-social species. We are not pro-social toward things we don't think of as conscious; we behave well toward things when we think they're conscious. And one of the reasons is that we're using the same machinery to see them as conscious that we use to see ourselves as conscious. It's almost like a built-in empathy. So this is something I think is really important to study, and it's what I've been doing with EY and with Judd: trying to build it up in a controlled way, to understand whether machines would be more cooperative, more social, and get along better with people if we gave them that kind of toolkit. It may turn out that the answer is yes, and it may turn out to be really scary, and we want to know that by testing these things. So anyway, that's a super brief description of this kind of framework.
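[A toy, hypothetical illustration of the two ideas in this answer: an agent that both has attention (a weighting over inputs) and keeps a simplified schema of that attention, and that reuses the same schema format to attribute awareness to another agent. This is purely a sketch; the class and field names are my own and are not the theory's formal machinery or anyone's actual implementation.]

```python
# Toy sketch, under the assumptions stated above: attention as selection of
# the most salient input, plus a coarse self-model ("schema") of that
# attention, reused to model other agents.
from dataclasses import dataclass

@dataclass
class AttentionSchema:
    """Cartoonish self-description: what is attended, and how strongly."""
    focus: str
    intensity: str   # "weak" or "strong"; deliberately low-resolution

class Agent:
    def __init__(self, name):
        self.name = name
        self.schema = None            # model of the agent's *own* attention
        self.models_of_others = {}    # same format, applied to other agents

    def attend(self, stimuli):
        # Attention itself: deep processing of the most salient input.
        focus = max(stimuli, key=stimuli.get)
        weight = stimuli[focus] / sum(stimuli.values())
        # The schema is a simplified description of that process, not the
        # process itself; it is the part the agent can introspect and report.
        self.schema = AttentionSchema(focus, "strong" if weight > 0.5 else "weak")
        return focus

    def model_other(self, other_name, observed_focus):
        # Reuse the schema format to attribute attention/awareness to others.
        self.models_of_others[other_name] = AttentionSchema(observed_focus, "strong")

    def report(self):
        # Introspection = reading out the self-model.
        return f"{self.name}: I am aware of the {self.schema.focus} ({self.schema.intensity})"

a = Agent("agent_A")
a.attend({"face": 0.7, "noise": 0.2, "light": 0.1})
a.model_other("agent_B", "light")
print(a.report())
print("agent_A thinks agent_B is aware of:", a.models_of_others["agent_B"].focus)
```

[The point of the sketch is only the architecture: the report comes from the schema, not from the attention process directly, and the same schema machinery is what lets the agent attribute awareness to someone else.]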