I do, yeah, because it's just faster. So, again, with the caveat of how I'm approaching this: I'm absorbing as much as I can, and in a lot of cases that's enabled by AI. I'm able to read a paper on something, reason through it with an AI, and then check that reasoning; it's kind of a circular process.

The guess that I've told a lot of people is 2026 is when we sort of get our heads wrapped around it, because you can see the curve from 2022. And that's just the public-facing generative AI, with ChatGPT; it really goes back to about 2016. There's been a curve where large language models started to make sense, where you could look and say, okay, this is not a toy, this looks powerful. And we've seen essentially exponential growth from there. But our brains don't understand exponential growth; it doesn't really make sense to me either. Take GPT-4 right now versus GPT-5: GPT-5 could go to something like a trillion parameters, where GPT-3.5 was, I think, around 175 billion. What does that mean? We don't really know. It will probably be much more accurate, with a much broader understanding of things, and nuance is probably a huge one: it may actually infer and reason where, at this point, it might make something up or hallucinate.

And then we get to the point where we could start running out of information to train on; that's very possible. However, there's a lot of recent research showing that these models can train themselves on synthetic information. Essentially, they can serve as teachers for themselves (there's a toy sketch of that idea below). If that pans out, and there aren't limitations in the algorithms or the structures where it all breaks down, then it's really just time, money, and compute power.

When you think about the power this takes, there are really two different things. There's the energy you use to make these big models, which you can think of like a university that you can go and ask questions of. But then there's also what's called inference: the energy used to actually query it, to ask your question. What does that take? There's obviously a lot of discussion about that, and the environmental impact, and all of that is very valid. However, there's also a lot of research showing that if you pour more power into the inference, and essentially let it think more about the question instead of answering instantly, there's scaling happening there too. So it's not only that these models can hold a wider lexicon of knowledge; you can also put more power in and let it think more patiently, spend a day working through a math problem, and then it can just crack it, because you're pouring billions of dollars' worth of compute into it over a day or something. That's insane. That's where it's like, oh, fusion energy, we just figured it out. Or a cure for cancer. That's the sort of "whoa, what is happening" moment. And that could be 2030. It's really possible.
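To make the models-as-their-own-teachers idea concrete, here is a minimal toy sketch in the spirit of self-training approaches like STaR or rejection-sampling fine-tuning: the model proposes answers, a cheap verifier keeps only the ones it can confirm, and the model trains on that filtered synthetic data. Everything here (the guesser, the verifier, the dict standing in for model weights) is a hypothetical stand-in for illustration, not a real LLM API.

```python
import random

def guess(x: int, y: int) -> int:
    """Stand-in for sampling from the model: sometimes right,
    sometimes a hallucination."""
    return x + y if random.random() < 0.3 else x + y + random.randint(1, 5)

def verify(x: int, y: int, answer: int) -> bool:
    """Cheap external check (a calculator, a unit test).
    Verifying is often much easier than generating."""
    return answer == x + y

model_memory: dict[tuple[int, int], int] = {}  # the "weights"

random.seed(1)
for round_ in range(3):
    # 1. The model generates synthetic problem/answer pairs.
    synthetic = [((x, y), guess(x, y))
                 for x in range(10) for y in range(10)]
    # 2. Keep only the pairs the verifier confirms.
    verified = [(p, a) for p, a in synthetic if verify(*p, a)]
    # 3. "Train" on the verified synthetic data.
    model_memory.update(verified)
    print(f"round {round_}: model now answers "
          f"{len(model_memory)}/100 problems correctly")
```

The point of the loop is that nothing new comes in from outside; the model's own verified outputs become the next round's training data, which is the pattern the research on synthetic self-training is betting on.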
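And "letting it think more" has a simple, measurable form. Here is a minimal sketch of inference-time scaling via self-consistency: sample the same question many times and take a majority vote, so spending more compute per question buys more accuracy. The noisy solver is a hypothetical stand-in for a model that is only sometimes right on a single try.

```python
import random
from collections import Counter

def noisy_solver(question: int) -> int:
    """Stand-in for a language model: it tries to compute
    question * 7 but each sample is right only ~40% of the time."""
    correct = question * 7
    if random.random() < 0.4:
        return correct
    return correct + random.choice([-3, -2, -1, 1, 2, 3])

def answer(question: int, n_samples: int) -> int:
    """Inference-time scaling via self-consistency: draw many
    independent samples and return the majority-vote answer.
    More samples means more compute spent per question."""
    votes = Counter(noisy_solver(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    trials = 1000
    for n in (1, 5, 25, 125):
        correct = sum(answer(q, n) == q * 7 for q in range(trials))
        print(f"{n:>3} samples per question -> {correct / trials:.0%} accurate")
```

The single-sample accuracy never changes here; only the vote over more samples improves. That is the essence of paying for inference instead of training, just scaled up enormously in the "spend a day on one math problem" scenario.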