So this guy developed this bike, and he's been offering $500 to anybody who can ride it. He's been doing that for several years now, and nobody's been able to do it. It's a good articulation of how difficult that unlearning and relearning is. He himself learned to ride the bike, and it took him 18 months to rewire his brain to be able to do it. And now he can't ride a normal bicycle. So this is all about thinking differently. And it articulates the trajectory we're on with artificial intelligence, and with lots of other disruptive technologies. We as humans, and therefore the institutions run by humans, think in a very linear fashion. These technologies, and AI is a great example, bump along in fairly small and uninteresting ways, if you're not really paying attention, until they hit an inflection point, and then we get an exponential progression. When that happens, that's when the exponential gap gets created. That's when disruption happens: some company we've never heard of before wipes a big enterprise company off the face of the earth. It's the Netflix and Blockbuster phenomenon. And when artificial intelligence came along, that's exactly what we saw. Everybody threw on the brakes, unsure what to make of it, and for good reasons: lots of security issues, and companies didn't know what they were getting themselves into. So there was lots to think about and deal with, but it wasn't well planned out or well handled once it landed on our doorstep. And so we spend a lot of time looking backwards in order to look forwards; we use history as a guide to create a mental framework.
And so we look at certain things we've learned along the way, and this is one of them: in the battle between the past and the future, the future always wins. So as much as a lot of companies wanted to just put the brakes on AI, the train is moving, and we've got to keep moving forward with it and keep driving on. The other thing is that we're a pretty stubborn species. We're resistant to change; we don't like it, even when it's clear that we need to change. But when we're forced to change, we're astonishingly versatile. Look back at world wars and pandemics: we've come through a lot along the way, and we've done it pretty well. And this is playing out in real time with artificial intelligence. Ninety-seven percent of companies surveyed in this Cisco report say they've increased their urgency to deploy AI technologies in the last six months, so we're seeing that adaptability. The bad news is that only 14% feel they're at a place where they can leverage AI. So the challenge is to rehearse the future and prepare for a bunch of different potential outcomes. As I said before, predicting the future is difficult to impossible, but we can rehearse the future and come up with plausible scenarios. The way we encourage companies to do that is by asking "what if" questions, because if we ask "what if" questions today, we can avoid asking "what now" questions in the future. Here's a good example: will artificial intelligence be more profound than fire, electricity, and the internet? And what if the answer is yes? We periodically survey through LinkedIn and other usual channels to see what people think about this, and back in 2021, a lot of people said no.
And even more said it's too early to tell. In April we ran the same survey, and a lot of those "too early to tell" responses moved over to yes. We're actually going to launch this again at the beginning of the year, and I think we're going to see a lot of both the "no"s and the "too early to tell"s move over to yes, based on what's happened in the last six or so months. So if we think about artificial intelligence and its impact on innovation and the growth of knowledge, its ability to drive innovation could really be its greatest asset. But we need a framework for how to think about it, for how to rehearse the future; we can't just sit around and pontificate about it. And we need to look beyond just science and technology: we need to think about it in the context of geopolitics, the environment, economic models, societal models, and so on. So we've developed a framework to help us do that. We call it our convergence model. It's a bit of an eye chart, but what it does is help us, as we're doing our scanning, look at everything going on, again not just in science and technology but also societal, geopolitical, and economic factors, and potential future scenarios, and understand all of those. Then we can look at it through different lenses. We'll take artificial intelligence as an example. Here are all of the different areas that can either impact or be impacted by artificial intelligence. We'll zoom in a little bit and take a few different examples to help articulate it. So first of all, personal robots and AI assistants. Let's have a look at where that's headed. "My name is Tom, I've traveled from Australia to meet you." Oh,