I think that's a really important question, and it gets at one of the earliest things we did as a company, which was really two things. One: we need to make sure this technology works. I personally didn't want to quit a full-time academic surgical career just to create a novelty, something where it's, "Oh, it's cool, it's fun, but it doesn't actually solve the problem." That's not what I was in it for. I wanted to know, scientifically, does this work or doesn't it, and then we'd figure out where to go from there. And two: the technology needs to provide value. Not only does it need to work, it needs to somehow improve outcomes, drive revenue, or save money for hospitals and the medical device industry, otherwise this won't be successful long term. Those were two very high priorities for us.

So one of the first things we set out to do was get an academic center to run a clinical study. UCLA performed a level-one randomized clinical trial with 20 trainees. Ten were trained in VR and then assessed against a specific proficiency threshold. The other ten were trained traditionally, with didactic materials, text materials, and illustrations, and they were allowed as much time as they needed to prepare. Then they came into a test environment where they had to perform the procedure and were videotaped, and they were evaluated by a blinded observer, who didn't know which training modality they had used, on a scale called OSATS, the Objective Structured Assessment of Technical Skill. It's basically a score of one to five in different categories, like time and motion for the operation, and so on. What that study, published in the peer-reviewed Journal of Surgical Education, found was that the Osso VR-trained and -assessed individuals performed 230% better than the traditionally trained individuals, which is a difference of about 10 points in the total OSATS score.

What that study told me at the time is that not only does the technology work, it works really, really well. If you look at prior simulation studies, oftentimes even when a study concluded the technology worked, the difference was so small it wasn't really clinically significant. Here we were seeing a really big, clinically significant difference. What also really stuck out to me about this study was that the traditionally trained group could take as much time as they needed; they only came in when they felt ready, whereas the Osso VR group was measured to be ready. And this is something I harp on quite a bit: how we feel and what reality is are often quite different. The whole world of surgical care right now is very intuitive and feel-based. It's "we feel like this guy's ready," or "I feel like I can do this procedure," but there's no objective marker of proficiency. I think this study is an indicator of how big a difference that can actually make, and of whether we should really be relying on how someone feels to judge their readiness to do something. So that was something that really stuck out to me.

And then we had a second level-one randomized clinical trial published just a few months ago in a top-five orthopedic journal, which showed a 306% improvement in the ability to perform a procedure without supervision when trained and assessed in VR.
And that is a very interesting outcome, because not only is it, once again, a big difference, but there was a study in 2017 that found that after 14 years of education, 31% of graduating residents could not operate without supervision. To make that big an impact on that kind of endpoint is a pretty big deal.

The final thing I'll say about the evidence is that these technologies you're talking about looking into, like AR and VR, all get lumped together. It's like asking "Are books helpful for academics?" as though every book were exactly the same. So what I point out is that not all VR is created equal, and we need some way to evaluate specific VR platforms to understand what the key differences are. I find it's all getting lumped together, where people say, "Oh, I tried Google Cardboard; VR is not going to work." That's an extreme example, but it's some of the mindset that's out there. Just like a website can be bad or good, VR is a blank slate on which you can present not only an incredible experience but a whole ecosystem and platform surrounding it, which can get quite complex in terms of how it all works and works together. So these studies are not saying, "Hey, VR works." Certainly they show that VR has the potential to work. But what these studies are showing is that Osso VR works, that the way Osso does training and assessment is different and effective. And I think that's an important delineation.