No, that was very easy. Honestly, there are plenty; I'm going to go top of mind. At the very, very beginning of the company, what we did was build the most basic prototypes we could. For that we were building earplugs with conductive textile on top, and we would plug them into a very big external board. I would put that in my ears, and I had to be the guinea pig, because I had to code and look at the data. So I would end up with my ears super irritated at the end of the day, but at least we got some data.

So I think the first technical challenge was: is there a signal? Is there anything you can use by placing sensors in or around the ears of the user? That was really the whole beginning of the company: what can we get? What are we looking at here? What can we filter, what are we going to see? And once you realize that there are some signals, specifically facial muscle activity, which was the first one, the question becomes: how can I isolate a specific gesture from everything else we do every day with our mouths, to make an algorithm that's robust enough? What's the data I need to collect to make sure that when the user is wearing and using my technology, it's not going to be full of false positives, which are awful in terms of user experience? So the first thing was: what's the data I need to collect, not only to properly detect an event, but also to detect when the user is not doing something? I'll give an example. We're using, as I said, facial muscle activity. What happens when you're chewing? What happens when you're talking and activating that same muscle? What happens when you're swallowing? What happens when you're running?
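In spirit, handling those cases means training on explicit "not the gesture" classes rather than only the gesture itself. Here's a minimal sketch of that idea; the class names, synthetic signal shapes, and two crude features are all invented for illustration, not the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for in-ear muscle-activity windows (shapes invented).
def make_window(kind, n=200):
    t = np.linspace(0, 1, n)
    noise = rng.normal(0, 0.05, n)
    if kind == "clench":  # short, strong burst: the target gesture
        return noise + 1.0 * (np.abs(t - 0.5) < 0.1)
    if kind == "chew":    # rhythmic, lower-amplitude activity
        return noise + 0.4 * np.sin(2 * np.pi * 4 * t).clip(0)
    if kind == "talk":    # sustained low-level activity
        return noise + 0.2 * np.abs(np.sin(2 * np.pi * 9 * t))
    return noise          # "rest"

def features(w):
    # Two crude features per window: RMS energy and peak amplitude.
    return np.array([np.sqrt(np.mean(w**2)), np.max(np.abs(w))])

labels = ["clench", "chew", "talk", "rest"]
train = {k: np.stack([features(make_window(k)) for _ in range(50)])
         for k in labels}
centroids = {k: v.mean(axis=0) for k, v in train.items()}

def classify(w):
    # Nearest centroid over ALL classes: because the negatives are modeled
    # explicitly, a chewing window is not forced into the "clench" bucket.
    f = features(w)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(make_window("chew")))  # should land on "chew", not "clench"
```

The point of the sketch is the label set, not the classifier: without "chew", "talk", and "rest" in the training data, every energetic window would be scored against the gesture alone, which is exactly where false positives come from.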
And to figure all these things out, you need to understand how your algorithm performs and collect data that helps the AI algorithm tell a false positive from an actual true positive. So, data collection. And of course, that data doesn't exist on the market, right? It's not like pictures of cats and dogs that you can find on the internet, or even text for feeding an LLM like GPT. All the data we use, we have to collect ourselves, because there is no neural data database today that you can use to train algorithms on. So, number one, the data for the algorithm.

Number two, once you know what kind of data you want, well, you need to build a prototype that has all the sensors and fits in the ears of the user, nothing like that big box I had at the beginning. Those earplugs don't make sense if you're trying to build a consumer-ready product. That's where we hired Cameo, who came from SoftBank Robotics and had tremendous experience leading hardware design. We actually spent a lot of time doing two things. One, of course, miniaturizing everything so that it fits on a tiny PCB. But there's an extra constraint in what we're trying to do, because we want to be the one-stop-shop solution for every device maker. That means we can't use the kind of $100 components you would find on medical devices, for instance. You have to think from the get-go: how do I make the integration and industrialization super easy for any kind of manufacturer in the future, and how can I make a product that's not going to be $3,500, but the regular $250 you'd spend on your earphones?
And same here, you have to spend a lot of time looking at each and every material on the market for the electrodes, for the sensors; you have to look at each and every sensor, get close to the Texas Instruments and Analog Devices of the world, and understand what they're going to release, to see if you can influence the roadmap one way or another. That took some time, and we got somewhere: right now we have a wireless device that received an award at CES for innovation in the headphones and personal audio category. So we have something that's still a bit bulkier, but we're getting closer to your earbuds.

So: data collection, miniaturization of the device. And I think the last part is AI optimization. What I mean by that is, again, collecting data is great, having a small device is great, but if your AI algorithm requires ten GPUs to run, then you're not going to get anywhere in terms of providing the user with the quality of control they need, which has to be instant. You need to run on the edge, on the device itself. And here again, my co-founder and the team spent a lot of time optimizing our algorithms, pruning them and doing a bunch of other processing, to make sure that both the signal processing and the AI run on the smallest chipsets on the market, like any audio chipset that's already out there, the Qualcomms of this world. And right now, I think the whole stack for the first generation of controls runs in 18 kilobytes, which is very, very small compared to anything you can imagine.
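To give a feel for how pruning and quantization shrink a model toward a budget like that, here's a toy accounting sketch; the layer sizes, the 50% sparsity target, and the int8 scheme are invented for illustration, not the actual product stack:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny dense network with invented layer shapes (inputs, outputs).
shapes = [(16, 32), (32, 16), (16, 4)]
weights = [rng.normal(0, 1, s) for s in shapes]

def prune(w, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-magnitude weights.
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    # Symmetric linear quantization: map [-max|w|, +max|w|] onto int8.
    scale = np.max(np.abs(w)) / 127.0
    return np.round(w / scale).astype(np.int8), scale

pruned = [prune(w) for w in weights]
quantized = [quantize_int8(w)[0] for w in pruned]

params = sum(w.size for w in weights)
nonzero = sum(int(np.count_nonzero(w)) for w in pruned)
print(f"params={params}: float32 ~{params * 4} B, "
      f"pruned int8 ~{nonzero} B of nonzero weights")

BUDGET = 18 * 1024  # the ~18 KB figure quoted for the whole stack
assert nonzero < BUDGET  # weights alone leave room for code and buffers
```

Even this toy case shows the arithmetic: a ~1k-parameter float32 model is already a few kilobytes, so fitting signal processing plus inference into 18 KB on a stock audio chipset forces aggressive pruning, low-bit weights, and a very small architecture from the start.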