The AR Show: Yacine Achiakh (Wisear) on the Promise and Imperative for Neural Interfaces
5:34PM Jun 30, 2023
Speakers:
Jason McDowall
Keywords:
working
ar
device
earphones
neural interface
technology
building
algorithm
product
part
glasses
sensors
interface
apple
data
company
co founder
augmented reality
future
play
Welcome to the AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Yacine Achiakh. Yacine is the co-founder and CEO of Wisear, a company creating neural interface devices for AR glasses and other wearables. They are packing their technology into normal-looking earphones and working to deliver the capabilities of the mouse and keyboard hands free. Earlier in his career, Yacine studied mathematics and business before joining Criteo, a fast-growing tech unicorn in France. There he led AI-driven products focused on advertising technology. It was also there that he met his co-founder, Alain Sirois, who has a background in neurotechnology. In this conversation, Yacine describes the imperative he sees for a new type of interface to augmented reality devices.
For the upcoming revolution that augmented reality will be about, there is a need to rethink the way we interact with technology. There is a need to replace the previous keyboard and mouse, which require you to be sitting and have your hands busy all the time, with something that is much more wearable and works in mobility. And that's where we think neural interfaces will play a key role, because basically they enable you to have completely hands-free, completely silent, always-accessible controls over whatever device you want. And specifically, if tomorrow your glasses, your computer, are in front of your eyes and you want to navigate through it, using your hands or a remote doesn't make sense. You want to use something that's much more natural, such as your eyes for navigation, or a muscle, a quick muscle motion, as a way to select something. So that's really the direction Wisear is trying to take: there's a need for a much more mobile and much more wearable interface if we want to enable the upcoming augmented reality revolution, and neural interfaces are the right solution for this. And that's what we're building.
He goes on to describe the basics of neural interfaces delivering a magical experience without requiring a hole drilled into your head, how the technology works, some of the other technical challenges along the way, the go-to-market strategy, keys to consumer adoption, the current state of the tech, and the roadmap ahead. As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. And please support the podcast at patreon.com/theARshow. Let's dive in. Yacine, you grew up in France and went to school in France. Why is it you decided to make the move to San Francisco?
So it turned out that seven years ago, I was with someone who decided that she wanted to save the world and work from Tijuana. Except that there was not too much work for me in Tijuana. So I tried to find the closest possible place where my company could transfer me, and it happened that San Francisco was a good place for tech companies. I think it still is, in a way. And that's how I ended up moving to SF at the beginning of 2015. And I think I kind of fell in love with the US, you know: In-N-Out, the great California environment that we have, the great food that we have. So I ended up, yeah, falling in love with it, and I've stayed since then. I moved to the East Coast, though, for easier communication with France. But I still stayed in the US, where I like the vibe and love the environment.
I think In-N-Out hamburgers is an unsung hero of advertising for California living.
I totally agree with that. And I would fight anyone who would assume or tell me that Shake Shack is better than In-N-Out. So yeah, that's my personal line on that.
Yacine, what was it that you were working on in that company that allowed you to move from France to SF?
So back in the day, I used to work for Criteo, which was one of the first French tech unicorns, and that's actually where my co-founder and I met. We used to work in Paris, and then we got sent to San Francisco to help grow the office over there. I was on the product management side, and he was on the data science side. And that's how we ended up traveling there, meeting there, and working over there, in the ad tech industry.
Did you guys know each other before you'd made the move to San Francisco?
We knew each other because we were working on the same team. But I moved first, kind of to try out the Californian vibe. And then, when he realized it was great, he decided to move there with his wife as well. So you know, that's how we continued that friendship and made sure that we would work together on a couple of projects before starting our own, I would say.
And how did you integrate into that San Francisco experience?
Good question. I don't know if we actually did. I mean, San Franciscan people are really, really welcoming, so that wasn't too hard. But I would say that one of the key things that we really enjoyed over there was the fact that you feel like you're living a little bit in the future. You know, you try these autonomous vehicles, self-driving cars, before everyone else gets to try them. You get to try the Uber of washing machines, or the Uber of laundry, which actually collapsed, a total failure. So you do get a glimpse of what the future will be about. You're surrounded by all these big tech companies. We were working in tech, so of course that helped, but you do get connected to a tissue of people who are super innovative and forward-thinking, and you find yourself melting into it, getting in sync with that.
You're around it, you get to experience it. Did you get to socialize within it as well?
Yeah, a little bit. We used to play soccer with a couple of other colleagues of ours from Criteo. We were not very good, to be honest; not as good as the French soccer team, for sure. We were playing soccer every other Friday or so, and we were losing most of the games, mostly because I was the keeper and wasn't really good at saving goals. But it turned out that one of the days we were playing soccer, we lost against a team composed of people from Neuralink, the Elon Musk company doing brain implants. And after grabbing a beer with them afterwards, I discovered what neural interfaces were and how great they could be. But I think I also realized that having my skull drilled and an implant put into my brain was not something I would do the next day. And that was, in a way, the inception of the company. My co-founder also studied neurotechnology, so that helped. It was kind of the inception: how can we make neural interfaces a reality before we have to have brain implants in our heads?
Yeah, there's got to be a better step in between, before we have to drill a hole in our head and implant electrodes. So you mentioned your co-founder had a background in neurotechnology, or neuroscience, as well.
Absolutely, yeah. He studied neurotechnology at Imperial College London. And before that, he graduated in computer science from one of the top French engineering schools. On my end, I was more on the product and business side.
And so there's certainly this background that your co-founder had, plus the kind of insight and inspiration you got from interacting with the folks from Neuralink and understanding what they're trying to do and where that was going to go. What specifically, though, for you, made this an interesting and worthy problem to solve?
Yeah, I don't know. So I'm a big geek; that's what's interesting. I'm the business and product co-founder, and yet I'm the most geeky. My co-founder, who's the technical one, is actually running a lot and playing music, you know, all the healthy things that usually the business people are supposed to do. So anyway, I'm a big geek. I've been playing video games since before I can even remember. I clocked something like 1,000 hours on Call of Duty: Warzone, 2,000 hours on Clash Royale, where I used to be ranked. And I've been going through so many gears, keyboards and mice, trying to find the one that would allow me to be super fast and have the best reflexes and so on. Trying to find the most instant, most intuitive way to communicate with my laptop, if that makes sense (not that my laptop answers me), has always been key to what I was trying to do. I think this is something that's been driving me for a long period of time now, going through iterations and iterations. To give an example, when I was playing Clash Royale, I ordered something like five or six tablets in a row that I kept sending back, because I couldn't find the right dimensions for my hands to reach the right corner of the screen at the right time, you know? So yeah, trying to find the most optimized way to interact with technology has always been core to what I was doing. And most recently, as my available time has shrunk, I have become kind of productivity-obsessed. I'm using so many of these keyboard-first tools, Vimcal, most recently the Arc browser, where keyboard shortcuts are the key thing to navigate and improve your efficiency. I've been really trying to optimize the way I interact with technology to be as fast and as productive as possible. So I think that's been driving me. And when my co-founder and I started looking into what we could build, adding on top of that inception from Neuralink, the idea of connecting the body, the brain, the neural inputs of a person more directly to a laptop, or tomorrow to AR glasses, really struck us, and we decided to spend a lot of time on it. That was the creation of Wisear, like four years ago.
So if you break it down to the next level: you think about the near-term possibility for neural interfaces, you're obsessed with productivity and trying to find more efficient ways of solving things (and I deeply appreciate that; I'm very similar), and you see this future of AR and VR emerging. (Yep.) So what did you imagine at that time? What did you imagine would be possible? What specific problem did you want to solve with Wisear?
Yeah, well, again, getting back to the inception of the company: we believed neural interfaces were the future, and we were not going to wait 50 years to make them a reality. Let's try to find a way to make this available to people. And then we thought about why it's the right time to start this. The reason is that if you look at the way we interact with devices, we've had the keyboard and mouse, which haven't changed for the past 30 years, for laptops. We've had the touchscreen for smartphones. And here I'm just going to be quoting Tim Cook, not that I'm as smart as him whatsoever, but what he says is that every new generation of computing requires a revolution in interface. And what we do see, and I think you see it as well (we saw it at AWE last week), is that for the upcoming revolution that augmented reality will be about, there is a need to rethink the way we interact with technology. There is a need to replace the previous keyboard and mouse, which require you to be sitting and have your hands busy all the time, with something that is much more wearable and works in mobility. And that's where we think neural interfaces will play a key role, because basically they enable you to have completely hands-free, completely silent, always-accessible controls over whatever device you want. And specifically, if tomorrow your glasses, your computer, are in front of your eyes and you want to navigate through it, using your hands or a remote doesn't make sense. You want to use something that's much more natural, such as your eyes for navigation, or a muscle, the quickness of a motion, as a way to select something. So that's really the direction Wisear is trying to take: there's a need for a much more mobile, much more wearable interface if we want to enable the upcoming augmented reality revolution, and neural interfaces are the right solution for this. And that's what we're building, anyway.
So Apple does a few things well, and one of them is definitely defining the user experience and user interface elements, right? With the iPhone, it was all about the touch interface that they had made great, that multi-touch experience. And now we had the unveiling of the AVP, the Apple Vision Pro. One of the things that the folks who had a chance to play with it talked about was how magical it felt to use their eyes to gaze, and then use their fingers to click as they went through the experience. So this gaze-and-tap, or gaze-and-click, using eyes and fingers, was the big advancement that Apple presented as part of that demonstration. What's a better way?
Yeah, I mean, first of all, I would like to thank all the people at Apple who have been working on the Apple Vision Pro. I keep saying "Apple Provision," which doesn't make sense, since a provision is an accounting term, so let's call it the AVP for the rest of the podcast, maybe. I would like to give them a big hug for two reasons. One, the AR environment was a bit down before they made these announcements. And two, what they did is create something that will be the future interface; they defined what the interface for interacting with augmented reality will be in the future. We've believed for the past few years that the eyes will be the right way to navigate an AR interface, and up until now no one had actually proven that, so the fact that Apple is going in that direction is great for us. Now, I think we also need to look at the device they've been showing. It's a $3,500 device. It's equipped with a gazillion sensors, with very, very high computing power and very, very high cost. I think that's great for building an amazing experience, and I think that's what they're trying to do right now: here is what the North Star of the mixed reality, augmented reality experience will look like. And to enable developers to build apps on it, they've built a device that's meant for developers and meant for building these great experiences. That's great, but that's not the kind of mass-consumer product Apple usually has, right? The question is: how do you move from that device, with all those sensors, to what we think is the future, which is simple AR glasses, a few grams, that will have to give the user the same kind of experience and the same kind of interface? And that's where I think we have a role to play. What I mean by that is that in a world where your glasses weigh like 40 grams, you won't have enough space to put in as many sensors and as many cameras as Apple has today, looking inwards toward your eyes or outwards toward your hands, to build that interface. The second thing is that you won't have the computing power to collect all this image data coming from all these cameras and process it directly on the glasses. So that's where we at Wisear have a role to play, because what we're going to do is really mimic what they're doing in terms of interface. We're going to use the eyes for navigation, and we're going to use simple muscular activity as a way to select something, except we're going to do this with a neural interface, and we're going to do it from a device as small as your earphones, because that's where our expertise is. So what we're really building is the augmented reality human-machine interface, at a lower cost and at a lower computing-power cost as well.
18 and a half watts. That's what I read. Karl Guttag walked through the breakdown recently; part one of his breakdown was published recently. He had calculated, or somebody estimated, about 18.5 watts consumed by that device. That's, I don't know, 50% more than the Microsoft HoloLens, and it's about 20 times more than what a truly wearable pair of glasses ultimately needs to be, which is less than one watt for all that stuff. So I really appreciate the way that you described it. They're really defining a North Star experience for this sort of video-pass-through VR, mixed reality device. It looks incredible, it does amazing things, it's got a lot going on in it. But it is impractical if the intention is ultimately to deliver a pair of glasses, at least within the next handful of years, that has a suitable set of functionality that we would actually wear out in the wild. So this approach that you're taking, which emphasizes the practicalities of AR devices (it has to be a lot lower power, but still achieve the same sort of magical experience), seems highly relevant, at least from my perspective.
I appreciate that. And maybe, since I like the figure you just shared, a fun fact: most of the expertise of the team we have right now is in AI and embedded software, and we've spent so much time trying to reduce the footprint of our neural interface algorithms. To give you a reference point, the overall stack that we're running right now runs on about 1.6 milliwatts. So I think we're somewhere like 10,000 times lower than whatever Apple is running right now. And if we compare it to classic computer-vision eye tracking, like Tobii's, we're maybe a million times lower in power than what they have. So I totally hear what you're saying, in the sense that, yes, it will need to be reduced, and what we're trying to do is really be one step further. When these AR glasses are on you, you will need external devices that are going to be powering your interface. That's mandatory; that's how it's going to happen.
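To put those orders of magnitude side by side, here is a rough back-of-the-envelope comparison. Treat it as illustrative only: the 1.6-milliwatt figure is reconstructed from ambiguous audio, and the 18.5-watt Vision Pro number is the estimate quoted above, not a measurement.

```python
# Back-of-the-envelope power comparison using the figures quoted in this
# episode. All numbers are estimates from the conversation, not measurements.
avp_watts = 18.5              # Karl Guttag's estimated Apple Vision Pro draw
glasses_budget_watts = 1.0    # rough ceiling for all-day wearable glasses
neural_stack_watts = 1.6e-3   # ~1.6 mW for the neural-interface stack (assumed)

print(f"AVP vs. glasses budget: {avp_watts / glasses_budget_watts:.1f}x over")
print(f"AVP vs. neural stack: {avp_watts / neural_stack_watts:,.0f}x")  # roughly the ~10,000x quoted
```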
Yeah. And so you've established this general direction, what you're trying to accomplish. How do you describe the vision for the company, the big picture?
We say, and that's not even mine, that's Microsoft's, actually, that we're the keyboard and mouse for spatial computing, or for AR, or whatever it is that you want to call it: the keyboard and mouse of the future. And that's the vision we have at Wisear.
And what does that look like today?
So if you think about the keyboard and mouse: we started by building the mouse. What I mean by that is that we looked at what a mouse is about. It's about clicking, or selecting, and it's about navigating. So we split that into two steps. Actually, I realize that I haven't even talked about how neural interfaces work, so maybe I should start with that. The way to see it is that your body is a bit like a battery. Every time you think, your brain sends electrochemical messages all across your body. That means that when you think, when you move your eyes, when you twitch a muscle, this creates an electrical current that can be captured if you place sensors on the skin near where the activity happens. So what we do is take earphones and place sensors, called electrodes, on those earphones. These electrodes can record any type of biological activity, be it the eyes, the facial muscles, or the brain activity, which is something we keep for later because it's quite hard to play with. And so when we say we're going to replace the keyboard and mouse of the future, we mean we're leveraging all this electrical activity that we can capture and transforming it into controls. These controls are much more instant, in a way, because they're much more integrated with your body than pressing a button or clicking on something; you don't have to use any intermediary to transfer your intent. So for the mouse part, we started with the clicker. We're using micro-gestures of your facial muscles as a way to select: you clench once or clench twice as a way to click or double-click. The second thing we started building leverages the eyes' electrical activity. When your eyes move from left to right, or up and down, they create a motion of electrical particles across your skin that you can capture through the sensors placed in your ears. We use that to detect whether your eyes are in the center, to the right, to the left, at the top, or at the bottom, and we map that to whatever you're looking at in your AR glasses. So that's the navigation. With the facial muscle activity and the eyes, we're able to replace the mouse, or whatever the mouse of AR glasses will be. And that mimics very much what Apple is doing with their eye navigation and their hand selection, with the added benefit that we're completely hands-free, which Apple is not at the moment. So that's the first part. We already have the facial muscle clicker ready, and we're working on bringing the eye activity to market readiness, meaning a very, very good level of performance, above a 97% F1 score. And the last part is: how do you replace the keyboard? Here again, we're looking at Apple. I don't know if you've seen the latest videos of the AVP (damn, I almost said it again: the Apple Vision Pro). One of the things they have that I think is very cool is that for complex commands, instead of having to type on the keyboard all the time, you can simply look at a search bar. When you look at it, the search bar highlights, and it triggers the voice command.
So you don't have to say "Hey Siri" anymore. You can just look at the search bar and say whatever you want to say. And that also goes in the direction we've been thinking about for the past few years, which is that voice will be the main input for complex commands. Voice has been around for some time, right? And yet not all of us use it. I think there are two main reasons why. One is privacy. Voice is great when you're alone; when you have your Apple Vision Pro on your face at home, it's great. But if you're in the subway, where, one, there's a lot of noise around you, and two, there are a lot of people all around you, you may not want them to hear what you're telling your partner about when you're going to get home and what you need to buy before you do. So the way we saw it: yes, voice will be used for complex commands, but the one way to make it ubiquitous is to remove the privacy issue that it currently has. And that's why the keyboard we're building, after the mouse, is actually built around something called silent speech. It's the ability to detect what you want to say without you needing to actually say it out loud; you can just mouth it. I could show it right now, but people won't see it because it's a podcast. I won't go too much further into how it works exactly. But basically, this is how we envision the future: using facial muscles as a way to click, using eye activity as a way to navigate, and using silent speech for complex commands.
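For readers who want a concrete picture of the "clench twice to double-click" idea, here is a minimal sketch. This is not Wisear's algorithm (their pipeline involves proprietary signal processing and a trained model); it is just a toy threshold detector over an EMG-like signal, with every constant assumed.

```python
# Toy sketch of "double jaw clench = double click" over an EMG-like signal.
# Not Wisear's method: real pipelines filter mains noise, motion artifacts,
# and chewing/talking false positives, and use a trained classifier. All the
# constants here are assumptions for illustration.
import numpy as np

FS = 500           # assumed sample rate, Hz
K_STD = 3.0        # burst threshold as a multiple of the window's std
MAX_GAP_S = 0.6    # two bursts closer than this count as a double clench

def detect_double_clench(emg: np.ndarray) -> bool:
    """Return True if a 1-D signal window holds two distinct muscle bursts."""
    envelope = np.abs(emg - emg.mean())            # crude rectified envelope
    above = envelope > K_STD * envelope.std()      # samples inside a burst
    starts = np.flatnonzero(np.diff(above.astype(int)) == 1)  # burst onsets
    return len(starts) >= 2 and (starts[1] - starts[0]) / FS <= MAX_GAP_S
```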
It sounds like a really thoughtful way to break down the set of tasks that are necessary. (Thank you.) Where is it that I want to focus my intention; then the indication of yes or no, back, forward; and then more complex input. You reminded me of a company I had seen (I'm blanking on the name) that has this product called the Molar Mic. The idea is that you basically wear a retainer in your mouth, so it's not quite consumer grade, but for certain sorts of professional activities, that's fine. On that retainer is a microphone and speaker system that allows the user to communicate and hear: two-way communication using this sort of inside-the-mouth device. It comes to mind because you talked about this notion of silent speech, and one of the attributes of the Molar Mic was very similar, the idea that you can speak quietly, without speaking loudly enough for others to hear, and still be able to communicate. It sounds like a really interesting and powerful approach to solving for privacy while still allowing the much richer complexity that can come from voice.
I'm 100% aligned with that. I think any solution that goes in that direction is great. The one thing, and this is where becoming ubiquitous is always a big task, is that pervasiveness is unfortunately very hard to achieve when the device is not already part of the set of devices that you're wearing, you know? So the retainer works well, as you said, for some specific contexts, but I'm not sure it will be the go-to device if you want that type of interface to become the one that's used tomorrow. And I'm giving a shout-out right here to a company called Augmental, from Tomás Vega. They're building a retainer as well, one that goes beyond even the Molar Mic you're mentioning, because they added sensors in the palate that you can trigger with your tongue. They're working right now with people who are quadriplegic, and I think this is the right way to go for populations that have specific needs. But I think it's hard to make this more mainstream, with mass-consumer appeal, if you don't fit into devices that are already part of people's daily lives.
Yeah. But on that point, your solution is that people are already wearing earphones, they're comfortable with an in-ear piece, and you can take advantage of that existing behavior and existing functionality and then add to it: you noted some electrodes that ultimately sit in the ear and try to detect and interpret the electrochemical activity happening across the muscles and skin of the face. That has a real chance. As you reflect back a little bit on the technical journey that you've had, from conception to present, what have been the biggest technical challenges you've had to overcome?
Oh, none, it was all very easy. No, I mean, honestly, there are plenty; I'm going to go off the top of my head. At the very, very beginning of the company, what we did was the most basic prototype we could go with. We built these earplugs with conductive textile on top, and we would plug that into a very big board externally. And I would put that in my ears; my co-founder said, "You have to be the guinea pig, because I have to code and look at the data." So I would end up with my ears super irritated at the end of the day, but at least we got some data. So the first technical challenge was: is there a signal? Is there anything you can use by placing sensors in or around the ears of the user? That was really the very beginning of the company. What can we get? What are we looking at here? What can we filter? What are we going to see? And once you realize that there is some signal, specifically for the facial muscle activity, which is the first thing we built, the question becomes: how can I isolate a specific gesture from all the other things we do every day with our mouths, to make an algorithm that's robust enough? What data do I need to collect to make sure that when the user is wearing and using my technology, it's not going to be full of false positives, which are awful in terms of user experience? So the first thing was: what data do I need to collect to not only properly detect an event, but also detect when the user is not doing anything? I'll give an example. We're using, as I said, facial muscle activity. What happens when you're chewing? What happens when you're talking and activating that same muscle? What happens when you're swallowing? What happens when you're running? You need to understand how your algorithm performs in all these cases and collect data to help the AI algorithm distinguish a false positive from an actual true positive. So, data collection. And of course, that data doesn't exist on the market, right? It's not like pictures of cats and dogs that you can find on the internet, or text for feeding an LLM like GPT. All the data we use, we have to collect ourselves, because there is no neural-data database out there today that you can use to train algorithms. So, number one, the data for the algorithm. Number two, once you know what kind of data you want, well, you need to build a prototype that has all the sensors and fits in the ears of the user. That big box I had at the beginning, and those earplugs, don't make sense if you're trying to build a consumer-ready product. So that's where we hired our hardware lead, who came from SoftBank Robotics and had tremendous experience leading hardware design, and we spent a lot of time doing two things. One, of course, was miniaturizing everything so that it fits on a tiny PCB. But there's an extra constraint in what we're trying to do, because we want to be the one-stop-shop solution for every device maker.
That means we can't use the kind of $100 components that you would have on medical devices, for instance. You have to think from the get-go: how do I make the integration and the industrialization super easy for any kind of manufacturer in the future, and how can I make a product that's not going to be $3,500, but the regular $250 that you'd spend on your earphones? So here too you have to spend a lot of time looking at each and every material on the market for the electrodes, looking at each and every sensor, getting close to the Texas Instruments and Analog Devices of the world to understand what they're going to release and to see if you can influence the roadmap one way or another. So that took some time. We've gotten to a place now where we have a wireless device that received a CES Innovation Award in the headphones and personal audio category. So we got somewhere: we have something that's still a bit bulky, but we're getting closer to your earbuds. So, data collection, miniaturization of the device, and I think the last part is AI optimization. What I mean by that is, again, collecting data is great, having a small device is great, but if your AI algorithm requires 10 GPUs to run, then you're not going to go anywhere in terms of providing the user with the quality of control they need, which has to be super instant. You need to run on the edge, on the device itself. And here again, my co-founder and the team spent a lot of time optimizing our algorithms, pruning them, doing a bunch of other processing, to make sure that both the signal processing and the AI run on the smallest chipsets on the market, meaning any audio chipset that's already out there, like the Qualcomms of this world. And right now the whole stack for the first generation of controls runs in 18 kilobytes, which is very, very small compared to anything you can imagine.
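To make that 18-kilobyte figure concrete, here is the kind of parameter accounting an embedded ML team might do. With 8-bit quantized weights, one byte per parameter, a few thousand parameters fit easily. The layer sizes below are hypothetical, purely to illustrate the budget, and are not Wisear's actual architecture.

```python
# Hypothetical parameter accounting for a tiny on-device gesture model.
# With int8 quantization (1 byte per weight), a few thousand parameters fit
# easily inside an 18 KB budget, leaving room for signal-processing code.
KB = 1024
BUDGET_BYTES = 18 * KB

layers = {
    "conv1: 8 filters x 16 taps x 1 ch (+bias)": 8 * 16 * 1 + 8,
    "conv2: 16 filters x 8 taps x 8 ch (+bias)": 16 * 8 * 8 + 16,
    "dense: 512 features -> 4 classes (+bias)": 512 * 4 + 4,
}
params = sum(layers.values())
model_bytes = params  # int8: one byte per parameter

print(f"model: {params} params = {model_bytes / KB:.1f} KB")
print(f"left for DSP code and buffers: {(BUDGET_BYTES - model_bytes) / KB:.1f} KB")
```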
So you noted there that you're trying to run all of this on the ear. (Yep.) And so what comes out of that ear device is basically just the result of the interpretation of the algorithm: up, down, left, right, click, you know, one, two, three, four, whatever it happens to be. And you're trying to cram that into a little tiny microprocessor. Is that the idea?
Exactly. I think the one we're using right now is an ARM Cortex-M4; it has like one megabyte of memory and 256 kilobytes of RAM. We're trying to work within the biggest constraint, because the thing is that even within these chipsets, you're fighting for space. They have to run their audio processing, they have to run their noise-canceling algorithms, because otherwise you won't buy the earphones, even if they have the Wisear controls. So you really need to be highly optimized. And for the fun of it, we talked with a hearing aid company at one point, and the constraint they gave us was: oh yeah, if you want to run on our hearing aids, you need everything running below one kilobyte of RAM. I was like, wait, one kilobyte? One kilobyte? Yeah. I mean, they have AI-optimized chipsets, so that helps. But still, you're fighting for space everywhere. And it's the same for any AR or VR headset: you're always fighting for space. So the smaller you can make your algorithm, the better it is. And that's really something we've baked in from the get-go, to avoid ending up with a solution that would be completely oversized for whichever market we go after.
And at this point, when you and I had a chance to sit in on a demo together at AWE, you were beginning to offer a demo kit, right, or a dev kit, that anybody can now go to your website and buy to begin to experiment and play with. What's the objective at this stage of product development with that dev kit?
Let me split that into two steps. Right now we have a demo kit that we don't think is ready for the general public yet. So we're selling it for a hefty price, and it's mostly meant to attract big corporates that would like earlier access to that kind of technology. That's how we managed to sign a couple of POC deals with large earphone, XR, and walkie-talkie makers. We're going to be releasing a dev kit next year, though, that's going to be meant for the developer community, so that everyone can really take the technology, play with it, build around it, and make sure they can integrate it into whichever apps they're building. So the demo kit is really more of a way to seduce, to work with, big corporates and give them a glimpse of what the future will be about, which has worked pretty well. Right now we're working on the actual products that we intend to release: mid next year for the dev kit, and the end of next year for the first generation of earphones we're going to build, which is going to be tailored for industrial augmented reality.
As you look ahead from where you are today, specifically as it relates to the consumer adoption side of it: you talked about one of the big challenges being integrating into a device that people are willing to wear, and ideally already wear as part of their daily habit of technology. What are the other sorts of challenges you anticipate around consumer adoption of the technology specifically? Why buy your version of the earphones versus AirPods?
Yeah, well, I would frame it this way: if tomorrow Apple launches a version of their AirPods that's equipped with a neural interface, that's a big win for me, because it means that what we've been working on for the past four years is the right direction for the market. And it also means that all of Apple's competitors will be looking for a solution, which we will be providing. So that's just a small parenthesis: if Apple launches such a device, I'm more than happy. Now, to get back to your question, what would keep people from buying a device, or controlling their device, with neural interfaces? I think the main thing is the use case. If you don't see the use case of AR for yourself yet, then there is very little chance that you're going to buy Wisear-equipped products. So I think the adoption curve of our technology in your earphones is going to be linked to how many people are using AR glasses on a daily basis, and I'm talking about the consumer market here. So really, AR usage will be the key driver for the adoption of neural interfaces. Now, what's good for us is that beyond consumer adoption, there's already a market where augmented reality is having a moment, which is industrial augmented reality, where you have hundreds of thousands of people working in factories and on oil-and-gas platforms who are already wearing AR glasses today. And what's interesting is that when we talk to the actors in this field, like DigiLens, who we just announced a partnership with, they all mention that they're struggling with the current interfaces they have. They're struggling with voice and hands as the main inputs for people who are wearing gloves all day long and working in noisy environments. So what's great for us is that this is the proof we wanted that our technology is needed whenever it comes to interacting with AR, and we're targeting that industrial market first, because we know this is where our technology adds value for the users.
What do you think are the attributes these devices need to possess for that adoption and use to really take hold? How do you articulate that to yourselves internally?
Yeah, so maybe just one thing first: the end goal for us is that the device you'll be wearing that's equipped with Wisear technology will not be different from your regular AirPods. The sensors we're using are just on the outside, replacing the ear tip that goes inside your ear, and on the outside of the casing of the earphones. Now that we've said that, what this means is that the main constraints, and the main decision points for users, are whether the earphones are comfortable, whether the battery life holds up, and whether the controls we provide are as reliable as a button would be. Those are really the key adoption factors here. So they're not different, I would say, from regular earphones, except that now they're composed of Wisear components. For the comfort, we spent a lot of time trying every possible material on ourselves, and all the materials we're using are biocompatible. Technically, what we're doing is using what's called a conductive polymer. It's just plastic that conducts your internal bioelectrical activity and sends it to the chipset inside the device, but the feeling under your finger is exactly the same as the current ear tip you'd have on your earphones. So from that standpoint, we don't see or foresee any blocker. The second thing is battery life. Because we're adding more algorithms and more electronics inside the device, the battery life could be impacted. Again, this is where it's super important to have your algorithms super optimized, and today the impact we have on a regular battery is below 10%. And 10% is not something you would feel as a user, because it rarely happens (except for me, when I forget to recharge my charging case) that your earphones actually get down to the last 10% or 20% of battery, where our technology would make the difference. And the third part, as I said, is reliability. Again, this is our job to make work, but reliability is key here. What we know is that if we want to get to a point where users adopt our controls, the controls need to have above 95% accuracy, and they need to trigger in under 0.5 seconds. For the first generation of controls that we have, we're already there, and for the eye-tracking activity, we're working on getting there. So yes, those are the three key things we're working on to make sure that users will adopt our technology, and those are the technical blockers we need to reckon with.
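Those three thresholds amount to a release gate. Here is that gate written out, using the numbers from the conversation; the function and field names are hypothetical, not Wisear's API.

```python
# The adoption thresholds from the conversation, written as a release gate:
# >= 95% accuracy, <= 0.5 s trigger latency, <= 10% battery impact.
from dataclasses import dataclass

@dataclass
class ControlMetrics:
    accuracy: float        # fraction of gestures classified correctly
    latency_s: float       # gesture onset -> event delivered, seconds
    battery_impact: float  # fraction of earbud battery used by the stack

def ready_to_ship(m: ControlMetrics) -> bool:
    """Check a control against the adoption thresholds quoted above."""
    return m.accuracy >= 0.95 and m.latency_s <= 0.5 and m.battery_impact <= 0.10

# Example: a clicker as described, already past all three thresholds.
print(ready_to_ship(ControlMetrics(accuracy=0.97, latency_s=0.4, battery_impact=0.08)))
```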
So this speed of interpretation ends up being an important element that you have to vector towards, because a mouse click is pretty fast. The gesture recognition systems now (and Apple's done it better than others before, because they're pointing cameras down at your hands, which is more comfortable for sure) are still pretty fast, relatively. And so you've said half a second is the threshold for responsiveness that's necessary. Maybe you can drill a little bit deeper into why that's a challenge, and go a little deeper into the mechanics of how it's working: how you're interpreting muscle movement from the jaw, or eye movement from the eye, and how you need to process that in order to achieve those sorts of response times.
Sure thing. Okay, of course, I will only share what we can publicly share here, because a lot of it is our secret sauce, right? So the way to see it is that when you do a gesture, you're going to have these electrical particles moving from wherever your eyes are moving, or your face is moving, to your ears, where we're capturing them. That part is almost instant; it's almost the speed of light. And that's going to draw an electrical signal. I don't know if you've ever looked at an EEG or EMG or any electrical current: those lines you'd see being drawn, for instance, in a hospital movie or whatever. So we're capturing that biological activity, and within the signal being drawn, we're looking for specific patterns of activity. For instance, clenching your jaw twice is going to trigger two spikes that we're going to look at and try to understand. So you have that electrical current coming in, you look at what's called a time series, your signal being drawn, and the first thing you do is clean all the noise from that signal. We remove all the noise coming from the outlets and the electrical currents around you, from your laptop, from your movement, from anything else that would create noise in that signal. So there's a signal processing stack, and it takes some time to run; it also needs to look at a certain period of data to be able to clean the noise correctly. So you may have to wait, for instance, 0.2 or 0.3 seconds to look at the data and remove all the noise from it. Then, once the noise has been removed, you send that to your artificial intelligence algorithm, which processes that cleaned window of time and says: I think Jason just did a double jaw clench, so I should trigger a click; or, I think Jason didn't do anything, so I shouldn't do anything. That also takes some time. And then, once this is done, you have your prediction (click, or go right, or go left) and you send it to whatever device you're connected to, the AR glasses most likely. So this whole stack, from you doing the motion, to the signal being captured, to the signal processing being applied, to the AI algorithm being applied, to the result being sent via Bluetooth, has a given time that you need to compress. What you do then is work on each of these sub-components and try to optimize each one as best you can. For instance, on the signal processing, we recently patented a specific technique that reduces the time it takes us to do the signal processing by a factor of three. That's a big win for us, because this was one of the big parts of the stack, and it allows us to shave off almost 0.4 seconds. We've also looked into different ways of looking at the signal that reduce the window we need in order to actually interpret it. That's another thing we do, but it usually comes at a cost in terms of the performance of your algorithm. So you have to make trade-offs somewhere; you have to retrain.
So these are all the trade-offs that you play with, and that, in the end, enable you to have very decent and very fast controls.
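Summing the stages he describes against the half-second target gives a simple latency budget. The per-stage timings below are illustrative, not Wisear's real numbers; only the 0.5-second target and the 0.2-to-0.3-second window come from the conversation.

```python
# Illustrative latency budget for the pipeline described above: wait for a
# signal window, denoise it, classify it, then send the event over Bluetooth.
# Stage timings are assumptions; the 0.5 s target comes from the episode.
BUDGET_S = 0.5

stages = {
    "signal window (data you must wait for)": 0.25,
    "signal processing / denoising": 0.08,
    "model inference on-device": 0.05,
    "Bluetooth event to host device": 0.05,
}
total_s = sum(stages.values())
for name, seconds in stages.items():
    print(f"{name:40s} {seconds * 1000:4.0f} ms")
print(f"{'total':40s} {total_s * 1000:4.0f} ms (budget {BUDGET_S * 1000:.0f} ms)")
assert total_s <= BUDGET_S, "over the responsiveness budget -- optimize a stage"
```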
Is the same pipeline being applied to the eye-movement tracking, for example?
I think that's exactly the same. You're always looking at the same type of signal, which is an electrical signal. And then it's just a matter of: how do you find the patterns within the signal?
Yeah. How much variation is there between you and me and somebody else randomly picked off the street?
So actually, not that much; that's what's interesting. It's much less variation than what you would have with the voice, for voice recognition. Yet you still need to get a lot of data if you want to have a decent level of performance. To give you a rough example, I think right now we have more than half a million data points that we've collected in our office over the past three years, just to feed the facial muscle activity. And that's been one of our goals as well; one of the constraints, the things we're trying to get rid of as much as we can, is calibration. Why? Because calibration can kill user adoption. If you have to go through a five-minute tutorial calibration for your technology to work, it can really kill user adoption. So that's something we've tried to get rid of. So from one user to another, there is not that much difference, but if you want to have a good understanding of any user on the planet, you see, you need to collect data from a large number of people in order to get to an algorithm that predicts at 95% accuracy without any calibration. So yeah, we're all the same: all humans produce pretty much the same type of signal, no matter the gender or ethnicity. And we've been trying it on a quite diverse population; we make sure that our datasets are as inclusive as possible. We include people of different ages, different genders, different ethnicities. It's really core to our company to make sure that we're building the most inclusive controls on the market, you know?
Yeah, it's awesome. One of the challenges that every hardware technology company faces, and certainly, maybe pre-Apple-announcement, one in the space around augmented reality, is that raising money is challenging. Fundraising here in the hardware world is definitely hard. So as you were getting going, how did you initially approach the fundraising process at the start of the company?
That's the key question; it's always the startup game. And the more time I spend on Wisear, the more I realize that your job as a CEO is really to be constantly fundraising. The way we saw it is that we started with a first round of angels, mostly strategic angels. We really wanted to have with us, from the beginning of the journey, people who had either built hardware products or built licensing-business-model products, and who had been successful at it. So we raised a first 1.2 million euro (about 1.3 million dollar) pre-seed round last year with strategic angels. And what's great when you're based in France is that France is highly supportive of deep-tech companies; they have public funding schemes. That enabled us to add 2.4 million on top of the 1.2 that we raised, almost two for one. And that's what's brought us to a 14-person team, mostly composed of engineers, with a lot of experience from different fields, leading the hardware, the software, and the marketing and partnerships. That's also what enabled us to build this first generation of controls, and to build this first generation of demo kits, which is now completely wireless. That's great. And then you're like: okay, so I have my demo kit, I have my team, and I have companies that are actually eager to try my product. And then you realize, oh, this is where I actually have to invest even more, because I'm a hardware company. So that's where we are right now: we have companies like DigiLens asking us when we can release the product so that they can actually start selling it ASAP with their glasses. So we're in a situation where we know we have to speed up development, but we also have to fund the hardware, and that is not easy to do. So we're actually raising a new 5 million round right now, and the goal is to bring in investors who are used to helping deep-tech companies get out of the lab, start manufacturing, and start releasing the product. The context is what it is, but I do think that it's not a bad time to raise a seed round with a longer maturation time than most of the companies being judged on their business figures right now, mostly because, well, investors are going to invest in you right now, but they know the product will be released in 18 months. So you have time to get into a better economic context before you actually start selling or releasing your product, which is going to be good for whatever figures you want to show at that time. So yeah, funding is key. I think we haven't done too badly so far. We're in the process of raising a seed round, so ask me in a couple of months and I'll let you know, but hopefully we'll be done and on track to increase the size of the team and manufacture the product.
Awesome. Good luck with that for sure.
Thank you.
So as you prepare for that pitch, and as you talk about this path towards full commercialization: what are the biggest pieces remaining? What are the big milestones on that roadmap?
Yeah, totally, and that's a good point. There are, I would say, three things that we need to focus on. From a technical standpoint, we need to finish developing that eye-activity tracking, that eye navigation that we're providing; we need to bring it to the same level of performance that we have on the clicker, which is above 95 to 97% accuracy right now. So that's number one. That's the investment in the data science team, and we have a killer team right there. The second thing we need to do is finish the miniaturization and get ready for manufacturing: basically bring the prototype, the demo kit we have right now, through the EVT, DVT, and PVT phases, build the molds that are going to be required for mass manufacturing, manufacture them as soon as possible, and start selling them. So that's basically the whole manufacturing track, from "I have an idea" to "I have a product that people can just buy in the store and put in their ears, and it starts working." And this is non-trivial: it requires miniaturization, it requires finding the right partners for manufacturing, it requires handling the packaging and the shipping, all these constraints that you have to go through and work with the right people to actually implement. The third part is: well, you have your tech, you have your product that's ready; now, how do you distribute it? And this is where we need to keep signing these deals with all the industrial augmented reality players, and in the future with all the augmented reality headset makers, to make sure that when our earphones are on the market, or when earphones equipped with Wisear technology are on the market, they get distributed through their platforms. So those are the three things we're going to be really focusing on over the next 18 months: finishing the tech, miniaturizing the device, and signing the distribution deals with all the big industrial and AR device makers.
An ambitious plan; it sounds right. Signing those deals with the big makers, I think, is probably the biggest hurdle on that list.
When you're solving the right pain point, I think this is much easier. And we've seen this with DigiLens; we have very good discussions with two others right now. So, you know, we see where it's going. Right now, I think the main blocker for us is really getting the integration done, and they do see the value of what we're bringing.
Who are the ideal partners? Maybe you can't name the ones you're chatting with yet, but can you describe the characteristics of an ideal customer or partner?
Someone who has money to fund us and help us build the product! No, I think the notion of partner is broad in terms of what we're doing. Of course, the ones we're really focusing on right now are the AR headset makers: the ones who have a device right now that they're pushing to a specific population, who have spent so much time building the perfect headset, with the perfect optics, but who haven't had the time, the bandwidth, or the computing power in the headset itself to really handle the interface the way we would. These are really the dream partners for us, because we know we have a solution for them, and we want to be sold as an accessory, the human controller for their headsets. Those are the key companies we're trying to work with right now. But that's only one part of the equation. The two other parts of the equation are the technical partners, the manufacturing partners that are going to be building our device or integrating our technology. Again, we're not trying to be a hardware company in the long run. The model that we really like is Ultraleap or Tobii, where they started by building their own hardware, and then progressively they started licensing the technology to other device makers who integrate it into their hardware. And that's really what we have in mind right now: Wisear is going to build a first generation of earphones to showcase what the North Star could be with a neural interface, but in the longer term, we think neural interfaces will be everywhere, and we will be licensing our technology to every device maker. So finding the right manufacturing partners, ODMs or OEMs, is really key here, to make sure they're manufacturing devices with your tech. And the third type of partner, which is, I would say, not as urgent when you're a young startup, but which is the right partner if you want to scale, is the ones building the chipsets that are used in every one of these devices, the Qualcomms of this world. What you want here is that if your library, your technology, is embedded by default on these platforms, then it's even easier to be integrated into a larger number of devices tomorrow. So, you know, it's just going one level deeper: instead of just working with the brands, you work with the component makers, and you make sure that whatever you're building is part of their default ecosystem, so that whenever someone puts the chip in, they can benefit from the Wisear technology.
When you reflect back on your own journey to where you are today: you were working in France, and you initially came to San Francisco through your work with the advertising technology company. So this journey maybe hasn't been an obvious one, and your own background isn't in neurotech, although you were inspired, as you've noted, by the work of Neuralink. How did this background of yours prepare you to become the co-founder and CEO of a neural interface company?
I think that's a great question. When we talk about deep tech companies, we need to separate two things, right? There is the deep part, and there is the tech part. The deep part is the comprehension and understanding of how the underlying technology works, and I have the best co-founder for that. He's the smartest person I've ever worked with; he's brilliant. He has a background in neurotechnology, he knows a lot about AI, and he was leading a data science team in our previous company. So he's the one who really has the deep technical knowledge, the understanding of how the underlying technology works, how to optimize it and make it better, and how to crack all the technical problems we have to crack. But this is only one part of the equation, right? If you have the greatest tech on the market but you don't know how to package it into a product and sell it, you're nowhere. And that's the part I've been working on for the past eight years: building AI-based products and selling them to so many clients, generating so much revenue, that it's a success in the end. In my previous company I was a product manager; my co-founder and I worked together on two products, and we scaled them to around 1,000 clients generating 100 million US dollars in revenue per year. So my background is really the one that takes a very complex technology and transforms it into a product that can then be sold at scale to many, many clients. Once you're past the technical comprehension and have cracked all the technical blocks, this part is very similar, whether it's ad tech, which is one of the most mature markets today, or any other market you'd like to push in the same direction.
Yeah, that makes perfect sense. The skill set you had built up translates directly to the problem set that needs to be solved within your co-founder duo. You've been on this journey now for almost four years, with 14 people on the team and interesting, challenging moments here and there. What's been the biggest surprise or learning for you in this role of CEO?
Interesting. Well, first of all, on the positive side, I would say that I'm amazed, still amazed, more amazed every day, by the involvement you get from people at the earliest stage of a company. The company we worked for before was more than 3,000 people, and when you're 3,000, you don't get the same level of commitment or the same level of responsibility in the overall project of the company. That's why you see, in larger organizations like Google or Meta, people quiet quitting, where they don't really work anymore, or put in a very limited number of hours, and don't really feel committed to the project the company is carrying. I'm really amazed by the 14-person team we have and the fact that they will always go the extra mile. Even though we're the founders, the ones who came up with the idea, you feel that this is their company too. For me, having that kind of energy really powers me through the day, even when I have a lot of things to do. So on the positive side, the learning is that you can have a team of highly motivated people who are not just there for the money, who really carry the project, even up to the point where you're 15. I hope we'll be able to carry that to 40 people, though I know that at some point the company gets bigger and bigger and it becomes hard to keep. The second learning, which is more on the downside, is that there are so few hours in the day. You have a never-ending to-do list, and that never happened to me in the past. I used to be able to really close out my work: I could go home and not think about it anymore. Now you go home having only done the first three items you wanted to do, and you keep thinking about the rest. So you have to rework your to-do list every day, removing or dropping items, because everything on it would each take a full day's work. You need to be super focused to be super productive, and that's your daily life. It's frustrating, because in a way you never have the time to do everything you want to do, so you always feel like you're lagging behind or have stuff to catch up on. That's the second thing I discovered: you always have too much work, too much stuff to do, and you really need to prioritize, but that's frustrating in a way.
Yeah, for sure. Let's close with a few lightning round questions here. Sure. What commonly held belief about this AR/VR spatial computing market do you disagree with?
I think the first one is going to be a common one: I don't think the fully immersive metaverse is the right way to go for XR or augmented reality, the idea that first there will be great Ready Player One-style video game experiences and then more and more people will join in. Immersive VR will be great for gaming, and maybe media or some specific experiences, but it's not going to be the mass consumer shift we're all expecting. The second one is maybe newer, or more Wisear-centric: I think the future of spatial computing will be distributed computing. What I mean by that, and it's related to what we were talking about earlier, is that I don't think your glasses will contain every sensor and all the computing power you need to run spatial computing. Some parts may be devolved to your earphones for the controls, to your smartwatch, or to the smartphone in your pocket for some asynchronous AI processing, because your AR glasses won't have everything. I truly believe that the approach where everything has to fit into one headset is not the right one if you want AR to become mainstream, and distributed is the right way to go here.
Personally, I agree 100% with both of those.
Thank you so much, I appreciate it.
Besides the one you're building, what tool or service do you wish existed in this market?
Right now? To be fair, the one thing I'm really looking forward to is a very simple pair of AR glasses that would just give me GPS directions while I'm riding my scooter or my bike. That might sound trivial, but I haven't found any that really work for me. I'm in Paris right now, actually, doing this podcast, and I was on a bike yesterday to get to a conference I had to speak at, Viva Tech, where we were speaking with Huawei, who we have a partnership with. It was a 30-minute bike ride, and at every stop you would see people pulling out their phone, checking where they should go, and putting it back in their pocket. And I'm thinking, this is dumb, this shouldn't happen. You should have a way to see that in front of your eyes and interact with it very simply, with your eye activity, any kind of muscular activity, really any way to do it, and just have the information straight in front of you. It would be far less dangerous as well. For me, that's the killer use case I would die for right now; if it existed, I would definitely use it. And then if we go crazy, anything around gaming, any immersive experience, I'm open. But in the short term it would be this one.
Yep. Navigation in a basic pair of glasses for when you're in motion, when your hands are occupied by the scooter or bicycle handlebars, or even a car's steering wheel. What book have you read recently that you found to be deeply insightful or profound?
Funnily enough, the one I've been reading is from an author who is Canadian and Japanese. She writes these little, I don't know how to describe them, very simple stories that are nonetheless very poetic, and you do find yourself in the story. When I get home at night I'm brain dead, so I need to read something that's not too hard, not too complicated. She writes stories about families in Japan; one of the latest I read is called Tsubaki. They're very simple stories, but very poetic, and you find yourself in the struggles and the emotions the characters are living through. It makes me reflect on myself, and it also makes me more peaceful at night. So that's Tsubaki, and the author's name is Aki Shimazaki. Really great books, a very simple thing I found in a bookstore one day, and I just kept reading all of her books. It makes you more human, I guess, or less absorbed in tech; it brings you back to earth.
Yeah, awesome. If you could sit down and have coffee with your 25-year-old self, what advice would you share with 25-year-old Yacine?
That wasn't that long ago, right? If I could do that, maybe the biggest advice I would give him is: if you hear the word Tesla or Ethereum, just buy. Just buy, and that's it. Apart from that, 25 was slightly before we started the company. If you had told me when I started Wisear, almost four years ago now, that it would take me five years to get a first product on the market, I think I would not have gotten into it. I would have said this is too long, I'm too impatient for that. Looking backwards, I think this is definitely the right way to go. We're having so much fun, right? The joy is in trying all these prototypes, cracking new stuff, really solving problems that have never been solved before; that's where the fun of the job comes from. So if I were to talk to the 25-year-old me, I would just tell him: hey man, it's okay, just be patient. Have fun building new, cool stuff that no one else has done, keep tackling new challenges, and don't just go for the easy solution, which for us would have been building an ad tech business, something we almost did when we were thinking about creating a new company. Go for something you think has true value and that you can really enjoy yourself in. And that's where we are right now: we're four years in, it's going to take us another year and a half, and I'm totally fine with that.
Amazing. Any closing thoughts you'd like to share?
We were at AWE two weeks ago, I think that's when we met, right before the big Apple announcement, and I came away with a very good feeling. Everyone was buzzing about it; everyone was saying this is the end of the metaverse winter, or the XR winter, whatever the name is. And I really liked that excitement around what's going to happen in the coming years, the sense that maybe we're finally cracking it, that we're at the door of making AR and VR mainstream for everyone. It was really a good feeling when I left the conference. I'm really excited about what's coming in the next two years, really excited to get my hands on the Apple Vision Pro headset, whatever the final name is, and to see all the subsequent devices the other companies on the market are going to pull out. I think it's going to be really cool. So yeah, that would be my closing thought: let's get excited for the next two years.
Yeah, one of my favorite aspects of going to the AWE conference, besides seeing all of these great friends I've made throughout the industry and getting a sense of what's coming with all the different bits of technology, which is incredible, is that I feel like I come out fully re-infused for the next year. It fills me up enough to make it to the next AWE conference and do all the things in between, because it's a bit of a grind between the conferences year to year. Having that enthusiasm, being with all of the folks who are super passionate about the industry, and, as you noted, having Apple re-infuse the whole industry with a sense of hope, enthusiasm, and even inevitability, is really rejuvenating for sure.
I agree, 100%.
Where would people go to learn more about you and your efforts there at Wisear?
We have an amazing team doing marketing, and the best way is to follow Wisear on LinkedIn. That's where we share all of our information, our deep dives, and the thoughts we're sharing about the industry. And if you want to reach out to me, please do, anytime. It's yacine, Y-A-C-I-N-E, at wisear, W-I-S-E-A-R, dot io. So please reach out if you have any kind of cool ideas, if you're looking at investments in neural interfaces, or even if you're making AR headsets and want to chat about how we could partner up. I'm always happy to have a chat.
Fantastic. Yacine, thanks so much for the conversation.
Jason, thank you so much for inviting me, I appreciate it.
Before you go, I want to mention that the next few episodes will be replays of some of the great episodes of the past. We'll be back with fresh new interviews later in the fall. Enjoy the rest of the summer, and thanks for listening.