All right, so good morning or good night, everyone. We are starting a very exciting episode of Everything Everywhere XR. Today we have Olivier Plante and Otto Pentikäinen, both connecting from Europe at this moment, so I had to wake up a little early, but I really wanted to spend some time with them because they're doing amazing stuff. If you're connecting for the first time with this podcast, or you're watching it on my channel, my Vimeo, or the AI AR VR XR podcast, or this Discord space: we do this once or twice a month, very much for educational purposes. I really don't get paid for doing this, and it's not connected at all with Magic Leap; it's really for me to keep in touch with the things that are happening around us, to talk with amazing people who are doing amazing things, know their stories, and possibly connect more people so you can interact with them too. You can leave questions in the comments; I will pick them up, and if there is no time to address them now I will put them in the Discord. So without further ado, let's go ahead and start with the introductions. This is mostly how I start every episode of this series. I think it's very important that we know the stories of the speakers, how they got here, why they're so passionate about the subject of XR, what is happening in the field, and whatever background stories you have. Please go ahead and introduce yourselves. We can start with Otto and then Olivier.
Alright, thanks Alessio, and hello, everyone. Good morning, good evening, wherever you're calling in from. I'm Otto and I'm currently based in Helsinki, Finland. I work at Doublepoint, which is a touch interface company. We basically do gesture detection on smartwatches: when I tap my fingers together, you can see the watch face flash. We've created an algorithm that detects touch from off-the-shelf smartwatch sensors. How we got started: our CTO, Jamin, is a classically trained pianist, so finger movements have been in his DNA ever since he was a kid, and he had this tendency to subconsciously type his thoughts out. He'd be looking at something visually appealing, think "that was amazing," and type it out subconsciously, using the ten-finger system to type his thoughts into his brain, so to speak. It was only when someone pointed that out to him that he realized he does it. So our company got started when Jamin wanted to make a wristband that could detect these finger movements. We started working on the project basically when COVID hit; we were doing some other projects, COVID cancelled all of that, and we got started on this new one because we were basically bored in our own apartments. We were building a wristband that was, let's say, more of a CTRL-labs alternative: rather than being based on EMG, it was based on optical sensors, so it was actually detecting tendon movement rather than muscle activation. How we got into XR was that we showed this wristband to a couple of people. At first we were selling it to typists and translators, or to journalists, someone who needed to type quickly on the go. But then people from Silicon Valley were saying, hey, here's this augmented reality thing, and the input challenge is still unsolved. Of course we knew about computer vision, but we realized that touch is actually difficult to detect with computer vision to the degree that we need, especially on a lightweight augmented reality headset. If you've seen the Vision Pro keynote, you know it can detect gestures, but all of those gestures are detectable only when you have something like twelve cameras on the headset; I think six or seven of them are outward facing, correct me if I'm wrong, but it's in that range. So lightweight augmented reality is what we were targeting, and that's what we've evolved to work on; we've arrived at a product that we sell for lightweight augmented reality, and that's the journey we're on at the moment. We're currently a team of 13, mostly based in Helsinki, with some people across Europe. And of course I want to address the elephant in the room: if you saw Apple's Double Tap, that is a first validation of this wrist-based input being used in a consumer electronics device. So we're at a very interesting moment, where wristbands are starting to be somewhat mainstream already, which is really weird, because this time last week they were not mainstream yet.
So it's a very interesting time for us.
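To make the idea of detecting touch from off-the-shelf smartwatch sensors concrete, here is a minimal sketch of the shape of the problem: a stream of IMU samples in, discrete tap events out. This is purely an illustration assuming a naive jerk-threshold detector; Doublepoint's production system is a learned model, and none of the names or thresholds below come from it.

```python
# Illustrative only: a naive tap detector over smartwatch accelerometer data.
# A real detector is a trained classifier; this just shows input -> tap events.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class ImuSample:
    t: float      # timestamp, seconds
    ax: float     # acceleration components (assumed units)
    ay: float
    az: float

def detect_taps(samples: Iterable[ImuSample],
                threshold: float = 3.0,      # jerk threshold, assumed value
                refractory: float = 0.15) -> List[float]:
    """Return timestamps of detected taps using a jerk-magnitude threshold."""
    taps, prev, last_tap = [], None, float("-inf")
    for s in samples:
        if prev is not None:
            dt = max(s.t - prev.t, 1e-6)
            jerk = (abs(s.ax - prev.ax) + abs(s.ay - prev.ay) + abs(s.az - prev.az)) / dt
            # A sharp spike in jerk, outside the refractory window, counts as a tap.
            if jerk > threshold and (s.t - last_tap) > refractory:
                taps.append(s.t)
                last_tap = s.t
        prev = s
    return taps
```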
Amazing. Yeah, I was about to say, I think the most important thing has been the validation. Apple validated a lot of concepts that a lot of people worked on for a very long time before them; Apple probably worked on it for a very long time too and never revealed it. I feel like the switch between movement detection and muscle activation is the new input paradigm here, one that could lead to very lightweight headsets, or devices that are deeply integrated into our daily lives. So very, very exciting. I know roughly how it works, but after Olivier introduces himself and his mission, maybe we can dig into the details of how you retrieve something like muscle activation in the input system you built, and how you came up with it. So thanks so much for being here. Please, Olivier, go ahead and introduce yourself too.
Yeah, sure. Thanks, Alessio, for receiving Fleksy, myself and the whole team, obviously. I'll tell a short story of my life. I'm a French Canadian, born and raised in Montreal, now living between Barcelona and London. I've always had this entrepreneurial mindset, this inventor mind questioning the status quo, and I think it shows up in the work we do at Fleksy. We started the company with my two other co-founders in 2015, though the project started a bit before that, with a vision of creating the future of text input. It soon became quite a big deal after we pitched what we had in mind for the future of keyboards on mobile smartphones at Mobile World Congress; we won an award there and a couple of innovation prizes. Where we come from, and I think it's interesting for people to understand, is what I call the B2C era: we were really consumer focused, the fastest text input in the world on mobile smartphones, with millions of users. But in the past couple of years we kept receiving requests from other companies wanting to tap into our IP. Our expertise has always been around text input algorithms: swipe typing, languages, layouts, next-word prediction, and so on. Everyone uses a virtual keyboard every day on their smartphone, no? So that started the B2B era, where we were able to provide our SDKs to other companies building products around text input, and since then we've been solely focused on B2B, helping other businesses with their virtual keyboard needs. I'll spare you the details, but I'll mention two examples. One is that the virtual keyboard as we know it today, on your touch screen, is capable of harnessing a lot of different types of data, and there are mental health care companies using our SDK to understand mental and neurological conditions, basically helping detect signs of Parkinson's or Alzheimer's years before it's too late, from how people type, not what they type, just how. So that's one area where we have several clients in health care, helping them for the benefit of humanity. The other example is augmented reality, and this is the topic of today. For us augmented reality started about a year ago as upstream innovation, part of our laboratory experiments, and what amazed us is how the virtual keyboard and voice will live together in harmony as the input methods for augmented reality devices, and virtual reality as well. So we really want to solve the problem that typing in those virtual environments today is still difficult; I'm not going to say the bad word about it, but it's really hard to type on. This is our expertise, so we said, well, our engine is perfectly suited for that, let's try to solve it. Our team is like-minded, awesome people; I'm very glad about our team and very proud of them.
We're fully remote, around 15 in the team, and we focus on helping other businesses provide a suitable and delightful typing experience for their product, which has always been the challenge for everyone trying to build their own virtual keyboard.
Amazing. I just want to make sure: both of you have fully remote companies?
On my side we're hybrid: we have some hardware operations in Helsinki, but other than that we're fully remote.
Yeah, so the topic of the keyboard is one I've always felt very close to, because I worked on a virtual keyboard for almost a year straight and gave a presentation on it about a year ago. I think here we have two of the big topics: this kind of muscle activation, which seems to be the new paradigm for XR, and keyboard interaction, which in my opinion is one of the most challenging problems in XR. If that use case is unlocked, I feel like all the other ones follow, especially combined with hand tracking and things like that. I don't know if there will be some muscle-activation cursor, say, where the watch helps track the hand for typing on a keyboard; these kinds of multimodal things are coming up and maturing over time, and that could be the way things are done in the future. Maybe not everyone is familiar with the topic, but the reason it's so hard to type on keyboards in XR is that you have a surface you need to hit with your finger, and that surface isn't always as small as the keys we're used to typing on every day. The sensation and satisfaction of typing on a real keyboard is very different from typing on nothing, and that almost-haptic feedback transferred by the computer, or by whatever you're touching, is very hard to replicate in XR. At least for me, that's one of the main blockers. I've seen demos from multiple companies that try to transfer everything onto a desk, so you can type on the desk, literally on the table. I've also seen solutions where headsets treat the laptop as a workspace configuration and just show you the keyboard, so you can type on your normal keyboard while everything else is mixed reality. There are a lot of things coming, and it feels like a transitional moment to me, because there is no single approach everyone has settled on; it's very interesting, growing, and constantly evolving. So let's go into a deeper understanding of both technologies. I'll try to split the time equally between you, and thanks again for being here and narrating your stories, it's super valuable. Let's start with muscle activation versus movement detection as input. Otto, you said there is basically pinch detection on the wristband, but does it do anything else? For example, with a phone we can detect direction, up or down, small movements, accelerometer data, things like that. What are all the features of a wristband like the one you built, or the ones you want to build in the future, features you're envisioning that aren't there yet?
Yeah, absolutely. Our algorithm has been built to know where and when you touch, and in a sense it's similar to a touchscreen, because a touchscreen also knows where and when you touch. Of course a lot of work goes into the different environments and the different surfaces you might be touching; we can, for example, distinguish between touching the tip of your finger and touching, say, your own hand. But it's very difficult with the watch alone, because it has no knowledge of absolute space, only relative space, so to have different buttons on your hand, for example, you might have to use other visual cues, for example from the headset, to build many buttons. Currently we're able to do that one button very well, and that actually takes you quite far as a product. We found it's like the mouse: basically a cursor and a selection mechanism, but incredibly powerful. There's the right click, but this can be a different click than that, and your hand turned this way can be another click than that way, and that's easy to detect with the gyroscopes on the watch. In terms of where we want to take this: currently we're building feature parity with hand tracking. You can do taps, double taps, pinch and hold, zooming back and forth, scrolling back and forth, drag and drop. But what we see with computer-vision-based interaction is that it's pretty strenuous; at some point it gets quite uncomfortable on your wrist. So what we'd like to do is provide the same interactions without the need to move your wrist. We're used to controlling the smartphone in a very comfortable way, with tiny movements rather than large ones, and that's another reason we believe wristbands will be needed: to detect these more micro movements, what we call micro gestures. The way we think about them, it's like using your phone but without the touchscreen itself: swiping like that, or single-handed, swiping against your finger rather than against the touchscreen. Micro navigation is where we think things will go, and our IP is how to make that happen in a robust and usable way. We think it's going to be weird if you're on the subway or in the theater and you still have to flick your wrist every time you want to scroll your interface, especially in a culture like Finland where we're quite discreet. It's also simply more comfortable to do these gestures without large movements. And of course this is all on the continuum towards BCI, where ideally you wouldn't have to move anything at all, just think; it's on that continuum.
And it's all progressing, not necessarily linearly, but generally in that direction. From a top-down point of view, we think a lot of devices will be controlled with gestures. You have the hand tracking algorithms you can use to control extended reality; now you have Double Tap, which you can use to control the wearable, the smartwatch, with single-handed gestures; but we think this can also be used to control other things, for example my IoT home. You could point at your light and turn it on, point at your speaker and turn up the volume, point at your TV and select another channel. So we have this vision of a universal controller, and of course you're not going to have cameras in every single situation all the time, so you need something wearable, something on your wrist. That's the direction we're heading, and our technology, that single button, takes us pretty far: just knowing where and when that single button is activated, and for how long. Then of course there are other possibilities. If we want to talk about typing: would you want to actually recreate the keyboard on your table? If we were able to distinguish between different fingers, do finger classification, and then detect touch events, you could recreate the keyboard on your table. That's more difficult to do, but it's absolutely possible. Now, given the early investigations we did with these kinds of text input methods on a table, we found that for most users it's very hard to learn. So if you want anything that resembles a keyboard without actually having a keyboard, you're probably going to be dealing with steep adoption curves. What we've found, at least in our investigations so far, is that point-and-select is the most intuitive way; on a smartphone it's basically point and select on the keyboard. But yeah, that's our thinking.
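As a rough illustration of the "mouse-like" framing above, here is a minimal sketch of how tap events plus wrist rotation from a gyroscope might be turned into cursor-and-click input. The event names, fields, and thresholds are hypothetical; this is not Doublepoint's SDK, just a sketch of the pattern.

```python
# Hypothetical mapping of wristband events to mouse-like pointer input.
# Event names and fields are illustrative; they are not a real API.
from dataclasses import dataclass
from typing import List

@dataclass
class WristEvent:
    kind: str          # "tap_down", "tap_up", or "gyro"
    t: float           # seconds
    yaw: float = 0.0   # wrist rotation deltas, radians
    pitch: float = 0.0

class PointerMapper:
    """Turns taps into clicks and wrist rotation into relative cursor motion."""
    DOUBLE_TAP_WINDOW = 0.3   # seconds between taps to count as a double click
    HOLD_THRESHOLD = 0.5      # seconds held down to count as pinch-and-hold

    def __init__(self, sensitivity: float = 800.0):
        self.sensitivity = sensitivity   # pixels per radian, assumed value
        self._last_tap_up = float("-inf")
        self._tap_down_at = None

    def handle(self, ev: WristEvent) -> List[str]:
        out = []
        if ev.kind == "gyro":
            # Relative motion, like a mouse: no absolute position is needed.
            out.append(f"move dx={ev.yaw * self.sensitivity:.0f} "
                       f"dy={-ev.pitch * self.sensitivity:.0f}")
        elif ev.kind == "tap_down":
            self._tap_down_at = ev.t
        elif ev.kind == "tap_up" and self._tap_down_at is not None:
            held = ev.t - self._tap_down_at
            if held >= self.HOLD_THRESHOLD:
                out.append("hold_release")   # pinch-and-hold released
            elif ev.t - self._last_tap_up <= self.DOUBLE_TAP_WINDOW:
                out.append("double_click")
            else:
                out.append("click")
            self._last_tap_up = ev.t
            self._tap_down_at = None
        return out
```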
That's amazing, thank you so much for the overview. You mentioned two things that match ideas I have in XR, and that also extend outside the field: micro gestures and the universal controller. I can see these two being almost Apple-like pitches for new products, and when I say that I don't just mean the company itself; I mean products that are very universal and tailored to everyone, which carries a very different expectation for how influential the product can be. I totally agree with you on micro gestures, because from day one of hand gestures no one has really felt comfortable doing those things in public. You go to work, maybe a WeWork, and you're working on XR stuff, waving your hands at people, and no one really wants to do that, let's be honest. Some headsets have tried to increase their peripheral view just to capture hand inputs that happen lower down, where you can gesture comfortably; but with a wristband, since you're not constantly watching your wrist, that might be the right way to go for something like that. You also mentioned that a lot of people don't feel natural typing on something that is not a keyboard, so I want to pass the ball to Olivier: tell me how you feel about this, and start from that comment to make your case. How did you transition? I assume Fleksy was on a touch screen before, or a real keyboard, and then you moved to XR. How can you make someone who types on your keyboard in XR feel exactly like on a normal keyboard?
Yeah, I'll start by bouncing off something Otto commented on, and contrast it with what we see from our side. I think we need to put things in context. Before the digital era, people were typing on typewriters, on a QWERTY keyboard with very deep travel; you would press very hard on a letter so it actually printed on the paper. Take that as an anchor. Then came the mobile communication boom, with BlackBerry and their email app, and people started to type on very tiny keyboards; before that, T9, obviously. Those tiny keyboards still had some travel and a sort of haptic feedback, but it was a very different experience: it felt like I now have a typewriter in my pocket, and I can type on it with very little travel but still a very satisfying clicking sound. Further down the line, when the iPhone moment came, there was a lot of backlash: no, I need the physical travel on my keyboard. The BlackBerry lovers were shocked; the press was very binary, day and night, some loved it and some said it would never succeed. We started in that moment of the touch screen, and we broke speed records; as a product it was a very fast typing experience, and we had gestures that were very innovative at the time, like swiping to delete. So, to connect this to virtual keyboard typing when you don't have any feedback: what I always say is that you have to imagine this as the next step, and there is going to be hate and love about typing in XR. But I do believe, and I'd love to work with Otto and the Doublepoint folks to make this a reality, that the question is how precise we can get and how magical it can be to type on a virtual keyboard with no feedback at all. Apple coming into the market with visionOS was an aha moment: you really saw attention paid to virtual keyboard typing, and that attention and focus was proof for us that we're going in the right direction by bringing our technology from mobile touch, with physical feedback, to the virtual world where there's none. And actually I would challenge "no feedback" a bit: with hardware devices, wristbands, something is cooking in terms of having that type of feedback, and there's going to be a merging point where these technologies meet, and then it will be just awesome. We'll look back at carrying black boxes in our pockets on the bus and the metro and say, well, that was kind of boring, because our reality will be augmented, that's 100% sure; it's just a question of when, and we're now seeing the development platforms. So that was me bouncing off those comments on virtual keyboards and typing, and obviously it's normal that people aren't sold on it yet; we just need to work toward that vision, because I think it makes total sense. Now, in terms of text input, what we see on our side is that augmented reality per se will be much stronger for virtual text input than virtual reality as we know it. The Meta example was a good one: you wear the glasses, the goggles, you type on any surface, and it types really fast. That's great.
But I'm more imagining use cases where, in the future, the devices we wear on our heads get smaller and smaller, evolving the way the smartphone did from a very bulky phone all the way to very thin devices, because there's going to be a big market for that. What we do on our side is focus on merging the signals from hand tracking, and I say hand tracking versus controllers, but it could be any type of input; even Otto's wristband could be the sensor feeding our algorithm, telling it where the finger is spatially at that moment, where the hand is at that moment, to position the cursor. So we depend a lot on the advancement of the sensors, the hardware and the cameras; but the magic happens behind the scenes, where our algorithm understands which word should be autocorrected at that point, depending on the actual placement and the curve of, for example, swipe typing. We are much stronger believers in swipe typing in augmented reality, because it's really fast to type and it involves less shoulder movement; ergonomically it makes more sense if you can track the finger properly, it's micro interactions, it's less tiring, and I could do it on my lap, on a desk, on a surface, or without any surface. Essentially, for us, it's all about the algorithm quality: sending back the autocorrected word and the next word that comes after, while also understanding where the finger is at a given point on top of the letters of the keypad layout. And then the elephant in the room is the Z axis: when do you say that now the letter Q is being typed, or the letter H, in that spatial environment of X, Y, Z? So yeah, that's what I would say on my side.
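To make the swipe-typing idea concrete, here is a toy sketch of how a path traced over a virtual keyboard might be scored against candidate words: each candidate is penalized by how far its letters' key centers lie from the swiped path. The layout, dictionary, and scoring are deliberately minimal assumptions for illustration, not Fleksy's engine, which also weighs a language model and much more.

```python
# Toy illustration of swipe-path decoding: rank dictionary words by how well
# the fingertip's path fits them. Not a real text engine.
import math

# Hypothetical key centers on a normalized layout (x, y), a small subset only.
KEYS = {
    "h": (0.55, 0.5), "e": (0.25, 0.0), "l": (0.85, 0.5), "o": (0.85, 0.0),
    "c": (0.35, 1.0), "a": (0.05, 0.5), "t": (0.45, 0.0),
}

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def word_cost(path, word):
    """Average distance from each letter's key center to its closest point on
    the swiped path: lower means the path fits the word better."""
    total = 0.0
    for ch in word:
        key = KEYS.get(ch)
        if key is None:
            return float("inf")          # letter outside our toy layout
        total += min(_dist(key, p) for p in path)
    return total / len(word)

def decode(path, dictionary):
    """Rank candidate words; a real engine also weighs a language model."""
    return sorted(dictionary, key=lambda w: word_cost(path, w))

# A swipe that roughly passes h -> e -> l -> o should rank "hello" first.
swipe = [(0.55, 0.5), (0.4, 0.25), (0.25, 0.0), (0.5, 0.25), (0.85, 0.5), (0.85, 0.0)]
print(decode(swipe, ["hello", "cat", "tea"]))
```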
Yeah, you mentioned that your strength is in predicting what the user is typing based on the overall movement of the hands, not exactly what they're typing, partly because of that depth challenge in every headset. But look at, for example, large language models, which are the new thing. I don't want to just call them text prediction, because there's way more black-box magic happening in the background, but at the same time they are kind of predicting what you ask for and what you would like to receive; there's a sort of prediction that goes way beyond the text prediction we were used to on keyboards. Maybe you get so much data from the movements we make, because everyone types differently, and you learn so much about that person that you could type something by waving your hands only very minimally over the keyboard. I feel like that would be the unexpected moment, a bit like a more advanced swipe, as you mentioned. From what I understand, the reason swipe is so powerful is that you're not focusing on one single letter; you're focusing on a sort of field around that letter, so it's an exercise in probability, which is exactly what an LLM does in a way. So in that sense I can really see things evolving in that direction, toward something very advanced; maybe the keyboard doesn't even look like the keyboard we're used to, maybe you see some reference of letters but you won't need them all. I know I'm really thinking ahead now, but it's hard to think far ahead, because things change so quickly. Who knew the touchscreen was coming for the smartphone? Maybe whoever was really exposed to that technology understood, but no one else expected it, and there was a lot of backlash; it usually starts like that when something is so innovative, everyone hates it at first and then it just takes over. So I really liked that. Now, you both probably work a lot with XR suites, for example OpenXR or Unity. Take something like the XR Interaction Toolkit, a very standard toolkit where you can swipe, type, move, and you have ray control. How does your technology embrace these standards? Do they embrace you? Would you like them to? I assume you have SDKs, I assume Unity; if you can talk more about those things, maybe we start with Otto and then Olivier again.
Yeah, absolutely. First of all, I wanted to touch on your previous point. I don't think we're in any disagreement on the need for innovation in text input. What I was meaning to say is that in the very first text input demo we had, the user had to envision the keyboard in their head: they had no visual feedback on what letter they were pressing, so they had to guess, holding it all in their head. It was based on multi-finger classification, just on a table, without any visual feedback on what letter was pressed with each finger. So my point, sorry for the...
Sorry, Otto, was it like that wristband, that sort of glove company with zero layout that uses combinations, tap anywhere, Tap I think their name is? Are you referring to that, where the user wasn't seeing any layout at all?
Yeah, it was similar, but it was based on a ten-finger system where each finger corresponded to a certain area of the keyboard that the user had to visualize themselves. So it wasn't double tap for B and triple tap for C, nothing like that; it was more that you had to type the same way you would type on a keyboard with the ten-finger system. My original point was that we ran a couple of these trials and innovations on what super futuristic text input could be, like ten-finger typing on your lap, for example, without even having a headset on, and we found that the user needs more feedback, at least on what they're trying to select. Sorry for the confusion. In relation to interfacing with standards: our product basically works like this. First, we have the free SDK we've built for Android watches, basically an app you download onto your Android watch, and you get to try our gesture detection capabilities on off-the-shelf Android watches. Then we provide a Unity SDK, and some others like Python and JavaScript, so you can build, for example, an XR demo, or with Python an IoT demo, or with JavaScript maybe a web demo if you want. We also provide other integrations, such as Android native for B2B clients, so if they want to interface with their Android headset directly without Unity, that's possible. We've definitely looked into OpenXR, but OpenXR has support for hand tracking, support for eye tracking, support for controllers, and it doesn't have support for wristbands. So either we would need to act like a mouse, or act like a controller, or act like hand tracking, but it's not really the same thing, because there's a bunch of things we could add beyond that. So we've looked at OpenXR and asked, do we start building that API ourselves? For now we've decided not to, but I do think standardization generally should happen; OpenXR has a long list of companies working on it. And on the way towards making wristbands a standard, from the vision we have, it shouldn't matter where that tap comes from. You should just get it somehow: it can come from computer vision, it can come from a wristband, but ideally that would be standardized, so that any OEM or any stakeholder tapping into that API can access the tap either from computer vision or from a wristband. That's definitely something we think should happen, and will probably also happen to an extent. But currently, most developers working with our SDK basically need to evaluate it against other input methods, such as hand tracking, a button on the headset, or a handheld controller; that's a different kind of work, and usually they don't need OpenXR for that, but rather something like Android native. That's my two cents on developer integration. Olivier, go ahead.
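The standardization point Otto makes, that the application shouldn't care whether a tap came from computer vision or a wristband, can be sketched as a tiny source-agnostic event bus. Everything here is a hypothetical illustration; it is not OpenXR, nor Doublepoint's SDK.

```python
# Illustration of source-agnostic input: the app consumes a generic "tap"
# event regardless of which sensor produced it. Hypothetical API, not real.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TapEvent:
    t: float
    source: str        # e.g. "computer_vision" or "wristband"
    duration: float    # how long the tap was held, seconds

class TapBus:
    """Routes tap events from any provider to any consumer."""
    def __init__(self):
        self._listeners: List[Callable[[TapEvent], None]] = []

    def subscribe(self, listener: Callable[[TapEvent], None]) -> None:
        self._listeners.append(listener)

    def publish(self, event: TapEvent) -> None:
        for listener in self._listeners:
            listener(event)

# The UI layer only ever sees TapEvent, never the sensor behind it.
bus = TapBus()
bus.subscribe(lambda e: print(f"select! (from {e.source}, held {e.duration:.2f}s)"))

# Either provider can feed the same bus:
bus.publish(TapEvent(t=10.0, source="wristband", duration=0.08))
bus.publish(TapEvent(t=12.5, source="computer_vision", duration=0.11))
```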
Yeah, to your question, I think it's a good one. We need standards at some point, and they need to be there. As far as our side goes, OpenXR has everything that's required for what we need in terms of text input. At the present moment our Unity SDK is OpenXR compatible; companies use it right now to integrate the text engine component into their UI. So they have a user interface of some sort, and they integrate and call our engine with the positions of the letters, and then we can understand where the finger is, when there's actually a tap, and also predict the next word. It's really a text processing algorithm available for any developer out there. There are a few Unity developers with virtual keyboards built on our technology, and there are also a bunch of larger corporations integrating our text engine, because it's one thing to create a UI where you can tap H-E-L-L-O, hello, but the remainder, processing that text and making it faster, is where we provide our technology. I think it's the same analogy as the web: there was a standard for how the web would function, and for any XR application, well, Unity is great, and I won't get into the debate about what happened with Unity, but OpenXR is definitely very interesting. Obviously Apple has their own take on that, but I think it's important to work towards using a standard so we are cross compatible, and I think it's going in the right direction.
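To illustrate the division of labor Olivier describes, the UI owns the layout and the tap positions while an engine resolves keys, autocorrects, and predicts the next word, here is a toy sketch. The key positions, dictionary, and functions are invented for the example and are not Fleksy's SDK.

```python
# Toy sketch: UI supplies key positions and a tap point; a "text engine" stub
# resolves the key, autocorrects, and predicts the next word. Illustrative only.
import math

# Hypothetical key centers the UI layer would pass in (normalized coordinates).
KEY_CENTERS = {"h": (0.55, 0.5), "j": (0.65, 0.5), "e": (0.25, 0.0), "i": (0.75, 0.0)}

# A tiny lookup table standing in for a real language model.
NEXT_WORD = {"hello": ["world", "there"], "hi": ["there", "all"]}

def resolve_key(tap_xy):
    """Map a tap position to the nearest key center."""
    return min(KEY_CENTERS, key=lambda k: math.dist(KEY_CENTERS[k], tap_xy))

def autocorrect(raw, dictionary=("hello", "hi", "help")):
    """Pick the dictionary word sharing the most positions with the raw taps."""
    return max(dictionary, key=lambda w: sum(a == b for a, b in zip(raw, w)))

def predict_next(word):
    return NEXT_WORD.get(word, [])

taps = [(0.56, 0.48), (0.26, 0.02)]           # two noisy taps near "h" and "e"
raw = "".join(resolve_key(p) for p in taps)   # -> "he"
word = autocorrect(raw)                       # -> "hello" (ties broken by order)
print(raw, word, predict_next(word))
```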
Yeah, I think so too. I'm always an advocate of pushing these standards. I would love to open Unity and see, among the plugin providers in the settings, Doublepoint or Fleksy; there should be a long list of things that just work the same way with everything, and I hope things open up more in that sense, or that there's something done so people can adopt it. But let's keep going with the questions. You've talked about your products, and both solutions seem incredible to me, really advancing the way we see input. What is one of your favorite interaction models outside of the solutions you built? Maybe an interaction model you wanted to extend, or one you hated and said, okay, I'm going to make my own. I think it would be cool if you could draw a comparison with something, without necessarily naming the product or the company: something you tried that made you feel you had to do something, or that you really liked. Just to give you a minute to think about it, a very simple example for me was when I tried the hand Interaction Lab, with all of those little games for your hands. I felt like, wow, this feels very good; there's something about it that makes my hands feel really powerful in this world. I also had that sensation a very long time ago: one of the coolest things, and it's still around and doing great, was Leap Motion, one of the first things I explored as a developer. I just plugged it into my computer, saw my hands, and interacted with things on the computer, and I thought, maybe we don't really need a keyboard, maybe we can do everything with our hands, navigate pages and so on. I wonder if you've had any of these moments in your career?
Well, I have a couple of things. One is interaction with controllers. I think controllers per se, and I'm not talking about use cases that are very enterprise or very specific and require some sort of controller, have their limitations for mass adoption; hands are much more natural. Now, that is the problem to solve, especially with what Otto said about multi-finger detection, and this is where Ultraleap, Leap Motion as it was, is very good; what they're doing with finger recognition is quite impressive. For wider adoption I think it will go much more the Apple way, recognizing the fingers, and then it's just a question of how precise it is and what happens when my hands are outside the field of view, because we're humans, we move a lot, we gesture a lot, especially Italians, right, Alessio? But essentially, what I've encountered, and what I think is interesting in the world of interaction today, is that developers will shape the future of how we interact with apps, and there will be a merging point where people adopt the gesture from the app developer that nailed it. From a keyboard standpoint, what we're interested in is how we can leverage things like swipe left to delete a word, or hold the spacebar to select the whole text, these interaction models that are familiar and save time on mobile, and adopt them in XR. Especially in XR we'll need this toolkit of features, or capabilities, that help me select the whole text, move the cursor better, delete one word very fast, or, if I continue and hold that gesture, keep deleting words; swipe right to insert a space. Slowly but surely these will become known and standard, the way the familiar keyboard interactions are today. One of the interactions I specifically like on smartphones today, and it looks very simple, is the one I think Mozilla created way back in the day, where you swipe down to reload a web page and you get this snap interaction. That's one example of using a gesture, instead of hitting a reload button, to reload the page you're looking at. I think we need more time to think about those types of natural gestures that people will adopt at scale, and I think app developers will show us how it should be. Otto and his team are working on a lot of those, and I think they have a great chance of contributing to the standard, if not being the actual standard. In the same way, on our side, it's much more than just the keyboard: we really want to nail the whole experience of text input, beyond voice, because both will collaborate and coexist, and that's inevitable, right?
Amazing, I love the swipe examples. I can see the same kind of moment happening with the wrist, and that elastic pull-to-refresh scroll is a good one. It's weird, right? Now it feels so natural, but think about it: you had a reload button and instead you do something else entirely; they seem almost like opposites. Very cool. Sorry, go ahead.
Yeah, I was going to say that one of the funkiest and best-feeling things I've tried at this company was good eye tracking for the first time. I think it was the Quest Pro; I'd tried several headsets before that, but the Quest Pro was the one where I thought, damn, this is good. Now, for input: eye tracking doesn't originate in input, it's more about attention tracking, and of course foveated rendering and so on, but for input purposes, just knowing what you're putting your attention on and then triggering it some other way felt great. Of course now it's become apparent: it's gaze and pinch, and we've been working for a year now on selecting with a combination of eye tracking and touch-based finger gestures, so we were very happy to see that the Vision Pro has that as a primary input modality; it doesn't ship with any controllers, it's gaze and pinch. But when I tried good eye tracking for the first time, something snapped in my head; I felt that this is possible and it feels very natural. It wouldn't feel natural to select with my eyes, to stare or blink or something like that, but as a way of directing attention it was great; they've done an amazing job on that. I wanted to mention haptics as well. I think the trick with haptics is that a lot of companies have these large-form-factor ways of recreating haptics, and it's weird, because you somehow feel that touching a wall should feel like touching a wall when you have these haptic gloves, suits, backpacks and whatnot, but it's very hard to recreate the force the world sends back to you when you touch it, to actually feel that something is there. My best haptics experiences haven't actually been the technically most powerful haptics I've ever tried; they've been the ones where I wasn't expecting it to feel like a wall or a table, but instead expected basically nothing, just to interact with a small virtual object, and suddenly I feel a click on my wrist or on the tip of my finger. When I'm expecting less, even a little haptic motor or actuator feels like a lot. And of course, since we work on wrist-based input, haptics is something we work with as well, though it's not the primary focus. The demos I've tried where I have little expectation are where haptics work best for me; it feels very good even though it's a simple actuator, none of that crazy hassle you sometimes see. Then maybe a third example, something that was really interesting for me, was when I started to interact with real-world objects in a similar way to how I would interact with virtual objects. Very early in the company's journey we went to visit a company making smart home devices, mostly speakers, and I don't remember if it was exactly this interaction, but it was something like: look at the speaker and turn it on.
Or point at the speaker and turn it on. When you don't have to have a headset, but you can do the same interactions with everything around you, that was also super cool, because you don't expect devices to respond to that; you're used to having handheld controllers, a different remote for every single item. So that clicked in my head as well: yeah, this makes a lot of sense. Of course it's something we work on too, but the inspiration came very early on from some of the other companies that have worked on this.
I feel the same. I bought this very cheap lamp on Amazon that you can turn on with a phone app, and it's so nice; it's just a simple movement, just tapping my phone, and I think it also has the possibility to hook up to a wristband, there are different ones. It's such an easy way to do things, and there are those moments when you go, oh yeah, it's possible. It's possible, but it's not really integrated into our daily life, so there's still a lot of work to do to make it happen in this very seamless way that everyone can access. It should be something already there, a native app on your phone or your wristband, an interaction that every provider knows everyone has; something of that kind. Maybe there is something similar but I'm not fully aware of it. The very last question... oh, no, no, go ahead.
I was just about to comment that it goes back to standards and how this gets integrated, and I think Matter is the way to go now for IoT. And I think that will be an interesting moment in history, where you have augmented reality, where I can augment my world, connected with my interactions: I can interact with the hardware, the devices around me, in a more natural way, like a commander-in-chief of my home, point at things and turn them on and off. I could look at my oven while cooking something and just do that gesture of opening the oven and cranking up the temperature. Very interesting, and that's not really far away.
Yeah, it's more about the way we think and the way things are set up. I've seen something like this more in the entertainment field, I have to say: if you go to some theme parks, or to specific immersive experiences or museums and things like that, they require you to download an app, and with that app you can do a lot. You can interact with parts of the experience, you can see the map of the event, you can keep in touch with all the people going to the event, you can see where people are; there's a full suite of functions that becomes possible just by carrying your device around the space, spatializing your presence in a way. And it creates, all of a sudden, a standard that everyone respects within that space, which is also a very social exercise. But your time is super valuable, so I don't want to keep you forever; you've been so nice, giving very thorough answers to my questions, and I feel like everyone really enjoyed it. If anyone wants to ask a question or leave a comment, just drop it in the comments or raise your hand and we'll be happy to involve you in the discussion. The very last question I have: we talked about haptics and feedback, and the challenge of making it feel like you're touching something that isn't physically there, well enough for it to become standard and for people to really want to do this. But we didn't mention a very important part of a lot of XR experiences, of digital experiences, which is sound. I was just wondering whether your companies have allocated talent to build some sort of sound feedback, or whether there's anything like that you've thought of.
On our side, I can talk about the sound design we've created for typing on a keyboard, on the touch screen obviously, because AR is different, but I think it relates. In terms of text input, what's important is that the sound of each key has a sort of harmony; if you listen to some keyboard sounds, you can really tell the ones where someone worked on that and the ones where they didn't. I think sound is very important, together with haptic feedback and vibrations, where you can use different wavelengths to convey a certain texture. At Fleksy we've designed the sound so that it follows the speed of typing and gives this sense of speed when you type, and I think it's quite unique. In today's world, about 30 to 35% of people keep the sound on on their smartphone keyboard, which is more than I thought before we did our research; most of us keep it on silent, but still, around 35% of people use sound. I think with a headset it's going to be different, because it will be connected to your ears and it will be a necessary, basic function. That's my take on sound.
For us, the role of sound, and I talked about us basically providing a button, is very similar to what Olivier said: it's a proof of confirmation. When you've done a selection gesture, the sound plays a big part in the human brain getting confirmation that you have indeed selected, especially since we have a machine learning model: it's inference, not an absolute measurement of what your fingers do, but a guess at what your fingers do. So especially in cases where it's not going to be 100% accurate in every single situation, sound plays a big part in the human brain learning what a successful action looks like. For example, we have this game, it's actually a gaze-and-pinch demo: a Unity game that spawns a lot of balloons, or bubbles actually, out of thin air, and they start flying around in your virtual world; you're supposed to look at them and pinch. We play a sound every time you successfully hit a bubble, and I don't know if I've tried it without, but the sound is a very important part of the experience, for me at least, and I think for a lot of others as well.
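For readers who want the gaze-and-pinch pattern spelled out: gaze picks the target, a pinch commits it, and a sound confirms the selection. Below is a minimal sketch of that loop. The real demo Otto describes is a Unity game; this Python version, its class names, timing window, and the `play_confirmation_sound` stub are all assumptions for illustration only.

```python
# Minimal sketch of a gaze-and-pinch loop: gaze picks the target, a pinch
# commits it, a sound confirms it. All names and values are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    t: float                      # seconds
    target_id: Optional[str]      # bubble the gaze ray currently hits, if any

def play_confirmation_sound() -> None:
    print("pop!")                 # stand-in for the audio cue

class GazeAndPinch:
    # Eyes often dart away just before the fingers close, so keep the last
    # gazed target "sticky" for a short window.
    GAZE_STICKINESS = 0.20        # seconds, assumed value

    def __init__(self):
        self._last_target: Optional[str] = None
        self._last_time = float("-inf")

    def on_gaze(self, s: GazeSample) -> None:
        if s.target_id is not None:
            self._last_target, self._last_time = s.target_id, s.t

    def on_pinch(self, t: float) -> Optional[str]:
        if self._last_target and (t - self._last_time) <= self.GAZE_STICKINESS:
            play_confirmation_sound()        # confirms the inferred selection
            return self._last_target
        return None

gp = GazeAndPinch()
gp.on_gaze(GazeSample(t=1.00, target_id="bubble_42"))
gp.on_gaze(GazeSample(t=1.05, target_id=None))    # gaze drifts just before the pinch
print(gp.on_pinch(t=1.12))                         # prints "pop!" then "bubble_42"
```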
Yeah, thank you. Sound really is, to me, a very big component of a lot of XR experiences. When we worked on our virtual keyboard it was a very big part of the work, and in the presentation I gave we really wanted to express how, for example, when you touch and push down, that kind of movement and all of those parameters make a big difference in feedback. I feel like there is still so much to learn from looking at the work you're doing. By the way, thanks again, I can never say it enough times, but I don't want to keep you all day; it's been more than an hour, which is usually the length of the podcast, and thank you so much for answering all the questions. If there's anything else you'd like to say before we wrap up, like how people can find you, a little more about how to reach your company or your products, this is not promotional, it's more about keeping in touch with you as humans, please go ahead and finish up.
Sure. You want to go ahead or
Yeah, so I was just going to say: happy to connect with you on LinkedIn. I usually share what we're up to and some general things happening in the industry that are interesting, and of course it's an interesting industry because, as of last week, double tap is now part of a consumer product, the Apple Watch. For us it's very interesting to see what role this will play, especially next to XR input, and hopefully we'll be the company to spearhead and pioneer that as well. So happy to connect on LinkedIn; just reach out to me there with any questions you might have. And of course on our website, doublepoint.com, you can see all our demos, some of the demos mentioned here on XR input, but also IoT control, pointing at lights and turning them on, pointing at a TV and controlling it, just to get your brains creative. A lot of this stuff is actual reality nowadays. So find me on LinkedIn and at Doublepoint, and thanks, Alessio, for having me.
On my side, you can find us at fleksy.com; that's the best way to get in touch with our team, and we're all over LinkedIn as well. We invite any Unity or OpenXR AR developer to leverage our technology, our Unity SDK, because we're on a mission to democratize access to text input across mediums; we're going beyond the screens now, and that's the future, definitely. So we're really inviting any developer to build on top of our technology so they don't have to reinvent the wheel, because, seriously, it's hard; it's better to use us than to try to reinvent it.
I agree, the keyboard is one of the biggest projects ever; there are so many things in it, and no one really realizes it when they use it. It's really a full set of tools, a lot of features that work together. Well, thank you again, guys, have a great day or a great night. Thanks, thanks so much. And just a very final announcement: feel free to follow this podcast. It's part of a bigger Discord server; we have a lot of speakers coming in every month who are professionals in the world of XR, like Olivier and Otto, people who are changing and shaping the industry. If you want to keep in touch with them, write to them directly or on the Discord server. Have a great rest of your day, everyone. Bye bye.