The AR Show: Ohto Pentikäinen and Jamin Hu (Port 6) on Touch as an Essential Form of Input for AR
5:31PM Mar 27, 2023
Speakers:
Jason McDowall
Jamin Hu
Ohto Pentikäinen
Keywords:
touch
ar
detecting
people
interaction
projects
jamin
headset
company
hardware
augmented reality
creating
finland
sensors
problem
algorithms
fingers
port
point
device
Welcome to the AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Ohto Pentikäinen and Jamin Hu. Ohto and Jamin are the co-founders of Port 6, a company creating human-computer interaction technologies, initially focused on detecting the moment of touching an object. Ohto is a serial entrepreneur, having been a co-founder and CEO twice before the age of 20. Before co-founding Port 6, his earliest venture was reimagining the high school learning experience by creating a platform for multidisciplinary, self-directed learning. Jamin has a passion for music and earned a degree in classical piano performance before expanding his focus to pursue a master's degree in biomedical engineering. In this conversation, Ohto and Jamin share how they crafted a unique high school curriculum that trains entrepreneurs, what's missing in AR and other gesture sensing tech, and how mixing piano with bioengineering results in a new type of touch sensing technology.
When you are truly interacting at high pace and you want high reliability, you can't really afford to have these sort of imaginary fingertip buttons that feel sticky, or that accidentally trigger without you wanting them to. You want it to trigger the exact moment you actually feel it triggering with your fingertips. It's about improving the quality of the interactions.
They went on to share their vision for the company and their approach to getting there. As a reminder, you can find the show notes for this and other episodes at our website, theARshow.com. And please support the podcast at patreon.com/theARshow. Let's dive in.
Ohto, it seems you have been starting companies and working on entrepreneurial endeavors your whole life. What was one of the first you remember?
So one of the first projects that I remember doing was actually one where Jamin was one of my clients. I had this business in middle school where I would look at the school lunch menus and pick out which ones were going to be bad. Finland has this kind of healthy, multi-nutritious school lunch system, and I would see the pattern of when the food itself would be bad; there were certain portions that would just not be good on any metric. So before those days, I would go to the grocery store and buy in bulk a lot of snacks that different students could eat and microwave in the home economics classroom. Basically, on each of those mornings I came to school with a big backpack full of food, and then I would sell it to all of my fellow students, usually taking a pretty hefty margin, let's say 50, 60, 70 percent on top. I would use that as my pocket allowance. Quickly some competition started to rise, with people starting to sell brandless chips around the hallways, so I actually ended up stopping after a year, and I went on to do something else.
Wow, a full year of operation. I did not know that.
Yeah, it was a year of operations, actually. And that's how I got my pocket allowance for the first year of middle school.
Amazing. And Jamin, while you were buying those snacks, because clearly you agreed that what was on the menu wasn't desirable enough, apparently, you were also busy studying the piano, practicing the piano on the side, right? Where does that love for piano originate for you?
Yeah, so I guess if you trace it back, I originally started when I was four, so I don't quite remember a time that I didn't play the piano. I think my earliest memory was from when I was around four. I'm not exactly sure I can pinpoint where the love comes from in general, but I think it's just been a part of my life since childhood.
If you started so young, how did your parents structure your experience with the piano in such a way that you didn't lose your passion for it as you continued to get older?
Right, right. Well, actually, I think piano playing was a serious matter for most of my life. It was an art form, a serious art form. I wouldn't really use the word fun to describe it necessarily, although it was very enjoyable and rewarding, but it wasn't fun the way that a video game or a thing on the playground is. It was competitive; I was competing. I think it was at university where I really started playing around more. That's where I went into jazz, jazz theory, chamber music, ear training, producing, things like that. That's where, I would say, the fun took off more. But for most of my childhood it was a serious art form that I appreciated and enjoyed, but not fun necessarily.
Not necessarily fun, but something you pursued relentlessly nonetheless.
Yeah. And it was enjoyable. It was like an intense sport.
An intense sport, yeah. So how did this intense sport of yours intersect with your other studies there at university, in biomedical engineering?
Yeah, right. Well, two things in my life were pretty constant: music and science. I think biomedical engineering was not directly related in the beginning, but it became pretty clear that what I wanted to do in life, in addition to understanding and expressing music, was understanding more about the fabric of the universe. The biomedical engineering degree was just the closest thing to a general engineering degree that I found.
And so in this pursuit of science, this deeper understanding of the universe, while also pursuing this ability to appreciate and express through music, how do you bring those things together, ultimately, in the project you're working on now?
Yeah, that's the part where I think Port 6 might not exist if I didn't play the piano. Basically, it all started, Port 6, this AR company that Ohto and I run, it started when I was trying to build a wristband that could decode and interpret my subconscious finger movements. I would just, you know, play pieces, I don't know, Chopin, Bach especially, and because of something about Bach, my fingers were just continuously playing subconsciously. And I realized that my studies were pretty relevant to helping me build the sensors and the algorithms to decode that. Then I showed it to Ohto, and he showed it to some people he knew, and it became pretty clear that this is an input technology that is very capable of solving some of the AR interaction issues. That's sort of where it started. It came together because of this finger movement that just happened in the background of my life all the time.
Awesome, that's really interesting. So you made this bridge from the desire to capture these subconscious movements, maybe without the piano in front of you, but doing so through a set of sensors, and realized there's a connection between that and some of the challenges around augmented reality. I wonder, and this is probably my own bias and my own perspective on the industry, but as an industry there's a lot of focus on getting the display and the optics to be truly wearable. So help put this idea, this type of input, into context within the bigger picture of solving for AR.
Definitely, so great question. And I've had to ask myself that a lot: why are we doing this, why is everyone else focusing on the displays? It became pretty clear within around half a year or a year that when you boil AR down to its essence, the way we think about it, it's basically interfacing with the signals coming in and out of our brains. Of course there are the visual signals; mainly it goes into the brain, there's no visual output from the brain necessarily, but visual signals coming into the brain are really high bandwidth, they tell us a lot about the world. And if you can intercept that information and replicate it or modify it, you can make a convincing version of reality. But there's also sound, which is also neglected. Nowadays spatial audio is a thing, but I have yet to experience a good spatial audio experience that really interfaces with the sense of hearing. And then if you go down the list of senses, we have touch. The thing we realized was important about touch is that whenever humans interact with anything, touch is there, and it's been like that for most of history. So that's how we put the thing in context: what we're doing with our hands and what we're feeling with our hands is the stuff we can detect with our sensors, which specifically tries to make it possible for us to interact in AR and VR better, because interaction directly depends on interfacing with our sense of touch.
Yeah, that's interesting. Philosophically, if we explore this a little bit: human-to-human interaction at a distance doesn't necessarily involve touch, right? It's much more about sight and sound. But within our little personal area, touch is a much more meaningful and important element. I remember, as a kid, I used to drive my mom crazy in lots of ways, but one of them was that whenever I would walk through a store, I had to touch everything, every piece of whatever we were wandering by, whether it was a clothing store or the electronics store or whatever store. I had to feel it, because for me, in that personal area, that helped my understanding of how the world was constructed and how everything worked. Anyway, it's kind of interesting, this notion that touch becomes much more important the closer the thing we're trying to interact with and understand gets to us.
Exactly. But you also mentioned this human-to-human part, which, to put things in context, is fairly new. Humans and their complex social behaviors are fairly new on the evolutionary scale of things. But it's equally important. When people talk about presence, and feeling like someone else is next to you, that's where these visual cues, with face tracking, eye tracking, body tracking, come into play. So definitely, there's a need to do that as well. I would call it a different category, but equally important.
So as you think about this evolution of augmented reality and the set of problems yet to be solved, how do you fit this perspective on touch into that necessity of AR?
Yeah, so the way I think about it is that touch is one of the things that you just can't fully, virtually simulate. Compared to displays, you know, you can make a display that makes something visually indistinguishable from reality: the brightness, the color, the contrast, the depth, everything that's physically possible. But to physically make something that can fully simulate and display any virtual touch element, like making it feel like you can lean on a wall, physically, that's very difficult, almost impossible, near the limits of what's possible. So I would say that touch is real, and AR is going to be such that you'll need to borrow the real world as your source of touch information. Meaning that very often, I think, instead of pretending that there's a table in front of you with a touchscreen interface, you need to actually use a physical table and project a touchscreen interface onto it. Because touch is just inherently such a hard thing to fully synthesize, you have to borrow the real world. It's sort of baked into AR, I would say.
And so your vision, then, is that you ultimately create the sense of feedback and touch and sensation by leveraging some elements of the real world, whether you're carrying a tablet, or using the table in front of you, or something else, to give you that sensation that there's something material there. And so how does this end up tying back to the work that you're doing at Port 6, and where are you beginning to tackle this set of problems, this opportunity you see around touch?
Port 6's original purpose for existing is to solve the AR interaction problem; that's how we put it. And for now, it does seem like touch is the biggest missing element to make that happen. What we've basically been trying to do at Port 6 is take a few sample AR use cases, try to come up with the best interactions possible and the best mental models to help the user achieve the task, and then develop the hardware, which often involves touch sensing, to actually sense these interactions. So our work nowadays is mainly based on trying to design interactions that often involve touch, as well as making the hardware that specializes in detecting touch. And just as an example, computer vision is sort of the opposite of what we want, because what computer vision forces you to do when you're interacting with things is that it really works most optimally when you're not touching anything. It's not good at detecting when you touch things; ideally, you keep your hands not touching things. In fact, you shouldn't even be holding objects, because that's occlusion and prevents the camera from seeing your hands. So you can't even touch things, you can't hold things. Computer vision has its place, and I'm not saying it doesn't; I'm just saying that at Port 6, what we're trying to do is directly sense these touch events that would otherwise be very hard to detect.
The way I think about it is that when I want to leave my mark on the real world, I touch it, I manipulate it, or I press on it. When I do that, my brain knows that I've touched something, that I've tried to manipulate something. But to take that information into the virtual world, there's no way of telling the headset that I'm trying to act on something unless you have something that detects that information. Basically, I think of our technology as that kind of agent that is detecting what my hand is detecting and feeding that information to the headset, with which you can do whatever you want, but we use it for input, we use it for interaction. Excellent.
So I thought maybe we can go back in time a little bit and have you tell your story. You know, you and Jamin were seller, vendor, customer here at one point in middle school, and you've known each other for a very long time. When did you guys first meet?
We met in our seventh grade class. Middle school in Finland starts at seventh grade. You know, I was the vendor, Jamin was the pianist, and that's how we got to know each other.
And you were fifteen?
We were thirteen. Yeah, we were 13. And basically, well, we had just a very special school overall; we had all sorts of interesting kids there. But I think what ultimately brought us together to do more than just schoolwork was that we complemented each other quite well.
That's been the really nice dynamic I've appreciated. Yeah.
So this led us to do some joint projects. Actually, for the first projects I wasn't too involved, because they were a bit kind of geeky, in my opinion. Things like sending balloons to space: Jamin sent a weather balloon up to, I think, was it 20, 30 kilometers?
Yeah, typical heights, yeah.
Yeah, yeah. And all sorts of other projects that typical middle schoolers don't usually do. But then things became interesting in high school. We actually thought that these projects teach you a lot about the real world. They teach you a lot about how to build things, how to organize things, how to structure things, and the kind of multidisciplinary topics that you need to master to pull off a successful project. So we thought that maybe this is the way we want to learn. Instead of spending time with textbooks and exams, this kind of project-based learning was something that excited us, and we were sure it would excite other people as well. So in high school we created this program. It was basically a course that we ran as students. We recruited all sorts of kids from different high schools to come together to create their own projects, and we would fund them with foundation money, which we applied for, so we had a budget of around 20,000 to 30,000 USD, or euros, each year. And because you didn't have to pay anyone a salary, because it's high school, we got things done pretty cheaply, or pretty affordably. So we actually got to see a whole variety of different, interesting projects get done. Now, the interesting thing is that barely any of these projects actually turned out to be successful companies. Well, what do you expect from high school students, right? But almost everyone who did that course, or did the program, either has founded a startup or is working at a startup in a very, let's say, meaningful position. So we saw that when you give people this ideology of being able to do things on their own, some people just gravitate towards that.
And they would do it in the future as well, rather than going down some of the typical paths that might be taken in some people's careers.
Yeah, I think here I maybe want to add two things, or put it into perspective. Sure, there are plenty of programs in the world where a school asks the student to do a cool project, and that's fine; that's already better than nothing. But I think the thing that made this different from most academic initiatives or projects is that here, no one set the criteria for what is success. Here, we did that. In most cases, no matter what project you're doing, for most of our education, someone else has decided what is good enough, or what's good or what's bad. But this was completely student run, and there were no evaluation criteria, so you yourself decided what was good enough for you. And of course, that's something we see every day at our startups, at our companies, in our lives: we decide what's good enough. So I think that was one distinguishing factor. And the second thing, by the way: these projects could have been anything, as long as they were challenging. Some were like a full-length original feature film that was shown in the movie theater, with popcorn and everything, by these 15-year-old kids. There was an Instagram page dedicated to the stories of immigrants, where we uncovered things like families that have toured a large part of northern Europe to settle down in a place where they feel most comfortable; in the Nordics, they actually have a choice between, you know, Sweden, Estonia, or Finland. And we heard about near-death experiences and things like that. So any project that the students felt was meaningful and was challenging enough, those were the sorts of projects we were talking about.
So amazing. And the school system itself let you do it? Did you get school credit for this, or was it purely on the side?
Yes, that was the part where we had the support of our principal. He recognized from the get-go the value that this could bring to the students at the school. And yeah, they got credit for it.
That's so cool, so amazing. How did you decide, when Jamin brought this idea of trying to sense what was happening in his fingers that were subconsciously playing these Bach melodies, that this was worth jumping into yourself and pursuing as a joint project, as a company together?
Yeah, tell us.
So I was actually meaning to just help out. But Jamin was going to make some stupid decisions. He was about to ruin the cap table of the company he was about to start, and I saw it instantly. He was also going to make some very dumb funding decisions. It's basically the same as when someone plays a Bach piece absolutely badly and they're like, oh, this is amazing, and shows it to Jamin. I saw the same in what Jamin was about to do.
Questionable analogy, but okay.
The point is that I just wanted it to be structured properly, to ensure success, because at the end of the day, I hold a lot of care for the people that are around me, so I didn't want to have him go the hard way, like some of the ways I've gone about startups previously. So then I thought, let's just give it some actual proper structure, do the funding well enough, do the paperwork well enough. But, you know, then I just stuck around. And I guess that's what I'm still doing; I'm still sticking around, just trying to achieve the next stage of where this kind of invention can get to.
No, but Ohto, tell us why you've stuck around.
Actually, there's also a part of me which is super interested in small, efficient, well-resourced, and incentivized teams. I find a lot of value in the format of the startup when solving problems that are actually super hard to solve. My dad ran a nonprofit for the majority of my childhood, and he would have all sorts of fraud there, all sorts of inefficiencies. He would have people doing absolutely nothing for a huge amount of their time, and no one would care. So even though they were solving a super important problem, the structure of the organization was not suited to deal with some of the problems they had on their hands. When I got involved in startups, I saw that this structure can actually work very well to solve some of the bigger problems that humanity has ahead of it. So I guess Port 6, in a lot of ways, for me is a way to learn some of these things, and to be able to contribute to the future of augmented reality. But for me, personally, I would say this chance to learn how these structures work, and then apply the same structure to different problems, is the biggest value.
So what is your current hypothesis on how to form a highly efficient team, or rather a strong culture and motivation and incentives that drive people to perform well, to solve the problem?
Right, there are a couple of things I would point to. I just read the Trillion Dollar Coach book about Bill Campbell, one of the well-known coaches in Silicon Valley. One of the things that really resonated with me is this sense of security for a team member, so they're able to know that whatever they do, whatever risks they take, someone's always going to have their back, someone's always going to be there to support them. Then you combine this with enough resources, which is not a given in startups, but if you're able to give enough resources for people to embark on the mission, that's basically why these people joined the company. And then you need proper incentives. Usually the best incentive is obviously an internal one, so if there's an internal drive to build products or services that fix a particular problem you feel strongly about, that's great; but incentivization also means, for example, being able to provide a salary, because if someone needs to think about money constantly, they're going to have a worse time solving the problem. And third, when you combine security and incentivization, then it's just about finding the teammates that this individual deserves, not only for support, but also for being able to tackle a problem which is by definition multidisciplinary. And there's a part of me which thinks that this can be done with a roomful of people. You don't need hundreds and hundreds of people to work on the first stages of a product or service that can solve a real problem. You can actually get a lot of it done with a roomful of people, basically 10, or 15, or even 20.
Max. Yeah, excellent. So you take this, you have this hypothesis, you're building out this team, you have this belief that touch is an important and missing element of the AR experience. So what have you accomplished so far? How far have you taken the technology?
Yeah, I would split this into two categories. First of all, the thing that we have the most to show for, that anyone can download onto an Android watch and try out themselves: we've been trying to saturate the touch sensing capabilities of existing smartwatches as a first step. Meaning that we wanted to show how, just with the accelerometer and the blood oxygen, yeah, the heart rate monitor and electrodes on a smartwatch, you can actually detect various touch events. Simple things like touching a surface very lightly, or pinching your fingers together, like here, where the exact moment you pinch, you can detect it. So we wanted to show what was possible with existing sensors, and that's already out there to be used as an input device for AR headsets. The second category is some of our more custom hardware projects that we have in house. This hardware is basically designed to detect touch, and that can be through different methods. We have rings, we have thimbles, we have wristbands as well, but they all operate in the sort of electromagnetic domain or the acoustic domain, or am I missing something?
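Port 6 hasn't published how its smartwatch detection actually works, but the core idea Jamin describes, picking a light tap out of a stock accelerometer stream, can be sketched. The function name, thresholds, and synthetic data below are all illustrative assumptions, not Port 6's algorithm: a tap produces a short, sharp transient, so a first-difference high-pass suppresses slow arm motion, and a refractory period keeps one tap from counting twice.

```python
import math

def detect_taps(accel, fs=100.0, threshold=0.8, refractory_s=0.15):
    """Flag candidate tap events in a wrist accelerometer stream.

    A fingertip contacting a surface produces a short, sharp transient,
    while ordinary arm motion changes slowly between samples. The
    sample-to-sample difference acts as a crude high-pass filter, and
    a refractory period avoids double-counting. Illustrative sketch only.
    """
    refractory = int(refractory_s * fs)
    events, last = [], -refractory
    prev = accel[0]
    for i, x in enumerate(accel):
        jerk = abs(x - prev)  # per-sample change; large only at impacts
        prev = x
        if jerk > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic stream: slow sinusoidal arm sway plus two sharp tap spikes.
fs = 100.0
accel = [0.2 * math.sin(2 * math.pi * 0.5 * i / fs) for i in range(200)]
accel[50] += 2.0    # tap at t = 0.5 s
accel[150] += 2.0   # tap at t = 1.5 s

print(detect_taps(accel, fs))  # [50, 150]
```

The slow sway never exceeds the threshold, so only the two impact samples are reported; a real pipeline would of course fuse more sensors, as described above.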
Yeah, basically, just to point out, the touch event Jamin showed there is a tap between the thumb and the index finger. This we use as a selection gesture. I mention it mainly because the video is not going to be showing on the podcast. Oh, yeah.
True, true. You described the electromagnetic spectrum and acoustics; those are the key sensing technologies that you're leveraging. Now that you've really wrapped your brain around the scope of the problem, what makes this problem particularly hard?
Yeah, so great question. The way I would put it is, first I would think about it from a physics point of view. Why is touch harder to sense than, let's say, visual information or sound information? The answer is that vision and sound are waves that can travel through a medium, sort of remotely, which means they can affect things at a distance. But touch fundamentally cannot. So if you want to detect touch, ideally you have to literally be at the location where the touch is happening; that's the best sensing location. Whereas for vision, you can have the sensor next to the eye; you don't have to be at the retina to detect the light that's hitting your eyes. And for sound, you can have a microphone anywhere near your head; you don't really have to directly interface with the eardrum. But with touch, how do you sense a touch event remotely, from a distance? That's the hard problem. It doesn't behave like a wave, or like the well-behaved waves that vision and sound travel as.
And then another thing to point out is that, actually, when you think about it, a lot of things are touching at the same time. For example, we wear our clothes, we accidentally tap a surface, we accidentally create vibrations, let's say when sitting on a tram. There are all sorts of touch events which happen, and we don't even notice because we're used to them. Things in our bodies touch each other; tendons might touch each other by accident. So actually filtering out the touch which is relevant for interaction from all of this other touch noise is also super hard.
That's an excellent point. So when you think about the sort of product that you're envisioning, that you're creating, and the techniques that you're utilizing to do so, can you distinguish your approach from the work of somebody like Wearable Devices and their Mudra solution?
Yeah, sure. I think we can start from the things I mentioned before, which is: what are we actually sensing? We are sensing touch events, right? We've actually tried to look around, and there are products out there that do this, but there are very few that dedicate themselves to sensing touch. Mudra, the Mudra Band, for example, does a very cool thing, which is to measure how much activation is going on in your tendons and muscles, to be able to know whether you're pinching something, whether you're trying to pinch something, whether you're releasing something, things like that. And that is all good, except they aren't necessarily directly detecting the touch event itself. They're instead detecting what the human is trying to do: the signals coming out of the brain into the muscle, trying to act on things. Which means the precision of detecting touch events might be harder to set at a high enough level. So if I rephrase that, I would say the difference with a lot of other input technologies is that we are specifically trying to detect touch, instead of trying to detect actuating signals coming from the brain. The result is that we should be able to detect touch events more accurately, and more relevantly to the interaction.
And what is then the net benefit to the user, in the distinction between detecting the intention to manipulate the fingers, as opposed to the actual touch event?
Yeah, great question. So let's take an example. You have a table in front of you, and you want to lightly tap the table surface. What you're going to do is: your hand is going to be relaxed, you're going to move your arm, you're going to relax your arm a little, and your finger is going to touch the table very lightly, without actuating anything, not really activating anything. But you've touched the surface, your body has detected it, your finger has detected it, and so you expect that to trigger, let's say, some sort of action in AR. This would not show up in, let's say, the EMG signals, the muscle activation sensors; it most likely will not show up on the Mudra Band devices either, although of course I would test it first. But just fundamentally, if you look at the signal, that touch event occurs with barely any actuation from the body. So if that's not detected, what are the drawbacks? Well, then you might need to press buttons on a table in a way that isn't natural; you might need to really press onto the table instead of just lightly touching it; you might need to position your hand in a way that isn't natural. The interactions won't feel as natural if the only thing you're sensing is actuating forces, and not that sort of touch response.
Maybe another perspective on that: say you're trying to point and select, for example in a car situation. In the Mudra case, that would be pointing and activating. Now, these would be wearable devices, not necessarily always on, but you would be wearing them constantly as you go about your daily life. So what's the challenge of knowing exactly when you actuate something in a pointing situation? How is the wristband able to differentiate this from any other actuation event you might be doing in your everyday life? The thing with the touch detection that we do is that we optimize for the pointing and selecting experience. So it's not only about when you touch, but when you touch in a certain context, in a specific way. And the end result for the user is that you just get a better pointing and selecting experience, rather than spending a lot of computational power, or a lot of time, calculating right on the device whether this is the event you're trying to go for. So we're going for something that's super low latency, super robust for everyday use, and something that can basically be used always on, if you don't mind the battery optimization.
If you think about the simple action of pinching your index finger and thumb together, ideally, from an interaction point of view, what you want is that the exact moment your fingers touch, something triggers. It's like an imaginary touchscreen on your fingertip. And what you don't want is a trigger when the hand is just slightly not touching: still pinching, moving the fingers together, but not really touching. You want the trigger to happen only when you really touch something. And that is a distinction that's really hard to make if you only measure the activating signals coming out of the brain, because if you just look at the muscles and muscle activation, it's really hard to distinguish between not touching but moving your fingers close together, and actually touching your fingers together so that there has been contact. It might sound like we're nitpicking here. But when you are truly interacting at a high pace and you want high reliability, you can't really afford to have these sort of imaginary fingertip buttons that feel sticky, or that accidentally trigger without you wanting them to. You want it to trigger the exact moment you actually feel it triggering with your fingertips. It's about improving the quality of the interactions, if that makes sense.
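[Editor's note: for technically minded listeners, the contact-versus-near-miss distinction Jamin describes can be illustrated with a toy detector. This is purely a hypothetical sketch, not Port 6's actual algorithm; every function name, threshold, and parameter here is made up for illustration. The premise is that actual fingertip contact produces a sharp, broadband mechanical transient at the wrist, while merely moving the fingers close together produces only slow, low-frequency motion that muscle-activation sensing alone struggles to separate from a real touch.]

```python
import numpy as np

def detect_touch_events(accel, sample_rate=800, threshold=3.0, refractory_s=0.05):
    """Hypothetical sketch: flag samples in a 1-D wrist accelerometer
    stream where a sharp mechanical transient (likely a touch) occurs.

    Contact with a surface produces a brief 'tap' transient, unlike the
    slow motion of bringing the fingers together without touching.
    """
    # High-pass via first difference: suppresses slow arm/finger motion,
    # keeps the sharp onset of a contact transient.
    hp = np.diff(accel, prepend=accel[0])
    # Normalize by a robust noise estimate (median absolute deviation),
    # so the threshold adapts to the baseline motion level.
    mad = np.median(np.abs(hp - np.median(hp))) + 1e-9
    score = np.abs(hp) / mad
    # Threshold with a refractory period, so one physical tap
    # (which spans several samples) yields exactly one event.
    events = []
    refractory = int(refractory_s * sample_rate)
    last = -refractory
    for i, s in enumerate(score):
        if s > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events
```

The robust (median-based) normalization is the key design choice in this sketch: slow pinching motion raises the baseline only slightly, while a genuine contact transient stands out by orders of magnitude, which is what lets a detector of this kind fire on the touch itself rather than on the intent to touch. A production system would of course use far more sophisticated signal processing and machine learning than this.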
And maybe just the final point is that ideally, we would have both. Ideally, we'd have the actuation knowledge, and ideally, we'd have the knowledge of the touch event. So we just try to fit in the piece that's missing. There are a lot of companies doing actuation, but we think this is the part that's missing from the stack.
And so when you are constructing this as a company, and you think about the full value chain and who's going to be buying the thing that you're creating, how does that come together? Who is your ideal target customer, given your current focus?
Our goal in the end is to solve this interaction problem as widely as we can. What we currently have in the market is an algorithm that detects some basic touch events using existing smart wrist-based wearables, such as smartwatches or fitness trackers. The way we distribute the technology now is as an app that you can download from the app store for smartwatches. Now, that's absolutely not the ideal end result. What we want to do is go deeper down: we want to make this part of the native UI, part of the operating system. We want to provide the knowledge from the smart wrist-based wearables when needed, so you wouldn't have to open up a tailored or dedicated app for these interactions. Now, in terms of customers, we like to go very deep down. So we're talking to and doing business with OEMs, and ODMs as well, meaning the companies that provide reference designs for OEMs, or any company trying to make a hardware device under their own brand. And then we go all the way down to the chip level: chip manufacturers, component providers, algorithm providers, sensor providers. It works best when we can talk to everyone jointly, but obviously with an OEM that has the whole headset figured out, the whole interaction figured out, we have different conversations than we would have with a chip provider, for example. We're able to provide value to both, but it depends a little on where you've set yourself on the stack, how well and with which tools we're able to help the company out.
Yeah. You noted that part of the mix of potential customers and potential solutions involves additional custom hardware, whether it's in the form of a ring or a watch or something else that at least you're experimenting with, to utilize that different sort of form factor for input. How does hardware itself fit into your business model? You started off as a hardware company. Are you still a hardware company? Where does that fit with the algorithms and other work you're doing?
So we actually did begin as a hardware company. Early on, in the very naive days, we thought that, of course, we're going to make a hardware product out of this; everyone's going to want it. But then we started to think about the role of hardware in humanity overall. We think there's plenty of plastic out there. We think there are plenty of devices around us, and most of them aren't that useful. I think it was Tony Fadell who said that if you're going to found a hardware company, think again: think how you can do it without hardware. You only want to do hardware when you absolutely need hardware. So we thought that if the purpose of our hardware is to provide the most important gestures for augmented reality interaction, so you can interact with AR without having your hands in front of your face, as computer vision often requires, then we can get started with what's already on the market. There are a bunch of smart wristbands, a bunch of crazily well-engineered devices, being manufactured constantly. A lot of people are starting to have them, and they're already a very familiar fashion device, and used for health tracking, of course. So we redid our strategy. We thought we would repurpose these watches and wristbands, in the early iterations of our product, to be controllers. We wanted to use the engineering work that's already out there to get that done. Now, of course, the smartwatches on the market today haven't been built for interaction; they've been built for something else.
So the way we see these watches and wristbands advancing is that if the market does settle on this kind of wrist-based solution being one of the ways in which we control AR, we think the design of the smartwatch is going to change, or the design of the wristband is going to change. And that's the work we do internally. We're trying to come up with the right sensor design choices by which, let's say, an upcoming smartwatch from a major brand could evolve to capture more of the interaction knowledge that's actually happening in our hands, and that we can pick up with a special arrangement of sensors and very smart signal processing and machine learning. So we're trying to influence, with reference designs, the future of the smartwatch. But we're not planning to make these watches ourselves. We would consider ourselves a reference design provider, an algorithm provider, an interaction provider, but I would not start a factory for this at the moment, because I think there are plenty of companies doing amazing engineering work for this future to be enabled anyway.
Yeah. You've spoken about this idea that you're trying to solve the broader interaction problem, and you're leveraging the tools and pieces that are already out there and filling in the gaps, initially with this focus on touch, making sure you're accurately detecting the moment of touch. When you fast forward five years, what does the company look like? What is it that you're making at that point in time? Is it still the algorithms you were focused on? Or is there a broader element, an additional element, excuse me, that you're building?
Yeah. So ideally, in five years, there's a company that wants to create an augmented reality device, and they need to fill in the pieces: okay, who do I get the display from? Who do I get my SLAM from? Who do I get my eye tracking from? There are already companies that provide these components to the market. Now they're looking at their interaction set, and they see a big gap: okay, my users can use our headsets, but they're going to get very tired after two minutes, since they're going to have to keep their hands in the field of view. Who do I go to to solve this problem? We want Port 6 to be there. We want Port 6 to become the go-to provider for interaction solutions where you need to interact in AR discreetly, and with an added sense of tactility. For us, it's completely about using this touch detection for discreetness, for something that feels super light, super ergonomic, but is very powerful in the actual interaction space. Now, in terms of the practical technologies to provide that, of course it's going to be algorithms detecting this information from human bioacoustics and biosignals; you're going to need some algorithms to do that. But hopefully we will also be able to give this company what they need so that they can start producing the hardware that captures this information and sends it over to their headset. So ideally, we'd have a reference design which we license: basically a ready-to-go component, so you can just ship it off to a factory and get started on your AR experience.
And as you look out over the next 12 to 18 months, what's the biggest next challenge you're tackling to achieve that vision?
Right. So commercially, the most important thing we need to do as a company is to go beyond the watch app. We need to dive deeper into the stack; we need to be able to optimize our algorithms for existing bare metal, and then upcoming bare metal as well. Currently, our algorithms have been optimized for a different purpose. So now we need to go deeper down; we need to make them actually integratable, with the level of depth we need to reach in order to provide a full interaction experience. And we're going to need some partners to do that. This is a major focus for us as a company.
Excellent. Let's wrap with a few lightning round questions. What commonly held belief about the AR market, or the future of AR, do you disagree with?
This might trigger some people: imaginary sixth fingers. I'm not really sure there's a role for them in the short to mid term, though in the long, distant future, I definitely see them being a useful part of humanity. But yeah, adding imaginary limbs to your body, I would say, is a little overhyped by some particular companies.
Maybe I'll just give mine. This may not be the case with your listeners, Jason, but I see a lot of people, let's say from the more general public, seeing the metaverse as where this is all going to end up: that it's all going to be about people spending time in their headsets, being entirely disconnected from everything around them, and just completely focusing on the virtual. Now, I think that's dystopian, and I don't want to live in a world where that's the frame of thinking. I think the role of these technologies is different. One use case I think about is that I live an Atlantic Ocean away from my parents. And the way we've done calls so far has been on a Zoom call: we have our phone in front of our face, and it's super distant, super disconnected. I don't feel like I'm close to them at all. When I have that kind of Zoom call for work, it's fine, but when you actually want a personal connection, it's not great. So the world I want to see this going towards, rather than that dystopian metaverse type of thinking, is one where these technologies make some of these connections more human and more present. So yeah, maybe it's just a misconception among some of the general public, but this is what I think.
We're not all going to be sitting in our headsets all day long. Hopefully not. Besides the one you're building, what tool or service do you wish existed in the AR market?
Oh, yeah. Several. I have several times said to Ohto, oh my god, I wish this existed, or, oh my god, if Port 6 doesn't work out, we should do this. First of all, spatial audio in general. Whether it's the Apple AirPods Max or AirPods Pro, or the Sony headsets, whatever is advertising spatial audio, I still haven't felt a convincing AR experience with good spatial audio, such that I could close my eyes and actually feel like I'm in a completely new reality, that things are really there and I can tell what is where. I think whoever builds that is going to significantly affect the realism of AR experiences. And actually, try this yourself to get an idea of how important spatial audio is, of how important audio is to realism: get on a phone call with a friend in the same room, ideally with noise-cancelling headphones, and just notice. I was trying this with a few friends. When I was on the call, of course you see them in real life, but you don't hear them in real life anymore; you hear them from the headset. And a surprisingly large number of times, the headset version of them that I heard felt more real than the visual person there. So there's certainly something to virtualizing audio well that makes someone or something feel real. I guess what I'm trying to say is that it's pretty neglected, even though the phenomenon can be explored with just some headphones, by anyone on the street. So spatial audio, definitely.
The one I see myself wishing for is a database of human hands. Not only in terms of external form, but in terms of limbs, in terms of skin. It's very similar to when you're making a self-driving car: you don't put it on the streets to learn to drive, you take it to a driving game to drive, and that's how you teach the algorithm. Sometimes, actually a lot of the time, I wish we had better tools for synthesizing human interactions, especially from hands, at the level where you get some of the acoustic properties, some of the vibrational properties, some of the touch properties that we have in real life. Obviously, at Port 6, we can't do everything. But this is something I see myself wishing for.
Another thing came to mind: whatever we're doing with hand tracking, that level of precision, it would be great if someone could create the same, but for general object tracking. I would like to use a pen I have lying around as an Apple Pencil. I don't want to buy an Apple Pencil; I want to use the things around me as input devices. I have this bottle on my table right now, and I want to be able to just rotate it to adjust the color of something. Okay, a bottle isn't the best object for that, but just in general: tracking things that your hands would be holding, and using those as tools for interaction in AR. I think someone's going to do that at some point. I wish that point would come soon.
The real becomes the virtual input mechanism. Yeah, it is the dial, it is the stylus, it is whatever it is you're trying to manipulate. Yeah, totally, totally. Wonderful. So, what book have you read recently that you found to be deeply insightful or profound?
So I read The Argonauts by Maggie Nelson, which is a super insightful book about the relationship of the author with her partner, and the partner's family. It's kind of an autobiography, but it's super reflective, almost like a diary. And there was a paragraph there which completely changed my perspective on step-parenting, and the difficulty of being a step-parent: having to be there constantly, without any possibility of demanding anything back. You need to be there constantly, while not necessarily getting anything back from the kid of your partner. That was super insightful, because I know some step-parents, but I had never properly understood the experience.
That's a hard position to be in.
Yeah, it had never occurred to me. I think I have one that's more AR related, but also not really: Dawn of the New Everything by Jaron Lanier. This is probably the personally most relevant book I've read in my life so far. I think it's because he was a musician, he thinks about AR very philosophically, and he built an AR/VR startup when he was in his 20s. So there were some parallels, similarities that I saw in him when I was reading. And I think he put AR in perspective: what is it to humanity fundamentally, and how does it change our perception of the human experience? One example was that augmented reality falls into the same category as three technologies, three phenomena, that were a hot topic back in the 60s. The three were lucid dreaming, self-assembling nanotech robots, and AR. And the thing they all had in common was that they were about creating anything you can imagine. All three, whether it's lucid dreaming, or the self-assembling nanobots, or AR, were about being able to shape our reality and literally create anything we imagine. That's really what AR is about. One way he put it was that the reason AR can be so profound to humanity is that it responds to the childlike call for creating what you can imagine. Of course, he put it more eloquently than I just did, but I think there's something fundamental in what AR is to humanity, and I think he put it really well.
So for this one, since you guys are younger, we'll adjust the age here. If you could sit down and have coffee with your 20-year-old selves, what advice would you share?
I would tell my 20-year-old self that you can relax. As a 20-year-old, I was already on to my second CEO position, and I feel like I wasn't really grasping some of the finer things in life. I was part of some social activities, but nowadays I have this idea of us being here for each other, and I didn't really live like that back then. I was more present for some of my own projects than I was for some of the people around me. So I feel like I want to practice this kind of thinking: of course, working on great projects and world-changing problems, but also being here for each other, and just being present with people.
Jamin?
I'd tell him to read a lot more. Yeah. And specifically, the thing that really affected me later, and really made me feel much better, is just reading on public transport every morning, and every evening when you're coming home. There's a standardized time there where you can definitely get stuff read. It was only two years ago that I started properly reading all sorts of things, and it came too late; I wish I had started sooner. Second: meditate. Probably meditate. And trust me. So, yeah.
One interesting thing that Jamin and I just realized on our CES trip is that generally, Jamin tends to be very interested in things, whereas I tend to be very interested in people. And I think in order to pull off the project we're doing, you need to do both. You need to have a kind of internal drive to have both of those on at the same time.
Yeah, that's a great observation. Any closing thoughts you'd like to share?
Well, first of all, I think, Jason, you're asking a lot of great questions, and I hope there's a chance where we can interview you back. I think there's plenty we could learn from you, too. Maybe one closing thought, keeping in mind that there may be other people listening to this: I'm in this project to try and, in a way similar to what Jaron said, create whatever you imagine. I'm here to try to make cool shit, and also talk about it and think about it. So if anyone wants to reach out and just talk about it casually, I'm all for it. If you Google Port 6, there's Twitter and LinkedIn; you probably might even find my number. Just cold call me, and let's talk about AR. These are conversations I'd love to have way more of.
Maybe just a final remark from my side: I don't know how people usually perceive Finland, but we're not always perceived as the most social or the most socially flamboyant country. Usually we're very introverted. So when people ask us why we insist on discreetness so much, I'd say some of this comes from our cultural background, from being raised in Finland, where voice commands are not an option; you do not make sound in public. So even though Jamin and I are both very international, and we love doing things internationally and going beyond Finland, I do think that some of the drive for making things very discreet does come from this cultural background.
Such an interesting connection you've just made. Going back to this notion of being discreet, there's also the opposite challenge of having to hold your hands in front of your face for the cameras to perceive what's happening with your fingers or your hands. It's not purely about the discreetness; it's also about the comfort factor, an element we've talked a bit about during this conversation. Both of those are, I think, essential for long-term, sustained input: that it can be discreet, it can be comfortable, and it can be effective, with a low error rate. All of those things are possible, and better done, when our hands are not held out in front of our faces.
Exactly. Just think about texting, or inputting your password on a metro. If it's not discreet, it's going to be interesting what it looks like, and how people perceive whatever's happening. It's basically how we use our smartphones: it's super discreet, it's super reliable, and when a person is inputting, they could be doing anything. The level we're trying to achieve is that this input would not take away from your human experience; it would be barely noticeable. Because today, you can see when someone's using their phone at the dinner table, or in the park, or on the tram, and it's distracting from human connection. What we want instead is to be focused on presence, focused on whatever's happening outside you, just adding the virtual elements that you really need and really want to be there, and then inputting discreetly.
Jamin hinted at this, but where can people go to learn more about you and your efforts there at Port 6?
So the best way is to Google Port 6. You will find our website, you will find our socials. Another way would be to reach out via LinkedIn or Twitter, to me or to Jamin; both are perfectly sound channels. We're active on all of them, so just shoot us a message or contact us through our website.
Excellent. Thank you both so much for the conversation. Thank you.
Thank you, Jason, we'd love to
Before you go, I want to tell you about the next episode, in which I speak with Maryam Sabour. Maryam is the general manager and head of business for the AR headsets group at Niantic. There, the goal is to enable everyday adventures and real-life social interaction through location-based augmented reality. We get into Maryam's background in law, her work with early-stage startups, and Niantic's progress and vision for their AR hardware efforts. I think you'll really enjoy the conversation. And please consider contributing to this podcast at patreon.com/theARshow. Thanks for listening.