The AR Show: Nils Pihl (Auki Labs) on Augmented Reality as an Extension of Human Language
6:59PM May 23, 2022
Speakers:
Jason McDowall
Nils Pihl
Keywords:
ar
positioning
people
augmented reality
building
meme
shared
behavior
world
experiences
gps
called
memetic
position
labs
create
technology
investors
big
host
Welcome to the AR show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Nils Pihl. Nils is the CEO and co-founder of Auki Labs, a company solving the limitations of GPS to accurately position virtual content, particularly for shared experiences. Nils and the team at Auki are building advanced peer-to-peer positioning protocols and AR cloud infrastructure to enable a new era of spatial computing. Prior to founding Auki Labs, Nils was a behavioral engineer who studied meme theory and was a practitioner of memetic engineering. He is also a serial entrepreneur, bringing forward some key lessons learned into his current endeavors. In this conversation, Nils shares his perspective on AR as a natural and inevitable extension of human language.
I think augmented reality is a subset of language. Or if we flip this around, I think language is the oldest augmented reality technology we have. What do I mean by that? Let's say that we are walking through a forest, right, and we come across a fallen tree. And I tell you, look at that amazing couch. By the act of saying, look at this amazing couch, I have now most likely made you perceive this tree differently. You now see the sitting-ness of this tree. I've painted the environment with an emotional context that now makes you see the whole world differently. And I think the impulse towards language is a very deep human impulse, on the level of, you know, food and sexual reproduction. The desire to annotate the world and add depth and meaning to objects, and transmit memes to each other, to arrive at an intersubjectivity where we perceive the world the same way, is such a deep human desire that I kind of feel that augmented reality, the way we talk about it today, is an inevitable technology. In any timeline where humanity survives, they invent augmented reality. We go
on to discuss meme theory and explore Auki Labs' unique positioning technology. As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. That's t-h-e-a-r-s-h-o-w dot com. Let's dive in. How did you discover tabletop war games? So
I was working at a company, normal nine to five, and one of my colleagues brought in some Warhammer miniatures one day that he had painted himself, and he was so excited. And I was very fascinated, you know, because here is a grown man with his painted little figurines, so excited about them, inviting everyone at the office to play with him. And he challenged me to a game. I was like, ah, but I don't have any of my own stuff. And he's like, it's okay, I'll play you in three months. Go to the store, go get an army. So I did. I went to the Warhammer store, and I completely fell in love with the hobby, but also with the company that makes it. I felt like, wow, this is like peak capitalism in a way, you know, because they're selling me the miniatures, they're selling me the paint, the brushes, the rule books, the lore books, all of this stuff. And I got very, very fascinated with them as a company, and very, very fascinated with the hobby. These tabletop war games are played with a measuring tape and with dice, and the rules are very arcane and hard to follow. And I realized that two million people a month watch these games on YouTube. And I started thinking, wouldn't it be super cool if we could watch this the way we watch an American football game, you know, with AR? And that's how I discovered tabletop wargaming, and how I ended up down the path of wanting to make augmented reality things, because I wanted to make tabletop wargaming into a proper, like, esport.
A proper esport. So here you're thinking about AR from the perspective of, like, we see the superimposed first-down line in American football, or whatever, all these image overlays for us home viewers. Yeah, to really, more deeply appreciate what's going on in the game. Yeah,
I think one of the things that makes it difficult to understand sports for me is I often don't understand the rules. I don't know what's going on. And, you know, there's this behavioral expert called Kathy Sierra, who refers to it as high resolution, where if you understand the rules of something, the whole thing appears differently to you. An example that she used is, you know, if you know how to see the Big Dipper in the sky, you can't unsee the Big Dipper. The night sky just looks different. And if you understand the rules of football, you see something completely different from someone that does not understand the rules. And as someone who's very interested in behavior myself, and also very interested in language, I thought, well, it's super fascinating how hard it is for me to communicate what is happening in a tabletop game, and how effective a language tool augmented reality would be. If we could show things like, here's the aura, here's the range, here are the things that this character is capable of doing, then, you know, as a player, I can intuitively see them. But as a non-player, I don't see it at all.
There is a huge difference between those two things. It's almost as if we're living in two different worlds, seeing the world through two completely different lenses, when we're able to hold that understanding in our brain. This fascination that you have with behavior and language ultimately led you to study a concept called meme theory. Yeah, can you describe what meme theory is, and what it teaches us about the world?
So meme theory is the idea that culture and behavior, information, are subject to the laws of natural selection. In the 70s, there was this biologist, Richard Dawkins, who was writing a popular science book called The Selfish Gene, where he was explaining the concept of the neo-Darwinian synthesis, which is, you know, a lot of big words to describe a very simple concept, actually: that natural selection does not happen at the level of species or tribes or even individuals. The neo-Darwinian synthesis teaches us that evolution happens at the level of individual genes, which becomes a very powerful lens when you start looking at normal evolution. And in one of the chapters of the book, he had this thought experiment: what if we could think of culture and information in the same way? What if we could invent a gene analog, let's call it a meme, right? And so he put out this concept of the meme to explore how information, culture, and behaviors might be subject to natural selection in the ways that you get survival of the fittest. The fittest in terms of a gene is very easy to understand, you know: which gene makes the most copies of itself across a population? And the same thing would apply for a meme: which behaviors make the most copies of themselves? So everything from language to fashion to architecture are different kinds of memes. The word meme is a meme itself, right? And the way t-shirts look, all of these things are memes. And when I started studying this, I got very, very fascinated with the predictive power that memetics has, because you can start looking at things like, how observable is a certain meme? Because a meme cannot be copied if it cannot be observed, right? So how observable is this thing? How easy is it to replicate this behavior with high fidelity? Which, you know, kind of maps on to what the mutation rate of a certain gene is, right?
And then there's a third component that is quite unique to memetics, and does not quite apply to genetics, which is this idea of self-healing, self-normalization. So, for example, if we play a game of telephone. Do you know what telephone is? Do you have that in America?
Sure, absolutely. You line up a bunch of kids in a row and start with one phrase at one end, and see what comes out the other end as they whisper it to each other, right? You
get a much worse result in telephone if you try to play it in a language that you don't speak, because your self-normalization routines don't kick in. Like, if I tell you that, hey, we're going to play a game of telephone with numbers, and I tell you ninety-nine, there's a good chance that you'll be able to repair a garbled version back to ninety-nine. But if I say it in French, quatre-vingt-dix-neuf, right? It doesn't help you that you know it's a number if you don't speak French. What are you going to do? So this concept of self-normalization is very, very interesting. When you then start looking at different behaviors in terms of how observable is it, how replicable is it, and does it have any capacity to repair itself when it mutates, you can start making very interesting predictions about which kinds of behaviors will flourish in certain environments. But also, you know, you can start looking at environments and start thinking about what kind of memes may actually spontaneously emerge out of this kind of environment. An example that I like from normal evolution: if you arrive at Earth as a space alien, and you don't know anything about Earth specifically, but you understand biology and you understand natural selection, and you come across a tree, right, a fruit tree, then you can very easily guess that there must be some kind of organism here that either flies up in the tree to eat the fruit or climbs up the tree to eat the fruit. Or there's something like a giraffe that's really tall but can eat the fruit. There's going to be something around here that can eat this fruit, or the fruit wouldn't be there. And you can often make similar kinds of memetic predictions: because there is a certain kind of reward available here, there will already be, or soon will evolve, a behavior that captures that reward. So yeah, that's what meme theory is.
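The self-normalization idea here, where a receiver who knows the space of valid messages can snap a garbled transmission back to a valid one, can be sketched in a few lines. This is an editorial illustration, not anything from the conversation: the vocabulary list and the similarity cutoff are made-up choices, using Python's standard difflib.

```python
# Sketch of "self-normalization": if the receiver knows the space of
# valid messages (English number words), a garbled transmission can be
# repaired. Without that scaffolding -- the French example -- it cannot.

import difflib

KNOWN_NUMBERS = ["ninety-seven", "ninety-eight", "ninety-nine", "one hundred"]

def repair(heard, vocabulary):
    """Snap a garbled message back to the closest known valid form."""
    matches = difflib.get_close_matches(heard, vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else heard  # no close match: pass it on as-is

print(repair("ninty-nine", KNOWN_NUMBERS))  # -> ninety-nine
```

A listener who doesn't speak the language has an empty vocabulary, so nothing matches and the error propagates unrepaired, which is the point Nils makes with quatre-vingt-dix-neuf.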
That's truly fascinating. Just to extend on this, this notion that the existence of a reward is the foundation upon which some behavior will be generated, and potentially replicated, and ultimately becomes one of these memes. That's kind of the core concept. So if you can create the right sort of incentives, or if the right sort of rewards happen to exist in the eddies of maneuvers by other, larger bodies in our existence, we're going to see lots of new behaviors building up around it.
Yeah, if there's a reward, then something will evolve. And then you can make educated guesses about which thing will win based on, again, how observable is this particular meme, how replicable is this particular meme, and how self-normalizing is this particular meme. So if you're, you know, looking at four different memes, and one of them scores very high on all of those, then you can be pretty certain that that's the one that's going to come out on top.
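The three-factor comparison described above can be written down as a toy scoring model. This is purely an editorial sketch: the 0-to-1 scores and the multiplicative combination are illustrative assumptions, not a real memetics formula.

```python
# Hypothetical sketch: ranking memes by the three fitness factors
# described above -- observability, replicability, self-normalization.

def meme_fitness(observability, replicability, self_normalization):
    """Combine the three factors into one comparable score.

    Multiplying reflects the idea that failing badly on any one factor
    (e.g. a meme that can't be observed) means the meme can't spread.
    """
    return observability * replicability * self_normalization

memes = {
    "sneaker collecting": meme_fitness(0.9, 0.8, 0.7),
    "model railways":     meme_fitness(0.5, 0.2, 0.6),
    "secret hobby":       meme_fitness(0.1, 0.8, 0.7),  # low observability
}

# The meme scoring highest across the factors is predicted to win.
winner = max(memes, key=memes.get)
print(winner)  # -> sneaker collecting
```

The "secret hobby" entry anticipates the sneakerhead discussion that follows: high replicability doesn't help if observability is near zero.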
Fascinating. As you say this, the concept that comes to mind is the area around sneakerheads. For whatever reason, this is not something that I myself am particularly infatuated with, but I'm just an observer of the deep level of passion that people who collect unique sneakers have for this particular type of art or object, beyond its basic utility. And I would not have imagined this whole trading environment, this pre-purchasing environment, all sorts of behaviors around this reward, this incentive, that I didn't even realize existed until now I see all this emergent behavior. Is that the sort of thing that you would be able to predict with memetics?
We can certainly look at something like the sneakerhead phenomenon. And, you know, if we saw early on in sneakerhead culture that part of the meme is that you're not supposed to tell anyone that you're collecting sneakers, that's not going to be a very fit meme, because it reduces its observability, right? So that's the kind of analysis that you would apply, to see, like, okay, this kind of behavior, how will people get exposed to it? Is there a way to get people exposed to it? And if they get exposed to it, how hard is it for them to copy what's going on, right? And buying sneakers is actually quite easy to replicate. You know, it's a lot easier to buy a sneaker than to build a model railway, for example. And it's, you know, somewhat self-normalizing as well, because sneakers are sneakers. If you end up buying the wrong kind of thing, you'll get feedback pretty quickly that that's not a sneaker.
Very true. So you've studied this, and you also have this passion for augmented reality, for bringing the concept of the Warhammer tabletop game and enhancing that experience with augmented reality. As you mix these things together, how do you use meme theory to help us better understand the struggles of wearable AR, like in the form of Google Glass, which came out in 2013?
So one of the things that I realized was holding tabletop wargaming back was that it was so hard to explain to people what was going on, which, you know, reduced the replicability of the behavior. It's hard for me to learn, and to explain to someone else what's happening. All of it is very complex, and there were not good tools for helping. But I also started realizing, as I started building AR tools for this, that there are some strong memetic weaknesses, like very strong memetic weaknesses, of AR itself, because AR today is kind of not shareable, right? Why is this the case? Well, the electronic devices that we use to render augmented reality only have a very rudimentary understanding of where they are in the world. So sure, you can use your SLAM to generate the Pokemon or whatever on the table in front of you. But another device that's right next to you does not know that that table is there. It does not know where you are. You know, it can query the GPS and find out roughly that it's in the same house, but you need millimeters of precision to be able to play something together. And that means, you know, the state of the art today: since I brought up Pokemon, we can talk about Niantic's solution, for example. They recently put out this SDK called Lightship to help businesses make AR applications. And the way they solve shared AR is, you have to stand next to each other, shoulder to shoulder, you have to point your cameras towards a shared object, and then you have to start slowly circling that object in the same direction. While you're doing this, to solve the positioning, both phones are creating a 3D map of what they're looking at. This gets uploaded into the cloud in real time, where it's thrown into this machine learning tumble dryer that tries to figure out where these devices are relative to each other, based on the reference object that we both see.
This process takes a good half minute, and you have to do it at the same time. So that means that, in practice, shared AR is just not something that people do. And from a memetics point of view, I believe that explains a lot of why AR didn't take off back in 2012, 2013, when people really started talking about AR: because everything that we do in AR is a single-player experience. Imagine we went back to the early 80s, and multiplayer gaming is not a thing yet, and 3D games are not a thing yet. And we're going to create two alternate timelines: one where multiplayer gaming is invented, but 3D graphics is not, and another one where 3D graphics is invented, but multiplayer gaming is not. I think, in fact I'm convinced, that in the timeline where multiplayer is invented, gaming is still popular. But in the alternate timeline, where there's only 3D graphics but no multiplayer, I don't think gaming would be popular, because the graphics is just something that the studios use to compete for which app you will get. The fact that you're getting an app at all is because you want to do this with other people, and you've been exposed to it. You've seen other people play. You've heard, you know, that your friends were playing a game over the weekend, and you want to do the same thing. And I realized, as I was working on this tabletop AR thing, that AR has this deep memetic issue: you can't share AR experiences. So no wonder AR never took off. Everything we do in AR is a single-player experience.
How do you apply memetics to the sorts of things that are more utilitarian in nature, that are meant to be single-player experiences? Whether it's to guide myself through the execution of some task, making something in my kitchen, or getting my children to school in the car, whatever it happens to be, some sort of more utilitarian task. How does memetics apply in those sorts of experiences, as opposed to the shared experiences that you're describing with gaming?
So if we assume that the behavior is a behavior you have not yet learned, right, and you need to adopt this new behavior, then memetics helps us understand that we need to look at, again, how will you be exposed to this behavior? What is the mechanism through which you will observe this? And as a function of how you observe it, will you be given sufficient information to faithfully replicate the behavior? You know, how hard will it be for you to replicate the instructions, for example, that you're given? And finally, do you have some mental framework already, some memetic scaffolding, that might help you repair poor information transmission? Where even though maybe, technically, there's insufficient information in the description, you still have sufficient information in your mind, because of the cultural context and the memetic scaffolding you have, to assemble these instructions and replicate the behavior anyhow. So memetics helps us look at things like tutorials or teaching experiences, and really analyze: how is this behavior being transmitted from this offline source, or, you know, non-human source if we're talking about technology? And how can this behavior be faithfully replicated inside the human host?
As you take where we are today with augmented reality sorts of technologies, and this evolution of 3D and everything that goes into this concept of immersive, what do you project going forward? Where do you think some of the key changes in behavior, or in our experience with visual information, will be? How do you think that's going to evolve over the next handful of years?
So if I can be a little bit of a hippie here, I want to give you a somewhat psychedelic answer. I think augmented reality is a subset of language. Or if we flip this around, I think language is the oldest augmented reality technology we have. What do I mean by that? Let's say that we are walking through a forest, right, and we come across a fallen tree. And I tell you, look at that amazing couch. By the act of saying, look at this amazing couch, I have now most likely made you perceive this tree differently. You now see the sitting-ness of this tree. I've painted the environment with an emotional context that now makes you see the whole world differently. And I think the impulse towards language is a very deep human impulse, on the level of, you know, food and sexual reproduction. The desire to annotate the world and add depth and meaning to objects, and transmit memes to each other, to arrive at an intersubjectivity where we perceive the world the same way, is such a deep human desire that I kind of feel that augmented reality, the way we talk about it today, is an inevitable technology. In any timeline where humanity survives, they invent augmented reality. Like, I kind of jokingly say, you know, in any timeline where the internet is invented, so is internet porn. It's just going to happen. And in any scenario where we get to this point in technology, augmented reality will eventually be invented, because it's such a deep human drive. And keeping in mind that this is, you know, why I believe we do augmented reality, I think that the future of augmented reality is not necessarily around entertainment and chasing Pokemons and things like that. I think those are the early gimmicks to make us play around with the technology. But what I think people really want to do is annotate the world, share information, arrive at intersubjectivity.
And I have a little bit of a contrarian prediction about wearables here. I think that if there was a wearable on the market today that did not support 3D graphics (in fact, it didn't even support color, it could only render black and white), and all it could do is leave little text labels, but it could leave those text labels precisely positioned for you and everyone else that wears those glasses, I think there would be a market for that. Genuinely, I think those glasses would do much better than something like the Spectacles, which allow you, you know, to make some cool AR videos to post on Snapchat, but you can't share them with someone in the moment; you can only share them over Snapchat, asynchronously. So I, you know, self-servingly believe that unlocking positioning is what will unlock the potential of AR. And I think what will happen when we solve positioning, so that we can have shared intersubjective experiences, is that people will start annotating the world more and creating forms of art and expression that are about arriving at intersubjective experiences.
Explain intersubjective experiences.
Intersubjectivity is this idea that, as humans, we inherently want to perceive the world the same way. And we notice this in things like: if you're the only person in a crowd, you know, at a party, for example, that did not laugh at a joke, that feels weird. It feels so weird, in fact, that we'll laugh even though we don't get it, right? And when you say, oh, this person is really beautiful, and another person disagrees, this is not just a simple disagreement; this feels weird, actually. We really want to have our perceptions validated by other people. I don't know why that is, but I can observe that it is a fact that we want to get external validation of our perceptions. To the point where it is said, and I have not experienced this for myself, but it is said of people that sit in isolation, for example, that they will start talking to things like bugs and stuff in their prison cell, and ask them things related to their perception, like, isn't it cold in here? Because this is such a deep-rooted human desire, to make sure that what we subjectively experience maps on to what other people are subjectively experiencing. So this is the idea of intersubjectivity: your subjectivity and my subjectivity have some significant enough overlap that none of us feels like we're having a psychosis.
It's so fascinating. It brings to mind Tom Hanks in the movie Cast Away. He's talking to the volleyball, yeah, and names it Wilson, that sort of concept. At first I was thinking, well, maybe this has to do with our desire to belong, to not be cast out, to not be left outside of the tribe, you know, left for dead, whatever the implication was of being outside of the norm. But when you describe these scenarios where, even in complete isolation, we still have this desire to vocalize and receive validation, even if it's all coming from our own head, it's maybe even more fundamental than that. That's fascinating.
Maybe. Like, I don't know what triggers it, you know, but I observe it in myself. It is unsettling when I perceive something that no one else perceives. You know, if you're the only person to hear a noise in the background, you will start questioning your sanity.
Indeed. As you think about this inevitability of augmented reality as an extension of language, an extension of our desire to share experiences with each other, to validate those shared experiences with each other, you noted that one of the key missing pieces has to do with colocation: both of us perceiving the same thing in the same place at the same time. Niantic isn't quite there. You know, there's a high-friction ritual, if you will, to create the foundation for that shared experience. And then if somebody were to come in after the fact, after the initial calibration, they have to start the whole thing all over again. That's kind of the extra challenge with that sort of approach. And GPS isn't good enough, not even close to being good enough, to create any sort of shared experiences. So what's the alternative approach?
So I believe that any satellite-based approach is inherently not going to work. We can forget about GPS or any other satellite-based approach, because they need line of sight, and I don't know about you, but I spend a lot of time indoors, where there is no line of sight. And that means that we have to position ourselves off of something a little bit closer than a satellite in the sky, which means that we can now start talking about something like a peer-to-peer positioning system, for example. Not everyone agrees with me here. In fact, there are a couple of big players (stop me if you've heard any of these names): Apple, Google, Microsoft, Snapchat, Facebook. They have invested very, very heavily over the last few years into this concept of the digital twin, which is this idea that we will create a one-to-one replica of the world and store that in the cloud, so that any camera-enabled device can, using its SLAM, using what it can see, cross-reference this shared memory of what the world looks like, and position itself that way. And I am just not a believer that that's what's going to fix positioning. Because, you know, out of the more than 10 billion devices that are connected to the internet today, I think not that many of them have cameras. And I think there are many billions of devices that don't have cameras that would also benefit from having precise location. And also, you know, it's nice to be able to position yourself in the dark, and things like this that are pretty hard for a camera to do. So my big bet is that positioning will ultimately be solved by peer-to-peer positioning.
In fact, if I can be a little outrageous here, I want you to imagine, you know, 20 years from now, when we're landing on Mars. Just consider the fact that when the first drones or whatever land on Mars (we don't even have to talk about humans), these drones will be fundamentally lost, because there are no GPS satellites flying over Mars. They don't know anything about where they are. But if their clocks have been synchronized before arriving at Mars, then they can start pinging each other and do distance estimation using time of flight. And once they've figured out the distance to each other, and there are enough of them, they can start doing triangulation. And once they've started doing that, they can, at that point, maybe start building up a digital twin of Mars, should they want to. But it seems to me that the way that you would bootstrap a new positioning system is with something like triangulation. And in fact, we do use triangulation on Earth today. If you go into Google Maps, it'll tell you, hey, if you turn on Wi-Fi, you'll get a better location. That's not because you're connecting with Wi-Fi to the satellite, obviously. It's because, secretly, both Apple and Google have been recording in the background what Wi-Fi stations we have access to. And using some simple heuristics based on, you know, if you have this strong a Wi-Fi signal, then you must be within this many meters, they've been able to create a rough estimate of pretty much where every Wi-Fi router in the world is. And then they can use that to triangulate you. But it doesn't triangulate you down to something much better than actual GPS outdoors. In fact, it's worse than GPS is outdoors. It will position you within half a meter to five meters of where you are indoors, and half a meter is, like, ideal conditions.
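The time-of-flight idea in the Mars example reduces to simple physics: with synchronized clocks, the travel time of a signal times the speed of light gives the distance. A minimal sketch, with made-up timestamps:

```python
# Sketch of time-of-flight ranging between two peers with synchronized
# clocks, as in the Mars drone example. The timestamps are invented.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_time_of_flight(t_sent, t_received):
    """Distance in meters from one-way travel time (clocks synchronized)."""
    return (t_received - t_sent) * C

# A ping that took about 333.56 nanoseconds to arrive travelled ~100 m.
d = distance_from_time_of_flight(0.0, 333.56e-9)
print(round(d, 1))  # -> 100.0
```

The catch, and the reason the clocks must be synchronized in advance, is that a nanosecond of clock error is about 30 centimeters of range error; real systems like GPS and UWB ranging go to great lengths to cancel that out.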
But with something like ultra-wideband, which is something I believe in very much, already in a single peer-to-peer connection today, like between your iPhone and your, what do they call it, AirTag? SmartTag? I forget. One of these little ultra-wideband things you can put in your wallet. You can get a distance measurement that's, you know, plus or minus three centimeters. And now that's pretty interesting, because that also means, you know, if we start triangulating, and we have more nodes in this little network, we can start using probabilistic consensus algorithms to maybe get that down to sub-centimeter positioning. And now we're talking. So what I think will happen, both on Earth and later on, you know, the Moon and Mars, is that we will replace satellite-based positioning systems with peer-to-peer positioning systems that use whatever protocols they have available. You know, they do collaborative SLAM where they can, they do ultra-wideband triangulation when they can, and if they only have access to Wi-Fi, that's what they do. But some kind of unified peer-to-peer positioning protocol, where devices help each other find their way.
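Once a device has range measurements to a few peers at known positions, solving for its own position is classic trilateration. This is an editorial sketch of the textbook 2D case, not Auki Labs' protocol; the anchor coordinates are invented, and real systems would use many noisy ranges and a least-squares or probabilistic solver rather than an exact three-anchor solution.

```python
# Minimal 2D trilateration: recover a device's position from its
# measured distances to three anchors at known positions.

import math

def trilaterate(anchors, distances):
    """Solve for (x, y) from three (ax, ay) anchors and measured ranges.

    Subtracting the first circle equation (x-x1)^2+(y-y1)^2 = r1^2 from
    the other two linearizes the system into two equations in x and y.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.dist(a, true_pos) for a in anchors]
print(trilaterate(anchors, ranges))  # close to (3.0, 4.0)
```

With the centimeter-level ranges UWB provides, feeding many such measurements into a consensus or least-squares estimate is what could plausibly push the position error down further, as described above.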
Peer-to-peer positioning, using whatever radio technology they have access to, ultimately in order to create a shared understanding of where everything is.
Yeah, I mean, it doesn't even have to be radio, technically. You know, you can imagine things like shining infrared light on each other, whatever. You know, like on the surface of Mars, where there are no obstructions, there are all kinds of wonky things that you can do. But the idea is that, yeah, you position yourself not necessarily off of some specific point in the world, because most of the time we're not trying to figure out where exactly in the world we are. We're trying to figure out where I am in relation to this other object I'm interested in. You know, I'm very rarely interested in my exact GPS coordinate, but I am interested in how many hundreds of meters I am from the restaurant I'm looking for.
Very true. You just gave an example there, but what are the other types of experiences that you think take best advantage of this idea of relative location?
I lived in Beijing for seven years, and Beijing has a population of 20 million people, where the average commute is between one and two hours. And the reason that commute is between one and two hours is because there are a lot of traffic jams. In fact, I calculated one day that over 20 million hours are lost in traffic every day, right? And how do we solve a problem like 20 million hours? You know, just to put a scale on that: it's on the same order of magnitude as how much time it took to build the pyramids. That's how much human productivity is lost every day in Beijing traffic. And if you want to envision a world where self-driving cars communicate with each other to solve traffic jams, where they have to position themselves in a very fine-grained way, and keep track of perhaps decimeters or centimeters (you know, less than a foot, maybe even less than an inch) of accuracy in their positioning, they won't be able to do that with GPS. So today, a self-driving car might ask a GPS to find out roughly where it is, and then use its cameras to find where the road is and make sure that it's in the right lane. But that doesn't mean it has a very fine-grained understanding of where it is, or that it could communicate to another car where it is, right? So if we want to imagine a sci-fi world where the cars in Beijing are self-driving, and they're slipping through traffic, and there aren't even any red lights anymore, something like that would require a very, very precise positioning system, where the cars and the robots and the AR devices, whatever it is, can communicate with each other in real time about where they are.
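The 20-million-hours figure follows directly from the rough numbers in the conversation: if each of 20 million commuters loses on the order of an hour a day to congestion, the daily total already reaches 20 million hours even at the conservative end of the one-to-two-hour range.

```python
# Back-of-the-envelope check on the Beijing figure, using the rough
# numbers from the conversation (conservative end of the 1-2 hour range).

commuters = 20_000_000
hours_lost_per_commuter = 1.0  # per day, lower bound of the stated range

total_hours_lost_per_day = commuters * hours_lost_per_commuter
print(total_hours_lost_per_day)  # -> 20000000.0
```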
It's a fantastic example. You are taking this contrarian view on how we're going to build up an understanding of where we are, contrary to at least what the major players have been doing of late, and building it out through Auki Labs. Can you describe the grand vision that you have for Auki Labs?
What we're building at Auki Labs right now is a collaborative SLAM engine, which is pretty cool. But what we're building out long term is this peer-to-peer positioning protocol that works on what we call decentralized, interoperable domains, and we're going to have to unpack all three of those words. So decentralized: this is the idea that these things are self-hosted. Why does that matter? Well, we'll get to that when we understand what a domain is. A domain is a collection of maps, the digital topography of your bedroom, for example. Now, why would you want the topography of your bedroom to begin with? Well, you might want it to be able to put pieces of AR art on your wall or something like this; you want things to be precisely positioned in your home with your future AR glasses. But you may not want that scan of your bedroom to be available to Apple and Google or Microsoft, and you certainly don't want it to be available to Facebook. So you might want to self-host this. You do want to have a digital twin of your bedroom, but you want to be able to self-host it. But that doesn't mean it should be an island that can't be accessed by anything else. Because, for example, maybe you buy a Roomba 10.0, whatever, a couple of years from now, and you want that Roomba, as soon as it connects to your Wi-Fi, to know what your apartment looks like. So you can give it permission. Or, to use a more industrial example, let's say that you have a warehouse and I have a warehouse, and we are going to get a drone delivery between the two of us. A third-party drone delivery company shows up at your warehouse and connects to your self-hosted domain and says, hey, I am the one that's here to pick up the package. And now your domain makes a call on whether you want to share the map with the drone so that it can figure out where to go itself.
Or you can do a bit of pathfinding computation, preserve privacy, and just share a path for the drone to follow: you're going to follow this path and find the package. So the drone goes, finds the package, flies off, and comes over to my warehouse, where it does a handshake with my domain and exchanges information with it. And my domain, again, either gives it access to the map, or a selected part of the map, or even just a path to deliver along. A positioning system like this can allow for something like a cup of coffee to be delivered to your windowsill, instead of to your street address.
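A toy model of that handshake might look like the following. The `Domain` class, its method names, and the trivial straight-line "pathfinder" are all my invention to illustrate the permission tiers being described (full map, path only, or nothing), not Auki's actual API.

```python
class Domain:
    """Toy model of a self-hosted positioning domain (hypothetical API).

    A domain holds a private map and decides, per requester, whether to
    share the full map, or only a precomputed path, or nothing at all.
    """

    def __init__(self, name, full_map):
        self.name = name
        self._map = full_map   # private topography, never shared by default
        self._grants = {}      # requester id -> access level

    def grant(self, requester, level):
        assert level in ("map", "path")
        self._grants[requester] = level

    def handshake(self, requester, start, goal):
        level = self._grants.get(requester)
        if level == "map":
            return {"map": self._map}
        if level == "path":
            return {"path": self._plan(start, goal)}
        return {}  # no permission: share nothing

    def _plan(self, start, goal):
        # Stand-in for a real pathfinder: just the two endpoints.
        return [start, goal]

# A warehouse grants a delivery drone path-only access.
warehouse = Domain("warehouse-A", {"aisles": 12})
warehouse.grant("drone-42", "path")
print(warehouse.handshake("drone-42", (0, 0), (5, 3)))
```

An unknown requester gets an empty response, which captures the "your domain makes a call" part of the idea: disclosure is the owner's decision, not the default.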
I love this concept so much.
I'm glad. We're very excited about it as well.
Particularly the privacy-preserving nature of the approach is really powerful. To put a few of the pieces together: there's a lot of concern that I personally hold around what the world looks like if we're trying to create all these private spaces, and a level of understanding of location within those private spaces, and to make those interoperable and useful across some broader set of utility when coordinating multiple devices, as you're describing. So your protocol, along with your technologies, is about understanding relative position, but it's also about how to manage that local map and share only the bits of it relevant to a given question and the task at hand.
Yeah. And one cool thing that we think this kind of peer-to-peer positioning, and also non-camera-based positioning, can allow, and this is one of the things that we're prototyping over in our Hong Kong office right now, is a pair of AR glasses without a camera, a pair of AR glasses with no camera at all. Why does this matter? Well, ever since the Google Glass, there have been stories of dudes getting punched in the face because they show up with a pair of Google Glass, because there's a camera on there, and you can't trust that people aren't recording. Walking around with a camera on your face all the time is not very privacy-preserving. Any kind of vision of AR glasses that relies on SLAM obviously relies on a camera. And that's going to make people very uncomfortable about wearing them in certain social gatherings, or wearing them at home. Because who knows where this data is going? And how can I trust that you're not recording my bedroom as I'm walking you through my apartment to show you how I live? So we really, really believe that the future of positioning cannot be based on cameras, because that's a surveillance nightmare.
It is a surveillance nightmare. Do you think that was the number one reason why Google Glass was a failure?
I think the main reason that Google Glass was a failure is, again, positioning. There's nothing really cool that you can do in... okay, they weren't really AR glasses, to be fair; they showed some overlays, they were a wearable display. So maybe it's a little unfair to say they had to solve positioning, because it's a wearable display. So if you discount the fact that they're not really AR glasses, yes, the camera is a huge part of why they failed. People did not feel comfortable with people walking around with a camera on them. And I have some VC friends that say, oh, AR will never take off until AR glasses cost less than $100, and whatever. I actually don't think that's the problem. Smartphones had no problem taking off at a much higher price point; computers had no problem taking off at a much higher price point. I think there was an issue of privacy that made people feel uncomfortable, and it wasn't clear what the utility of the thing was to begin with, so the risk-to-reward ratio there was not very favorable. And when it comes to the actual AR glasses on the market today, I think the fact that you can't have any kind of intersubjective experience with them also, again, means: what's the utility? What am I doing with this? If all I'm doing with this is recording stuff for Snapchat in an awkward way with my face, what's the point?
So very true. So you are building out this set of APIs and SDKs around this sort of collaborative positioning, collaborative SLAM. Who is it that you imagine will be your target customers, and what is it that they're going to be paying you? What's the business model behind this offering?
So today, AR apps are already using our Unity SDK, which is in beta now, to create shared AR experiences. Obviously, I think almost anything that you can conceive of in AR is better if you can share it with other people. It's harder to think of examples of AR that would not benefit from being shareable than it is to think of things that would. But we also envision things like warehouse robots that need to navigate places, or drones for first responders that need to collaboratively map out new spaces together, any kind of positioning system that needs to be bootstrapped. Deep mines, underwater research stations, wherever GPS will not help you out, you need better positioning. And even though indoor positioning is so bad right now, it's still a $20 billion industry. So I think there are a lot of applications for precise positioning. In fact, the longer you think about it, the bigger the total addressable market becomes. In the short term, our potential customers are people that want to do shared AR, and they're very likely to use it as software as a service, because most people don't want to host their own servers. But as we progress, even consumers will start doing things like self-hosting their domains, because they want the apps that they use to have access to these high-definition maps of their home, but they don't want those maps of their home to be in anyone else's possession, not even ours at Auki Labs, and certainly not Apple's, Google's, Facebook's, etc. And, of course, we're hoping to make a dent in things like robotics and self-driving cars. This problem with Beijing traffic is something I've thought about a lot since the first time I realized, oh my God, so many pyramids are being lost every year, this is crazy. How do we fix this?
Everyone arrives at the same answer: oh, self-driving cars are going to fix this. Okay, but for the self-driving cars to be able to fix it, they need to be able to communicate their position and their speed with a very high degree of accuracy. Because if you can only tell me within 10 meters where you are, it's going to be very hard for me to avoid a collision with you.
Yeah, very true. Very true. So, to extend this business model concept: in the case of a robot or a self-driving car, is the business model that you work with those manufacturers, and with manufacturers of the subsystem that's responsible for understanding where it is relative to the rest of the world, and then there's a licensing fee that goes along with that?
Yeah, or they could choose to self-host. If they want to have their own data centers and their own network relays for doing collaborative SLAM and setting up the handshakes for the ultra-wideband triangulation, they can self-host that. So again, the idea is the decentralized, interoperable domains. Even if, say, Tesla wanted to use our positioning system, we're building it in such a way that they can opt into being interoperable with other things. Just because they're self-hosted doesn't mean that they're isolated from the rest of the world.
Yeah, I love that. So congratulations, by the way, on the recently announced $13 million funding round. That should go a long way.
Thank you very much.
This is kind of an interesting time in the equities market, as we generally observe what's happening globally, geopolitically and economically, etc. When did you start that raise, and what was most challenging about the process for you?
We started the raise pretty much exactly one year ago. As of one week ago, it was one year since we invented the instant calibration. And when we invented the instant calibration, we pivoted away from making Warhammer apps. We had raised, I think, $100,000 or $150,000, something like this, since 2019 to build out this tabletop application, and we called up one of our larger investors, who I think had invested 50,000 of the total, and said, hey, we just invented something that we think is really, really big. We explained it to him, and said, we don't really know what to do with this yet, but it seems big. So he gave us the first $12,000, though he didn't give it to us directly. He said, I'm gonna give this to your patent attorney. So we filed a patent. And then we started thinking about what kind of investor would be the most interested in what we've invented. Because this is not my first startup; I've fundraised before, and it can be a very, very time-consuming process. So I wanted to try something different this time. I felt, I'm gonna make a shortlist of a couple of people that should really, really understand what it is we do. And one of those investors was a firm called Outlier Ventures, whose hypothesis, their mandate, is that they want to bring about the open metaverse. What does that mean? That's a long conversation, but they have this very specific vision of the future of the internet. And they had written a lot of blogs and thoughtful posts on how this might be built. We read through all of this material, and we realized, hey, there's nothing in here about positioning, which I will forgive them for, because I didn't know that positioning was an issue when I started the tabletop AR app either. So we sent them a single-line email saying, hey, we read your stuff, it's really cool.
You forgot about positioning. And we sent them a video demo of how we did this instant calibration. They immediately got back to us, we worked out a deal, and we went through their accelerator, and things started rolling in very, very quickly from that point on. We got an investment from a Chinese VC called NGC. They were one of our first major investors; they put in 200,000, if I remember correctly, and started introducing us to a bunch of other VCs. In fact, after I contacted Outlier Ventures, I never did an outbound request to an investor again, because of the timing of where the market was that summer, everyone's focus on the metaverse as what's going to be happening in the future, and our very simple story: hey, we have a problem that you didn't even know existed, positioning, and we have something that is so demonstrably better than anything out there on the market right now. The fastest close we did with a VC was two minutes.
Two minutes. Yep. And how much did they invest after two minutes?
Over $100,000. I don't remember the exact amount, but it was over $100,000. They had been looking at the space before. They saw the demo and were like, yep, we get it, we're in, give us the round details. Amazing. So that was very gratifying.
So let's go back to the core innovation, the core invention that kicked off this tremendous progress you made last year in terms of building out the rest of the company around it: instant calibration. Describe what happens in the video that was so compelling to these investors.
The very first demo was very, very simple. It was a little video of me standing on my street in Hong Kong. I put up this little satellite that I bought from the Unity Asset Store, watching it in AR, and then I take out another phone, and I just scan a QR code on the first phone. And immediately after scanning the QR code, the satellite appears on the second phone in the same place. What the instant calibration does is it allows us to figure out where we are relative to each other by analyzing the content on the display. As a recovering philosophy student, I was looking at positioning, and I was looking at digital twins and all of this, and I was just thinking: do we really need to know where in the world we are? Is that really the information we need? Wouldn't it be enough just to know where we are relative to each other? It should be. So then we started thinking about how we can establish where we are relative to each other. And we came up with a technique where we analyze some content. I don't want to dive into it too deeply, because we're still patent pending, but we figured out a clever signature that we could put on a digital display that would allow us to figure out, over a real-time network connection, where we are in relation to each other. And even with our first, very poorly put together prototype, I managed to do that calibration in under three seconds. And three seconds is a lot shorter than 30, so that made it easy for people to get it. Then we made a slightly more advanced version of the demo, because we had gotten our first $100,000. We made a version of the demo where we had a little pet. I think it was a tiger or a deer; it was some kind of wild animal that you could see. And by just scanning the display, by scanning a QR code on the first device, that animal would appear in the exact same place on the other device.
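The patented signature itself isn't public, but the underlying geometry of calibrating two devices off one shared visual marker can be sketched in general terms: each phone estimates the marker's pose in its own coordinate frame, and composing one observation with the inverse of the other yields the phones' relative pose. The 2D simplification and all function names here are mine, not Auki's method.

```python
import math

def compose(t1, t2):
    """Compose two 2D rigid transforms, each given as (theta, tx, ty)."""
    th1, x1, y1 = t1
    th2, x2, y2 = t2
    c, s = math.cos(th1), math.sin(th1)
    return (th1 + th2, x1 + c * x2 - s * y2, y1 + s * x2 + c * y2)

def invert(t):
    """Inverse of a 2D rigid transform: (R, t) -> (R^-1, -R^-1 t)."""
    th, x, y = t
    c, s = math.cos(th), math.sin(th)
    return (-th, -(c * x + s * y), s * x - c * y)

def relative_pose(marker_in_a, marker_in_b):
    """Pose of device B's frame expressed in device A's frame.

    Each argument is the shared marker's pose (theta, tx, ty) as observed
    by that device in its own coordinate frame.
    """
    return compose(marker_in_a, invert(marker_in_b))

# A and B face the same way; the marker sits at (1, 3) in A's frame and
# at (-1, 3) in B's frame, so B's origin is 2 m to A's right.
rel = relative_pose((0.0, 1.0, 3.0), (0.0, -1.0, 3.0))
```

Once that relative pose is known, any virtual object placed in A's frame can be re-expressed in B's frame, which is exactly why the satellite "appears in the same place" on the second phone.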
And for the VCs that are more focused on the business use case than the technology, because not everyone is a deep tech investor, some investors are really looking for how this is going to be commercialized, what the killer app is, some people said, oh, we love this virtual pet idea. And eventually we got cheeky enough to say, yeah, would you write a check for that? And the first person we asked, she said, yeah, you know what, we would. That was also Outlier Ventures, actually. So Outlier Ventures invested both in a new daughter company that we named Matterless, which makes AR pets and AR toys in general, we have some other things coming out apart from the AR pet, and also in the underlying tech company, Auki Labs. So the demo that we raised the most money with was this very simple demo where we could place a little animated animal somewhere, without doing any pre-scanning: just whip up the phone, place something, take out another phone, scan the QR code, and there it is.
That's very compelling. And what is the level of precision that the phones have? How accurately positioned is this between the two phones?
So with the current implementation, you can get issues because of the network, but we are fixing that with the next release of the SDK. In ideal network conditions, you get sub-five-millimeter precision.
Sub-five-millimeter. Yes, that's very impressive.
So our typical demo will have one or two millimeters of precision, unless you have a poor internet connection, where all of a sudden you're off by 90 degrees or something weird. But we figured out what the issue is there; we're fixing it.
Very good. And when a device doesn't have a camera, like you're describing, this world in which AR glasses, for social acceptance reasons, should not have cameras, but should still enable the shared, co-located, collaborative sort of experiences you're talking about here, what is the approach to have them understand where they are?
You need two things. One is the displacement, which is your XYZ coordinate. But you also need the rotation, so that you have a full pose, that's the technical term, I suppose: your XYZ and your rotation. And the approach that we're taking is that we're solving the displacement purely with ultra-wideband triangulation. We figure out your XYZ just with ultra-wideband; you are triangulating off of two other stationary nodes in your home, because this is for home or office use. And then the rotation is fixed with a mix of an accelerometer and a magnetometer that allows us to figure out how the device is rotated. So displacement plus rotation gives us the pose. And that allows us to know where you are in the room and where you're looking.
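A rough sketch of those two ingredients, flattened to 2D for clarity: distances to two fixed anchors pin down position up to a two-way ambiguity, and a level magnetometer reading gives heading. A real system works in 3D, fuses the accelerometer, and breaks the ambiguity with more anchors or motion history; this toy version just picks one of the two circle intersections.

```python
import math

def trilaterate_2d(a1, r1, a2, r2):
    """Position from ranged distances to two fixed UWB anchors (2D sketch).

    Two circles generally intersect in two points; we arbitrarily return
    the one with the larger y. a1/a2 are anchor (x, y) positions, r1/r2
    the measured distances to each.
    """
    ax, ay = a1
    bx, by = a2
    d = math.hypot(bx - ax, by - ay)
    # Distance from anchor 1 to the foot of the intersection chord.
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # half-length of the chord
    mx = ax + a * (bx - ax) / d
    my = ay + a * (by - ay) / d
    p1 = (mx + h * (by - ay) / d, my - h * (bx - ax) / d)
    p2 = (mx - h * (by - ay) / d, my + h * (bx - ax) / d)
    return max(p1, p2, key=lambda p: p[1])

def heading_from_magnetometer(mx, my):
    """Yaw in degrees from a level magnetometer reading; 0 = magnetic north."""
    return math.degrees(math.atan2(mx, my)) % 360

# Anchors at (0, 0) and (4, 0); a device 5 m from each sits at (2, ~4.58).
x, y = trilaterate_2d((0, 0), 5, (4, 0), 5)
```

Displacement from `trilaterate_2d` plus the heading is the "pose" in this flattened picture: where you are, and which way you face.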
Yeah, fantastic. So now, based on the demos you've created, you have two companies that ultimately got funded. One is the core technology company, and the other is Matterless Studios, which is building virtual pets and toys that leverage that technology. Is that right? Yeah, yeah, that's right. What are the primary goals with the new funding?
So with Matterless, the goal is... "AR" is a janky term, and I don't want that to be the meme that stays in the history books. What we hope is that 20 years from now, just like searching for something is called Googling something, an AR object is a Matterless object. That's the goal of Matterless. And the way to get there is to have toys and free-play things that you can just explore and play with, things that you will think of as objects rather than games. This is the memetic engineering coming in: we want to create Matterless objects so that we can create the language for describing AR, not with the abbreviation "AR," because that's kind of lame. So Matterless is working on three products right now; only one has been disclosed so far, which is the Matterless pets, already in private beta. We have a couple of hundred people that have joined us on our Patreon community and give us a monthly subscription fee, half of which every month goes to preserving wildlife in the real world. One of the things we really want to do with Matterless is help drive awareness that we're sharing this planet with actual real beings. And at Auki, of course, we're on the humble mission to replace the GPS.
The humble mission to replace the GPS. Going back to this notion of memetics and meme theory, and the shareability, the transmissivity, of these ideas: how is it that you plan to motivate users to help you grow the company?
One of the cool things that we learned from Outlier Ventures, this metaverse fund, is that they're very big believers in Web3 and blockchain technology. And after having worked with them for a couple of months, they helped us see that we could introduce a token, a cryptographically protected representation of some kind of value within our ecosystem. You don't need to think Bitcoin or anything like this; you can think of it kind of like AWS credits, except we can't just print more of them willy-nilly. If we make the app developers that use the service, either directly or indirectly, pay for it using this token, then we can use the same token also to reward the people that are helping host this decentralized architecture. Because one of the things that's really important when doing positioning is you want to have really, really good ping times. And that means we can't have one big data center in North America and one big data center in Europe; we want micro data centers in as many places as possible, so that traffic can always be routed to the best possible ping time. And that's not something that we as an organization can set up ourselves. So we realized we could do this by leaving it up to the free market to figure out where these micro nodes, these micro data centers, need to be. Can we create a way to reward them automatically, without them having to negotiate with us? You just host an Auki server, you start receiving traffic, and based on how much traffic you received, you get paid out in this token. And that token you can then sell on to an app developer, because the app developer needs the tokens to use the positioning service, or to a self-driving car company, or whatever. So it creates this nice little circular economy.
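The payout mechanic described here, hosts earning in proportion to traffic served out of a fixed supply, could be sketched as simply as this. The epoch/budget framing and all names are my illustration, not Auki's actual tokenomics.

```python
def settle_epoch(traffic_by_node, epoch_budget):
    """Split a fixed token budget across relay nodes by traffic served.

    traffic_by_node: node id -> requests relayed this epoch.
    Returns node id -> token payout. A fixed per-epoch budget means
    payouts are shares of an existing supply, not tokens printed on
    demand per request.
    """
    total = sum(traffic_by_node.values())
    if total == 0:
        return {node: 0.0 for node in traffic_by_node}
    return {node: epoch_budget * served / total
            for node, served in traffic_by_node.items()}

# Three self-hosted micro data centers split a 1000-token epoch budget.
payouts = settle_epoch({"hk-1": 600, "sto-1": 300, "nyc-1": 100}, 1000.0)
```

The free-market incentive falls out of the proportionality: hosting a node where traffic is heaviest (i.e., where ping times are currently worst served) earns the most.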
And it also made us realize that one of the things we can use this for is to let people that have domains actually share data for compensation, either to rent out data or to sell data for a digital twin. So one of the things that we want to do as a company is have a domain ourselves that is an open domain, where we offer people rewards for contributing data to it. So that when you go around and you navigate by eyesight, so to say, when you navigate by SLAM, you can always, in a pinch, use Auki Labs' digital twin, which has been collaboratively built by its users.
This presumably being the public data source, the public dataset, effectively, of the world. And here we're going back to this notion that the users are helping because there's a set of rewards. Going back to something you talked about at the beginning: if the right set of rewards exists, then behavior will emerge to capture those rewards, assuming the rewards are meaningful and desired. And the sort of behavior you're hoping to drive with this is to have a lot of little microservices around that are able to host local maps and provide local positioning at, you know, every little point on the planet.
Yes. Solving positioning is not something that Auki Labs is going to do; it's something that we as a society are going to do. And Auki Labs is just trying to build the protocol that allows us to collaborate in such a way.
Yeah, fantastic. As you look out, you noted that the major players are taking a very different approach to solving this positioning problem. What do you think are the biggest technical barriers, or the biggest industry barriers, to seeing the growth that you want to see, to really making this the de facto standard?
As happy as I am with the 13 million that we've raised into Auki Labs so far, and the 7 million that we've raised into Matterless, that's very little money compared to what even a small player like Niantic has. Niantic's most recent raise was 300 million. So we're outgunned financially, which means we are outgunned in our ability to do marketing, for sure. And the best technical solution does not always win. As a memeticist, I'm quite aware of that: it doesn't necessarily matter that we have the best solution, because a less efficient solution can still win if it has other resources at its disposal, which they do. And of course, there's always the risk that one of these big companies just straight up steals our technology. If I can quote Elon Musk: a patent is just a lottery ticket to a lawsuit. Even if we get all of these patents, what are we gonna do with them? But one of the things that I hope will be in our favor is that we have this nice, collaborative circular economy, where the value generated by the protocol is owned and shared by the people that use the protocol. I think 10,000 people working together to build a digital twin on purpose is going to be a lot better than Apple driving a lidar car around San Francisco, or Niantic tricking people into recording things through Pokémon GO. And I think it's going to be hard, especially for Apple, Google, Microsoft, these big publicly listed companies, to copy that model, because they're publicly listed and the value of what they're building needs to accrue to their equity. So it may sound weird, but one of the things that made us really embrace this idea of having a token was that it creates a barrier to entry that keeps the big companies kind of away from us, because it's harder for them to replicate.
It's a very thoughtful approach, very thoughtful approach. I want to touch back on something you noted earlier, that this is not your first rodeo. You're an experienced entrepreneur heading into Auki Labs. And so as you've been going through this experience, you already knew that you were taking a different tack with raising money, and everything was aligned heading into last summer in terms of the enthusiasm around the metaverse and the story that you were telling, how you had very selectively gone after a handful of investors who might really appreciate what you're doing. But are there other things, around the culture, or your leadership style, or the development process, that you're doing differently in this startup than you have in the past?
Yeah, definitely. I think with my previous startup, we had a very talented team and we were in a good space, but we weren't in a good headspace. I think we got arrogant. We were living in Beijing, and when we received the first term sheet we ever got, I asked a friend who was the editor-in-chief at Asia's largest tech blog, what do you think of this term sheet? And he said, this is the highest seed valuation of any seed company in Asia ever; you should take it. But we thought, hey, if that's our first term sheet, let's just turn this down and see what happens. We definitely shouldn't have done that, because one of the lessons we learned was: you never know why you are worth so much to someone. That particular investor had an inside scoop on a very, very big customer that was looking for a solution like ours. At the time, we were one of the first three teams in the world using a technology called Spark, and he knew that Tencent, this top-ten tech company, was about to set up a 10,000-node Spark cluster, which is huge, huge. Of course, he didn't tell us that, so we didn't know why we were worth so much to him. So this time around, I don't second-guess. If someone offers us money, I'm not going to be too difficult about it. It's better to own a smaller piece of a bigger pie than a big piece of nothing at all, which was one of the learnings from the previous startup. But also: decentralize the organization more, empower people to really understand what it is we're doing and why we're doing it, and give them the resources to solve the problem instead of trying to micromanage things.
That's one of the things that allowed us to grow from two to almost 40 people in the span of a year on the Auki side, because every person that we've hired is empowered to really solve their task, and even bring in colleagues to help them. And investing in your staff before you invest in your tech is a really good idea. In fact, one of the first hires I made this time around was an HR person. I was fired from the job I was at when I was invited to play Warhammer, and I went back to the HR person that was responsible for firing me and offered her a job. And she was, at the time, the best-paid person in our company by far. Because I thought, this time around, we need to build something that can scale up faster, and that means investing in people first.
Besides investing in the HR person, who's really experienced and knows what she's doing, how else are you investing in your team?
We are making sure that we spend a lot of time teaching each other about what we do, and making sure that everyone understands the full concept of what we're doing. Even though our tech stack is very, very modularized, and it's very easy to just pick one piece of it and start being productive on the first day, because there's a small piece that you can work on, we try to make sure that everyone has a holistic understanding. Also, everyone is a shareholder in the company. Everyone has equity, and everyone's success is tied together; we win or lose this together.
Deep shared understanding, and ownership not just in words, but also in equity. Yes, fantastic. As you engage now with other entrepreneurs in the ecosystem, what is a key piece of advice that you commonly share?
If you're looking to fundraise or recruit, you're really playing the same game: it's about persuasion. And as you'd expect from a memeticist, persuasion is again about: I have a behavior that I want to transmit to another person, the behavior being belief in this particular vision. So core to that, of course, is crafting a narrative that is not just persuasive, but reproducible. You have a good pitch when the person you've pitched can tell someone else over drinks why what you're building is cool. And if they can't, you just don't have a good pitch. So I don't use a pitch deck anymore, because people are not going to have my pitch deck when they're out drinking. I just try to make sure that they understand why there's a problem, why it would be meaningful to solve it, and how we've done it, and try to structure the narrative in such a way that someone who's had a conversation with us can go out and tell someone else why this is cool. That helps us get more people applying to work for us, because there's word of mouth. And it also helps us, as I said, to do no outbound investor outreach, because our investors are very, very capable of telling other investors why they should invest in us. Because we've spent time figuring out how to tell the story.
That's really fantastic. Do you have an elevator pitch that you can share that hits those three main points? Can you give it right now, just as an example for everybody listening? Sure.
So the elevator pitch is: let's say that we wanted to render something like an AR fairy, right here. And I wanted this to be visible on my device, but also on your device. Do you know why that's difficult today? Well, it turns out it's difficult because GPS is not very precise. GPS might tell us that we're in this house, but it won't tell me that I'm right here and you're right there. We need to know very, very precisely where we are in relation to each other to be able to agree where to render this fairy. And we have realized that this is a problem, and that satellites will never be able to solve this problem, because they need line of sight, so they will never work indoors. So we are building a peer-to-peer positioning system that will allow things like shared AR and delivery robots and self-driving cars to find their place in the world, literally.
Well done. Thank you very much. I like it. Let's wrap up with a few lightning round questions. What commonly held belief about AR, VR, or spatial computing do you disagree with, besides "GPS isn't good enough"?
Yes. I disagree that form factor is what's holding AR back. I disagree that rendering quality is what's holding AR back. And I also disagree that the future of AR is in entertainment. I think the future of AR is as the penultimate form of human communication, before we achieve direct neural connections with each other. I think augmented reality is the closest thing we can get to a pure human language. If you see AR just as a way of creating fun, gimmicky entertainment apps, then you haven't understood why people had such a powerful reaction to, you know, the Magic Leap CGI demo video back in 2013. People want this tech because they know, deep down, in their embodied knowledge, how powerful it is as a language. So yeah, those are some things I disagree about.
Besides the one you're building, what tool or service do you wish existed in the AR market?
So the AR market is very lucky that it already has Unity. You know, depending on who you ask, somewhere between 75 and 95% of all AR content is made with Unity, which is fantastic. But Unity is still for making apps. And since I believe that AR is the next form of human language, I wish there were better tools for just creating AR content. I think we're pretty far away from that, because the way computers and the Internet work now is very much based around apps. But maybe an open metaverse will allow for the kind of creation tools that will let people communicate with each other more efficiently without being programmers, or even Unity experts.
That's a good one. What book have you read recently that you found to be deeply insightful or profound?
Can I take a podcast? Yeah, absolutely. I listened to a 10-hour podcast by Dan Carlin, Hardcore History.
Love him.
He's amazing. I listened to his Death Throes of the Republic series, about, I guess, ultimately, why there was a Julius Caesar. And I've listened to it more than once now, because it's such a powerful lens to view the world through. And it really resonates with me, as someone who is very interested in memes, because it tells us, if I can simplify this 10-hour podcast into just a few sentences: Julius Caesar spent a lot of time analyzing power. He looked around in the world, and he found the three most powerful people. There was Pompey, a powerful warrior, undefeated; surely that is a kind of power. There was also Cicero, a great orator, a salesman, if you will, that could persuade people of almost anything; surely that's power. And there was Crassus, maybe the richest person that has ever lived; surely that's power. But at the end of the day, the person that ended up being the most powerful was the spiritual leader, the memesmith, Caesar, that could bring these people together and align their intents towards the creation of a better Rome.
That's a very brilliant summary.
Thank you very much.
I've listened to many of his series; he's truly exceptional. I haven't listened to that one yet, but I will definitely do so. Phenomenal. It's amazing. If you could sit down and have coffee with your 25-year-old self, what advice would you share with 25-year-old Nils?
There are many ways of being greedy. Just because you don't care about money doesn't mean you're not greedy. If you really want to embrace not being a greedy person, you have to let go of your ego.
At 25, what did that look like for you, this greediness?
At 25, I was, you know, starting out on the adventure with my previous startup, and we were getting a lot of very positive attention. And I was soaking that attention up. I think I was hogging too much of the attention and the glory, and didn't allow my colleagues and comrades to really take ownership in the socioeconomic success.
Ultimately a humbling experience, then. You learned a lot of personal lessons from that startup.
Definitely, yeah. No, I wouldn't be here today, if I hadn't failed so spectacularly with that.
Any closing thoughts you'd like to share?
I would encourage everyone, seeing that this is an AR podcast and we're all interested in augmented reality here, to just sit with the thought for a little bit that language is augmented reality, that augmented reality is maybe the oldest technological project that we have, and to ponder its inevitability.
I was listening to, I don't remember which book it was, but something about this shared human psychology. And one of the most powerful inventions it described was shared fiction: the concept that you and I could agree that something, even though it doesn't physically exist in the world, is a thing that we are both going to treat as a thing. It could be the line that is the border between our two countries; it could be contract terms, right? Some agreement or negotiation we have, where we're going to abide by this particular set of rules. The Constitution of the United States is an example. This notion of shared fiction sounds very tied to the idea you're describing, of augmented reality being an extension of human language.
Yeah, for sure. Augmented reality is, you know, human information and emotional context overlaid on the world, and all of these things that you just listed. It's that.
Where can people go to learn more about you and your work at Auki Labs?
You can definitely go to aukilabs.com. But you can also check out our cheeky litepaper there, where we outline at great length what we're building and why we're building it, while also taking a few jabs at some of the other players in the market.
Awesome. Nils, thank you so much for this conversation.
Thank you very much, Jason. It's been a pleasure being here.
Before you go, I'm gonna tell you about the next episode, in which I speak with Paul Travers, founder and CEO of Vuzix, a leading supplier of smart glasses and augmented reality products for both the enterprise and consumer markets. We discuss some of the key use cases and dig into the recently unveiled Vuzix Shield. These smart glasses feature microLED displays and updated waveguide optics. We'll also talk about where Vuzix goes from here. I think you'll really enjoy the conversation. Please follow or subscribe to the podcast so you don't miss this or other great episodes. Until next time.