The AR Show: Stefan Alexander (North) on Applying the Lessons of Focals v1 to v2 and the Argument for Laser Scanning Displays (Part 2)
8:57PM Jun 15, 2020
Speakers:
Jason McDowall
Stefan Alexander
Keywords:
glasses
people
technology
display
augmented reality
overlays
real
smart
user
gen
optics
micro
trade offs
bit
facial recognition
important
north
led
wear
games
Welcome to The AR Show, where I dive deep into augmented reality with a focus on the technology and uses of smart glasses and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Stefan Alexander. Stefan is the VP of Advanced R&D at North, a company taking a human-centric approach to creating AR smart glasses hardware and experiences that are both useful and respectful. After earning a Master of Science in Electrical and Computer Engineering, Stefan spent more than a decade working with OLED display technologies at Ignis Innovation. From there he joined North to build and lead the team that developed the technology behind Focals 1.0 and the forthcoming 2.0 hardware. In this, the second part of our conversation, Stefan explores the key takeaways from the launch and use of the Focals 1.0 glasses. Stefan shares his perspective on how making smart glasses is about trade-offs.
Really great design isn't about technology or component magic. It's not about a component that just magically is 10 times better. It's actually about understanding what the user cares about and doesn't, and all of the relationships, down to first principles, around how you can trade off one thing for another. And this deep level of scientific understanding and making trade-offs.
Stefan then goes on to describe some of the key advancements they're making for version two, which is expected later this year, in 2020. He also shares his take on laser beam scanning versus micro-LED displays for smart glasses, his thoughts about Nreal and the Nreal Light product, and key challenges around privacy and consent when everybody's wearing cameras and microphones. As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. Let's dive back in. So, you've noted that as you've been studying this group of really passionate users, there is a group of things that people loved, and some things that they might want to be different. How has your perspective changed since releasing version 1.0?
So I think the big things that people really liked were how fast and quick the interactions were, how bright the display was, how it was visible in a wide variety of different circumstances, kind of how subtle it was, and how easy it was to be able to deal with very short amounts of information without kind of breaking you out of the context of what you're in. So that kind of led us to ask, you know, should we change the whole paradigm and make applications more immersive? Do people want to be able to do more with the glasses, more complex things? Do they want to be able to compose emails? Do they want to be able to, you know, do some content creation? Do they want to, you know, read a book on the glasses? The answer is very much no. They loved the short, quick, subtle interactions. But of course, everybody wanted more. Everybody wanted kind of more messaging services, they wanted more sports scores, they wanted more control over everything. Everybody has a certain workflow that they like and certain applications that they like to use. This person likes OneNote, and this person likes Evernote, and, you know, they want it to fit in with their workflow. So it wasn't necessarily
wanting to do more with the glasses. They just wanted to make sure that they could keep doing the short, quick interactions. So that was good, because it kind of validated that particular product direction and let us, you know, focus on a certain area. Then there are the things that people often said they wanted improved. So the first one is, there were a lot of people who wanted to buy it that just couldn't get into a store to buy it. The other thing is people wanted to be able to show their friends and family the glasses. So this is something we didn't necessarily think about, or let's say we thought about it initially but didn't realize was so important to people. People love their product, they love interacting with these glasses, they think it's so neat that they can do this. And they want to be able to show how easy it is to message somebody else. So they're talking to somebody about it, and that person's like, let me see, I want to see this, this is so exciting. And the person gives their glasses to somebody else, and they can't see anything, because it's not fit for them. So it's not just the difficulty in going into a store. It's the difficulty of people being able, like our most passionate users not being able, to show their best friends and their closest family members exactly how excited they were about the glasses, because they couldn't actually see them. They would have to go into a store and have some kind of, you know, custom-fit demo to even be able to see it, which is a big barrier to them showing how excited they were with the product. So that was one of the things, relaxing this fit requirement. Is there a way, with some new technology, that we could kind of, you know, still have approximately the same product and the same vision, but really just eliminate this fit thing?
The goal going in was just: if you can fit it on your head, like if it physically fits on your head, you should be able to see the display. Is that actually possible? What would we have to do to get that done? So that was probably the main thing that led to generation two. I think the other big thing is, people love the small bits of very focused information on the screen. They didn't want it to look like their phone. They didn't want that amount of information. But having graphics and text that were a little bit sharper and clearer, and being able to put smaller fonts on there. It's, you know, especially for messages, just being able to scroll through more of the message and see more of it on the screen at the same time. Or if you're making a presentation, to be able to see more of your speaker's notes and not just, you know, six or seven lines of speaker's notes at a time. I think that was another big request: don't really make the screen any bigger, I like the screen, but can you make it sharper, and can I put more text on there in about the same size? And so that was another big request. And it was great, because people were using it, they loved what it was for, they just kind of wanted a little bit more with it. And then the other thing, of course, was the weight of the glasses. So we produced what we thought was, you know, the lightest pair of glasses that looked pretty good. The temple arms were close to regular glasses. I think people could tell they were smart glasses, but people didn't feel silly wearing them. They felt like they could wear these all day. But the weight was more of an issue than the size, actually. So we were at 67 grams. And, you know, prescription glasses, let's say, could easily be up to 45 grams; my glasses are 44 grams, they're my regular eyeglasses, and I wear them all the time. So we thought that the 67 grams should be fine.
And for some people, it was. And for other people, not after an hour or two, but let's say after four, five, six hours, it started feeling a little heavier. And I think this is surprising, because people actually wanted to wear this for that amount of time. With a lot of other AR products and headsets, people aren't trying to wear them throughout the day. But these people were getting value out of it, and they loved it. Some people were fine with it, and for some people it was hard to wear it the whole day. And we definitely got this feedback that, like, if you could just make it just a little bit lighter, that would be great. And I think that led us to kind of a new understanding of the weight target that we actually needed, which we feel is right about 50 grams right now. We think that's pretty important.
So, you described a few different things there. Eyebox: how easy it is to see the image behind that lens. You talked about increasing the clarity, which you tied to the resolution of the display; the field of view was fine, but being able to see more, and more clearly, within that same field of view was important. And the third bucket, you know, was weight. 67 grams is already pretty amazing for a head-worn piece of technology, but your new target is 50 grams as you look forward. So, a couple of those relate to the display technology that you spent so much time innovating on in the first version. What is it you're doing differently within that display to enable a larger eyebox and higher resolution, while also, I guess, solving for the other one, and these are diametrically opposed: also reducing the weight?
That's right. Yeah. So our goals going into this were, and you know, this is me constantly communicating to the company and the whole team, that making smart glasses is about trade-offs. It's not about magic technology. It's about understanding what the user wants, and understanding what maybe the user doesn't need that much, and, you know, trading off those things that aren't important and applying them to things that are important. And I really, really strongly believe in this, you know, system-oriented design. Let's get the whole system together, let's look at what the trade-offs are. Really great design isn't about technology or component magic. It's not about a component that just magically is 10 times better. It's actually about understanding what the user cares about and doesn't, and all of the relationships, down to first principles, around how you can trade off one thing for another. And this deep level of scientific understanding and making trade-offs. And then I think I sort of undermined my whole argument, because the team came up with something that really was 50% of the size of the Gen one, and increased the resolution by about eight or ten times, and increased the eyebox by about 20 times, and it was slightly more manufacturable, kind of a bit of a lower manufacturing cost too. Usually you don't get giant gains like that; we found a nice combination of new technologies that kind of gave us everything there. Anyway, after delivering that, I just said, okay, this isn't gonna happen again; in the future, it's going to be trade-offs again. But every once in a while you do get gains like that. I think the team is great, they really understood the technologies deeply, and we kind of came up with a way to really, really enhance what we were doing for Gen one.
And because of the success of this, that's really what led to us kind of stopping Gen one as quickly as we did, because we learned so much about what the users want from it, and we came up with something that was really so much better in every way, without actually really any trade-offs there. And so we kind of decided that, especially in these early phases, we just need to kind of go all in on what we believe is the future, which is this generation-two technology.
You really have a step-function improvement in the overall display technology that you are embedding into version two of the product. You noted it's almost half the size for that subcomponent, which has a great effect on the overall glasses, with eight to ten times improved resolution, and it's more manufacturable. It's a pretty amazing breakthrough. Is it still fundamentally laser beam scanning technology?
So I think we're probably still maybe a few months away from revealing kind of exactly what the technology pieces are in the Gen two. I think I can say that we really, fundamentally, in order to do this, especially in this early stage, we really feel like we have to develop our own projection technology, and we have to develop our own kind of, you know, combiner and prescription lens technology that are all integrated together. And so there are a lot of things that were the same as generation one, and there are some things that were different. But I think, you know, with a greater understanding, access to better technology, and also access to just more capital in order to do this technology development, we kind of kept the best parts of what it is and kind of threw away some of the other parts. So it is a new and different projection technology, but it's also based on some of the same fundamental principles. And I'll say that even the lens itself is not what we did in Gen one, but it actually is still based on a lot of the same principles there too. So I'd say it's an evolution. But, you know, we're definitely using some different components and techniques in there too. That's probably the extent of what I can say now.
Can't wait to see and learn more. Within the set of new technologies that you're adding to the glasses, will there be new use cases enabled as well?
Yes, definitely. And I think, let's just say, you know, I was kind of talking about the notifications, kind of from apps and people, that were coming in. And then there's also our home screen, that kind of guesses what you need right when it pops up. And then that third aspect is our applications, that you'll kind of scroll through to launch an application and be able to do something. The real enhancement that we felt was so important for Gen two is actually this home screen itself. So, the ability for the glasses to be able to predict exactly what you want to do. And we have a whole bunch of new sensors and techniques and hardware in order to do this a lot better. I can't really talk a lot more about that yet. But I think over the next, let's say, six to ten months, as we start to talk more and more about Gen two, when we talk about the use cases, this is what we'll be talking about. When we got it right with the home screen, and users clicked once, and they saw exactly what they needed come up right on the screen, and then it went away, that was such a magical experience. And so that was what we really focused the new applications on for generation two. We love that it still does great notifications and heads-up messaging, and, you know, having your calendar and your next meeting is just as magical on your glasses. People just don't really understand how you're able to stay so on top of your day. When people are using it here in the office, that's really neat. But the thing I'm most excited for is really these new home screen experiences, where we're kind of guessing what you want. Those are a lot of fun.
That's really amazing. You noted early on the vision for the company is to create the pair of AR glasses that we all imagine. Version one was really pretty much about taking the functionality of the smartwatch and moving it to a hands-free display on your face, so you have kind of a similar set of functionality there. In version two, are you going to be taking that next step in terms of being able to visualize that digital information within the real world, or is it still about information being independent, maybe, of being superimposed on the real world?
So yeah, I think that's a great question. And it really comes down to, maybe what I'll talk about is kind of our long-term vision for smart glasses, which is not necessarily directly related to what's happening in our Gen two and Gen three. We just consider those kind of, you know, steps along the eventual path to the long-term vision of smart glasses. But I think the best way to put this is: there is something incredible that happens when you can have your experience of the world in front of you enhanced by computer-generated information. One way to do this, certainly, is to do wide-field-of-view, 3D, perfectly registered overlays, like we see in a lot of AR concept videos. But we think there's another really great way to do it that's completely compatible with a smart glasses form factor, and that's to be able to give the glasses and the display and the whole computing system a real awareness of what's in front of you, what's around you, your environment, kind of who you're with, what you're doing. And this is when the glasses become, I think, a really, really magical experience. I think this is the real potential of augmented reality. It's not just overlays. The whole purpose of why people are even interested in overlays in the first place is, what if the computing system can understand what you're seeing and what you want to do, and it can enhance that experience? If you take that down to the fundamental piece of what people want to do with that: is the best way to do it a headset with some overlays, or is the best way to do it maybe some other methods that we think we have, to give that awareness of your environment and that connection with your environment, but in such a small form factor that you feel completely comfortable wearing it the entire day? And then you can just have these magical experiences pop up, when maybe you don't even expect them.
That's really interesting. So here, I'm just going to read between the lines; you can either acknowledge or deny. Here you're talking about this notion that the context is the most important thing. What you do with that context is also extremely magical. But you don't necessarily have to directly superimpose the digital information on the physical world to have a magical moment, for that information to be highly useful and relevant in that moment. And to capture that, there's an implication that there is more sensing and more smarts going on behind the scenes, whether the sensing is outward-facing, in terms of some type of camera system, or inward-facing, looking at some sort of biometrics or your eyes, taking into account location, or taking into account other contextually relevant things like your calendar, which would assume maybe you're actually at the meeting you're supposed to be at. Some other things might be leveraged in order to generate that sort of magical moment: we have the full context of what's happening to deliver that information to the user. Is that fair?
Yeah, in fact, I actually think you have that exactly right, in terms of what we find so exciting about smart glasses, what we think this incredible potential is for augmented reality. There are a lot of things that we have developed an understanding of over the last six years in working at this, but one thing that we really started with, even from the very beginning, and I remember our very first brainstorming session six years ago, is that this context and sensing is so important. This is the most important thing we can do with smart glasses. This is something that's so unique about this form factor. And I think it's very hard to do it in a way that's not gimmicky and that's actually useful. But when you do come up with really useful interaction techniques based on context, it is an incredible experience. And I think this is what really motivated us more than anything else to create this hardware: to give this kind of context.
Being able to give that sort of context implies a really deep understanding of the user, him or herself. How does North approach the challenging topic of privacy, of making sure that the data you're collecting is ultimately used for the user's benefit, and not to their detriment?
Yeah, this is a great question. And we have a lot of internal debates, like very, very healthy but also very passionate internal debates, when we're talking about features, on very different aspects of, you know, privacy, social comfort, consent in the use of information. And this isn't just about taking information and, are we going to sell it for advertising or are we not going to sell it for advertising? I mean, the answer to that is definitely no. It's much, much more nuanced than something like this. It's actually about
what, you know, what do we do with people's information? Where is it stored? Who owns it? And what if you're with other people? You know, we're seeing some of the same kind of questions with the prevalence of, you know, voice assistants in the home. When you go into somebody else's home, you know, who owns your voice recording? As the homeowner, should you be, like, required to disclose that the person's voice might be recorded? What if you have, you know, a Nest Cam looking at your living room or the entrance? Does the person know that their image is being recorded? How long is it being recorded for? What if the person doesn't want to have their voice recorded when they're in the house? Do you go and mute all the Alexas when they're coming in? Do you even talk about it? So I think this is an issue right now in the home itself, with the kind of more prevalent, very useful IoT devices. But what about when it starts happening in the real world, with smart glasses and wearables? And again, not talking about any specific features, the kind of way that I think we like to do this is: let's just figure out what the long-term path is. Let's figure out what our values are. Let's figure out what we believe. And then each generation, as we come to some of these questions, we know how to answer them; we have kind of guiding principles. So, you know, what do we do when glasses can analyze audio recordings all around you? You can feed microphone data directly into neural networks. You can, you know, voiceprint people if you wanted to. You could, you know, analyze their speech and record it as text. You could analyze just the type of environment that you're in, whether you're in a conversation or not in a conversation. Where is this data stored? Where is it processed? How is it tagged? How is it eliminated? How, like, you know, how do you get consent?
Or do you need to, for the people around you? If you're recording their voice and you can play it back later, versus if you're just voiceprinting them, versus if you're just trying to detect if you're in a conversation or not: do those need different levels of consent? Another great question is, you know, where is that data being processed? If it's all on the device, or if it's in the cloud, do you need different levels of consent for something like that? Images are exactly the same way. What if you had a camera? What if it was always recording? What do you do with facial recognition? Do you ban all facial recognition? Would you only recognize, you know, certain people that you know? What if it's all on the device? What if people can opt in or opt out? So, I think the answers to this are challenging. But these are the kinds of discussions that I think people need to have. Not, you know, do we allow or ban microphones, or do we allow or ban facial recognition, but what is the conversation that we need to have, and what are the factors that go into it? Does it matter where the database is kept and where the data is processed? Does it matter if a human can ever look at these audio recordings or images? Does that even make a difference? And for the things that do make a difference, how do you deal with each of these individual situations? I think this is where, you know, we definitely have some very strongly held beliefs and values. And I think we are starting to know where some of our boundaries are here: this provides good value to the user and it's a good trade-off; and this one provides better value, but this is a line we're not willing to cross as a company. Maybe other companies will, but we're not willing to cross this. And I think we have to develop that position, you have to know it, and then you have to build it into your products and communicate it.
So I expect that this is something, to some degree, that we're going to do with the generation-two product and the features that are going to come out. But I think we want to talk about it in the context of what we're launching, and why we believe that what we're launching is okay. I think it's a very nuanced discussion. Again, facial recognition, just to pull out one example that's been in the news a lot lately, is not just one thing. It's not a universally good or a bad thing. I think this is a great analogy: people do facial recognition in their brains. If there's somebody you know, you've met them and you create an image of their face in your head, and you match that with their name, which you remember, and then you randomly see that person two years later and bump into them at a restaurant, you might recognize them. People don't think that's creepy at all. They expect that; we're used to that. Now let's say there's somebody who, you know, a friend of yours showed a picture of, and you learned a lot about them, but the person had never met you. And you went up to that same person at a restaurant, and you said, oh, hey John, how are you doing? I know you. And you knew some things about them. That person would think that interaction was probably a little creepy. Like, why do you know this about me? I don't know you; I don't think I've met you before. And I think that, you know, facial recognition is the same way, where you can have interactions that are very safe, and you can have interactions that are really creepy, and it's kind of, you know, the same analogies to real life there. So, anyway, I know I'm probably not actually giving you any real answers. I'm giving you the processes we use to think about it, but I think it's because the real answers are really complicated.
Yeah. And ultimately very important. I think that the complexity of the answers dissuades people, maybe, from thinking too deeply about it at this stage in the market, but they shouldn't. It's absolutely, I think, essential. The sorts of questions that you are asking yourselves internally, about how to deal with this information in all the ways you described, are critical for the types of companies that I think we want to succeed in this market, and for society as a whole. Because there are implications: if abused, this data can really be used, both individually and more broadly, to our huge detriment. So it's an important set of complex questions, for sure. So, we actually met a few weeks back at the Photonics West conference. And at that conference, there was a lot of talk about micro-LED microdisplays. At North, you've chosen, in v1 at least, to use laser beam scanning, and you noted that in Gen two we can expect an evolution of some of the same technologies that you're using. Then you have your own background in OLED, which is a related but not quite the same sort of technology. How do you think about the trade-offs between laser beam scanning and micro-LED?
So I think one of the real advantages that we have in really developing our own display technology is understanding how it works in a system. So we would never right now go and say, these are our requirements for a display technology, and if a display can do this, then that display is good and can produce a good system. Each display is different. Each display technology is very different. And I think in order to understand and evaluate how it's going to work, because all the systems in AR are so heavily integrated, you really have to be able to put this type of display in a full system model, you know, electrical, optical, mechanical, thermal, and you need to design and optimize the rest of the system around the particular advantages of that display, and then also to work around its disadvantages. So we couldn't just drop a micro-LED display into our technology, because in the v1, like, the hologram wouldn't work, the splitter wouldn't work, the display system wouldn't be optimized for it. You know, nothing else in the system would work with it; you really have to kind of start from scratch. So what we started to do a while ago, when we evaluated display technologies, is we created really detailed full-system models. Our system designers would say, let's put a micro-LED display in here, and let's see how it works. And let's change a lot of the parameters, and let's see what everybody's making, and let's design a really optimized micro-LED microdisplay system. And then let's compare it to a really optimized laser display system. And then let's compare this to maybe some DLP systems. And one thing that we found out is, for smart glasses, the really important thing is to have everything really small. And when things are small, it drives very certain constraints on the temple arm and the optics and the type of combiner technology you're using. And we saw some really hard limits there.
And then when you put micro-LED in those limits, it doesn't compare very favorably for small displays. So I can kind of explain a little bit here. What you need to do in smart glasses is get a tremendous amount of light into a very, very small area. And one thing that lasers are outstanding at is getting a lot of light into a very focused area. The optics people call this étendue. It's an optical principle where, when you have very small, very highly focused sources of light, then they can go through really small optic and pupil sizes, whereas when you have kind of larger and more broad sources of light, like out of an LED, they cannot go into small spaces. So when you want to have really, really small spaces, lasers kind of have an inherent advantage. And what we found with micro-LED is, for very small glasses, they're just very, very inefficient. And even some of the next-generation (generation two or three) micro-LED microdisplays that some companies are talking about, where they can control and focus the light a little bit more: this light has to be focused almost to the equivalent point of a laser in order for most of the light to make it into the smart glasses, into the user's eye. So there's almost a kind of fundamental issue, related to some optical principles, around why, for very, very small glasses, and we're talking incredibly small, like what you'd want to see in a normal pair of eyeglasses, micro-LED just doesn't fit. And even kind of future theoretical micro-LEDs, the five- or ten-year-roadmap, kind of magical micro-LEDs that nobody can even really quite make yet, even those don't really compare very favorably to laser when they get really small. This equation changes a lot if you want to make something bigger. So if you want to make something bigger, that's more like a headset, I think this is fine, and micro-LED might compare well there.
And it certainly is better than OLED, because it's brighter. It's kind of better than LCoS, because it's more immune to temperature and you don't need any illuminators. But when compared to laser, if you prioritize size like we do, it just keeps coming up short right now.
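Stefan's étendue argument can be sketched numerically. Étendue for a circular source or aperture is roughly G = A · π · sin²θ, and it can never decrease through passive optics, so the fraction of a source's light that can ever reach the eye is capped by the ratio of system étendue to source étendue. All of the numbers below (eyebox, field of view, beam and panel sizes) are illustrative assumptions, not North's actual specs; only the orders of magnitude matter.

```python
import math

def etendue(area_mm2, half_angle_deg):
    """Etendue G = A * pi * sin^2(theta) for a uniform circular source/aperture."""
    return area_mm2 * math.pi * math.sin(math.radians(half_angle_deg)) ** 2

# The glasses' optical system etendue is set by eyebox area x field of view.
# Illustrative: 2x2 mm eyebox, ~15 degree full FOV (7.5 degree half-angle).
system = etendue(area_mm2=2 * 2, half_angle_deg=7.5)

# A near-Lambertian micro-LED panel of the same 2x2 mm size emits into ~60 deg.
panel = etendue(area_mm2=2 * 2, half_angle_deg=60)

# A scanned laser beam: ~0.5 mm diameter, ~0.05 degree divergence half-angle.
beam = etendue(area_mm2=math.pi * 0.25 ** 2, half_angle_deg=0.05)

# Conservation of etendue caps how much source light can ever reach the eye:
print(f"micro-LED coupling cap: {min(1.0, system / panel):.1%}")   # a few percent
print(f"laser-beam coupling cap: {min(1.0, system / beam):.1%}")   # essentially all
```

With these assumed numbers the Lambertian panel can couple only a couple of percent of its light into the tiny optical path, while the laser beam's étendue is orders of magnitude below the system limit, which is the inherent advantage Stefan describes.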
And so in your mind it's really about the coupling between the display system and whatever optics you're going to use to actually get that light into the user's eye. That interface point is so small, and it needs to be small for a pair of glasses that are truly all-day wearable like the ones you're focused on building at North. It's that coupling interface where micro LED suffers, at least in everything you've seen so far in terms of maturity.
That's right, yeah. If you look at the display specs on their own, they do seem great, but when you try to couple them into very small areas, you realize the efficiency is just very, very poor. You can get them up to 5 or 10 million nits, which is very, very bright, at the limit of what people think micro LEDs might be able to do today, and it still produces a very, very dim pair of glasses when you actually design the entire system.
And it produces a dim pair of glasses because the optics in between are very inefficient; they're throwing away those millions of nits in their inefficiencies in order to get the light to the eye.
That's right, yeah. You either need huge optics in order to catch all of the light, or, if you have very tiny optics like we require, they catch a very small amount of light. And the only way to catch a lot of light with very, very small optics is by using lasers, because they're just inherently very good at that.
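For readers who want to see the étendue argument in rough numbers, here is a minimal back-of-envelope sketch. All the values (eyebox size, field of view, emitter sizes, divergence angles) are illustrative assumptions, not North's or anyone's actual specs; the point is only that étendue is conserved through ideal optics, so the system's étendue caps the fraction of a source's light that can ever reach the eye.

```python
import math

def etendue(area_m2, half_angle_deg, n=1.0):
    # Etendue of a circularly symmetric beam: G = n^2 * A * pi * sin(theta)^2,
    # where A is the emitting/collecting area and theta the cone half-angle.
    theta = math.radians(half_angle_deg)
    return n ** 2 * area_m2 * math.pi * math.sin(theta) ** 2

def circle_area(diameter_m):
    return math.pi * (diameter_m / 2) ** 2

# Assumed system etendue a tiny glasses combiner can deliver:
# ~4 mm eyebox, ~10 degree half-angle field of view.
g_system = etendue(circle_area(4e-3), 10.0)

# Assumed micro LED panel: 5 mm x 5 mm, roughly Lambertian (~90 deg half-angle).
g_led_panel = etendue(5e-3 * 5e-3, 90.0)

# Assumed scanned laser beam: ~1 mm diameter, ~1 mrad (~0.06 deg) divergence.
g_laser = etendue(circle_area(1e-3), 0.06)

# Etendue is conserved: passive optics cannot squeeze a source's etendue down,
# so the geometric upper bound on coupling efficiency into the system is:
led_coupling = min(1.0, g_system / g_led_panel)   # a few percent at best
laser_coupling = min(1.0, g_system / g_laser)     # etendue is not the bottleneck
```

With these made-up numbers the Lambertian panel can geometrically couple only about 1–2% of its light into the tiny eyebox, while the laser beam's étendue is orders of magnitude below the system limit, which is the "inherent advantage" being described.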
Yeah. So shifting gears a little, there's a company that is directionally similar to what North is doing. They're trying to create a pair of all-day wearable glasses in a smaller form factor, but they've chosen a different technology path to get there, with some implications for the finished product. That company is Nreal, and their Nreal Light product is supposed to be shipping later this year. How do you distinguish North's efforts from Nreal's?
It's a great question. I actually think it looks like a really neat product, but I would still put it in the category of a headset, like a HoloLens or a Magic Leap One. The reason is that if you look at the photos of it, it's very stylishly designed and it looks like a pair of glasses, and one of the things we learned about the breadth of the requirements for all-day wear is that it's necessary to look like a pair of glasses, but there's actually a lot more to it than that. In the case of Nreal specifically, I think there are maybe three major issues. One is the eye relief, the distance from the eye to the lens. Just out of necessity of the architecture, they sit a little far from your face. So they look like regular glasses on the table, but when you put them on your face it becomes clear that they're sitting out fairly far, which doesn't look very normal. I think the cord is another one. I believe it's tough to wear that all day; it gets in the way, and it looks a little funny. And the weight is another. It's amazing what they've packed in: a very nice display technology with full overlays and a nice field of view. But if you're over 50 grams, as we found out, you just can't wear them all day. So based on what we know about the requirements, having had a lot of products in the field, I think it's a good product, but it's a product people are going to put on when they want an augmented reality experience, and as soon as they're done having that interaction and experience, they're going to take it off. We know what it takes to wear something all day, and although it's close, they're still not there.
I think there's actually a pretty big technology and hardware gap between what's required to wear all day and what they're actually delivering. So I'd put it in the category of a great, very light, small, low-cost headset. I think they made a lot of really nice product trade-offs, but based on what we understand, I really don't think people are going to wear this all day. They're going to wear it when they want an experience, and then take it right off again.
Definitely. It'll be really interesting. As a consumer in this market, I'm excited to see both products, and I'm curious to see how people will use the North Focals versus the Nreal Light. I actually had a chance to chat with Chi Xu recently, and they view those glasses as a second screen for the devices you might already have around you, specifically the smartphone. It is tethered, as you noted, it sits further off the face, and they've made other trade-offs. So it's not the same product by any means, and because it's not the same, I'm really curious to see how people ultimately use it, and what lessons both companies might end up applying from the other as you continue to iterate and move forward. But as you think about the market more broadly, what's your biggest concern? Is it Nreal? Or if it's not them, who or what in this market really makes you nervous?
It reminds me of a classic phrase about competition; I'm probably getting the wording slightly wrong. On one level, Coke competes with Pepsi, but on another level, Coke competes with water. And if we're Coke, I don't think we really have a Pepsi right now, and I think that's the case for most of these products. It's such an interesting time in this whole augmented reality space, because everybody's trying to do something slightly different, and it's not clear that we're actually all competitors yet. For sure some products are competing. But if I were to name the competition we think is most relevant, in terms of what our users are going to be deciding, I think it's actually the water in that analogy, and that water is eyeglasses. When people are looking at whether to get North's glasses, having found out about these smart glasses and thinking they're really neat, they're not making a decision of North versus Nreal. They're making a decision of North versus a regular pair of eyeglasses. Is the benefit of all these smart things the glasses can do enough to overcome the fact that, while we may come with a bunch of sizes and styles and colors, it's going to be a more limited selection than the eyeglasses store, where you can get a thousand different types? What we're seeing is that when people go to buy a pair of eyeglasses, the decision is: do they want regular glasses or do they want smart glasses? I think that's the major decision the mass market is going to be making as they take a look at our glasses. That's one of the reasons we're so focused on building an incredibly comfortable, stylish pair of glasses.
And yeah, again, that's where I see our biggest competition in the short term. Maybe another adjacent thing is what I might call half-smart glasses: the Amazon sunglasses with Alexa, or maybe Snap Spectacles. Those are really neat; they're kind of stylish, they have smart functionality, and they're much, much less expensive. There's obviously no display in them, but if you want some smart functionality and a bit of style, there are also maybe the Bose glasses, which offer an audio-only experience. I think those are probably a little closer to the things we're hoping some customers will be comparing our glasses to, and that's why our effort is really focused around this: the smart experiences definitely have to be great, but at the end of the day they also have to be a really stylish, comfortable pair of glasses.
So that is what people are judging this utility against: is it an upgrade over the glasses I'm already wearing, whether that's sunglasses or corrective eyewear? Exactly, yeah. Let's wrap up with a few lightning-round questions here. What commonly held belief about spatial computing do you disagree with?
I think that wide-field-of-view 3D overlays have somehow been tied so closely to augmented reality that they've kind of become the definition of augmented reality, and I don't quite think that's right. Overlays are really cool; they're an incredible visual experience. But I think they're neither necessary nor sufficient for augmented reality. You can have great augmented reality without overlays; with sensors and context there are pretty incredible ways to do augmented reality. And even if I only had overlays, I still don't think that's enough to truly enhance the experience of the world in front of you. You need context; you need more. One of the things I actually think you need is all-day wear. We really believe there's something magical about getting a notification based on some bit of data or something in front of you or around you, or being able to click once and have the glasses guess what you want to do. When we get that right, it's amazing. And that low friction only happens when you're wearing it all the time. So overlays are really great, and I definitely think people should be focusing on them; we are too, from a research perspective. But people are really missing some of the other incredible things we need to do, which is building these really low-friction, contextual experiences. I wish there were a lot more discussion on that, and a little less on the overlays.
Yeah, that's a great one. Besides the one you're building, what tool or service do you wish existed in the AR market?
This is purely selfish, coming from my personal life, but in whatever spare time I have, I'm actually a board game designer. I design board and card games, and I have a few published games. I just love tabletop gaming, especially Euro-style or designer-style games. And one thing I've always thought would be incredible is to integrate smart glasses with the board games themselves. I love face-to-face gaming, I love the tactile feel of real components, I love the transparency of board game mechanics, but sometimes there's a lot of adding and scoring and housekeeping associated with a game. If you could do that automatically, and also have hidden traitor mechanics, or people secretly colluding against other people, or all kinds of neat integrated game mechanics where the game could be competing against the players, sending them secret messages that make them do things. You could create so many neat experiences by having these private, subtle augmented reality experiences blended into the mechanics of board games, and I really, really want to see something like that.
Yeah, it'd be amazing. Board games are a big hit in my family. We're currently playing Terraforming Mars, which is a new addition to the house, and it is complex and meaty and juicy, and every single game we play plays out very differently from the previous one. It's a lot of fun. And I love this idea of integrating AR into the game mechanics, whether it's simply for user convenience, which would be handy, or to add an actual mechanic, an extra layer of artistry and intrigue to the game, which is really enticing as well. Have you been following Tilt Five and what they're doing there?
Yes, yeah, for a long time, back from the very first castAR Kickstarter, which happened right about the time I started the smart glasses work. I think it's so exciting. I really love the approach and the technology, and the focus on bringing AR and tabletop gaming together is great. So I hope that's very successful.
Yeah, that's cool. What book have you read recently that you found to be deeply insightful or profound?
The thing that impacted me most and changed so many of my viewpoints was actually a pretty popular book from a few years ago called Sapiens, which is a bit of a history of humanity, but really about what makes humans human. That book had so many ideas that gave me new frameworks for looking at humanity. The part that really spoke to me was the bit about the different ways that humans view reality. There's objective reality, something like gravity, which doesn't depend on whether people believe in it. There's subjective reality, like "fish is delicious" or "fish is terrible," which is really only about a person's beliefs. But then there's this whole idea of intersubjective reality. This is something like a currency, or even a country or a company, where if one person stops believing in it, that doesn't really matter, but if everyone stops believing in it, it just stops existing. It's so interesting that humans have this ability to believe in these intersubjective realities, and we believe in them so strongly that they become as real as objective reality. Something like the dollar, or something like Apple, is as real to us as gravity or a rock in front of us, but it's actually just a fabrication that exists in a bunch of different people's minds. Understanding how you can have these shared stories is so, so useful. I think it's really applicable even to this view of augmented reality: what is augmented reality? What is its relationship to overlays? If everyone believes that augmented reality is overlays, that actually becomes reality for people. So how do you separate what is objective from what is intersubjective?
And how do you try to change people's minds about what is real and what just exists in people's minds? Maybe I'm making it come across as more philosophical than it actually is, but it really helped me understand what makes humans unique: the power of telling stories, and the power of people having a shared vision they're working towards that feels really real. It put a lot of pieces into place for me about the way the world works.
Yeah, shared fictions have been instrumental in letting us move beyond small groups of a handful of people into much larger groups and organizations, however you split them up, whether corporate entities, municipalities, governments, or countries, in order to accomplish much bigger and better things than we otherwise could. It's probably one of our best inventions, and one we don't really talk much about. If you could sit down and have coffee with your 25-year-old self, what advice would you share?
I think the most important thing, and it'll start off sounding a little cliche, is: don't be afraid to make mistakes. But more important than that, surround yourself with people who are also not afraid of making mistakes, who want to go after the truth and what is actually real, and who are willing to throw away any preconceived notion they have. People who will respect it when you say: look, this just doesn't work and we should abandon it; I think I was wrong and we shouldn't do this anymore; I know I said we should pursue this, but we shouldn't. To be in a group of people who respect that, and who act like that themselves, is such an incredibly freeing experience, and you get so much great work done when everybody is after the truth, nobody is afraid of making mistakes, and you have that trust in each other. I've had times in my career when I've had that and times when I haven't, and to me it's the most important thing: it's most directly related to whether I'm able to do good work and whether I enjoy my work. So I would say really double down on that particular aspect, both in myself and in others.
Yeah, that's a great one. Any closing thoughts you'd like to share?
Just that I'm really excited to see what the next five to ten years look like in augmented reality. We're in this, for somebody like me, super exciting stage right now where everything is possible, and if you come up with a great idea and a great technology, it can actually define and change what's going to happen. There's a lot of really great, interesting, creative work going on in the smartphone industry right now, but you can't be a small company of a few hundred people, come up with a really cool idea, and make a piece of hardware that's going to change the smartphone industry. It's too well established, too big, with too many big players, and the formulas of what works are already very well established. So this is so exciting right now, both because the possibilities seem endless and because a great company can come up with a great idea. It's like a new frontier where anything can be successful, and I just can't wait to see what's going to happen.
Yeah, this is really a fun and exciting time to be in this market. Where can people go to learn more about you and your efforts at North?
I think the best way is to go to our website at bynorth.com, where there are ways to sign up for information. As we trickle things out over the next several months about the Gen 2 product, that's where people will be able to see it first. Fantastic,
Stefan, thanks so much for this conversation.
Thank you. It's a lot of fun.
Before you go, I'm going to tell you about the next episode. As you know, South by Southwest was cancelled this year, but my guests were so passionate about their topic that we got the panel together anyway. The panel is Spanning Realities with Music: from Childish Gambino's augmented reality dancing, to Marshmello's Fortnite concert in virtual reality, to the mixed reality experience of Tónandi, immersive and spatial computing is closing the gap between the real and the virtual when it comes to music and art. The panelists include Amy LaMeyer, managing partner at the WXR Fund; Tony Parisi, Global Head of AR/VR Ad Innovation at Unity; Rebecca Barkin, VP of Immersive Experiences at Magic Leap; and Eric Wagliardo, creative director and founder at &Pull. This is a fun panel. Please subscribe to the podcast so you don't miss this or other great episodes. Until next time,