The AR Show: Cayden Pierce (H2O Smart Glasses Community) on the Power of Contextual Search and Open Source Projects
12:44AM Feb 11, 2023
Speakers:
Jason McDowall
Cayden Pierce
Keywords:
glasses
smart
people
cases
rv
overlaid
run
pull
community
contextual
conversation
technology
hardware
sensors
build
information
talking
app
search engine
wearable
Welcome to The AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Cayden Pierce. Cayden describes himself as a transhumanist hacker working to enhance our intelligence using AI, smart glasses, and eventually neurotech. Cayden posts regularly about the current state of smart glasses and is actively developing open source smart glasses hardware and middleware solutions. To support these efforts, and to facilitate insights between end users and smart glasses makers, he recently started the H2O Smart Glasses Community. Cayden is also working on a contextual search engine at MX Labs to help deliver meaningful utility to the smart glasses of the future. In this conversation, we dive deep into potential use cases, current smart glasses hardware, and the potential of contextual search engines.
Speaking of a contextual search engine, or an implicit interface: what smart glasses have the potential to do is listen to the conversation, identify knowledge gaps, generate the query and search for it, and overlay that relevant information on your vision. In the amount of time that it takes you to even realize that you don't know something, or that you need to go look it up, it's already been overlaid on your vision, so you can stay in the conversation, or you can continue thinking about your problem, or basically stay in the context that you were originally in without having to switch out of it.
Cayden goes on to describe his open source hardware solution, as well as the thinking and work behind a middleware solution he calls the Wearable Intelligence System. Cayden has developed a broad and deep understanding of the coming age of smart glasses. I think you'll really enjoy the conversation. As a reminder, you can find the show notes for this and other episodes at our website, theARshow.com. And please support the podcast at patreon.com/theARshow. Let's dive in. I heard you had a memorable surfing trip to the west coast of British Columbia in Canada. What happened there?
Yeah, so I went on a surf trip to Tofino, BC with some of my van life friends. I was living in an RV at the time, and I had some friends who also lived in their vehicles. And we went to Tofino to surf. And one of my friends who'd been there before knew of this really, really long road on a First Nations reserve where basically hundreds, or realistically maybe thousands, of people living in their vans and cars and RVs would park to go catch the surf in town. And so we parked there at one of the first spots, right at the very front of the road. We set up all our stuff, and then we took a car into town every day to go surfing. And on the very last day, when we were hanging out on the beach, bumming around, surfing, we got a call from one of the friends we'd met on that road. And she was frantic, she was freaking out. And she said, you know, the cops are here, all the First Nations are here, and they've shut down the road. They're kicking everybody out. There's this exodus. And we kind of didn't believe her, probably because we wanted to keep surfing. But later in the day we went back, and indeed there were barriers that the police had set up in different places. And there was just van after van, car after car, RV after RV pulling out, literally hundreds of them in a convoy, pulling out of this road that had been there for years. And we're pulling in in our car, hoping we can just get to our vehicles and vans to get out of there, and some cop pulls up, and he stops us, says, hey, we're shutting this whole place down, you know, you can't go in there. And my RV is 100 feet away. I just pointed to it, and I said, we just got to get our stuff and go. And I realized there was a barricade just past my RV, and another just before my RV, but nothing in between. And the guy looked back at the barricade behind us, and looked at the barricade in front of us, and looked at my RV. And he just gave me a smirk and a thumbs up, and he drove away. So we went and parked at the RV, and we just sat there as hundreds or maybe even thousands of vehicles pulled out of this road. And we were the only ones out of that entire place that got to stay, because we happened to be in the 100 feet of road between the two barricades.
You just happened to get lucky and be in the right place at the right time for that parking. Yeah, exactly. Yeah, it was kind of fun. Amazing. So you were out there on the road in this RV. Was this some sort of cross country road trip? What was the motivation for jumping in the RV?
Yeah, it was a cross country road trip, which I know you are familiar with yourself. I am, yeah. I wanted to kind of go on an adventure as I was coming to the end of my undergrad. But I also wanted to continue to do hardware research. So I figured out a way to do both of those together. I started in Toronto with an RV, and I drove across Canada to Vancouver, where I ended up spending the summer surfing and doing research at the same time.
Amazing. So it was like a hacker-inspired road trip of sorts.
Yeah, exactly. I was trying to solve the problem that I wanted to adventure and do research at the same time. And I'd started reading about van life, and how people do this in their vehicle as an inexpensive way to see some of the world. But I didn't think there was going to be enough room to, you know, actually continue to do hardware; you sort of need a static lab to do hardware. And so I got an RV, and I kind of stripped out all the things I didn't need, the extra bed, the kitchen table and everything, and I replaced it with desks and engineering equipment and a soldering setup and all the things I needed to keep doing research. And it was also kind of an exploration in wearables themselves. I think a lot about how our technology is extending ourselves, and the things we use the most we attach to ourselves, we wear them: our phones, our sensors; you know, you have a backpack with your food and water and your laptop, everything you need. And I started thinking about, what about the infrastructure around me? And so this RV, I started to think of as a wearable itself, a wearable wearables lab, I kind of called it. So I wanted to take all the technology I relied on, and make it all into a single wearable, kind of a single unit that is a person and everything that they rely on to exist, but with modern technology.
That's really incredible. On some level, the major discovery was in the creation of the RV itself, and its ability to be a rolling hardware lab, which by itself is very impressive. But beyond that, were there specific projects you were excited to work on, or actively working on, during that trip?
Yeah, absolutely. It is funny, kind of the meta-analysis of, you know, the lab itself becoming the project. And maybe I could just one day build labs that help me build labs. But yeah, I wanted to keep working on the projects that I was already working on, and the lab allowed me to keep doing exactly the same things that I would have done. And the things that I worked on in that period of time, that summer, and this past summer when I also returned to the RV, were some of the smart glasses work that I had been doing. So the Wearable Intelligence System, which is a software framework for smart glasses; the open source smart glasses, which is an open source smart glasses hardware device; as well as BrainJam, which is a pair of headphones that have brain stimulation in them to kind of create a new type of musical experience. And, of course, a bunch of other research projects that I was working on over time, but those were the major projects that kind of were born and lived in that RV lab.
That's incredible. I'd love to touch on each one of those as we go through this conversation. But maybe we can go back a little bit, and you can share where this passion for tinkering and exploration and wearables and smart glasses really originated for you.
Yeah, well, the tinkering side of things I think is pretty natural, and a lot of people in the space share it: you know, you're eight years old, taking things apart and trying to figure out how they work, which is just some kind of innate personality trait of curiosity that some people share. I spent a lot of time growing up doing that, and on, you know, howstuffworks.com, just trying to figure out how things work with that innate interest, which is pretty common, I think. And then later on, when I was in kind of my first year of university, I was really into self improvement. So, you know, exercise, nutrition, working out, and also the areas of self improvement that are for mental, cognitive improvement. So thinking about sleep, and scheduling, and breaks, and, you know, light therapy and things like that to kind of maximize mental performance. And at the same time, kind of doing what everyone does when they're that age: questioning things and figuring out the why, and getting some kind of, you know, motivation, values, and goals figured out. And I do remember one day very specifically, I was working out and had some idea about a project I was working on, or a book I was reading, and I went and wrote it down in a note on my phone, which was a very common practice for me, still is. But I kind of meta-analyzed that situation, and started to think about the fact that this note that I was taking was really an extension of my memory. And there was no real fine line between where my memory in my gray matter stopped and where the memory in my phone began; it was really an extension. And I realized that, well, you know, I'm using technology to extend my mind. And so I could kind of combine this self improvement with this love of innovative technology to extend my intelligence. We are these sensing, thinking beings; we can kind of bootstrap ourselves, almost. We can improve our sensing, and we can improve our cognitive capacity, using technology and tools. And so I immediately went and looked for this and realized, oh, you know, I did not invent this idea. And the first thing I found was "As We May Think" by Vannevar Bush, where he describes his Memex system, which just kind of blew my mind at the time, and gave me this huge excitement about this possibility of improving our fundamental position and capabilities. And still, to today, that is kind of what motivates me. And I just see smart glasses as the platform that is next up, as the most promising technology to do so. I think that there are other technologies that have done that, smartphones, etc., and there are other technologies that will, like neural interfaces. But smart glasses are in the sweet spot, where we're going to be seeing them as the next platform to do so.
I really appreciate and enjoy that perspective: these technologies that we create for ourselves to extend our abilities, to communicate, to collaborate, to pontificate and all the rest, to experience. Just thinking through, as you're describing, some of these earliest explorations that we've done as humans: it really starts with language and the evolution of our language, and then the tools to extend that language, either to transmit it over longer and longer distances, or to store it for longer periods of time, or some combination of those things. Everything from writing to the beginnings of the data technologies that we think of today, with signaling towers and semaphores and telegraphs and all the rest, just constantly trying to find tools to extend our ability to communicate and to think and to remember and to access data. Absolutely. But really, it all kind of originates with a thought in my mind, and being able to somehow transport that thought to your mind, so you can appreciate and understand it, either so that we can have a shared understanding to build a relationship, or so that we can have a shared understanding to work on something together. Whatever it happens to be, it all begins with a thought.
Yeah, definitely. I mean, I think that is a good point, and it extends to a lot of different areas. Our technology, our kind of current user interface paradigm, is incredibly explicit. So you have a thought, and you know what you want to do with your phone, for example; you know what you want to achieve; you know exactly what steps you would take to make that happen. But you have to pull it out of your pocket, and put your password in, and pull up the app, and enter the information, and look through and find the next thing. I think we're going to be moving more and more to implicit or contextual interfaces, where as soon as you have that thought, or maybe even a second before you have that thought, your device is already predicting that that thought is going to be had. And you might just have to press one button: it's already figured out what you want to do, it's input the information, it's pulled up the interface, you just have to say yes. Whereas now you have to do all of these explicit steps, because you have that thought and you know what you want to do end to end. It's just, how can we get the device to either figure that out better, or predict it before you even need to do it yourself? What do you think
is necessary to bring that sort of experience to reality?
Well, there are a few aspects to it, and it depends on what use case we're talking about, because some things are easier than others. But you absolutely need an understanding of the user's context. So you need to know where they are, and what they're doing, and what they're talking about, in order to have any kind of chance of guessing what they want to do next. If you don't know any of that information, then you have no idea what they're going to do. And then you need a predictive system that can take all of that input, and that has recorded all of your previous activity, all the previous times you've interacted with your device, and can say: given this context, they're likely to want to do this thing. And then we also need a UI, a different type of interface, that is suggesting things, almost, that is not just responding to exactly what you're inputting to it, but that is pulling things up based on what it thinks you might want to be able to do. And I think that's even going to require, on the lower level, a different way for programs to work, because they're going to need to be able to talk to each other in a way that's different than before. So you're going to have somewhat of an AI layer on top that's looking at your context, figuring out what you might want to do, and then it has to be able to pull up the app and input the information that you want. So, say I'm walking out of the airport, and I have my bike with me that I just flew with. I want my device to read my Airbnb confirmation, figure out where I need to go, and schedule me an Uber XL, because it knows I have the bike, and I just hit yes. So it needs to be able to have access to my applications, like my email, which it needs to be able to open and read. And it needs to be able to open Uber, and it needs to be able to input that information. So I think, fundamentally, the operating systems are going to have to change as well, to allow for that.
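To make that loop concrete, here is a minimal sketch in Python of the implicit-interface idea Cayden describes: a context snapshot goes into a predictor built from past interactions, and the interface surfaces a prefilled action the user only has to confirm. All names here (ContextSnapshot, predict_action, and so on) are hypothetical illustrations, not part of any real system he mentions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an implicit (contextual) interface loop:
# observe context -> predict the likely intent -> prefill the action,
# so the user only has to confirm instead of doing every step explicitly.

@dataclass
class ContextSnapshot:
    location: str                # e.g. "airport_exit"
    activity: str                # e.g. "walking_with_bike"
    recent_text: list[str] = field(default_factory=list)  # emails, messages

@dataclass
class SuggestedAction:
    app: str
    command: str
    prefilled_args: dict

def predict_action(ctx: ContextSnapshot, history: list[tuple]) -> SuggestedAction | None:
    """Toy predictor: match the current context against past
    (context, action) pairs and reuse the closest one."""
    for past_ctx, past_action in history:
        if past_ctx.location == ctx.location and past_ctx.activity == ctx.activity:
            return past_action
    return None

# Usage: the AI layer runs continuously; the user just presses "yes".
history = [(ContextSnapshot("airport_exit", "walking_with_bike"),
            SuggestedAction("uber", "request_ride",
                            {"vehicle": "XL", "dest": "from_airbnb_email"}))]
now = ContextSnapshot("airport_exit", "walking_with_bike")
suggestion = predict_action(now, history)
if suggestion:
    print(f"Suggest: {suggestion.app} {suggestion.command} {suggestion.prefilled_args}")
```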
That puts a lot of emphasis on privacy. Beyond the plumbing elements that are necessary to pull that off, the trust level that we would have to have in that sort of system is high; it is accessing all of our data. Yeah,
I agree. Although I think the example I gave is probably one that requires much more invasive data than some of the others, and might require more of an all-encompassing cloud connection than some others. For example, you know, if you're running a contextual or an implicit search engine that is listening to your conversation, yes, you might have to give it access to transcription. But you can have a system which is dropping all of the audio, and only holds a buffer of your conversation for 30 seconds or something, and then drops all of that too. But it's listening to your conversation, listening to what's being discussed, and pulling up relevant information that would be useful to you in that moment. And it doesn't necessarily require access to every single one of your apps, and your email, and everything like that, in order to provide a decent experience, at least in terms of public knowledge, like searching the web to pull in that kind of information.
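As a rough illustration of that privacy model, here is a minimal sketch, under the assumption that raw audio is transcribed immediately and discarded, with only a short rolling window of transcript text kept. The class and constant names are hypothetical.

```python
import time
from collections import deque

# Hypothetical sketch of the privacy model described above: audio is
# transcribed and dropped right away; only ~30 seconds of transcript
# text is retained as conversational context, then that is dropped too.

MAX_AGE_S = 30  # keep at most ~30 seconds of conversation context

class RollingTranscript:
    def __init__(self):
        self._buf = deque()  # (timestamp, text) pairs

    def add_utterance(self, text: str) -> None:
        self._buf.append((time.time(), text))
        self._expire()

    def _expire(self) -> None:
        cutoff = time.time() - MAX_AGE_S
        while self._buf and self._buf[0][0] < cutoff:
            self._buf.popleft()  # old speech is gone for good

    def context(self) -> str:
        self._expire()
        return " ".join(text for _, text in self._buf)

# The search engine only ever sees context(); no audio is stored.
buf = RollingTranscript()
buf.add_utterance("so the contextual search engine listens")
print(buf.context())
```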
Is that a project you've been working on?
Yes, that is pretty much exactly the main focus that I've been working on, what I'm convinced is kind of a killer use case for smart glasses, and something that I really want myself. While we're talking about, you know, the extended mind and improving our cognitive capabilities, I think search engines are the best example of a technology which is improving our cognitive capabilities. Every single person, no matter their occupation, what they do, is using a search engine every single day, and we use it multiple times a day. And it has fundamentally changed the way that we think about things. There used to just be this kind of giant gap, where if you didn't know something, you just had to deal with that, or you'd have to go through a long process of going from a query to actually getting an answer. And modern search engines have automated the step from query to answer. And so the whole pipeline of a search engine is: you're doing some kind of cognitive task, you're thinking something through, you're trying to solve a problem, and you come across a knowledge gap, some bit of knowledge that you're missing, some answer to a question, that if you had that answer, you'd be better able to do that cognitive task or solve that problem. So right now, we identify the knowledge gap, we formulate a query or a question that will fill it in, and then we switch context over to a search engine to look up that query and get our answer, and then go forward. And, speaking of a contextual search engine, or an implicit interface, what smart glasses have the potential to do is listen to the conversation, identify knowledge gaps, generate the query and search for it, and overlay that relevant information on your vision. In the amount of time that it takes you to even realize that you don't know something, or that you need to go look it up, it's already been overlaid on your vision. So you can stay in the conversation, or you can continue thinking about your problem, or basically stay in the context that you were originally in without having to switch out of it.
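Here is a minimal end-to-end sketch of that pipeline, transcript to knowledge gap to query to overlay card. The gap detector and search backend are toy stand-ins I'm assuming for illustration; a real system would use learned models and an actual search API.

```python
# Hypothetical sketch of the contextual search pipeline described above:
# transcript text -> detect a knowledge gap -> form a query -> search ->
# overlay the answer, all before the wearer would reach for a phone.

def detect_knowledge_gaps(utterance: str, known_vocab: set[str]) -> list[str]:
    """Toy gap detector: flag capitalized, out-of-vocabulary terms
    as candidate things the wearer may not know."""
    gaps = []
    for token in utterance.split():
        word = token.strip(".,!?")
        if word.lower() not in known_vocab and word[:1].isupper():
            gaps.append(word)
    return gaps

def search(query: str) -> str:
    """Stub for a real search backend (web, Wikipedia, etc.)."""
    return f"[definition / summary card for '{query}']"

def on_transcript(utterance: str, known_vocab: set[str]) -> list[str]:
    # Each returned card would be rendered on the display within the
    # latency budget of a live conversation.
    return [search(q) for q in detect_knowledge_gaps(utterance, known_vocab)]

known = {"we", "were", "talking", "about", "the", "trip", "to"}
print(on_transcript("We were talking about the trip to Tofino", known))
```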
There's a lot that goes into this, and the potential is tremendous. Because you're right, we're constantly looking things up. In fact, even as I engaged with my family this morning, we had a conversation about the word draw and its multiple definitions. We were all brainstorming all the variations on this very simple word that we use all the time; we found four different definitions, and one of those definitions has a hundred variations on pulling something out of something: to draw out, draw a card, that sort of thing. Anyway, these things, for us at least as a family, come up even in conversation with each other around the dinner table, some interesting topic that we want to go and explore together. It's pervasive in our work lives and our personal lives, this idea of looking things up. And you noted that this notion with glasses is that there is an opportunity, because the glasses have a couple of elements in them that give them a leg up. One is the ability to gain more context, with the microphone always there, and potentially with other sensors that are there gathering information to understand this sort of context. At some level, just a microphone is enough to provide a lot of potential value. And the other side of it is the output, the ability to visually represent or display the information that has been retrieved. As you kind of think through the full problem set here, what do you think are the hardest pieces in making something like what you just described happen?
Yeah, there are a lot of hard problems to solve to make this work well. So let's think about what it might look like to use it, and then think about where that might fail. Some examples of how a contextual search engine could be used in daily life: if somebody says a word that you don't know, you can instantly have a definition of that word overlaid on your vision. Or if you meet somebody new and they say they're originally from a place, a country that you've never heard of before, you can have a world map overlaid on your vision that zooms into the country they're from, to show you where it is, along with maybe a bit of textual information about the culture or the language or the demographics of that place. Or maybe you're in a conversation, and somebody mentions a politician that you've heard of, but you don't really know their platform very well. It could pull up the Wikipedia page for that politician, show you a quick bio summarized down to explain their platform, as well as a picture of their headshot to kind of jog your memory of any time you might have seen them before on the news, or anywhere. So these are just a few of the possibilities in the public knowledge space, for searching the web. But at the same time, the exact same sensing and intelligence pipeline which allows for this contextual search engine on public knowledge, if it's applied to your private knowledge, is immediately a memory augmentation or memory extension tool. It's going to be able to run on things like your internet search history, or your email, or, really importantly, because it's on smart glasses, your previous conversations and the transcripts of those conversations. So if a couple of weeks ago you read a paper, and now you're describing it to a friend of yours, as soon as you start talking about that paper, the system can be searching through your internet search history, or your email, or the things that you've read, find that paper, and pull up a summarized version of it to jog your own memory of the parts that you don't currently remember. And it can also give you an easy share button to send it to the person that you're speaking with, so that they can have access to that information. And in the future, when we're seeing larger adoption of smart glasses, they could see it immediately overlaid on their vision. And so you kind of have this shared, extended information space that this is pulling in. Or, for example, maybe a few days ago a colleague of yours told you about a new service they're using for DevOps, and you wanted to start using it and look into it, but you can't remember the name. As soon as you start talking about that thing, or you ask a natural language query about it, the system can search through your previous conversations, find the name of that thing, and give it to you immediately, as soon as you need it. Now take that exact same system, and, you know, you're talking to your girlfriend in the morning, and she says, I brought the dog for a walk, and it pulls up the Wikipedia page for canine, and you say, oh, that's what a dog is. You know, that is a false positive, or it's a piece of information that is not useful to you. It was correct, you know; the system found a concept that was mentioned and pulled up a definition for it, or the definition of the word that was said. But you didn't actually need that in the moment.
And so I think the biggest problem, if we're focused in on this actual use case, the software app that's a contextual search engine, is false positives: it's providing too much information, or information you don't need. If we show you a bunch of things overlaid on your vision that you already know about, you start to ignore it, and you start to get annoyed by it. And so we need to only pick the things that are actually relevant and valuable to you, so that you actually pay attention to it and actually get value out of it. Other problems exist too. Of course, the answer right now will always be: we need hardware. We need a device, a pair of smart glasses that you can wear all day, that are comfortable physically and socially and visually, that will be able to run these types of use cases. We don't have that yet; that doesn't yet exist. And another problem is the sensing technology. So, being able to transcribe the conversation: we have really good ASR, automated speech recognition, systems, but they need a good signal in order to be able to run. And so the glasses, or whatever device you're capturing the conversation with, needs to have a good enough microphone setup that it can hear what I'm saying, and, more importantly, can hear what the person that I'm speaking to is saying. Because if we don't actually have a transcript, then nothing else matters after that.
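The false-positive problem he names is essentially a filtering decision. Here is a minimal sketch of one plausible approach, only overlaying a card when it is both likely novel to the wearer and relevant to the live conversation; the scoring functions are toy stand-ins for what would be learned models, and all names are hypothetical.

```python
# Hypothetical sketch of a false-positive filter for a contextual
# search engine: a candidate card is only overlaid if it is (a) likely
# novel to this wearer and (b) relevant enough to the conversation.

def novelty(entity: str, user_known_entities: set[str]) -> float:
    # If the wearer clearly already knows this concept, showing its
    # definition is noise ("oh, that's what a dog is").
    return 0.0 if entity.lower() in user_known_entities else 1.0

def relevance(entity: str, transcript_window: str) -> float:
    # Toy relevance: how central is the entity to the current talk?
    words = transcript_window.lower().split()
    return words.count(entity.lower()) / max(len(words), 1)

def should_overlay(entity: str, user_known: set[str],
                   window: str, threshold: float = 0.05) -> bool:
    return novelty(entity, user_known) * relevance(entity, window) > threshold

window = "she brought the dog for a walk this morning with the dog"
print(should_overlay("dog", {"dog", "walk"}, window))            # False: known
print(should_overlay("Memex", set(), "the Memex paper Memex"))   # True: novel
```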
That's a healthy list. As you're speaking, I was reminded of Clippy. I don't know if you're old enough to know who Clippy was.
Who doesn't know Clippy.
But that experience was similarly inspired by this idea that, man, these computers are complicated, but they're so powerful; what if we just had a little bit of AI that sits in them and can anticipate our every need? And it suffered mightily from this notion of too many false positives. Yeah. Overly helpful to the point that it was annoying, and people turned it off. Yes, it went away. Now, in addition to the hardware and the other things you described there, all this work is being done through MX Labs. Is that right? Yeah. So
the contextual search engine project is being built by MX Labs.
And in addition to your work at MX Labs, you're also starting your own community, a community focused on the set of challenges within this wearable glasses, smart glasses sort of space. What's the motivation to start a new community? What's missing from the AWEs or the VR/AR Associations of the world? Yeah,
we started the H2O Smart Glasses Community just a few months ago, and the motivation was kind of a reaction to people reaching out to me directly. I'd started making a bit more public some of the work I was doing on smart glasses, with some YouTube videos and some posts, and I started getting people reaching out, consumers reaching out, users reaching out, early adopters reaching out, with all kinds of very similar questions about, you know, what they could do with glasses, what glasses they should buy, and a lot of the time, use cases that they had in mind that they really wanted, that they thought smart glasses could solve for them. But they didn't really know what glasses could do that, if any, and how they could go about this. And at the same time, I realized there's been a problem that I've seen for quite some time, which is that there's somewhat of a disconnect between those making the technology and those thinking about the use cases and their requirements for what they're actually going to do. You know, there's a classic story about how the Google Glass engineers used to just screen-copy the display onto their computer and write apps with the glasses on their desk, and never put them on; there was a whole different team that was actually defining what they were going to do. And, you know, I've seen that myself. I have bought or used a lot of smart glasses, trying to find, you know, what works for me, and there are a lot of these glaring problems with them that only show up when you wear them for, like, eight hours a day, three days in a row, where you start to notice pain points, bugs, whatever it is, that the people who made them would have noticed, but they didn't, because they didn't wear them. And so the idea behind the H2O Smart Glasses Community is to create an information pipeline and connect together the end users, the consumers, who either have smart glasses now or are in line to be the first to adopt smart glasses, and the people who are making them, so that we can get around that problem. And I think it's a very timely thing, because it's kind of like starting a smartphone community in 2006 or something like that. We're just getting to the point where the use case side of things is going to be incredibly important, because the technology is getting to the point where people will start being able to adopt it into their lives. And so, yeah, that has been the impetus: a real focus on use cases that are not just cool demos, but things that people are really going to adopt into their daily lives, and creating this connection between the people making them and the people using them. It sounds powerful, and highly useful. Just on the last point you mentioned: I think the AWE and the VR/ARA are very awesome communities, and they're doing incredible stuff. And I don't think that it's really a competition, especially because we're open; you know, it's free to join, and it will always be free to join our community, so people don't have to make a decision between one and the other. And I think it's going to be very complementary, because they're very much focused on the industry aspect of the community, connecting different members who are within the industry. And so I think that they'll only complement each other.
Very nice. There's definitely a gap there. And I really like this focus, just because my own background has really been on this problem set that you're describing, which is: how do you develop something that people love using? Yeah. And that requires, of course, a deep understanding of the technology, but even more importantly, a deep understanding of the users, who they are, how they go about their lives, and what problem that particular piece of technology is actually going to solve for them. And so that's wonderful. In terms of establishing a new community, what are the challenges in standing up the H2O community?
Yeah, well, this is the first community that I've started that is of this nature, I'd say. We've had the TeamOpenSmartGlasses community, which is one that grew very organically, because it was very much focused on the software and the use cases, and so people showed up just because they wanted to use these things. Whereas the H2O Smart Glasses Community is a different type of audience, where we're trying to find people who are interested in smart glasses, interested in buying smart glasses and adopting them. And so I think that one of the big challenges has been just figuring out how to create an environment that stimulates organic and authentic engagement, that doesn't require myself or our team holding it up. I'd love a place where people will have a conversation about what use cases they're using, or what glasses they're using or adopting, without us having to constantly lead that. Because it's easy to make a newsletter, and that's valuable, but it's much harder to create a place where people are meeting each other, or people are putting out their own ideas, without any kind of push from us. And we've found, I think, that the meetups have been the most effective way to do that, because it's synchronous communication: people show up, they're dedicating their time, they're sitting there in front of their computer with us all. And so it's very much a time when people are highly engaged. So that's one of the best ways I've found for people to actually talk about and show off and demo, you know, the real use cases that they're currently using, or talk about what they're trying to adopt.
So this starts organically, and it builds based on people's interest in the sort of content that you are creating, or the sorts of products that you're making available for people to come and consume. And maybe we can dive into a little bit of that. You noted that webinars are a big part of that. What's the focus of the webinars? What's the format you're following there?
The meetups start with 30 or 40 minutes of presentations and Q&A. So, you know, we have a host who leads that; usually that's me. And we have one to four presenters, usually two people. And there's always at least one presenter who has a product which is available to consumers today: either an app that you can download that can run on smart glasses, or a hardware device somebody can buy, or even just an interesting use case which, you know, you could use a stock pair of smart glasses to run right now. That's kind of a requirement for every one we have: somebody has to be presenting and talking about a use case that, if people wanted, they could run tomorrow, if they got the requisite hardware. And then we're also inviting people who are, you know, users, talking about how they're using the glasses, or people who are developing something that might not be available right now but is pretty consumer focused, and they tell us all about what they're working on. You know, we had L Michelle talk about her health and wellness, or therapy, smart glasses use cases. They take Snapchat Spectacles, the camera version, and just record their life from a point of view, and then they come together with a therapist and view that video later. It's really using it for kind of a mental health and healing aspect, which, from my background, is not something I would have ever thought of for smart glasses. But, you know, she's running these workshops at universities, and people are completely having breakthroughs, where they realize how they treat other people or how they're treated, because they can view their point-of-view footage from a more objective perspective after it happens. So there's one use case which is just, you know, you could buy these glasses and run it, no special apps required. We've had transcription companies who are doing, you know, captions for the deaf, or intelligence tools. And coming up at our next meetup, we have ActiveLook, who you've had on the podcast, talking about the ENGO 2, which are the sports glasses. So there's a big focus on things that are available today, where you can actually get value from using smart glasses. And then the last 20 minutes or so is an open discussion slash free-for-all, and anybody who wants can talk; you don't have to raise your hand, it's just an open discussion. Depending on how big the meetup is, we might do breakout rooms, but it's just a way for people to connect with each other and show off what they're doing. One guy, Gil, who joined a couple of meetups ago, was actually joining the meetup on his Nreal Air. And so when we had the open discussion, he said, hey, I'm calling from my smart glasses. Like, wait, what do you mean? And so he shared a video of himself, and he was joining our smart glasses meetup, talking about smart glasses, from his smart glasses. And he told us all about how he wears them about two hours every single day for media consumption. So I think it's kind of a melting pot to get everybody together who's using these things and say, you know, how is this bringing value to you? And the people who are creating them are able to come and say, look at these valuable things we've made that you could use your glasses for.
That sounds awesome, truly awesome. You kind of talked through a couple of these use cases, including this first-person point of view on somebody's interaction with others, how they're being treated, or how they're treating others, as an unexpected, valuable use case out of something like this. I know that one of the pieces of content that you've spent a lot of time researching and documenting on the community site itself is this kind of categorization of use cases themselves. Yeah. As you kind of reflect through the conversations, or through your own writings, what do you think will be the top couple of categories for the way that people will use these most commonly over the next few years?
I think the contextual search engine, which I got to plug at the very front. I'm very convinced that this is going to be really the way that we use search engines now, times 10. I think it's going to be a way that we fundamentally upgrade how we think, and how we converse, and how we solve problems. So I think that's going to be one of the top ones. And I'd lump into that category memory augmentation, extending our memory, you know, the way that we use notes or a personal knowledge management system today. That same type of contextual search engine, but running on your personal knowledge, your personal conversations, your personal internet history or email, is so powerful in terms of not just filling in information that you didn't know, but reminding you of all the things that you do know, but that you're not able to recall exactly right in this moment, and that are relevant to what's going on. Another one that I think is obvious, and right now is probably the most used in terms of consumer use cases, is media consumption. A lot of people are talking to me saying that they're just watching movies and videos and playing games on their glasses. So it's not a mobile use case; they're using tethered devices, like birdbath-type tethered systems, usually. But, you know, they love it, either for the size, that they can bring it around with them everywhere, or the fact that it can just track their head, so they can kind of lie wherever they want. And maybe lumped in there with consumption is also creation: using camera glasses, point-of-view glasses, I think is going to catch on, because people love to communicate their point of view and their experience. Another area is probably going to take some amount of adoption before it takes off, but, I mean, it's what you said at the beginning about language: no matter what technology we create, we use it to communicate. And I think the social aspect of smart glasses is going to be gigantic. I have a couple of ideas of how that's going to look, but it's probably just scratching the surface of where we're going to get to. Like, I imagine you go to a networking event, and you're able to kind of see overlaid on people, or above somebody, you know, the business that they're working at and their position. And everybody is probably going to be sharing a short list of areas they're interested in, or what they're trying to get out of that event, and you're going to be automatically communicating those lists to each other, and you might even see an aura or a color kind of around a person, indicating how likely you are to match, or how much value you'd get from communicating. I think those exact same use cases, with a little twist, would immediately convert into dating, if we're talking about the social use cases, right? Like, see somebody's relationship status, see their sexual orientation, see, maybe, if they choose to share it, their affective emotional state in an aura around them. I think it's going to really enhance our communication with each other. And of course, the classics, you know, captions and translation for communication. Also health and fitness and wellness, both on the physical side of just, you know, overlaying your biometrics, your bio signals, for when you're playing sports or exercising. It's a much better experience than pulling out your watch.
You know, actually, I use a smartwatch fitness tracker when I run, and I dislike that experience of kind of stopping and looking at it, or lifting my wrist and slowing down, when I'm really trying to go quickly. And then I think it's also going to open up mental health use cases that we don't even fully know yet. But if you can be tracking things about your activities, like where you spend your time, who you spend your time with, where you go, and how your stress follows that, you are going to be able to make better decisions to improve those metrics. And I think we're going to integrate brain sensors into smart glasses in the future, and those are going to allow for a whole new area of mental health apps, kind of like a Fitbit for your brain, where you're going to be able to understand your stress levels and your focus levels and your mental workload levels. And because the smart glasses have contextual information, we're going to be able to align that with where you were, who you were with, and what you were doing, to help you make better decisions, to maximize your mental performance and minimize your stress levels. I mean, I could keep going on and on; I think navigation is going to be another big area. But, specifically on the consumer side, those are some of the areas that I think are going to be pretty well adopted.
That's amazing. Those all resonate with things that I've kind of reflected on or seen or imagined as being useful. It's a great list. One of the things you just mentioned, and you also mentioned earlier in the conversation, is this notion of brain computer interfaces and their potential benefit as it relates to mental health, or other things. How do you imagine, ultimately, that brain computer interfaces can be best leveraged? Is it about control? Is it about, you know, getting information back into the brain? By control, I mean not, of course, cyborg-esque mind control, but, you know, using the brain to help drive the interface and to tell the system, the computer system, what your intent is. Where do you think that fits in this grand puzzle?
Yeah, well, that is a giant area that could be discussed in multiple different ways. The area that I know the most about, and that I think might be the closest to being realized in the near future, is definitely on the non-invasive side. The types of use cases that able-bodied people with a non-invasive brain computer interface might get value from in the near future are some of the things that I just touched on. Like, you're working at your desk for so many hours, and you're going to eventually start to lose focus and stop being able to perform at your best, until you go take a break, or maybe get a bite to eat, kind of clear your brain, maybe meditate. And then when you come back, you're going to be focused again; the literature shows that. But it's hard to know when to take that break. It's hard to know when you're actually dropping off; a lot of the time you don't realize until it's way too late, and you would have been better off taking a break an hour and a half ago, and you basically got nothing done in the last while, and it just caused you stress. We all do that all the time. I would love a device that was running in the background, and was invisible to me until it said: you need to take a break now. And before I go down that rabbit hole of being unproductive for an hour, I could listen to that advice, go take the break, and then come back feeling refreshed. And a brain computer interface, or brain sensor, is able to do that by sensing your focus levels and sensing your mental workload levels and sensing your stress levels, and predicting when the best time is for you to take a break. Another, I think, really powerful use case here is on the stress side of things. We do already have some level of understanding of stress with a smartwatch or something like that, but you can drastically increase the accuracy of these types of measurements when you incorporate a brain signal, because there's so much more information there. And, you know, you can combine it with a body signal to increase the accuracy. And with smart glasses, now you're understanding information about what you're doing, what you're working on, where you are, where you're going, when you ate, who you were talking to. So, correlating all of these things, who I'm with, where I was, and what I was doing, with my stress levels, and then providing an interface to users to help them make better decisions, so that they can decrease their stress overall over time: I think that is an incredibly valuable use case. And so, if we're thinking about this kind of area of use cases that are more realizable in the near future, we have to have some kind of form factor that has some kind of spatial distribution on your head, that you wear all day. Right now, there are some headband form factors that are kind of the closest to that, but I don't think that consumers are going to adopt that as an all-day thing. But I do think that over the next few years we're going to be seeing everybody wearing smart glasses, and smart glasses have pretty decent spatial distribution on your head: you've got your forehead, your temples, the temporal area, you know, your ears, which we can now access with optical and electrical brain sensors. And so I think that those types of use cases, which to me are clearly valuable, are going to be able to be realized with smart glasses as kind of the best form factor with which to put them on somebody's head.
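The break-advisor idea reduces to watching a smoothed focus signal for a sustained drop. Here is a minimal sketch under that assumption; the focus estimate would come from a real sensing pipeline, and the threshold and window values are made up for illustration.

```python
from collections import deque

# Hypothetical sketch of the "tell me when to take a break" use case:
# a wearable streams a focus estimate (0..1) derived from brain/body
# sensors; when the smoothed score stays below a floor, we surface a
# break suggestion instead of letting the wearer grind on.

WINDOW = 10          # number of recent samples to smooth over
FOCUS_FLOOR = 0.4    # below this sustained level, suggest a break

class BreakAdvisor:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def update(self, focus_estimate: float) -> str | None:
        self.samples.append(focus_estimate)
        if len(self.samples) == WINDOW:
            avg = sum(self.samples) / WINDOW
            if avg < FOCUS_FLOOR:
                self.samples.clear()  # reset so we don't nag repeatedly
                return "Focus has dropped; take a break now."
        return None

advisor = BreakAdvisor()
for f in [0.5, 0.4, 0.4, 0.35, 0.3, 0.3, 0.3, 0.3, 0.25, 0.2]:
    msg = advisor.update(f)
    if msg:
        print(msg)
```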
And then, in the long run, I think our interface to our computers is probably going to be based on sort of semantic decoding. I don't think that in the near future we're going to be controlling much with our brain signals; it's going to be a lot more implicit. But in the further future, we might become more explicit, where we're decoding what you're thinking about, the content of what you're thinking about, and using that as input to the system. So instead of you having to say a voice command, you kind of just are thinking about something. And I'm not even necessarily talking about subvocalization; I'm talking about actually decoding in meaning space, so that we go straight from the thought. You have some intention, not a sentence in your head that says, I need an Uber, but just an intention to get somewhere, and it decodes that and uses it as the input to figure out what to do next.
I imagine that's a little ways away.
That last piece? Absolutely. It's very far off. Yeah.
That sounds pretty magical. Yes. One of the things you also mentioned is that you have an opportunity to play with a lot of different pieces of hardware, a lot of different smart glasses that are out there today. And you share videos, reviews, kind of commentary on your impressions of these devices. You know, sometimes you wear them for hours at a time, days in a row, to really have a strong understanding of where they are today, and maybe some of where they currently fall short. As you kind of think through the stuff that's available today, how do you categorize the various types of smart glasses that we currently have available, or that we can reasonably expect over the next few years?
The categorization that I've made has been purely reactive to what exists. I do believe that there's some holy grail; I don't even know if it's possible, I don't know how long it will take, but I do think that we can imagine one pair of glasses that are, you know, the Tony Stark EDITH glasses: they're slim, they're sexy, and they do absolutely everything that we could ever want them to do. And so, probably, as time goes on, there are going to be fewer categories as the technology gets better. The reason why we have multiple categories now is because there's a whole bunch of areas in which we have to make trade-offs right now. Companies are seeing this list of trade-offs, and they're thinking about what use cases they want to achieve with the glasses, and then deciding what trade-offs to make to best align with those use cases. And so, just looking at all the glasses that exist now, you can kind of see a trend in the sort of categories that people end up falling into, based on the trade-offs that they've made around, you know, weight, and power, and tether, and sensors, and things. So, the first category I think of, the one I'm most excited about, and that we've mostly been talking about, is kind of the heads-up display, all-day wearable info glasses, that are focused mostly on providing information. It's not really mixed reality, where there's direct mixing of physical light input and the augmented input; it's more about the context, but with text information or light image information. And so the trade-off there is, you know, a lower resolution, lower field of view, often monocular, often monochrome, fewer sensors, maybe no camera, maybe a microcontroller instead of a high-power compute device. But all those decisions are made so that we can get a pair of glasses that you can wear all day comfortably. And they don't need, you know, an advanced visual experience, because they're just providing you with snippets of information at a low frequency. Another area we've seen is audio glasses. They don't have a display, they don't have a camera; their main thing is they have a microphone and they have speakers. And the main things those are for, usually, are listening to music, consuming some kind of content like a podcast, or an intelligent assistant. So you have an intelligent assistant on your head, you can ask it a question, and it can provide you with output, you know, your answer. There are also screen glasses. A lot of these categories kind of fade into each other, but screen glasses are basically designed for the use case of using them as a screen, so you're supposed to watch stuff. They usually have no sensors, or few sensors, they don't necessarily have tracking, and they're almost always tethered, because to get that good visual experience, with a higher resolution and higher field of view, and stereoscopic, binocular rather, you don't have the weight and power budget for the compute and batteries. So they're almost always tethered to a phone or a laptop when you get to that kind of category. There are camera glasses, which have been around for a long time, which just have a camera, usually not much else, and the main focus is content creation, filming and pictures. The health and fitness area maybe doesn't deserve its own category; I haven't figured that out yet.
But there are devices that are tailor-made for a specific sport, or for health and fitness, and you're expected to only wear them while you're doing that thing. There are also MR glasses, which are hardly glasses, even today. If they're actually mixed reality, combining the digital content with the real world, they need, you know, multiple cameras, pose tracking, a good display, a good visual experience, a lot of compute power with low latency, a lot of just power requirements. They're almost always tethered, or they're a headset, today, to achieve that kind of use case. And then there are kind of ophthalmic glasses, which are all about helping you see better. There are examples, for people who are blind, of glasses that help them at least see a little bit better with kind of a magnification, or there are these tinting glasses that will automatically tint for you, things like that, that can help you see better. And then there's specialty, which is just the bin I throw everything else into, which is usually glasses that are kind of heads-up display info glasses, but designed for one specific use case. They're kind of like an appliance; they can't do anything else, they're designed not to do anything else. So those are the current bins that I've created and place things in, and there's a bunch of examples of glasses that exist today that fit into each of those.
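One way to see his point about trade-offs is to treat each category as a point in a small design space. The sketch below encodes that idea as data; the field values are my rough reading of his descriptions, not specs from any product.

```python
from dataclasses import dataclass

# Hypothetical encoding of the trade-off space described above: each
# category of glasses is a different point in (display, sensing,
# compute, tether, battery) space. Values are illustrative only.

@dataclass
class GlassesCategory:
    name: str
    display: str           # "none" | "monochrome HUD" | "full color stereo"
    sensors: str
    compute: str           # "microcontroller" | "mobile SoC" | "tethered host"
    tethered: bool
    all_day_wearable: bool

CATEGORIES = [
    GlassesCategory("HUD info glasses", "monochrome HUD", "mic, IMU",
                    "microcontroller", False, True),
    GlassesCategory("audio glasses", "none", "mic", "mobile SoC", False, True),
    GlassesCategory("screen glasses", "full color stereo", "few/none",
                    "tethered host", True, False),
    GlassesCategory("camera glasses", "none", "camera, mic",
                    "mobile SoC", False, True),
    GlassesCategory("MR headsets", "full color stereo",
                    "cameras, pose tracking", "tethered host", True, False),
]

for c in CATEGORIES:
    print(f"{c.name}: all-day={c.all_day_wearable}, tethered={c.tethered}")
```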
That's pretty amazing. I do appreciate your comment that over time there will be fewer categories. Yeah. But today there are many, because there are so many hard trade-offs that are necessary in order to produce something that is wearable, however we define wearable, even in its grossest sense. If you call a HoloLens a wearable device, that is a bit of a stretch; technically it's something you can put on your head, but I wouldn't call it glasses. It's not glasses, not like my glasses. You have this hard set of trade-offs now. And maybe over time the technology improves, which is not, unfortunately, merely a factor of Moore's law and, you know, our ability to cram more transistors into a piece of silicon; it's advancements in optics, and advancements in batteries, and advancements in displays, and advancements in sensors. Yeah, those don't follow Moore's
There's no Moore's Law of optics. Thanks, Karl. Yeah, yeah.
So we might anticipate that over time they become fewer, but still, today, there are many, and the progress is slow. And I think that one of the challenges the community faces, and I think it's a project that you're working on, is that there's a bunch of built-up anticipation and expectation around what smart glasses could be, but there's not a lot of opportunity for people to really tinker and begin to explore more intimately what's possible and where it can be taken. Can you describe the open source projects that you are working on through the H2O community, both on the hardware side and the software side?
Yeah, sure. Absolutely. Well, there are two distinct open source projects. They actually work together, but there's a very hard line between them. So there's the open source smart glasses: we just put out version 1.0 right before CES a couple of weeks ago, and we wore them, brought them to CES. They're a fully functional pair of smart glasses that are fully open source, and that has been built by a community of different people around the world over the course of about the last year now, maybe a little over a year. And then the other project is the wearable intelligence system. The first is more on the experimental side, for people who can get into the why of it, but it's not necessarily something that's going to be commercialized. The wearable intelligence system, though, is an open source software framework that is being used right now by companies, and that I'm expecting to be adopted even more, because it allows people to create applications for smart glasses very easily. It handles a bunch of the things that every smart glasses app has to do, but that are really hard and take a lot of time. And it also allows you to make one application that can run on multiple different pairs of smart glasses. Specifically, we're targeting these heads-up display info glasses, and some of the glasses we support now, like the Vuzix Blade and the INMO Air, and these open source smart glasses. We're currently putting in the ActiveLook ENGO glasses, and we're going to be putting in some of the latest Vuzix glasses soon. And so this is something that is more kind of production ready, and that we're expecting to be a core element of the software environment for smart glasses, where people will be able to write whatever application they want, and this is sort of a middleware that will talk to the glasses from your phone. And it handles all of the hard parts, which is offloading data: all the sensor data that the glasses collect gets sent to the wearable intelligence system and gets processed, transcription all runs locally, and everything that we need to display and show on the glasses gets sent back from the wearable intelligence system. And the idea is that somebody else can write their own application that just talks to the wearable intelligence system, so they don't have to handle all of that hairy stuff, and supporting different glasses and everything; they can just focus on making a use case.
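Here is a minimal sketch of the data flow he describes, assuming hypothetical class names rather than the project's real API: the glasses stream sensor data to a phone-side middleware, the heavy processing (like transcription) runs there, and only small display payloads go back to the glasses.

```python
# Hypothetical sketch of the offload-and-display flow described above.
# Class and method names are illustrative, not the project's real API.

class GlassesLink:
    """Stands in for the BLE/Wi-Fi connection to the glasses."""
    def send_display(self, text: str) -> None:
        print(f"[glasses display] {text}")

def transcribe(audio_chunk: bytes) -> str:
    """Stub for on-phone ASR; runs locally, audio never leaves the phone."""
    return "<transcribed text>"

class Middleware:
    def __init__(self, link: GlassesLink):
        self.link = link
        self.apps = []  # apps subscribe to processed events

    def register_app(self, handler) -> None:
        self.apps.append(handler)

    def on_audio(self, chunk: bytes) -> None:
        text = transcribe(chunk)             # heavy work happens on the phone
        for handler in self.apps:
            out = handler(text)              # the app decides what to show
            if out:
                self.link.send_display(out)  # tiny payload back to glasses

mw = Middleware(GlassesLink())
mw.register_app(lambda t: f"captions: {t}")
mw.on_audio(b"\x00\x01")  # a fake audio chunk from the glasses' mics
```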
I like it a lot. That's great. So, two independent but related projects. Ultimately, one is to explore the hardware and make an open source project there; the goal is to allow people to build and tinker on the hardware and sensor side, I'm guessing.
Yeah, that's a big aspect of it. There are a few reasons. One is that we started off building the wearable intelligence system, and we wanted a pair of glasses we could use to test some of the applications we were building for longer periods of time. Especially at the time we started the project, the glasses we were using didn't have the battery life and the comfort to wear for a full day. With the open source smart glasses, we needed a lot less technology: we didn't need a camera, we didn't need a very high resolution display, we didn't need an Android processor. So we knew we'd be able to make the battery life much longer, and we figured we'd be able to make the comfort better. On one hand, it was: what's a way we can build a demo device that can help us show off and test some of these apps we're building? On the other side, it was exactly what you're saying: the opportunity to test out some ideas, get more community involvement, and just allow people to learn more about how smart glasses work. For me, I had a number of ideas on the hardware side of things to try for smart glasses that I didn't see people doing. I didn't want to go all in on building a hardware company for commercial consumer glasses, at that time or now, but I wanted to try these things out: things like wrapping the batteries behind the ears, for the weight distribution, but also to hide mass while increasing the amount of power you have; things like putting a microphone array along the front frame of the glasses; and something I think a few companies are actually changing to now, which is getting rid of the microprocessor and switching to a microcontroller. So instead of running a full operating system, you're just running a real-time operating system, very low power, to explore whether we could get this to be a lot lower power. Then finally: I was working in research labs working on smart glasses and reading papers from people doing smart glasses research, and realizing that every time they do a project with any form of custom hardware, they go build a pair of smart glasses from scratch, and they suck, right? As you'd expect, if you spend two weeks building glasses with a team of academics, they hurt people's heads, they look weird, the battery dies, and they don't work very well, which confounds the study. So another aspect of this is figuring: OK, we build one pair, one time, that's fully open source, and because it's open source it's something you can change. Then if somebody wants to do a research project that requires custom hardware with their smart glasses, they can start with our MIT-licensed, fully working smart glasses design, make their small change, and build it at a pretty low cost. So those were the reasons, and because it seemed like it would be super fun, which it has been.
So cool. And then on the software side, what's your ambition for the, would you call it, the wearable intelligence system?
Well, it's evolved a lot, and it's actually going through another level of evolution as we discover what's needed. It started out as myself just wanting to test use cases for smart glasses that a lot of people talked about but that didn't actually exist. There's no app you could go download on the Blade or the Moverio, or whatever I was using at the time, to try these things out. Even really simple stuff: I just wanted to overlay transcription on my vision, because I was reading papers about how students remembered lectures better if they had live captions during the lecture. I thought, oh, that'd be great, let me try that when I'm learning things or talking to people and see if I remember better. And then I sat down and realized nobody else was actually writing smart glasses apps yet. People talk about all the things you need to do to make this ecosystem work on the software side, like offloading all the data to your phone, running all the AI and processing on your phone, and then sending it all back, plus transcription and voice command and UI and so on, and none of that had been solved. So I ended up building a whole bunch of that, and then I was using that framework to test out the many different use cases I'd been experimenting with over the last few years. And I started to realize: oh, this is valuable in itself as a software framework for other people trying to make these kinds of applications. Then the aspect we've been realizing more recently is this: the use case side is great, there's translation and live captions and memory tools and visual search and an intelligent assistant and Wolfram Alpha, literally all of it, you could download it right now and run it on a Blade or an INMO or pretty much any Android smart glasses. But the value, and we've been talking to hardware companies we're partnering with to port and support our software, is the fact that everybody wants to write an app for smart glasses, and the type of glasses I think are going to be adopted over the next couple of years are going to be microcontroller based. They're not going to run the apps on the glasses; the apps are going to run on your phone, and you're going to send data to the glasses. But the way the current architecture works, you can't have six different apps all connected to your glasses at the same time. We're getting kind of technical here, but this is a technical solution. So what we're doing is making a version of this that is really stripped down to the basic feature: connect to the glasses, handle that stream in between, handle the voice command UI and maybe some other interaction methods, handle the transcription, which pretty much every app needs, and then be the middleman so that other applications can talk to the glasses. So say you want to run six different apps on your smart glasses; they can all run through the wearable intelligence system. To the consumer, all it's going to look like is your smart glasses app: you open it, you connect, it might have a few settings, and then it runs in the background forever. You can just download regular Android apps, and they'll start working just like any other app, but they're displaying on your glasses.
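A minimal sketch of that one-connection, many-apps problem, under the same caveat: names and the simple queue policy below are illustrative assumptions, not the real system. The glasses accept a single link, so one middleware process owns it and relays display requests from any number of client apps.

```kotlin
import java.util.concurrent.LinkedBlockingQueue
import kotlin.concurrent.thread

// A display request from one of the phone-side apps.
data class DisplayRequest(val appId: String, val text: String)

class GlassesMultiplexer(private val sendToGlasses: (String) -> Unit) {
    private val queue = LinkedBlockingQueue<DisplayRequest>()

    init {
        // A single worker drains the queue, so the one BLE/Wi-Fi link is
        // never written to by two apps at the same time.
        thread(isDaemon = true) {
            while (true) {
                val req = queue.take()
                sendToGlasses("[${req.appId}] ${req.text}")
            }
        }
    }

    // Every app on the phone calls this instead of opening its own
    // connection to the hardware -- this is the "middleman" role.
    fun requestDisplay(appId: String, text: String) {
        queue.put(DisplayRequest(appId, text))
    }
}

fun main() {
    val mux = GlassesMultiplexer { line -> println("HUD <- $line") }
    mux.requestDisplay("captions", "live transcription line")
    mux.requestDisplay("translate", "bonjour -> hello")
    Thread.sleep(100)  // let the worker drain before the demo exits
}
```

A real arbiter would need a smarter policy than first-in, first-out (priorities, preemption, per-app screen regions), but the structural point stands: one process owns the device link, and everything else is a client.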
It sounds incredibly useful. As somebody who's in this industry, exploring problem sets around this, it sounds incredibly useful and valuable. So that's amazing. As you look out over the next five years, project into the future and think about the work you're doing here with the H2O Smart Glasses Community. Where do you want to go? What do you see, or what do you want this to be, in five years?
Yeah, I would love for it to be an even stronger channel between the people making glasses and the users. I think in five years we're still going to be on an adoption curve; in a good situation we might be starting to get to late-majority adoption, though that might be optimistic, I don't know. So we're still going to be able to stick to our current core value, which is pushing forward the adoption of smart glasses, and I think in five years that channel is going to be much stronger. Because there are going to be a lot more people involved, a lot more community members, we're going to be able to distill the information a lot more than we can now. What I would like to see is a channel up from end users to industry, with us serving as a middleman to distill it. An example of what that might look like: right now we have a list of over 50 use cases that we've open sourced, saying this is what you might do with smart glasses. What I'd love to see is a crowdsourced list, ordered by how many people are requesting each app. I would love open data, anybody can access it, created by our thousands or tens of thousands or hundreds of thousands of community members, saying which app they're dying to have on their glasses that no one has built yet, so they can say it directly to the industry. You know the classic saying, one customer email represents a thousand customers, or however it goes. I would love for this to be a way to say to the industry: this is what we need, and also, this is what we dislike, these are our pain points; to just be an information highway up. Then I'd like there to be an information highway down, from the industry to the users, where our distillation on that side is going to be one of trust, one of reviewing what's available. We're still early in the adoption cycle, and there are going to be a lot of people building experiences and devices. I would love there to be a core team at the H2O Smart Glasses Community that establishes trust with this community of users, so companies can send us their glasses, give us access to their betas, and so on. Some members of our community will try those things out in their daily life and review them as users, and the things that are actually good we will disseminate to the community for free, not as an advertising model but as a value model. I think this is going to force the industry to make things that are better, because there's going to be a funnel where we say: well, this one's the best; they all sent us their version of translation, we tried them, and this one is our recommended version. That would be the downward information pipeline. And I think if we have this full-duplex communication, it's going to increase adoption and improve the use cases and experience of smart glasses.
I like it. Let's wrap with a few lightning round questions. All right: what commonly held belief about AR, especially this smart glasses notion of AR, do you disagree with?
I've gotten some mad emails from people saying that I use AR in the wrong way, because there's a camp that believes the reality in augmented reality, or the space in spatial computing, is limited only to the physical world around you. I think if we accept that thoughts and information are part of reality, and are themselves real, then it makes perfect sense that our contextual search engine, or other information-based overlays, are augmented reality, because the information space around us is reality, and we are augmenting and enhancing it with these types of use cases.
Our reality is what we perceive, and that is the physical world as well as your own thoughts. Yeah. Besides the one you're building, what tool or service do you wish existed in this market?
I think it would be applications on the social side. I'm well aware that this is probably not timely to start working on right now, because we're going to need some level of adoption of these things before it's useful. But I would love the experience of walking around at a networking event, or at a bar, or any kind of social place, and having more information about the world going on around me. I think tools that push us to be in person are good. We've built all these tools that make it easy to not be in person, and that's awesome, but I would like to see more than just technology that gets our non-physical interactions to be close to as good as physical ones. I'd like to see our physical interactions upgraded to something that's never happened before. We talked about the networking and the dating applications, and I think that's just scratching the surface. I'd love to see somebody build a compelling use case in that domain.
Nice. What book have you read recently that you found to be deeply insightful or profound?
Quite a few; I've been reading a lot lately. I'd put out Rainbows End, which is a really good book that I'm sure you and a lot of listeners have read. It's all about an AR future, and one of the things I really liked about it was its recognition that intelligence is a lot about recognizing which tool to use to solve a problem, as opposed to just being really good at solving problems without tools. The characters are living in this AR future where the types of tools they're using are different, and the tools are actually running on their sensory experience, running on their reality, but they're still constantly pulling up tools to solve their current problems. I also liked the use of micro-interactions to interact with their wearables. I think not enough work has been placed on human output and computer input when it comes to smart glasses; a capacitive touch sensor on the arm of your glasses is not a good experience. Since reading that book I've been thinking about it even more, and we're even talking to some companies working on wrist-based gesture interaction, to incorporate that into the wearable intelligence system, because I think we have an incredible need for some kind of mobile, always-available control.
That's a great one. If you could sit down and have coffee with your 25-year-old self, and I realize as I'm asking this question that it may not be relevant yet, I'm not sure you've hit 25, so maybe we'll go back in time, back to your 18-year-old self.
Oh, I was going to say that I'd ask my 25-year-old self what crypto to buy, and when to sell it.
Let's go back in time then, back to yourself entering your undergrad experience. What advice would you share?
Well, I almost want to give the cop-out answer, which is that I wouldn't meet with him, because I wouldn't want to change the path; it's been pretty fun. But I might say to experiment even more, and even more intensely. When you experiment, especially when you experiment publicly, in real life, not just trying something in the lab but making something, bringing it out into the real world, and trying it, it's magical how all these opportunities just show up. I've experienced that a million times. For example, we went to CES with these open source smart glasses, which obviously weren't a consumer-ready product, while companies with much better, consumer-ready products weren't even demoing because they weren't ready to show them. The amount of opportunity that came from just going out there with the new tech you're trying out, which might not work perfectly yet, was immense. And it makes things a lot more fun and interesting when you're living in the future. So I would say: get out of the lab more, and bring it into the real world.
So good. Experiment more, get out of the lab, interact with the real world. That's great. Any closing thoughts you'd like to share?
Only a giant thank you, of course, for having me on the show. Thank you, Jason, for running the AR Show. I think it is uniquely valuable, because the information you're getting out of people in these podcasts is not available elsewhere. I can't go buy a book or read a website to get this information; it's not published, and people don't write down the answers to the questions that you ask. So it's infinitely valuable to me and to so many people in the community. Thanks for the show; it's my favorite podcast.
I appreciate that very much. Thank you. Where can people go to learn more about you, your efforts, and the H2O community?
For myself, you can go to my website, which has a link tree at the very top: caydenpierce.com. You can find all the links there; I'm active on YouTube and LinkedIn, so feel free to reach out. And the H2O Smart Glasses Community is at smartglasses.community, that is the domain, smartglasses.community. You go on there, hit the big Join Now button, and that's how you can stay up to date with what we're doing. And definitely join the meetups; that's the best way to become involved, because everybody is there in real life, and you can meet everyone and talk and learn about what we're doing.
Fantastic. Cayden, thank you so much for this conversation.
Thanks, Jason.
Before you go, I'm going to tell you about the next episode. In it, I speak with David Jiang. David is the CEO and founder of Viture, a company creating a smart glasses solution for gaming and streaming. David's background is in human-computer interaction and design. He worked at Google on Google Glass, he was the chief designer and head of AR at Rokid, he built a startup to develop advanced AR positioning, and now he's heading the efforts at Viture. We talk about his background, his current efforts to create a gaming and streaming solution, and some of the unique elements that set it apart. I think you'll really enjoy the conversation, and please consider contributing to this podcast at patreon.com slash the AR show. Thanks for listening.