The AR Show: Julia Brown (MindX) Shines New Light on Brain-Computer Interfaces
6:19PM Mar 8, 2021
Speakers:
Jason McDowall
Julia Brown
Keywords:
brain computer interface
signal
challenges
technology
brain
world
spatial computing
sensor
lab
opportunity
device
people
create
generally
working
called
company
read
noted
index
Welcome to The AR Show, where I dive deep into augmented reality with a focus on the technology and uses of smart glasses and the people behind them. I'm your host, Jason McDowall.
Today's conversation is with Julia Brown. Julia is the co-founder and CEO of MindX, the creator of a novel brain-computer interface that combines neurotechnology, augmented reality, and artificial intelligence to create a look-and-think interface for next-generation spatial computing applications. Prior to starting MindX, Julia co-founded EpiWatch, a digital health spin-out from Johns Hopkins that developed a seizure detection and condition management platform for wearable devices in partnership with Apple. She was also a founding member of the Johns Hopkins Medicine Technology Innovation Center, where she oversaw a team of engineers and entrepreneurs to create more than 25 novel digital solutions that improve patient care, many of which were spun out into independent startup companies. Julia has an academic background in computational biology, engineering, and human-centered design. In this conversation, we get into the potential of brain-computer interfaces and their contribution to private personal computing when wearing AR glasses.
I think the immediate problem that we see brain-computer interfaces solving is specifically targeted in this area of private personal computing: making computing again something that you're doing to support your interactions with another person, or to support your interactions with the machine, or in general the real world around you, as opposed to kind of bringing you away from it.
Julia goes on to describe alternative approaches for brain-computer interfaces, including those using EMG and EEG. And of course, we dig into what Julia and her team are creating at MindX, which utilizes what they call holographic near-infrared spectroscopy. We walk through where they are in bringing their product to market, and what will be possible in the coming years. Julia also touches on lessons learned in working with academic research labs when bringing cutting-edge innovation to market. As a reminder, you can find the show notes for this and other episodes at our website, theARshow.com. Let's dive in. Julia, you had an opportunity at the beginning of MindX to really start things off with a bang: you had an opportunity to fly out to Caltech and give a demo of the nascent MindX brain-computer interface technology. Can you tell us that story?
Yeah, absolutely. So this was very exciting for us. We had just started the company about two months before, filled with a lot of excitement and also trepidation, as there were, you know, thirteen-hour days of trying to figure out how to get a company together, how to manifest the vision appropriately, and also just how to get the day-to-day operations together. We had, basically maybe a couple of days before, finished up a first, you know, kind of very rough version of the software that could take signals from the brain, decode those very accurately and very, very quickly, and use them to control your interactions with a computer. The kind of classical scenario and the justification behind a lot of brain-computer interface development comes from looking at the plight of individuals who have severe mobility impairments, for whom a lot of the mobility and independence that we all have really is much more challenging. Things like Skyping their daughter, ordering food online, reading on a Kindle, things that we very much take for granted, become much more challenging in that scenario. So we were very excited to have this opportunity to go and meet with some of the world's most prominent researchers over at Caltech and UCSF, and show them what we could do with our technology. And that was kind of the whole setup behind the visit.
And so two months in, a brand new company, how did you even get the demo? How did you even get the invitation to go out and showcase what you were working on?
You know, so certainly we are standing on the shoulders of giants, as typically any tech startup is, on other people's scientific contributions. So there were conversations the year prior between the folks at Facebook's neurotechnology-focused and under-the-radar division, then called Building 8 and now kind of encompassed within their Reality Labs group, because Facebook was very interested in investing in a non-invasive brain-computer interface product, and my co-founder, Geoff Ling, who founded DARPA's Biological Technologies Office and ran that for about 20 years, myself, and a few other folks who had been working around this intersection of brain-computer interfaces and the broad world of mobile computing. So it was the kind of culmination of a lot of conversations, with kind of a promise on the end of it that if you could demonstrate that you're not completely full of BS, then there'd be, you know, the potential for a collaboration.
How did that first demo go?
It was wonderful. It was honestly so exciting. You know, it was almost like childish excitement afterwards. I feel like you write a lot of software, and it does great things, but there are these moments where you see someone use something you've built and it really does something that excites them or helps them, and that is really meaningful to me. I think I continue to love that about both software development and entrepreneurship more generally, that problem solving for someone. So we basically hooked the whole system up, and it worked pretty much out of the box. Not only did it generally work, she was able to go through a very short, maybe 30-second kind of onboarding calibration, the woman who was our volunteer participant, but it worked better than they had expected. The actual command signal was faster than her ability to tell us, to verbalize, the things she was trying to do. I had recorded the whole thing, and there are kind of like some "ooh" noises in the background as all the researchers are standing around, because that was, you know, kind of an exciting moment.

And I feel like this sort of experience is so rare for an early team working on a first sort of demo, to actually have things go so well. Usually there are gremlins in the works and Murphy's Law rears its ugly head. Where did the company go from there after such a strong showing, in front of all these researchers? How did it go from there?

I'm glad you're bringing that up. Because, you know, certainly when we flew back out three months later, the system did not work right out of the box, and we spent several days debugging and trying to fix those issues, and then we were able to get, you know, a pretty good performance by the end of that trip. So certainly, I think the broader truth about software engineering and about startup life is that you have to put in a ton of work all the time in order to be prepared to get lucky, and it won't always happen. But then you can really capitalize on these moments that will propel you to that, you know, kind of next stage, and continue to really feed, I think, both the inspiration and the life of the company as you move through those early first few years. So certainly, we continue to have that mindset, to work our butts off as often as we can, just knowing that there are a lot of opportunities in the future.

Yeah. So I really want to dig into the underlying technology that you're working with. But maybe we can set the stage by painting a broader picture. Because we're talking here about neurotechnology more generally, but more specifically this sort of notion of a brain-computer interface. As you laid out, there's a lot of opportunity, certainly, with people who have some sort of physical disability, to change lives in many ways. But within the community that we're talking to here today, the opportunity is even broader. It's really anybody who's interacting with spatial computing. We still haven't solved this: how do we tell the computer what we want it to do? How do we tell the glasses what we want them to do, the VR headset what we want it to do? This notion of control is still a challenge. And we're very used to physical input, right? We have buttons, we've got analog inputs with mouse or touch screens. How do you think about the general challenge of control within spatial computing?
Right, so I think probably the solution at the end of the day will be multimodal, not that there will be just one control modality to rule them all, but that with the continual improvement of sensors that already exist, and new sensors, we will continue to optimize better ways of interacting with computing systems, ubiquitous computing, whatever you want to call it, for a new class of devices. I think the immediate problem that we see brain-computer interfaces solving is specifically targeted in this area of private personal computing, where we're making computing again something that you're doing to support your interactions with another person, or to support your interactions with the machine, or in general the real world around you, as opposed to kind of bringing you away from it. And that's something we certainly try to infuse in everything that we're doing as a company. So a brain-computer interface has the added benefit of not requiring you to move your arms around, like gesture recognition, which is a popular control modality; you also have your hands available to you, which again with gesture, you could certainly hold something like a joystick, but that still makes it a little bit more challenging to function in the real world. Another popular option is certainly voice control, and we've really seen that come a long way in the last decade. But that also requires you to, you know, yell at your device or yell into the air via Alexa or Siri or the interpreter of your choice. And that, again, would make this conversation we're having right now very uncomfortable, and it certainly makes privacy a bit of a challenge in the more social or public sphere. There are a few other options: eye- or gaze-based controls have the challenge around what's called the Midas touch problem, which is that anything your gaze lingers on, since dwell is normally what's used as a substitute for a selection or a click, really becomes clickable. So there's a lot of fatiguing interaction there, to have to stare at everything that you possibly want to interact with. So, in summation, a brain-computer interface allows you to silently pull up information, to silently take a note, take a picture, do a lot of the things that we're often doing when we glance down at our phones very quickly in order to supplement our experiences with the world around us.
Yeah, so multimodal is definitely ultimately part of the answer; that is kind of part of what you're talking through. But there are definitely challenges with every one of these other modalities, it seems. As I survey enterprise users, and many of the more popular enterprise devices, the fact that these AR glasses, for example, are hands-free is such a huge benefit to the system, to be able to put your hands on the problem. And so we need some sort of input mechanism that allows us to be hands-free and stay on that problem. And voice has been a very common, very popular way of communicating with these devices. But you're absolutely right, maybe in a work setting it's okay, but certainly for a consumer use case, voice is not at all private. I can imagine that a world in which we're all trying to yammer at our head-worn devices, "push, click, whatever, left, right," is going to be a cacophony of chaos.
I think one of the elements around voice, and I probably should have mentioned this earlier, is kind of the challenge of navigating voice-based control. It tends to be very unidirectional: once you've said something, that command then generates a new subset of commands, and backing up, if you will, is a fundamental challenge of using that as a primary navigation system for whatever your operating system is. So while we certainly see voice having a very important role in a lot of computing interactions, there are some places where it is challenging for both the human, I think, and for the computing system.
Yeah, you're absolutely right, voice is terrible for navigation, fundamentally. The other challenge that a voice interface has is that you have to memorize the entire interface, in some ways, right? If there aren't visual affordances in your view, then, as with Alexa or Siri today, you have to memorize what you can say. Over time, we hope that they understand our general intent from our natural language, but computers are still not that good. Even as humans interpreting other humans, we struggle to truly understand each other as we're talking to each other. Anyway, even within this notion of brain-computer interfaces, there's this idea that we are beginning to attach sensors to our body that aren't just looking at photons, the cameras that are looking at our fingers dancing in front of our face, but things that are looking at electrical pulses flowing through our body. There are a number of other solutions out there. I remember years ago with Thalmic Labs, back when North was called Thalmic Labs, their first product to market was the Myo armband, right? And then you mentioned Facebook, what is now Facebook Reality Labs; their excitement in the sort of general class of technology that you were working on certainly didn't stop with your demo around MindX. It extended ultimately to CTRL-labs, right, where they ended up acquiring that company for a billion dollars for the work they were doing around EMG, electromyography. What are these kinds of other technology approaches to trying to interpret more passively what it is that our brain is intending us to do? How does something like EMG work?
Yeah, I think EMG, electromyography, as you mentioned, is a great technology; I think there are a lot of great use cases for it. And in particular, with Facebook acquiring CTRL-labs, it's potentially a great substitute for holding a controller: instead of having to hold a VR controller in your hand, you wear an armband that basically interprets that same gesture you do, but just takes away that one piece. So I think from both a kind of consumer comfort and education perspective, that's a very easy transition, and also a very constrained use case, because that's where I think the right fit for something like EMG really is. So EMG is measuring peripheral motor movement signals from your muscles. As those signals come from the brain and are propagated throughout the peripheral nervous system, that's what it's able to pick up on. So it is certainly not doing any cognitive signal reading; you could not use it to really determine intent in the pure sense of that, as you were thinking about it. What it definitely can measure is that instantaneous signal around a planned motor movement, which makes replacing something you might do with a controller, again, a very easy fit. It tends to be a very noisy signal outside of a very, very specific use case, because you're always moving your arms or your hands. For example, I'm moving mine right now, even though I'm talking to my webcam. You generally have a lot of muscle activation that's happening even if you're sitting pretty perfectly still. So there's a lot there to parse out, and it's much easier if you already know what the user is supposed to be doing, as opposed to open-ended nerve signal decoding, for example. But yeah, I do think it's a great potential option. We do, actually: our whole software system functions with both the Myo band from Thalmic Labs, the precursor to the CTRL-labs device, as you mentioned, and Mudra, which also has an EMG band. And we've used those as very developer-friendly brain-computer interface substitutes while we've been working on our own hardware.
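To make the constrained EMG decoding Julia describes concrete, here is a minimal, hypothetical Python sketch: windowed RMS amplitude per channel fed to a simple classifier over a small, known gesture set. This is not MindX, Thalmic, or CTRL-labs code; the data is synthetic and the feature choice is illustrative only.

```python
# Illustrative sketch: classifying a small set of hand gestures from multi-channel EMG
# using windowed RMS features and a simple linear classifier. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rms_features(emg_window):
    """Root-mean-square amplitude per channel for one analysis window.
    emg_window: (n_samples, n_channels) array of raw EMG."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

def make_windows(emg, labels, fs=200, win_s=0.2):
    """Slice a continuous recording into fixed-length windows with one label each."""
    step = int(fs * win_s)
    X, y = [], []
    for start in range(0, len(emg) - step, step):
        X.append(rms_features(emg[start:start + step]))
        y.append(labels[start + step // 2])  # label at the window center
    return np.array(X), np.array(y)

# Synthetic stand-in data: 8-channel EMG, 3 gesture classes (0 = rest, 1 = fist, 2 = pinch).
rng = np.random.default_rng(0)
fs, n_ch = 200, 8
labels = np.repeat([0, 1, 2, 0, 2, 1], fs * 2)        # 2 s per gesture block
gain = np.array([0.2, 1.0, 0.6])[labels][:, None]      # gesture-dependent amplitude
emg = rng.normal(0, gain, size=(len(labels), n_ch))

X, y = make_windows(emg, labels, fs)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The point of the sketch is the shape of the problem, not the model: the gesture set is closed and known in advance, which is exactly the "very constrained use case" Julia says EMG fits well.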
I noticed recently that along those lines, but not the same sort of technology, there's a company named NextMind. I remember trying their demo at one point at one of the shows. It's basically a thing you strap to your head; it puts these little claws against the back of your skull in an attempt to get some sort of skin contact. And I believe what they're trying to do is interpret the signals going into your visual cortex, since that's where the visual cortex sits, in the back of your head. My super basic understanding of what they're doing is they're basically showing these unique 2D barcode-like patterns superimposed on top of the graphics you're actually trying to see, and they're trying to interpret: are you staring at barcode A versus barcode B on the screen, and using that to kind of activate things. Can you describe how that sort of technology works as well?
That's a very good question, and definitely exciting to see that NextMind's dev kits have come out. I won't speak for their specific version of the technology, but I'll definitely speak to what it is at its core, which is EEG, or electroencephalography. That is recording electrical signals through the skull, which acts as a volumetric conductor of that signal, and that has really posed a serious challenge for getting a good signal-to-noise ratio out of that signal previously. So their general approach, as I understand it, is similar to what a lot of other companies using an EEG-based technology have done, which is to leverage what's also called the P300 signal, but with a visual stimulus, and use that kind of within the interface in order to understand what the brain might be doing at that point in time. So I will say, I think EEG, which has been around for over 100 years, is a really fundamentally useful technology for a lot of healthcare purposes, and certainly invasive EEG electrodes are now widely used both for healthcare and for research. The biggest challenge around EEG, and certainly why we did not find it suitable for the solutions we're working on at MindX, is that you really cannot decode a fast, really high-resolution neural signal in the timescale that's necessary for replacing your clicking with a mouse or tapping something on your phone. You can get really interesting information around someone's attention; you can even get a pretty reliable continuous control signal, for example by measuring the strength with which you are continuously focusing on one element. You could use that potentially for driving a wheelchair forward; you can imagine that, and that's also a demo that is occasionally done, driving a robot forward. The challenge, again, is anything beyond that, where we want to be able to have multiple inputs happening within a few milliseconds. EEG is really just not a suitable option.
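For readers unfamiliar with the evoked-response idea Julia references, here is an illustrative Python sketch of the basic mechanic behind EEG paradigms like P300 interfaces: epoch the signal around each stimulus onset, average many trials so the small response emerges from the noise, and pick whichever stimulus produced the larger response. It uses synthetic data and is not NextMind's pipeline; it also shows why this class of approach needs many repetitions rather than a single millisecond-scale read.

```python
# Illustrative sketch: stimulus-locked epoch averaging to decide which of two
# on-screen tiles the user was attending. Synthetic single-channel data.
import numpy as np

fs = 250                      # EEG sample rate (Hz)
epoch = int(0.6 * fs)         # 600 ms window after each stimulus onset
rng = np.random.default_rng(1)

def simulate_trials(n_trials, attended):
    """Noisy epochs; attended stimuli carry a small bump around 300 ms."""
    t = np.arange(epoch) / fs
    bump = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if attended else 0.0
    return rng.normal(0, 5.0, size=(n_trials, epoch)) + bump

def mean_p300_amplitude(epochs):
    """Average across trials, then measure mean amplitude in a 250-400 ms window."""
    erp = epochs.mean(axis=0)
    return erp[int(0.25 * fs):int(0.40 * fs)].mean()

candidates = {"tile_A": simulate_trials(80, attended=True),
              "tile_B": simulate_trials(80, attended=False)}
scores = {name: mean_p300_amplitude(ep) for name, ep in candidates.items()}
print(scores, "->", max(scores, key=scores.get))
```

Note the 80 repetitions per candidate: the averaging that makes the response visible is exactly the integration time Julia points to as the limitation for fast, click-like control.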
So I'm going to attempt to summarize, and you can correct me here. Fundamentally, the bigger problem is, how do we interpret intent? We're wearing these devices, we're not sitting in front of a computer with a keyboard and a mouse or a trackpad, not sitting in front of a touchscreen. We are being presented with usually visual information and/or audio information, and we don't have a really great way of telling the system what we mean, what we really want to happen in this moment. And there are challenges certainly around holding a controller, because it means you're not then using your hands to interact with the real world; there are challenges around using gesture-based control with a camera, because having to hold your hands out in front of you gets tiresome, and there are some accuracy challenges. And there are challenges around EMG because, as you noted, it's not really interpreting intent; you don't necessarily have to move your muscle, but you still have to intend to move a muscle, you have to send that signal from your brain to that muscle, so you can map that sort of intent to move a muscle to some sort of action. And then we have EEG, which struggles with a number of things. But collectively, maybe for all of these sorts of brain-computer interfaces, it's about getting a clear enough signal, with high enough resolution and low enough latency, that you can do something meaningful. What's the alternative?
Yes. So the alternative is MindX. The alternative is to have an ideally non-invasive way to get at that signal that anyone building any EEG-based or EMG-based solution wants to get, which is a millisecond- and millimeter-level, so in both time and spatial resolution, recording of a population of neurons firing. Specifically, the reason we'd want to record a population of neurons and not an individual neuron is that, the way information is encoded in the brain, you're not going to get, for example, a command or a word, whether that's a motor representation or a language representation, from a single neuron. It's actually connections of many, many neurons, and the way that information flows between both them and populations of other neurons across the brain. And that's a really exciting area of science, and something that nobody really fully understands, how information is encoded across all the neurons in your brain. But what we do know is that there are specific regions that focus on specific types of tasks. The region of the brain that we're focusing on at MindX is decoding from the temporal lobe. There are some really interesting areas where there's overlap of both speech cortex and motor cortex, so we can experiment a bit with whether it's more comfortable for a user to command an AR interface by thinking about moving something like a joystick up, down, left, right, thinking about using a D-pad, for example, or thinking "left click," "right click," for example, whether that's a semantic language representation or a motor one. So again, just to kind of specify: when you're looking at all the invasive options that have existed previously, that's kind of where you go if you want to get really good-resolution neural data. So if you really want to know what someone is thinking, more or less, you have to go through the skull. That was the kind of fundamental limitation. If you want to create a two-way communication between the brain and a device, then you're probably going to need something more similar to what Elon Musk over at Neuralink is building, and you're going to need something that has a much broader area of coverage and a much more specific, potentially neuron-level, ability to record. Neuralink certainly also has desires to put information back into the brain, which we're not doing at MindX. So our sweet spot is basically: we want to make it so that you don't have to get brain surgery in order to have a brain-computer interface that is useful in the way that we kind of dreamed about from a sci-fi perspective, as something that can understand your immediate desires and commands and can translate those effectively for a computing system.
Certainly, Elon Musk captures the world's imagination with many of his projects. But I remember that the last demo he had done for Neuralink was with a bunch of pigs, right? He had strapped these sensors onto the snout of a pig, maybe I'm forgetting the demo. But anyway, these are pretty amazing ways of digging these electrodes deep into the fleshy part of our brains, in this case in order to send information both ways. And even then there are resolution challenges, the amount of bandwidth that goes back and forth. But it's a really exciting path to do it. But as you noted, you have to go through the skull, punching a hole in someone's skull, which is perfectly suitable for somebody who has suffered some tragedy where this is the best option to regain some capabilities in their lives. But the average human is not going to subject themselves necessarily to popping their skull open and having a bunch of implants put in. When you're talking about these sort of non-invasive approaches, that's the one that is approachable, right? It's the sort of thing that average people have an opportunity to engage with and a willingness to ultimately engage with. But in what you're doing, we talked about some of the many challenges that these other techniques have, EMG and EEG. What is the specific mechanism that you're using in order to interpret these signals?
So yeah, the holy grail is to be able to get the resolution you can get with an invasive system without having to have brain surgery. The kind of breakthrough that we've had on the research side to allow that to happen for the first time ever came out of what has been really a DARPA-funded effort over the course of 15 years into both invasive options and this kind of future, non-invasive option. It has been exploring, with our partners at the Johns Hopkins Applied Physics Laboratory, different options for using coherent light, an optical imaging system, basically, to see through the skull and to be able to pick up that high-resolution neural signal. Briefly, the approach that we're using is what we're calling hNIRS internally, which stands for holographic near-infrared spectroscopy. It uses a coherent light source to illuminate a small area of the brain, a few square millimeters, and measures the reflected light from the beam that's been shot into the skull using an imaging array. This measurement is sensitive to very small changes in the morphology of the underlying tissue, since tissue conformation changes how light is scattered. It's well known that neurons undergo a conformational change when they're activated, due to swelling of the membranes and shrinking associated with ionic flux. At the population level, the aggregate signals associated with those changes are what's called the event-related optical signal, or EROS, for our second acronym. Historically, EROS has been very difficult to measure through the skull because of the limited spatial resolution associated with incoherent-light-based approaches, and that's kind of where the research had started for years prior. But we've been able to combine coherent optical imaging and digital holography, which is a processing approach, to overcome those limitations and provide, ideally, the world's first reliable, wearable, non-invasive measurement of neural activity at the millimeter and millisecond level.
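MindX has not published its hNIRS pipeline, but one standard ingredient of coherent, holography-based sensing can be sketched generically: recover a complex optical field from an off-axis hologram by isolating one sideband in the 2D Fourier spectrum, then track how quickly successive fields decorrelate, since small changes in tissue conformation scramble the scattered field. The Python sketch below, on synthetic frames, illustrates only that generic idea; the carrier frequency, mask size, and decorrelation metric are all assumptions for illustration, not the company's method.

```python
# Illustrative sketch: off-axis digital holography reconstruction plus a simple
# frame-to-frame field decorrelation metric. Synthetic camera frames throughout.
import numpy as np

def reconstruct_field(hologram, carrier=(40, 40), radius=20):
    """Isolate one interference sideband in the 2D spectrum and return the complex field."""
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[:ny, :nx]
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(spec * mask))

def field_correlation(f1, f2):
    """Normalized correlation magnitude between two complex fields (1.0 = unchanged)."""
    return np.abs(np.vdot(f1, f2)) / (np.linalg.norm(f1) * np.linalg.norm(f2))

# Synthetic frames: a fixed speckle-like object whose phase is perturbed more and more,
# interfered with a tilted reference beam, recorded as intensity (what a camera sees).
rng = np.random.default_rng(2)
ny = nx = 128
y, x = np.mgrid[:ny, :nx]
reference = np.exp(2j * np.pi * 40 * (y + x) / nx)
base_phase = rng.normal(0.0, 1.0, (ny, nx))

def make_frame(perturbation):
    obj = np.exp(1j * (base_phase + rng.normal(0.0, perturbation, (ny, nx))))
    return np.abs(obj + reference) ** 2

frames = [make_frame(p) for p in (0.0, 0.1, 0.5, 1.5)]
fields = [reconstruct_field(f) for f in frames]
for k, field in enumerate(fields):
    print("perturbation step", k, "correlation with frame 0 =",
          round(field_correlation(fields[0], field), 3))
```

The printed correlations fall as the synthetic "tissue" phase is perturbed more strongly, which is the kind of coherence-sensitive readout that incoherent NIRS cannot provide.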
Amazing. And so this is not only about the light source itself that you're projecting into the skull, it's also about the sensor side of that, being able to read the signals that are bouncing back. Is that fair? Is there something unique both about how you're projecting light as well as the sensor technologies to read it? Of course, the software that goes along with it, I'm sure, is hugely unique. But on the hardware side, is there something extra special about the sensor as well?
We're using largely off-the-shelf hardware components in our solution. We've done some custom optimizations to those to overcome a few challenges. A core challenge has been to localize the volume of tissue appropriately. So basically, you're shining light through a medium, and that is going to hit a bunch of different things: it's going to hit the skin, it's going to hit the bone of the skull, it's going to hit interstitial space, it's going to hit the dura, and then it'll be in the cortex, and then there's a whole depth of cortex beneath that. So what we wanted to do, as we're basically capturing those semi-ballistic photons as they're being reflected back out off of those different types of tissue, is to categorize what level of tissue we're in, so that we can basically find the volume of interest, which is kind of the top layer of the cortex, and be able to differentiate that against all the other noise that exists within that system. That's something we probably spent six months on last year, a lot of different tweaks around both the configuration of the system and specifics around lensing and a variety of different things that are kind of add-ons to an optical-based system. I wouldn't say those are optimizations of the laser itself. But certainly, there's a lot we've had to learn over the last year, and a lot we will be learning in this upcoming year, as we work on optimizing that hardware further.
That sounds really amazing. So based on this really novel way of being able to see into the brain, to see what's happening within a group of neurons, and then sense that with this sort of optical sensor that you've created, what are you able to do? What are you able to accomplish that we weren't able to accomplish with EEG or EMG?
So what we've been able to accomplish, and I'll tell you a little bit about our COVID experimental design for the first successful demonstration of the technology we've built with humans: the participant, or the user, or me, is seated, and the sensor is just positioned to the side of the person's head, just resting against it. There's a computer monitor a few feet in front of that person, and it's just black. At random intervals, there is an audiovisual output on that monitor, so for example a clip from Wreck-It Ralph or a clip from The Godfather, and sometimes the visual and audio are matched, and sometimes they're not matched. And basically, what we were able to record in that first demonstration is a neural response to an audiovisual stimulus that is contemporaneous with the onset of that stimulus. And when I say contemporaneous, I mean within the millisecond level. Again, with EEG the challenge is that you have to integrate over, you know, a second of data in order to account for all the noise that comes through with that signal, and we really want to be able to get that millisecond-level response out of the brain. So that was what we were able to show with our first demonstration, which is very exciting. From there, in the optimizations we've been making, we're looking at what is the full range of signal that we can get out of the brain with this system, focusing on both speech decoding and on motor cortex decoding as well.
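To illustrate the latency point Julia makes, here is a small, hypothetical Python sketch of single-trial onset detection: z-score a trace against its pre-stimulus baseline and report the first post-stimulus sample that clears a threshold. With a strong, fast signal this can be done per trial at millisecond resolution, whereas a noisy EEG channel typically needs averaging or integration over much longer windows. The trace here is synthetic and the threshold is arbitrary.

```python
# Illustrative sketch: millisecond-scale onset latency on a single synthetic trial.
import numpy as np

fs = 1000                                    # 1 kHz sampling -> 1 ms resolution
t = np.arange(-0.200, 0.500, 1 / fs)         # 200 ms pre-stimulus baseline, 500 ms after
rng = np.random.default_rng(3)

# Low noise plus a response ramping up 40 ms after stimulus onset (t = 0).
trial = rng.normal(0.0, 0.05, t.size)
late = t >= 0.040
trial[late] += 1.0 - np.exp(-(t[late] - 0.040) / 0.020)

def onset_latency_ms(trace, t, z_thresh=5.0):
    """First post-stimulus sample whose z-score vs. the pre-stimulus baseline exceeds z_thresh."""
    baseline = trace[t < 0]
    z = (trace - baseline.mean()) / baseline.std()
    crossings = np.flatnonzero((t >= 0) & (z > z_thresh))
    return round(1000 * t[crossings[0]], 1) if crossings.size else None

print("estimated onset latency:", onset_latency_ms(trial, t), "ms")  # a few ms after 40 ms
```

The contrast with the earlier EEG sketch is the point: here one trial suffices because the simulated signal-to-noise ratio is high enough to threshold directly.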
So here the opportunity is, as noted before, with low latency and relatively high resolution, to decode both motor intent as well as speech intent, without having to move your body or say anything out loud. With that sort of capability, how are we going to translate that ultimately into something that's useful for the user? What are we going to be able to do with that?
Right. So what we're moving into now is doing what you're referencing: how do we make that useful? Very cool that we could do it quickly, but what does that mean? Now that we've kind of checked the box of doing the thing that was the huge scientific blocker and had never been done before non-invasively, what we're demonstrating now is taking some paradigms that have been used in invasive models, with existing ECoG, electrocorticography, arrays, basically something where you're putting a sensor inside someone's head, and taking those kind of gold-standard paradigms for imagined speech decoding and demonstrating those with our system as well. So in this scenario, you have a participant who, and we have a couple of different versions of this, but generally speaking, is prompted to think of a word, and then basically we decode that word out of their brain. And we're specifically instructing them to imagine saying the word, because there is actually a really interesting thing here, which some folks at UCSF have been working on at Eddie Chang's lab; kind of the two big imagined speech decoding centers of excellence would be Eddie Chang's lab at UCSF and Nathan Crone's lab at Johns Hopkins, and Nathan and I have worked together for a number of years now. As some of that research has shown, if you kind of follow this space, there's a really interesting element you can decode, which is basically the individual phoneme pronunciation of a word. So take the word umbrella: when you think of saying the word umbrella out loud, there's a motor representation of your tongue moving to create those individual phonemes of the word umbrella, which is kind of crazy. And that's generally had very good results for us in terms of being able to identify thinking about saying "umbrella" against all the other noise and processing that your brain is doing, generally speaking.
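As a rough illustration of the decoding problem Julia describes, here is a minimal, hypothetical Python sketch of an "imagined word versus rest" classifier: per-channel log-variance features from short multi-channel epochs, a linear discriminant, and cross-validation as a sanity check. The data is synthetic and the features are stand-ins; this is not the Chang or Crone lab pipelines, nor MindX's decoder.

```python
# Illustrative sketch: detect "imagined 'umbrella'" vs. rest epochs with simple
# power-like features and a linear classifier. Synthetic data throughout.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_samples = 60, 16, 300   # e.g. 300 ms epochs at 1 kHz

def simulate_epoch(imagined):
    """Imagined-speech trials get extra power on a few 'speech cortex' channels."""
    x = rng.normal(0, 1.0, (n_channels, n_samples))
    if imagined:
        x[:4] += rng.normal(0, 1.0, (4, n_samples))   # channels 0-3 carry the effect
    return x

y = np.array([0, 1] * (n_trials // 2))                # 0 = rest, 1 = imagined "umbrella"
epochs = np.stack([simulate_epoch(label == 1) for label in y])
X = np.log(epochs.var(axis=2))                        # (trials, channels) feature matrix

clf = LinearDiscriminantAnalysis()
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

Real imagined-speech decoding uses far richer temporal and phoneme-level structure than per-channel variance; the sketch only shows the basic train-and-validate loop on epoch-level features.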
Yes, that makes sense, because you noted that you're looking at this part of the brain up against the temple, which has both the speech area and some motor area. So if you're able to get both of those signals, which are related to each other, on top of each other, it makes sense that, I guess, the signal is stronger and the interpretation easier there.
Yeah. And that's a simplistic way of saying it, because there's still a lot we don't understand about how all of these representations work and how generalizable they are across individuals. So there are, fortunately, some scientific principles we're building on top of from other research that specifically help us highlight areas of speech cortex that we can get these types of signals out of. But we're learning new things every day about how much that varies for different individuals, how much caffeination levels impact that type of signal, a lot of really interesting little nuances in how everyone's brain works individually.
Yeah, that's pretty amazing. And so what, then, are the near-term opportunities? How do you translate the sort of signal you're getting from the brain into something that a piece of software can then use in order to deliver value to the user?
So the second part, outside of hardware, that our team focuses a lot on, I'd say adapting kind of bleeding-edge technology into our solution, is the deep learning models to make sense of everything that we're pulling out of the brain. That's certainly, I'd say, a niche area of expertise, neural signal decoding, and that's something our team has good experience with. And we have great advisors who've been very instrumental in the kind of underlying mathematical models. Krishna Shenoy, for example, from Stanford, is a brilliant individual and has been a very big supporter, both in guiding our approach and in general for the company.

So how do you then take these really amazing capabilities, these mathematical models, and translate them into an opportunity that, as a startup, you can really sink your teeth into from a revenue perspective, a really near-term opportunity where you can create the financial momentum, I guess, that maybe you need in order to ultimately realize the long-term vision?

Yeah, that is a great question. And definitely, I would say, we're a deep tech company, so our kind of path to revenue is a little bit longer than for a general software company where you can put an app on the App Store and you're done. Certainly a little bit more challenging, I would say, along those lines as well. The kind of first step is certainly around pilots with technology companies that create useful things that are already mounted on one's head, and/or with eyeglass manufacturers who also have something useful that is mounted on a person's head. So one challenge for us in getting to market, and certainly something we're working on a lot in 2021, is around that kind of form factor and market fit for having this sensor deployed more broadly. Certainly a stumbling block for a lot of consumer hardware companies is not quite getting that form factor and market fit correct soon enough. So we're playing a very delicate dance, I would say, around iterating on the product and also trying to push as hard as we can on having actual humans in pilot partnerships use that device. As we've talked about a little bit earlier, a line-of-sight use of this device is not general-purpose, broad-scale decoding of every thought and desire that you have, but is to let you put your headphones on or your glasses on, go for a run, and think "volume up," "volume down," "next song," "previous song" as you're going, and never worry about taking your phone out of your pocket again. So that kind of little addition, similar to the CTRL-labs analogy, where you're thinking about what's the easiest next step for a consumer, both lets us realistically validate the technology and also lets us focus on some less sexy but important aspects around manufacturability and scale of the device and that sort of thing.
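The application layer Julia sketches, a small closed vocabulary of decoded commands driving media controls, can be illustrated with a short, hypothetical Python sketch. The class names, labels, and confidence threshold below are invented for illustration; the design point is simply that a consumer pilot can gate actions on decoder confidence so a marginal decode is ignored rather than acted on mid-run.

```python
# Hypothetical sketch: dispatching a small, closed set of decoded commands to media
# controls, acting only when the decoder's confidence clears a threshold.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DecodedCommand:
    label: str          # e.g. "volume_up"
    confidence: float   # decoder's probability for that label

class CommandDispatcher:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold
        self.handlers: Dict[str, Callable[[], None]] = {}

    def register(self, label: str, handler: Callable[[], None]) -> None:
        self.handlers[label] = handler

    def handle(self, cmd: DecodedCommand) -> None:
        # Ignore low-confidence decodes rather than risk a wrong action.
        if cmd.confidence >= self.threshold and cmd.label in self.handlers:
            self.handlers[cmd.label]()

# Wiring it to a (stubbed) media player:
dispatcher = CommandDispatcher()
dispatcher.register("volume_up", lambda: print("volume +1"))
dispatcher.register("next_song", lambda: print("skipping track"))

dispatcher.handle(DecodedCommand("volume_up", 0.93))   # acts
dispatcher.handle(DecodedCommand("next_song", 0.60))   # ignored: below threshold
```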
Yeah, one of the things that I think is perhaps intentional, but perhaps a wonderful coincidence, is that you're trying to interpret through the temple, where I imagine one of the benefits is that the skull is a little bit thinner in this spot, maybe, than in other parts of the head. But it's the same place that we have the arms of our glasses; it's in the exact same place. And so incorporating this type of technology as it evolves and gets physically smaller over time, you talked about the challenges of getting the right form factor, but it's already going to be properly co-located with other stuff that's going to be on our head, notionally, in this emerging area of spatial computing. So you have a great opportunity to not necessarily add extra places of attachment of sensors on our body, using the form factors that have already been developed, and incorporating additional capabilities, which is really exciting.
Yeah, that placement of the brain-computer interface sensor in the arm of a pair of glasses was definitely intentional from day one of the company formation. And certainly, the first paper I think I ever wrote on this topic was in 2014 or 2015 and had the idea that it would be really cool. But definitely augmented reality and virtual reality headsets were nowhere near ready; I think we had the first version of the Oculus with a camera duct-taped to the outside of it, like a helmet, and that was not really, I would say, a very marketable form factor. So certainly, I think there is still a ways to go on the consumer smart glasses path to market. But we have, I think, great opportunities currently in terms of partnerships, in terms of desire from large tech companies in the US and abroad. And so certainly, that as our primary form factor has been intentional from the start.
That's great. And as you look out longer term, where is MindX going? What is the big, long-term vision you have for the company?
Our motivation, fundamentally, is very similar to our vision for MindX, which is a world where technology enables the best in all of us. And that is backed up by the business model and by the purpose of the device itself. So I think in the long term, we would like to be a provider of a wide variety of technology products that are guided by neurotechnology and ethical AI systems, that are designed to solve real problems, and to do so with a lot of user education and a focus on kind of the fundamental aspects of communication, with a vision that I think is a little different from how smartphones have evolved, for example, which is more to distract than to supplement.
Too true, unfortunately. When we talk about these sorts of amazing brain-computer interface technologies with the general public, who are passionate about this sort of technology, often we hear that it is not right around the corner. We see some great demonstrations of what's possible, but it still feels like something that's a ways away. Based on what you're doing at MindX, when do you think we as normal consumers might be able to put on our heads, in a consumer product, the sort of technology that you're creating?
Yeah, so I think there are definitely layers of complexity within brain-computer interface development, and what might not seem to the layperson like a fundamental, transformative improvement may look very different to people who've been on the inside for a number of years. So I'd say, well, Elon Musk's demonstration, for example: yes, there's a pig, and there have been a lot of animal studies with brain-computer interfaces over the last 20 years. But what they were able to demonstrate from the motor decoding perspective, honestly, I was very impressed. And I think that we are making very quick progress on a number of different fronts there, which generally, I think, is a very promising indicator for the space as a whole, including well-funded groups like Kernel and Neuralink. I would say we are still many years away, and it probably is a good thing that we're many years away, from being able to, I'll use air quotes, "read someone's mind." The brain is incredibly complex, and there's so much that we still don't understand, even at the clinical psychology or computational psychology level, about what any of the words that we attach to the biological constructs of the brain actually mean. An area I'm very personally passionate about is this kind of computational psychology approach, looking at mental health, and there's, I think, so much room for a lot of really important research, and the creation of new drugs, devices, and diagnostics, to help us better understand ourselves and the therapies that we create along those lines. And I think, as we uncover a lot of what the brain is actually doing and what it means for us, there will certainly be new innovations that will overshadow the ones that we have now. I will say we're hoping to have a device that gives you a very magical moment, where you can actually control something with your mind, where you can look at a store and ask when it opens just through thought, in the next couple of years. And then beyond that, I think maybe 10 years from now, we'll have some really exciting, kind of sci-fi version of BCI from there.
Amazing. The sort of approach you're talking about, you said, came from a fair amount of DARPA funding, in concert with working with Johns Hopkins, their Applied Physics Lab or the Medicine Technology Innovation Center, that sort of thing. What's the relationship, then, between your company, MindX, and the work that's being done there at the lab?
I was working at Hopkins prior to starting MindX. I would say my career up until now has definitely not been a straight trajectory; it has been just kind of following my guiding passions and doing a bunch of different things to get there at the end of the day. I started a group at Johns Hopkins Medicine called the Technology Innovation Center with a physicist, a radiologist, and another computer scientist, maybe seven years ago, something around there. I wasn't too far out of my own academic training and had been working in a social-good venture group in DC, and basically just had a strong passion to solve big problems, ideally with this great tooling that software engineering and computer science affords you. So our whole activity within that group was to partner with some of the world's leading clinicians, academics, even administrators, people who had great ideas for solving patient care and were very close to the problem, and we would build some software to fix it, whether it was a clinical decision support AI solution, remote seizure detection and monitoring, whatever it may be: build out the MVP, test it, and if it had legs and a positive kind of commercial option, turn it into a startup or a licensing deal, at least to generate alternative revenue streams for the hospital. And I absolutely loved doing that every day, grew a good team, like 30 engineers and designers, and it was an absolutely wonderful experience. And out of that came a few different startups, one of which was MindX. So I really was very fortunate, I think, to operate in this area where there are a lot of really brilliant people who fundamentally are motivated by a desire to improve people's lives and to work really hard to find novel solutions there. And that's certainly no exception for Revolutionizing Prosthetics, which was the specific DARPA program that invested so much money, and which we really want to be able to make real for people. And that's why I would consider MindX really a translation effort of that technology.
That's super interesting. You have an opportunity here, then, to ultimately work with a lab that has received this sort of public funding through DARPA. Companies like Tesla, for example, which has a relationship with a university out of Nova Scotia in Canada on battery research, and Microsoft and Intel and Apple, all these major companies have these relationships with university labs. But normally when you hear about them, and I'm very biased in the sort of information I'm receiving haphazardly in this way, my perception is that working with these sorts of labs is probably a lot easier for a very well-established, well-funded, larger enterprise. What are the challenges of working with a lab as an early-stage startup?
So certainly, I would not say that partnering with a great academic lab is going to solve all of your problems. Academia is a beast of its own; it has its own pros and cons, certainly its own culture, its own priorities, and its own idiosyncrasies. So I would caution startups to be very careful in selecting partnerships, and to be very intentional about what you can and can't get out of those effectively. I think there's a great opportunity to leverage people who really think creatively and equitably about solutions, and then to see if you have a line of sight to the point where the scientific risk has been retired and it is just engineering between the lab and the market. That's kind of the sweet spot for a startup. And I think this is something a lot of academic institutions are definitely positioning themselves towards, in terms of funding and in terms of the way their boards think about the future of the university: investing in translational efforts, giving money to students who can help bring these technologies to bear in a very natural way. I would say I have had great experiences with it, and certainly on licensing out of academic organizations we could do an entire podcast on the horrible pitfalls and the opportunities there. But I'd say there's definitely a lot that can be learned out of those labs, and I think there's a great opportunity for startups to partner with them as well.
So while pitfalls exist, there are definitely some big opportunities here as well. I imagine that one of the challenges is the mindset: the academic timeline and academic mindset sometimes don't fit well with a startup that is very short on resources, and often one of those resources is time, time to market, right, time to actually get something done. How do you overcome some of those disconnects, maybe, in the pace at which you need to move as a startup versus the pace at which the research or the lab is moving?
Yeah, that's a great question. I don't have a generalized solution. I'd say relationship building is incredibly central to the success of almost any endeavor, and can really help move something along where all the money in the world wouldn't necessarily push it any further than it's going to go. A lot of research labs are very independent; they have the autonomy to do whatever it is that they think is most important to them, when they want to. And certainly, there are a lot of challenges in academia around kind of the grant-writing machine, and we won't cover all of that. But I will say that building good relationships with people, which is who you're working with at the end of the day, really goes a long way in helping make those kinds of shared goals happen on time.
As you noted, you've been working with the labs there at Johns Hopkins for a number of years now on a number of different projects, and MindX is the one that you're bringing to market right now. But there are some prior experiences as well, where you had an opportunity to work not only on a bunch of projects that stayed inside the lab, but to take more of an operational role with some of the projects that left the lab and found their way toward market. Can you describe maybe one of those experiences?
Sure. So certainly there are a lot of really fantastic groups I've worked with over the years, and a lot of very cool technology and applications we've built. I would say the one that has been closest to my heart certainly was an effort that was also a collaboration with Apple, called EpiWatch, which was for remote seizure detection and epilepsy condition monitoring. I'm certainly not the first person to say that there is something really fundamentally fulfilling about software engineering and entrepreneurship, in that you really have the ability to make an impact on people's lives in a way that I think is really hard in other industries. And so certainly it was a great experience to have the support of both Apple and Johns Hopkins to focus in on: we have this new Apple Watch product, we want it to be awesome, do you have any awesome ideas? And there were some researchers at Hopkins who had been focusing on this type of technology for a long time and said, absolutely, we really think there are a few algorithms we could run on some sort of sensor suite that looks like this. And that is really what became EpiWatch. We basically went through the process of working with Apple on ResearchKit, which is basically a way to opt in patients for IRB-approved research on the Apple Watch, and which really makes it much easier to do science and to put solutions into people's hands much more quickly. So it was fantastic to meet with patients and their families. Epilepsy is a really, really harsh condition. Not only is it physically stressful, it is very emotionally stressful, for both the individual with epilepsy and for their family, who worry that at any moment a seizure might happen that could result in a really terrible outcome. So being able to create something that both grants peace of mind as well as does a lot more of the monitoring and alerting side of things around a seizure event was a fantastic experience.
And so through that experience with EpiWatch, which sounds really incredible, what are the sorts of lessons that you learned in bringing that technology to market as a product outside of the lab, that you now get to apply to MindX?
Yes. I'd say, one, health technology is really challenging, really, really challenging for startups, not just for the people who want to build it. There is a huge regulatory burden and a long road to market from idea to MVP to actually being allowed to sell it to someone, and that is a non-trivial barrier to overcome. The biggest kind of takeaway from EpiWatch was that there is this really interesting kind of gray area in the wearable monitoring space. Apple, in collaboration with the FDA, basically made a new pathway for regulatory approval, which I think we got done in something like 90 days, which is unheard of for this type of device. And really, the FDA, I mean, it can be stressful and frustrating to a company that only has so much money and so much time, but it's really there to protect the consumer, and I think it is very motivated to find good safety limits around a device that does what the consumer needs, as quickly as is reasonably possible. So in this case, when you're doing monitoring and fitness and health education, for example, there's so much that can be done in terms of both patient care and general health improvement that doesn't require that kind of Class III device approval. So that's something I've definitely taken into MindX, which is: what's the overlap between a wellness device, a sensor, and a consumer-ready technology? And how can you, very quickly and with a small amount of capital, get to the point where you can demonstrate usefulness most quickly, and then allow yourself to diversify into maybe a more specific healthcare device from there?
Excellent. Let's wrap up with a few lightning round questions. What commonly held belief about spatial computing do you disagree with?
I will say, in general, I'm definitely an optimist, and also, having done a number of different startups at this point, I tend to be more supportive than a detractor. Generally speaking, I would say it's pretty obvious that there's a lot left to do before we're going to have some really useful, functional consumer smart glasses. But I do think that continual progress has been demonstrated, and I think there's still a lot of good momentum behind the efforts, and the investment is there. So I think that will continue to have positive outcomes. If I could answer that a little differently, it would be that I really hate the question, or the desire that everyone has, for there to be a killer app for AR or VR. I hate that question so much, because I think it weirdly focuses on an outcome that can only be seen in retrospect, and ignores what is normally an ecosystem-sized progression towards something really incredible. So in a lot of the conversations I have with investors, with other entrepreneurs, with people in the spatial computing space, the focus is on the really incredible engineering efforts that are building meaningful value and creating a more robust ecosystem that people can develop in, and also, more generally speaking, on partnerships within a space that is new and emerging, where you can have those serendipitous interactions that can create something really meaningful out of that ecosystem. I think that's where most other technologies have really thrived; we look at early Silicon Valley and that sort of thing, with that kind of exchange of ideas. So I don't think there will be one person with the true idea for exactly how this can be solved; it's really going to be a continual and focused effort on solving the big problems we all know exist.
It's a great one. I have to admit, I also agree that this notion of the killer app is one that works really well, maybe, for journalists who want to simplify the fact that we haven't gotten there yet. But the reality is that the other devices in our lives that tend to be personal, computing oriented, productivity oriented, when they're generalist tools, there's not one killer app. There's not one thing we use the smartphone for today, for example. There might be killer capabilities, aspects of the technology that enable it to have a place in our lives. And I think the notion around AR glasses, smart glasses, is that they are hands free, that they are kind of out of your way so you can still physically manipulate the physical world, and they allow you to be heads up, eyes on the world, and deliver information that's relevant to the moment, relevant to the context of the situation. In my mind, those are killer capabilities. We haven't quite fully realized them yet, certainly not from a consumer-grade perspective, but those core capabilities will enable the set of useful apps that collectively will be killer, that will make these glasses a part of everyday life for the majority of people. But a single killer app? I agree with you on that one.
I've certainly given a different answer to that to investors, for whom the killer app absolutely might exist.
There it is, there it is: MindX is it. Besides the one you're building, what tool or service do you wish existed in spatial computing?
So MindX is definitely creating a brain computer interface sensor, and we're also creating software to make that sensor useful in the real world. Our software, which we call MindOS, takes contextually relevant information in from the world around the wearer of the device and uses it to inform our neural decoder. In general, you're thinking a lot of things as you're walking around being yourself, conscious and subconscious things; there's a lot of processing constantly happening in your brain. A challenge for anyone who wants to do general purpose, always-on neural decoding is to parse out the specific signals that are most relevant to a command or an interaction with a computing system, or to control your IoT devices, for example. One of our core innovations on the software side, one element of which we've called look and think, is to replace point and click, basically, by using signals from the eyes to get a better sense of your intention around interacting with real world elements. We have a few different configurations of the system that we've built out over the last few years, and in the long term I think we'll retire a lot of those sensors; eye tracking, and potentially even the neural sensor, will become less important as we have better sensing modalities and more robust training data sets to understand how you react to and interact with the world around you. So a lot of our day to day is: here's a bunch of people wearing this headset and using this brain computer interface, and we're going to have them do a few different tasks, have them look at their light and think "on" and it turns on, have them look at their IoT-enabled blinds and think "up" and "down," those sorts of things. As we make those more complicated, we're understanding how you react when you're asking a question as opposed to giving a command, that sort of thing. So a lot of our hopes for the rest of the industry, the things we don't want to have to make ourselves, are focused largely around computer vision and the capabilities there. There's definitely still, and it's something that has a lot of investment behind it because there are large companies doing it, but there's definitely still some room for new technologies for creating high quality labeled data sets easily, and for the developer tooling around that. That's something we've actually started building out a fair amount of ourselves, but we're always looking for potential partners who are focused on doing that, and ideally doing it from an ethical perspective. I think there's a lot of opportunity there as well.
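[Editor's note: to make the "look and think" idea above concrete, here is a minimal, hypothetical sketch of how gaze-based target selection and a decoded neural intent might be combined into a single control step. This is not MindX's MindOS code; every class and function name here (GazeTracker, NeuralDecoder, IoTDevice, look_and_think_step) is an illustrative assumption.]

```python
# Illustrative sketch only -- not MindX's MindOS. All names are hypothetical.
from __future__ import annotations

import time
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class IoTDevice:
    name: str

    def send(self, command: str) -> None:
        # Stand-in for a real smart-home API call (light, blinds, etc.).
        print(f"{self.name} -> {command}")


class GazeTracker:
    """Hypothetical eye-tracking source on an AR headset."""

    def current_target(self, devices: List[IoTDevice]) -> Optional[IoTDevice]:
        # A real system would cast the gaze ray into the tracked scene and
        # return whichever registered device the wearer is looking at.
        return devices[0] if devices else None


class NeuralDecoder:
    """Hypothetical neural decoder returning (intent label, confidence)."""

    def decode_intent(self) -> Tuple[str, float]:
        # A real system would classify the latest window of brain-sensor data.
        return ("on", 0.92)


def look_and_think_step(tracker: GazeTracker,
                        decoder: NeuralDecoder,
                        devices: List[IoTDevice],
                        threshold: float = 0.8) -> None:
    """Gaze picks the target; a confident decoded intent triggers the command."""
    target = tracker.current_target(devices)
    intent, confidence = decoder.decode_intent()
    if target is not None and confidence >= threshold:
        target.send(intent)  # e.g., look at the light and think "on"


if __name__ == "__main__":
    devices = [IoTDevice("desk_lamp"), IoTDevice("window_blinds")]
    tracker, decoder = GazeTracker(), NeuralDecoder()
    for _ in range(3):      # a few iterations of the toy loop
        look_and_think_step(tracker, decoder, devices)
        time.sleep(0.1)
```

The point of the sketch is the design choice Julia describes: the gaze target narrows the command space before the neural decoder's output is acted on, so the always-on decoder only has to confirm intent for the object in view rather than decode an open-ended command.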
Yeah, great. Well, what book have you read recently that you found to be deeply insightful or profound?
So I'm a huge book nerd. The books I read are mostly nonfiction: philosophy, economics, and poetry, primarily. I mean, I read articles, and I'm constantly keeping up with engineering journals and scientific journals, that sort of thing. But I'm continually fascinated by history, and by people who have devoted their lives either to understanding the world or to creating systems of thought that we can use to better understand ourselves and our place in it. So I'm reading a famous Russian book called The Death of Ivan Ilyich, which is actually a work of fiction but a very interesting book; Antisocial, which is about online extremists and, you know, the situation we're currently facing; and Capital and Ideology by Thomas Piketty.
I tend to read a few at a time.
Some light reading, I see.
Yeah, I'm trying to think. I wish there were something more relevant to our conversation that I'd read more recently, particularly this past year. You know, a lot of the philosophers provide a lot of comfort, I think, just by giving you some sense of scope: every time has its challenges, and they're not all that new. And a lot of the core elements of how to be a good person, how to interact effectively with others, are kind of the same as they were. So I find that encouraging.
Yeah, I am reminded, I think 2020 was definitely another one of those collective moments, not that 2020 is a single moment. But this notion that so much of what we experience is not unique, that history often repeats itself. My brother-in-law is one of those people who likes to claim that things were so much better when we were younger. And I hear this notion and I struggle with it on a couple of dimensions. One, of course, our memory is generally poor; we tend to remember the things that are most recent or that stood out the most. And when we're children, we're pretty well sheltered, so the things that stand out the most are often happy moments, or maybe tragically not-so-happy moments, but it's definitely a very constrained perspective on the world, one that our parents tried to create for us in many ways. So to suggest that things were so much better when we were young is an indication that we have a very small perspective on what was actually happening when we were younger, certainly from a world perspective, or on a broader scale. But I also love reading history, and this is one of those things: consistently, we as humanity struggle with many of the things we're struggling with right now. It's been true in American society, and it's been true collectively across many other societies, not just American society. The late 1800s and early 1900s were a really tumultuous time, too, with a lot of similarities to the sorts of things we've been struggling with right now. And we survived. And ideally, we're smart enough to go back and read about some of the challenges we faced before, the decisions that were made and why, and the outcomes, because we get the opportunity to see the outcomes. Maybe that can inform us to make better decisions this time around, to help us overcome the challenges we face and collectively build a better experience for all of humanity as we move forward.
Yeah, I would say, and I don't have a solution for this yet, but it's something I've always wanted to make somehow, and something we touch on a little bit with our own implementation of AI for making small tasks easier: removing the mundane interactions that you do with your computing system to record something in relation to another thing, and then presenting that back to you in a way that is intuitive and easy to access. Like, I hate taking notes, and I'm taking notes all the time, and I can never find them. So we built a more intuitive note taking app that just works with a pair of AR smart glasses. Along those lines, I feel like there has to be a way to transmit knowledge between generations at a larger scale more effectively than we currently do. I remember I read The Count of Monte Cristo when I was a kid, maybe 10 or 11; it was my favorite book. There's a part in the beginning where the main character meets a priest in jail, and the priest tells him there are twelve Greek philosophers, and if you read their works you'll know everything you'll ever need to know, so you should memorize them. I remember thinking, okay, cool, great, so now I know. That was not exactly the case, but there's an element there, which is that there are some fundamental truths, some things I wish someone had told me earlier when I was experiencing x, that it would be great to be able to translate across generations more effectively, or across larger cultural boundaries. So we've been playing with this a little bit; we have some unstructured time every week with the team to think about things we don't have any idea how to build from an engineering perspective yet. What could that look like? If someone is stressed out and in a particular circumstance, what is the information that would be most helpful to them in that scenario? And what are all the other considerations around that, culturally and socially? So I'm very excited to see where neurotechnology-enabled AI could potentially take us in the future. I'm optimistic also.
Yeah. How do we, as humans, download our ancestors' knowledge and wisdom into our brains in less than the 20, 25, or 50 to 100 years it takes us now to really, truly get our bearings and a deep appreciation of how to, as you noted before, interact with others effectively and interact with ourselves effectively, to the betterment of everybody? Last one here. If you could sit down and have coffee with your 25-year-old self, what advice would you share?
So, there's a lot of advice, certainly. I think pretty much everyone, from conversations I've had throughout my life, experiences some form of imposter syndrome. And certainly the outside world will tell you that that feeling is more valid or less valid depending on who you are and what the experiences are that have gotten you there. But I think we all experience it, particularly if you are in any way an empathetic and reasonable human being; you experience that kind of fear around whether or not you actually know what you're doing, all the time. And certainly I experienced that a lot at 25, which wasn't all that long ago for me, to be honest. The thing I would tell myself then, and the thing I would also say to younger entrepreneurs, is to trust yourself and to really give it a shot, 100 percent, whatever it is you're doing. Failure is not the end by any stretch of the imagination, and it doesn't matter whether you run a Fortune 500 company before you're 30; that might not even really adhere to your own core values about what you truly care about in the world. The sooner you can let go of that and do the thing you're good at, the better. The second thing would be to be very careful about who you partner with. It's going to be a hard road, and there are people who will give you things, maybe, to get there, and it can feel like you don't have any choice in the moment. Sometimes you're young and you haven't made these decisions before, but the same ways that you've judged and understood people your whole life prior to that are still important in making those decisions. It's never a good idea to partner with the wrong person, because you will drag that with you for a very long time. So definitely pick good people, work hard, and things will work out.
Yeah, that's a hard one. That was a really hard one to get right: this notion of effectively judging others and their intentions, at any age, but especially when you're just entering the working world. Yeah, that's a good one. Any closing thoughts you'd like to share?
Yeah, I guess in closing, I'll also reflect on the fact that it's Inauguration Day today. Regardless of where people sit on the political spectrum, I think there is a universe of opportunity in front of us: to understand each other better, to understand ourselves better in relation to the world, and to build technology solutions that enable us to do those things effectively. So I'm very excited about the future and what MindX will do as part of that. And we're always hiring and looking for new people who are excited about the same things we are. We'd love to hear from anyone who's interested in collaborating, or who even just wants to have a conversation about starting their own neurotech company, or any other company, or who might want to collaborate in the field. We always welcome making new friends.
Where can they go to learn more about you, connect with you, and find out about your efforts at MindX?
Sure, you can go to our website at mindx.io. There you can also find the rest of our social media accounts, and you can interact with me on Twitter, which is linked through there as well. But also feel free to email me, Julia, at MindX.
Julia, thanks so much for this conversation. That's it.
Thank you so much, Jason. It's always lovely to talk with you.
Before you go, I want to tell you about the next episode. In it, I speak with Karl Guttag. Karl is an author and speaker who writes in depth about the display and optics technologies in AR glasses at his blog, kguttag.com. In this conversation, we dig into a number of topics, including his take on HoloLens 2 and laser scanning displays more generally, the latest in combiner optics, and how the various display technologies map to the various combiner optics technologies. Karl reminds us why AR is so hard, and he describes why he thinks some popular ideas about how we'll use AR glasses are impractical. I think you'll really enjoy the conversation. Please subscribe to the podcast so you don't miss this and other great episodes. Until next time.