The AR Show: Neil Sarkar (AdHawk Microsystems) on Tracking the Eyes Without Cameras and Creating a Fitness Tracker for the Brain
10:21PM Jul 18, 2022
Speakers:
Jason McDowall
Neil Sarkar
Keywords:
eye tracking
eye
company
technology
glasses
chip
people
display
vr
tracking
big
afm
devices
mems
ar
data
market
customers
mems technology
detect
Welcome to the AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Neil Sarkar. Neil is the CEO and co-founder of AdHawk Microsystems, a company creating the first camera-free eye tracking solution that offers unprecedented speed, data quality, and power efficiency. Prior to AdHawk, Neil was a co-founder at ICSPI, which develops scanning probe instruments on a CMOS MEMS technology platform. They are commercializing the world's first single-chip atomic force microscope. Previously, Neil attended the University of Waterloo, where he earned a bachelor's, master's, and doctorate in electrical and computer engineering. In this conversation, we discuss microelectromechanical devices, or MEMS technology, and its applications. Neil shares his path to founding AdHawk and his experience bringing AdHawk's innovation to market.
In the early days, we were just a bunch of, you know, grad students that were trying to sell a chip. And we thought of ourselves as a fabless semiconductor company. And we figured these big OEMs, they'll be able to take our chip and build the system around it to do eye tracking. Turns out that was very naive. And you know, at the time, we were frustrated by the fact that like, hey, we've solved the problem for you, why can't you guys use it? It turns out that if you want to sell any sort of reference design, you really need a full stack solution. You can't just be the chip, it has to be the embedded system. It can't just be the embedded system, it has to be applications that show how accurate you are and show what you can do with it. It can't just be those applications, it has to be some games in Unity that allow you to experience how much better things get with eye tracking. And it doesn't even stop there, you've got to have cloud-based analytics so that you can actually improve your algorithms in the field, and also extract these things about your state of mind.
We go on to discuss how AdHawk is trying to positively impact people's lives, what's special about their approach, and how you can get your hands on their tech in the near future. As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. That's theARshow.com. Let's dive in.
Sometimes when you're working at the cutting edge of technology, the demos we put together can be a little, I don't know, janky and fragile. Did you ever run into a situation like that?
Yeah, you're bringing up some past trauma here. You know, when the demos don't work, and the stakes are actually really high, that's when you remember things most vividly. So I remember when we were still on the campus at the University of Waterloo, and we thought, you know, we should spin off this company. I was doing a lot of traveling with my CFO at the time, Sandro. And we had to try to demonstrate eye tracking to folks in California and elsewhere. But the only time we'd ever gotten it to work in the past was in our lab. So the prototype that we traveled with was these glasses with breadboard wire hanging off of the front of them, and they would suspend this unpackaged, really fragile micro device in front of the wearer's eye. For the longest time, I would have to continuously bend that wire to get it just at the right angle. But the scary thing was whenever we would have to go through airport security or customs and someone would open up the box and start, you know, poking and prodding the device with their fingers. And, you know, hours would go by where I was just convinced this system was not going to work. They touched the chip. It's over. But somehow that one device managed to survive through dozens of meetings, and there were lots of close calls. But we were able to convince folks that this was the right way to do eye tracking. So yes, demo days are always stressful.
What was the core technology there that you implemented for eye tracking?
Yeah, so we were the first to come up with a system that was an optical way of tracking eye movements, but without any cameras. So we had this little chip that would scan a beam of light across your eye. And then anytime the light would reflect off of the eye onto another chip, a photodetector chip, we would actually record these pulses and track one of these glints on your cornea. Between then and now the tech has gotten quite a bit more sophisticated. Now we're projecting patterns and detecting pulses with many detectors per eye, but back then it was a single emitter, single detector setup.
And what was your background in that? Was it MEMS that you were ultimately using for the, sort of, fancy mirror system?
Yeah, so my background was in, I would say, you know, ultra-precise mechanisms on chip. So my grad school research was focused on a single-chip atomic force microscope. So I was used to building things that could measure really tiny quantities and that could actuate very, very precisely. And those technologies were used in the eye tracker to move a mirror. Actually, in the old days, we used to use a Fresnel zone plate, which would both collimate the light and redirect it, and we could scan it over a pretty wide range of angles. And yes, it was based on CMOS MEMS technology. So MEMS technology, but where you start off with a CMOS wafer.
Can you break that down? What is MEMS? Let's just start there, maybe. And then what's unique about CMOS MEMS?
Yeah. So MEMS stands for microelectromechanical systems. It's basically a set of concepts borrowed from the electronics semiconductor industry, so things like etching and patterning and, you know, making really, really small features. But instead of controlling the flow of electrons using transistors, these devices can move and sense physical quantities in their environment.
They can move and sense physical quantities in an environment. Physical quantities of what?
You can think of, you know, for example, an accelerometer will detect how much something's accelerating, a gyroscope will detect how much something is spinning, a microphone can detect sound waves, pressure sensors, and now there are actually MEMS speakers. There are a couple of companies, USound and xMEMS, that are using the moving parts on a MEMS chip to generate sound. There are also things like the scanning MEMS displays that are used in the near-eye display of the HoloLens. There are LIDAR systems. So you can detect lots of physical quantities and create movement on chip. But really, the applications that tend to shine are the ones that take advantage of the scaling laws. So it turns out, these are really, really tiny moving parts, you kind of have to look at them through a microscope, and certain quantities become a lot easier to detect when you're that small. So for example, a gyroscope: you know, your mass is so tiny at that scale, and the stiffness doesn't scale down cubically, it scales down linearly. So you get very, very high resonant frequencies, and very high quality factors. So you can actually have this thing shaking in one dimension, and as you spin it, it will pick up the Coriolis force and tell you exactly what the rate of spinning was. So this is getting a little into the weeds. But yeah, when you're designing a MEMS system, you really want to leverage the favorable physics of scaling to try and detect things.
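To make that scaling argument concrete, here is a minimal sketch in Python; the numbers are illustrative, not any real device's. For a simple spring-mass resonator, mass shrinks with the cube of the linear dimension while flexure stiffness shrinks only linearly, so the resonant frequency climbs as the part gets smaller.

```python
import math

def resonant_frequency_hz(mass_kg, stiffness_n_per_m):
    """Natural frequency of a simple spring-mass resonator: f = (1/2*pi) * sqrt(k/m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Illustrative macro-scale resonator (numbers are made up for the example).
m0 = 1e-3          # 1 g proof mass
k0 = 100.0         # 100 N/m spring stiffness

for scale in [1, 10, 100, 1000]:           # shrink the linear dimension by this factor
    m = m0 / scale**3                      # mass ~ L^3 (volume)
    k = k0 / scale                         # stiffness ~ L for a flexure of fixed aspect ratio
    f = resonant_frequency_hz(m, k)
    print(f"1/{scale:>4} scale: f ~ {f/1e3:,.1f} kHz")

# Frequency grows in proportion to the shrink factor: tiny MEMS parts ring at very
# high frequencies, which is part of why they make good gyroscopes and scanners.
```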
Fascinating. So smaller is easier and better in this case. Yes, in many cases. In many cases. You'd mentioned that one of the things you'd worked on in your graduate work was on microscopes. And so what is the advantage? Is there a scale advantage here as well? And what exactly were you trying to do within the world of microscopes?
I was working on the first single-chip atomic force microscope. So, the microscopes that we're all used to seeing in high school, where you have lenses and you, you know, take a sample and kind of magnify it, that only works up until a certain length scale, which is limited by the wavelength of light. So if what you're wanting to look at is smaller than the wavelength of light, significantly smaller, you can't use optics to look at it. So there's a number of instruments that try to break that wavelength barrier. For instance, there are scanning electron microscopes, where you accelerate an electron, and the electron's wavelength ends up being much shorter than the wavelengths of light, and so you can see things that are smaller that way. And then atomic force microscopes are based on a very simple principle. It's kind of like a record player: you have a very sharp tip, and you bring it very, very close to a sample. And the forces between the atoms on the sample and the atoms on the tip are actually pretty well understood; there's a regime where there are attractive forces and a regime where there are repulsive forces. And if you can scan this tip across the sample while measuring those forces and kind of reacting to them, you can build a topographic map of the surface. And that's what an atomic force microscope does. But it turns out that with, you know, the AFMs that you see at universities, because you're trying to keep something stable to within, you know, nanometer precision, and you're doing it with a big desktop instrument, the big masses in that instrument will respond to, like, you know, the elevators and other building vibrations. And the mechanical path from the tip you're using to measure your sample to the sample itself, if it's big, then when the room temperature changes, everything will drift. The other frustrating thing for me was that, you know, as somebody who was interested in nanotechnology, if I ever wanted to use an AFM, I had to book it like a week in advance, and, you know, my professor would have to pay by the hour for me to use it. So that was kind of the motivation, to try to make AFMs so cheap and so easy to use, so frictionless, that everybody could have an AFM on their desktop. So we made this chip that had all of the moving components and sensors that are required for the, you know, $100,000 AFMs, in a small amount of space on a die that, you know, costs less than 10 cents. But, of course, that was the academic way of thinking about it. It's a little bit naive. Once we realized how much work is involved in marketing and sales and, you know, having a supply chain and hiring engineers, you're not going to sell anything for 10 cents, but that was the concept.
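As a rough illustration of the principle described above, and not ICSPI's or AdHawk's actual control code, here is a toy constant-force scan in Python: a proportional feedback loop keeps a crude tip-sample contact force at a setpoint while the tip rasters across a made-up surface, and the feedback signal itself becomes the height map.

```python
import math

# Toy "sample": a surface with a small step and a bump, heights in nanometres.
def surface_height_nm(x_nm):
    step = 2.0 if x_nm > 50 else 0.0
    bump = 3.0 * math.exp(-((x_nm - 25) / 5.0) ** 2)
    return step + bump

def force_nN(tip_height_nm, x_nm, k_contact=0.5):
    """Crude contact-force model: repulsive force grows as the tip presses into the surface."""
    overlap = surface_height_nm(x_nm) - tip_height_nm
    return k_contact * overlap if overlap > 0 else 0.0

setpoint_nN = 0.5      # hold this contact force
gain = 2.0             # proportional feedback gain (nm of tip motion per nN of error)
tip_z = 5.0            # start the tip well above the surface
height_map = []

for i in range(101):                      # raster one 100 nm line in 1 nm steps
    x = float(i)
    for _ in range(50):                   # let the feedback loop settle at each pixel
        error = force_nN(tip_z, x) - setpoint_nN
        tip_z += gain * error             # too much force lifts the tip, too little lowers it
    height_map.append(tip_z)              # the feedback signal is the topography estimate

print("estimated step height:", round(height_map[-1] - height_map[0], 2), "nm")
```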
So you reduced that to practice. You actually built the chip that does the thing that you had imagined, as well?
Yeah, yeah. So we got some really cool images with atomic step-height resolution, and in fact, that company spun out and is still selling AFMs on the market today. They're more expensive than 10 cents now. But I think the frictionless ease of use and the very fast nature of having something tiny that can scan quickly is being welcomed by the market.
How is it that you connected the two? So you have this kind of broad appreciation, perhaps, for what MEMS technology could do, but your focus there was on creating this single-chip atomic force microscope. How does that microscope relate to eye tracking? What's the connective tissue there?
It's kind of a loose connection, I would say. I mean, there's a broad range of MEMS devices. But you know, the interesting thing to note about the human eye is that it's among the fastest-moving muscles in the body. Some people can move their eyes at 900 degrees per second when they're making saccades, which are the rapid ballistic movements that you make when you're changing what you're fixating on. But it's also super precise. You can fixate on things, and even when you're fixating on a very tiny object in the distance, your eyes are making sub-0.1-degree adjustments, microsaccades and intersaccadic drift. So this is, you know, the front end of your brain; in terms of the highest bandwidth input into the brain, it's the visual system. And the actuators are very precise and fast. So naturally, you think if you want to capture the dynamics of eye movement, you need a chip that has very fast and precise moving parts. So I'd say, you know, that's kind of the connection. The types of actuators we used for the AFM are very similar to the types of actuators we use for the eye tracker. The feedback mechanisms that we use to measure position and to move quickly are also similar. The techniques we use to be robust to temperature changes, like if you're wearing AR glasses and you step outside into the cold winter up north here, you don't want the eye tracker to drift on you. So we've developed a lot of kind of background IP for atomic-resolution imaging that turns out to be very helpful for eye tracking.
Fascinating. And when you think about this unique approach, you know, that you're not using a camera for eye tracking: what's wrong with using a camera for eye tracking? What's the benefit of taking this different sort of approach?
Yeah, so at the time when we, you know, really started to turn our attention to eye tracking, it was clear that if you were going to take, you know, 100 pictures of the eye per second, and for every picture you had to stream a whole bunch of data off of the camera module to some memory storage, and then have a very powerful processor go through each frame, find the outline of the pupil, find each of the corneal glints, and, you know, try to come up with a model of the eye and figure out where someone's looking, the amount of computation and the amount of electrical power back then, and still today, is just not practical for battery-powered lightweight devices. But, you know, scientists have been doing eye tracking since the 60s and uncovered a huge number of very important connections between the eye and the brain using camera-based eye tracking. But usually, the high quality eye trackers are plugged into a supercomputer, which is plugged into the wall; you stick your head on a chin rest, and you do a 15-minute, you know, specific eye-movement-directed type of study. The science is quite advanced, and there's a rich body of literature there. So I think camera-based eye tracking technology was a great solution for clinical and, you know, sort of lab-based studies. But if you want to make a battery-powered pair of glasses, where you can't really tell that there's any tech in the glasses, but it has to be capturing data of that kind of quality so that you can get insight into, you know, cognitive function, you can't really do that with cameras.
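A back-of-the-envelope calculation, with assumed, illustrative camera parameters rather than any particular headset's, shows why streaming frames like this is hard on a battery-powered device.

```python
# Hypothetical camera-based eye tracker: two eye cameras, modest resolution.
fps = 100                   # frames per second per eye
width, height = 400, 400    # pixels per frame (a small eye camera, assumed)
bytes_per_pixel = 1         # 8-bit grayscale
eyes = 2

bytes_per_second = fps * width * height * bytes_per_pixel * eyes
print(f"raw pixel traffic: {bytes_per_second / 1e6:.0f} MB/s "
      f"({bytes_per_second * 8 / 1e6:.0f} Mbit/s)")

# Every one of those bytes has to be moved to memory and touched by a processor,
# even though only the pixels near the pupil edge and the corneal glints carry
# information about where the eye is pointing.
```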
Let's go into the benefits, ultimately, of having eye tracking at all. You noted that there might be some insight into what's happening behind the eyes by tracking the eyes. What's the general perspective, the general reason why we're even trying to jam eye tracking into wearable devices?
Yeah, that's a great question. And I think, you know, the answer has kind of evolved. So in the early days, eye tracking was used as an assistive technology for people who are locked in, so patients with ALS, for example, who no longer have the ability to type or use voice-based control; they can use their eyes to control a PC. Actually, AdHawk was funded by a Canadian assistive technology program to develop glasses that you can wear while you're looking at an iPad and use to, you know, launch apps and type with your eyes. And other companies have had this type of technology around for a while. And then, you know, during the, I would say, early 2010s to 2016 initial sort of hype cycle around AR and VR, people thought that eye tracking would be a great way to improve the resolution of displays, and to enable human-computer interaction, again, without having to have controllers or keyboards when you're in VR. These days, things have gotten a lot more specific. So, you know, the folks at Meta recently published a white paper and released a video where they talk about wanting to be able to pass the visual Turing test in VR. So that's what they're calling it, where, you know, you shouldn't be able to distinguish a VR experience from real life if the displays are good enough. And that's all about a number of things. The human eye has, you know, 60 pixels per degree of visual acuity in the fovea, which is the highest-resolution part of your retina. You know, if you're trying to create a display that has that kind of resolution, it's very hard to render at high quality. But luckily, humans only really see three degrees of their field of view at that resolution. So you kind of need foveated rendering if you want to have the kind of resolution that convinces somebody that something's real. There's also this problem of the vergence-accommodation conflict, which is that in VR, everything is at a fixed focal plane. Now, things can appear closer or further, but when you're looking at further and closer things, your eyes are not accommodating; they're just doing this vergence movement, where you kind of go from being cross-eyed to staring into the distance. And that can make people uncomfortable. So varifocal displays need to know where you're looking, in order to adjust the focal plane at which they're displaying information. And then there's the fact that to recreate the dynamic range of light that we see in the real world, you really need a display with 20,000 nits, I believe. We're nowhere close to that, because we're splashing light across the eyebox, with very little of it, something like 10%, going into the actual pupil. So you need to track where someone's pupil is so you can, you know, direct all the light into it. And that way, you get things that are more efficient, as well as more accurate, so you don't have as much chromatic aberration, you don't have these pupil swim effects. Every single one of these things requires eye tracking first; then you can actually, you know, implement the display pipeline on top of it. And then there's what we started talking about, which is the eye-brain connection, and how you can measure people's eye movements throughout the day and understand their cognitive load and anxiety, and whether they're in a flow state or not, and how easily they're distracted.
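A quick calculation makes the foveated rendering point concrete. The 60 pixels per degree and 3-degree fovea figures come from the description above; the field of view and peripheral resolution below are assumed for illustration only.

```python
# Simplified pixel-budget comparison for a single eye.
ppd = 60                  # pixels per degree needed to match foveal acuity
fov_h, fov_v = 110, 100   # assumed headset field of view in degrees (illustrative)
fovea_deg = 3             # region the eye actually resolves at full acuity
peripheral_ppd = 15       # assumed acceptable peripheral resolution

full_res_pixels = (ppd * fov_h) * (ppd * fov_v)
foveal_pixels = (ppd * fovea_deg) ** 2
peripheral_pixels = (peripheral_ppd * fov_h) * (peripheral_ppd * fov_v)

print(f"uniform 60 ppd:     {full_res_pixels / 1e6:6.1f} Mpixels per eye")
print(f"foveated (approx.): {(foveal_pixels + peripheral_pixels) / 1e6:6.1f} Mpixels per eye")

# Rendering the whole field at retinal resolution is tens of megapixels per eye per
# frame; knowing where the fovea is pointed lets the renderer spend that budget only
# where it is actually seen, which is why foveated rendering depends on fast eye tracking.
```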
All of that stuff, I think, is new, because there never has been a wearable device that can capture eye tracking data with high enough data quality, all day, every day, the same way that you wear a smartwatch to measure your heart rate and figure out that, hey, heart rate variability is actually a more important metric for cardiovascular health than your resting heart rate. It turns out that cognitive load variability is a better indicator of your brain health than just measuring your performance on an IQ test or something.
So I want high variability? That's a sign of better cognitive health than low variability?
Yeah, so, you know, we've spoken with a few neurologists, and, you know, this is not really a broadly adopted term yet, because how do you even capture cognitive load variability unless you have a wearable, like a fitness tracker for brain health? But the theory is that if you're able to quickly engage more of your brain when you're problem solving, into, you know, a specific task, and then quickly sort of relax and get into a meditative state when you need to, that kind of ability to be nimble with how you use your cognitive function is an indicator of
Fascinating. Cognitive agility. So you talked about this kind of difference between cameras versus your approach. The camera approach, while it has been used for a long time and has yielded a lot of results, requires a lot of processing and compute that needs to happen alongside it, right? As you're capturing all of those images, you're doing pixel-level tracking of the pupil in that camera image, and it takes a lot of power. You've found an approach that is substantially more efficient and can actually be done on a wearable device, but you're not calling it a camera. But you did say there's light, there's a mirror, there are sensors. So what's the distinction? What are you actually doing and tracking here? And how is it more efficient?
With a camera, right, you have, say, a megapixel, and the pixels that you care about are only the ones at the periphery of the pupil and where the corneal glints are. Everything else, you're shuttling a lot of bits, you know, and there are energy implications to having to shuttle a lot of bits to DRAM that, you know, has to store and then transfer those bits to the CPU. In our system, you only get a pulse when the beam crosses over the boundary between the iris and the pupil, or when the beam crosses over an angle at which the cornea produces a specular reflection that the photodiode detects. So it's kind of like compressing the data at the front end, so you never waste information and power on parts of the eyebox that have no useful information. And then the other part of the architecture that makes things simple and elegant is that we don't have to do analog-to-digital conversion on any of these signals. So it's really just ones and zeros streaming from the glasses to a lightweight microcontroller. And then all the algorithms that we run, they fit in, you know, an Arm Cortex microcontroller. And everything that you need to calculate gaze, vergence, eyeball center, pupil size, whether someone blinked, what was the velocity of their saccade: all of that can be done in a very lightweight microcontroller.
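Here is a minimal sketch of the general idea of scanned-beam, event-style eye tracking, not AdHawk's actual algorithm, and with made-up scan parameters: because the mirror's angle is known at every instant, a photodetector pulse timestamp directly implies the angle at which the beam crossed a feature such as a corneal glint.

```python
import math

SCAN_FREQ_HZ = 200        # assumed resonant scan rate of the MEMS mirror (illustrative)
SCAN_AMPLITUDE_DEG = 20   # assumed optical scan half-angle (illustrative)

def beam_angle_deg(t_s):
    """Angle of the scanned beam at time t for a sinusoidal resonant scan."""
    return SCAN_AMPLITUDE_DEG * math.sin(2 * math.pi * SCAN_FREQ_HZ * t_s)

def glint_angle_from_pulse(t_pulse_s):
    """A photodetector pulse at time t implies the beam was at this angle
    when it hit the specular reflection on the cornea."""
    return beam_angle_deg(t_pulse_s)

# Example: two pulses per scan period as the beam sweeps across the glint going
# one way and then the other (timestamps are invented for the illustration).
pulse_times = [0.000412, 0.002088]
angles = [glint_angle_from_pulse(t) for t in pulse_times]
glint_angle = sum(angles) / len(angles)   # average out timing jitter
print(f"estimated glint angle: {glint_angle:+.2f} deg")

# Only a handful of timestamped events per scan period ever leave the sensor,
# instead of a megapixel image, which is what keeps the compute small enough
# for a lightweight microcontroller.
```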
How big is that microcontroller, like, physically?
Physically, I think it's around five millimeters, five to seven millimeters on a side. But it has a lot of pins, because we require a lot of timers and some resources to, you know, directly connect to each of the detectors. So it's a fine-pitch BGA part, I believe.
Got it. And so you now have this ability to capture only the most essential bits of information from the eye, which is when it's moving: you know that it's moving, you know where it's moving. You don't have to carry a bunch of unused data and then throw it all away in the process. And in doing that, you're able to ingest all that data and do the sort of gaze detection and other things. Or is there another level of interpretation you're able to take it to? Can you get to the point where you're able to predict where the eye is going to move next, based on where it has been moving?
Yeah, yeah, that's actually something that we have some IP on that issued a couple of years ago. So if we want to, we can put an analog-to-digital converter on one of the photodetectors and actually image your eye, for things like iris authentication, but that is not required for measuring eye movement. And it turns out that when we're making saccades, as we're actually jumping our eye from one fixation point to another, our eyes follow a very predictable velocity profile. And they call this the main sequence in the eye tracking community. So it turns out that based on how big the saccade is that you're making, the peak velocity that you will get to is related to the size of the saccade, and you can sort of plot that relationship. And so once you get to the middle of the saccade, which is where your velocity is highest, I can tell you where the end of your saccade will be, because I know which direction you're moving in and what velocity you got to.
So the moment you see it decelerating, you know where it's going to end?
Yeah, so something like 20 milliseconds before your next fixation, we know where you're going to end up. Wow, that's pretty cool. Yeah, it's interesting, because you can think about its implications in gaming and in advertising. You can, like, have something scary that you just can't look away from, or an ad that just never leaves your field of view.
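The main sequence relationship mentioned above is well documented in the eye movement literature: a saccade's peak velocity grows with its amplitude in a roughly saturating way. Below is a toy sketch of how observing the peak mid-flight lets you extrapolate the landing point; the constants are typical-order values chosen for illustration, not AdHawk's calibrated or patented model.

```python
import math

# Main-sequence model: peak velocity saturates with amplitude.
#   v_peak = VMAX * (1 - exp(-amplitude / C))
# Constants below are rough, typical-order values, not calibrated to any device.
VMAX_DEG_S = 500.0
C_DEG = 14.0

def amplitude_from_peak_velocity(v_peak_deg_s):
    """Invert the main-sequence relation to estimate total saccade amplitude."""
    return -C_DEG * math.log(1.0 - v_peak_deg_s / VMAX_DEG_S)

def predict_endpoint(start_deg, direction_sign, v_peak_deg_s):
    """Once peak velocity is observed (roughly mid-saccade), predict where the eye lands."""
    amplitude = amplitude_from_peak_velocity(v_peak_deg_s)
    return start_deg + direction_sign * amplitude

# Example: a rightward saccade starting at gaze angle 2 deg whose velocity peaks at 300 deg/s.
landing = predict_endpoint(start_deg=2.0, direction_sign=+1, v_peak_deg_s=300.0)
print(f"predicted landing point: {landing:.1f} deg")

# Because the peak occurs near the middle of the saccade, the endpoint can be
# estimated tens of milliseconds before the eye actually stops moving.
```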
And in a very dystopian future, I can imagine that everywhere I look it's the same ad and I can't escape it. You kind of went from this world of atomic force microscopes to this world of eye tracking, with the connective tissue being core MEMS and the sensors, and how you're basically putting all this stuff on a chip, effectively; the stuff is reduced down to these teeny tiny sizes, in as few and as small chips as you can make them, with the goal of ultimately delivering maximum efficiency for eye tracking within the context of wearables. And you started this while still at the university, and you pulled this out of the university. Was that the case here for the company?
Correct, yeah, it's a spinoff from the University of Waterloo. There are a lot of them, because we have this policy at the university called Policy 73, where for any invention that comes out of the university, the university doesn't claim any ownership of the intellectual property; it's whoever the researcher was that invented it that gets to own it. So that's kind of created this little, you know, hotbed of entrepreneurship in Waterloo.
Wow. First of all, very catchy title. But secondly, that's very progressive. They're really encouraging entrepreneurs to come to University of Waterloo to then do their innovation explorations and then spin off.
Yeah, I mean, to me, it certainly is a selling point. I think the quality of the undergraduate education is probably the highest selling point for folks to come to the school; I don't know of a lot of people who came just because they wanted to start up a company. But once you get here, you sort of soak in the culture. And if you were ever going to do something entrepreneurial, I think, you know, the forces on campus kind of push you in that direction.
That's very cool. And now that you have spun out, how do you describe the vision for the company,
You know, we've always wanted to measure ourselves in terms of some sort of impact factor. And in academia, you sometimes measure impact as, like, you know, you publish a paper, how many citations do you get? And the co-founders of AdHawk wanted to try to measure our impact in a different way, by trying to figure out, how is it that the technology that we're shipping is improving people's lives? So I would say our mission is to try to get this tech in front of, you know, billions of eyeballs, as many eyeballs as we can, because this is the most practical and information-rich neural interface. We think that it has the potential to improve brain health and wellness in the human population. So I would say that's what really motivated us to leave campus, because we thought we had a chance to have a real impact on people's lives.
Yeah, just think about that number: 15 billion eyeballs that you potentially have the ability to positively influence, hopefully not with ads that you cannot escape, but by improving our visual experience through AR glasses and improving our mental health, or our ability to communicate more generally. As you engage with the broader market, and the various technical teams, you know, what can your approach achieve in terms of the key metrics that customers care about, relative to some of these other devices? What are the numbers that people really care about in terms of eye tracking?
Yeah, so power consumption is always top of mind. You know, you don't want to have a big battery in lightweight smart glasses. Our power consumption is 10x lower than anything you can do with competing technologies, and we've actually demonstrated that we can be 100x lower; that was kind of an aggressive goal that we had set in one of our recent projects. Bandwidth is important, because you definitely want to be able to capture eye movement at least at the frame rate of your display. But it's even more important because if you want to capture things like velocity, if you want to time blinks down to the millisecond so you can estimate fatigue, you need to have a high sampling rate. But then latency is important too. So you need to be able to report, you know, that the eye has moved to a certain position within milliseconds of it doing so; otherwise, if you're trying to adjust the display, people will notice, like, hey, there's a lag here. And then form factor: we're always trying to shrink our chips down to the point where you can't see them inside glasses frames, and you have to do some pretty aggressive miniaturization to get to those small form factors.
How are you doing in that regard, on the small form factor front?
Yeah, we've recently reduced the volume of our scanner module by a factor of 50%. So we've got some new versions of our MindLink glasses where you can't tell that there's any tech in them. And we're going to migrate to some more advanced manufacturing techniques that should simultaneously reduce our form factor even further, down to 2 and 1.5 millimeters and beyond, and also make it easier to manufacture. So we're kind of at the beginning of the S-curve, I'd like to think, where, you know, our technology is new, and it has a lot of room for improvements and cost reductions as a function of volume. Whereas, you know, some of the incumbent technologies have enjoyed this kind of Moore's Law progression for many, many years. But I think there's a lot more room at the bottom for us these days.
What is the path from here to being part of a commercially available AR rig or VR rig?
Our company isn't really focused on building the AR and VR systems themselves; we're more excited about getting our product integrated into other people's products. And so the pathway to get from, you know, a reference design like what we sell to something that's shipping is pretty, you know, commonly experienced by all kinds of component vendors in the space, right. So initially you get, you know, an NRE, where a customer will validate your specifications; you get a design win; then you participate in, you know, prototype builds and engineering builds, and eventually production builds. So, you know, we're in various stages of that pipeline with various big customers. And the timelines, you know, extend out to 2024, 2025, to really be in more of these products. And in the meantime, we'll be shipping our own smart glasses, you know, a fitness tracker for brain health, and hopefully get into the smart glasses of some other customers as well.
Talk about that, your own product. So this is a pair of glasses that you can outfit with your own prescription, if you choose, correct? And it incorporates your eye tracking technology.
Yeah, yeah. So we've actually shipped these glasses, called the MindLink glasses, globally to, you know, kind of the most discerning users of eye tracking technology. So neurologists, behavioral researchers, psychologists: they'll buy the glasses; currently they have to tether them to a cell phone or to a laptop, and they can capture this high quality eye tracking data. The next version of these glasses is not going to have a tether; it's going to have a battery in it. It takes us about a gram of battery per hour of operation. So our current glasses weigh about 30 grams; these ones will probably weigh a little over 40. And they'll have eye tracking, a heart rate sensor, an IMU, a number of sensors. You know, you shouldn't even notice that the thing is on and that it's working; you should just put on your prescription glasses in the morning. But every time you take a peek at your MindLink app on your cell phone, it'll produce a whole bunch of analytics on what your state of mind has been. What is your, you know... I guess wearables these days have a readiness score, a body battery, all kinds of different metrics, and we'll have some that are particularly relevant for things like, you know, knowledge workers that want to improve their focus, reduce their anxiety, and be more efficient.
The wired version is available today; when is the wireless version going to be available?
So the plan is to do the preorder campaign near the end of this year and beginning of next year. So, you know, late 2022 to early 2023. Awesome. We did the same thing with the wired MindLinks; it took us about four to six months to start shipping.
Very cool. When you were talking with prospective customers, you know, and you're working through their typical component vendor sort of process, and you hear hesitation in their voices, what for you is the most frustrating part or aspect of the hesitation? What is it that they're hesitating about that just drives you crazy?
You know, it's changed over the years. In the early days, we were just a bunch of, you know, grad students that were trying to sell a chip. And we thought of ourselves as a fabless semiconductor company. And we figured these big OEMs, they'll be able to take our chip and build the system around it to do eye tracking. Turns out that was very naive. And you know, at the time, we were frustrated by the fact that like, hey, we've solved the problem for you, why can't you guys use it? It turns out that if you want to sell any sort of reference design, you really need a full stack solution. You can't just be the chip, it has to be the embedded system. It can't just be the embedded system, it has to be applications that show how accurate you are and show what you can do with it. It can't just be those applications, it has to be some games in Unity that allow you to experience how much better things get with eye tracking. And it doesn't even stop there, you've got to have cloud-based analytics so that you can actually improve your algorithms in the field, and also extract these things about your state of mind. So the first thing that was kind of an objection from customers is that we needed a full stack solution. And our company looks very, very different today than it did back then. Out of our co-founders, there were maybe one or two that were very proficient at software, and now a third of our company is software, which I think is one of the key enablers and kind of secret weapons we have: we can do these things efficiently in production-grade code, so that when a customer has a feature request, we can robustly integrate it, which we weren't able to do back in the day. Some of the other, I would say, not really frustrating objections, but, you know, issues, are regarding ID design, industrial design. Like, you know, the product designers, they have very aggressive specifications on what will and what won't fit in glasses. And so we really have to push our semiconductor supply chain to be really aggressive with design rules to get things to fit in the tiniest of spaces.
That's quite challenging, working with the industrial designers as well.
Yeah, I mean, they're responsible for some beautiful products that just, like, you know, look and feel great and are delightful to use. But the requirements to get there are quite difficult.
Sometimes those hard constraints are the ones that create some of the most amazing innovations, right? When you have the reason to go innovate in a particular direction, it can be to the benefit of everybody. Absolutely. You talked about this idea of having some insight into brain health, in this idea of maybe cognitive agility and variability, flexibility, that sort of thing. There's also this notion, at least as I've seen some others who try to project forward how we're going to be leveraging eye tracking, that we can begin to guess our intention, or to guess our emotional states, based on what's happening in the eyes in that moment. Is that sort of capability possible with the way that you're tracking the eyes? And if so, what do you think the risks are in doing so?
Yeah, so I think there's a large number of publications on extracting cognitive load from pupillometry. There are eye movement patterns associated with anxiety and depression. There are eye movement patterns associated with arousal. There's a lot of credible research out there that suggests that you can certainly extract things about a person's state of mind by looking at their eyes. And there are actually a few papers which highlight some of the concerns around that. You know, for example, if you're an advertiser, and you want to optimize what you're actually putting in front of someone's eyes, and you actually know what interests them and what doesn't, there's a lot you can do to sort of, I think, invade people's privacy this way. And so I think, you know, the first rule when it comes to eye tracking data and these types of analytics is that if it's your mind, it's your business; you really shouldn't be sharing this type of data with anyone other than a medical practitioner. So for example, you know, children that have absence seizures, a type of epilepsy: the only way they know how many seizures they had is if they tell their parents and their parents keep a seizure diary, and then when you go visit the neurologist, they try to guess how many seizures you had, and that actually determines what kind of dosage of medication you're going to have, which actually affects the side effects that you might have. And so wearing a lightweight pair of glasses all day that detects these types of seizures: there's a company called Eysz, which is a spinoff from Berkeley, which has algorithms that use eye tracking to detect, and ultimately they want to be able to predict, the occurrence of seizures. I mean, this has life-changing potential for those families. Same with, you know, folks that are on medication for depression or anxiety; you can certainly detect anxiety through a combination of heart rate and eye movements, the durations of fixations, and some of the other metrics that we capture. So, you know, while there's potential to do good, there's also, like with any great technology, I think, potential to do malicious things. And I think privacy is something that's always top of mind.
Is this ultimately something that's up to the vendor of the glasses, the hardware, or the software creators themselves who may be able to access some elements of the eye tracking data? Is it their responsibility to make sure that they do no evil?
Well, you know, there are certain gates where you can actually restrict access to the detailed level of information. So in our case, you know, we have a public API that allows customers to subscribe to things like your gaze direction and pupil size. But then with our own private APIs, we access our raw pulse data, and we're able to do kind of higher quality data post-processing to extract things like saccadic intrusions and smooth pursuit movements; those are actually what you look for when you've had a concussion. So if you've ever seen someone in a UFC fight get hit really hard in the head, you know, the referee is going to make them look at a moving object, and they're going to see if their eyes are tracking it smoothly. That's very qualitative; you can't really, you know, give people a score on, like, the likelihood that they've had a concussion. But with our raw data, you can. So I think, you know, unlocking that raw data to anybody but a medical practitioner, under the sort of permissions of the user, is something that we can control. And ultimately, the vendors of the actual VR and AR headsets should have those types of privacy restrictions as well.
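As an illustration of the kind of gating described here, and not AdHawk's actual API, one way to structure it is to expose derived, low-risk signals to any application while keeping raw data behind an explicit permission tied to the wearer and a verified clinician. All names and values below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    gaze_x_deg: float
    gaze_y_deg: float
    pupil_diameter_mm: float

class EyeTrackerService:
    """Hypothetical permission-gated wrapper around an eye tracking data source."""

    def __init__(self):
        self._raw_access_granted = False   # locked by default

    def grant_raw_access(self, wearer_consented: bool, clinician_verified: bool):
        # Raw, clinically sensitive data is only unlocked when the wearer has
        # consented and a verified medical practitioner is the recipient.
        self._raw_access_granted = wearer_consented and clinician_verified

    def public_gaze(self) -> GazeSample:
        # Derived signals any app may subscribe to (placeholder values).
        return GazeSample(gaze_x_deg=1.2, gaze_y_deg=-0.4, pupil_diameter_mm=3.1)

    def raw_pulse_stream(self):
        # Raw pulse data enables clinical-grade analysis (e.g. saccadic intrusions),
        # so it stays locked unless access has been explicitly granted.
        if not self._raw_access_granted:
            raise PermissionError("raw eye tracking data requires wearer consent "
                                  "and a verified clinician")
        return [0.000412, 0.002088, 0.005411]   # placeholder timestamps

tracker = EyeTrackerService()
print(tracker.public_gaze())                    # allowed for any application
try:
    tracker.raw_pulse_stream()                  # blocked by default
except PermissionError as e:
    print("blocked:", e)
```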
With great power comes great responsibility.
Yeah, I think this is a common sort of topic of conversation at the eye tracking conferences, because there have been suggestions that just based on the way you move your eyes, you can tell who's wearing the glasses. So you can use, like, eye movement dynamics to authenticate somebody to access the hardware. We haven't seen that in practice yet. But you know, that's the future we're headed toward, where the way you move your eyes will reveal a lot about who you are.
Yeah, I think as we think about who the major players are going to be in this next great wave of technology, because there's always a bit of turnover through each one of these waves, and to the extent that augmented reality represents that next great wave, it'll be interesting for me to see how this aspect of it plays out. Apple and Meta and Snap are the major players, at least, jockeying for that position; Microsoft seems to be giving up a little bit, pulling up on the horse a little bit, it seems; Google is actively investing once again in this direction, in this space. And how they treat data generally, and their business model, and their willingness to be restrictive, self-restrictive, with how they use that data, and how they share that data for the benefit of their own coffers, is, I think, the area around augmented reality that I find most concerning. Because we have so much more insight, and eye tracking is a huge part of that insight. Not only do we know where you are, we know what direction you're looking (you don't need eye tracking for that, you can just use head tracking for that), but we can also see your reactions to the things that you're looking at. And the sorts of things you're describing in terms of cognitive health and arousal and these other sorts of mental states is potentially very intrusive, and potentially incredibly helpful. For the right sort of user of that data, to your benefit, incredibly helpful, but also potentially very intrusive. And it's not a world that I'm excited to live in, a world in which the advertisers, for example, have this sort of insight into how I feel about the world.
No, I hear you. I mean, I think, you know, the potential for a technology to fully immerse your visual field in a really wide and high resolution display, really control your experience, shut out the rest of the world, know exactly where your attention is directed, and also know how that makes you feel, so that you can close the loop, so to speak, in real time, so you can actually optimize an experience for a person's pleasure or fear, is something that we're only going to see a couple of years from now. I don't think there's ever been a product experience that in real time kind of reacts to your state of mind, which is both exciting and concerning. And then when it comes to, you know, the big OEMs, I think they've learned this lesson multiple times. For some of the names that you mentioned, and others, it's been made pretty clear that the entire business model rests on trust. And if you break trust, the consequences are much harder to come back from than whatever benefit you might have captured from breaking that trust, or from making that mistake, or from, you know, leaking that data. So I think, you know, I'm kind of an optimist. I think a lot of these companies are now taking privacy a lot more seriously; they realize that their business model relies on trust. And especially with eye tracking, I mean, this is one of the important questions. So we'll have to see how they manage to, you know, anonymize and encrypt any sort of eye tracking data that's going to be in the cloud, so that only you and your medical practitioners have access to it. I mean, there's HIPAA compliance; the medical community has been dealing with these types of system architectures to maintain privacy, and they have standards, right. So I think it's certainly not something that has to be invented from scratch.
Yeah, that's a great point. Let's shift a little bit. You've now been working on this company, on this technology, and exploring its applications, and for at least part of that time, the opportunities in the VR and AR markets. How has your perspective about the VR or AR market, or the vendors, or the potential, changed the most since you started the company back in 2016?
Yeah, in the 2016 timeframe, I was kind of trying to level-set my expectations based on how excited everybody was at AWE in 2016. Like, we're just on the verge. And I remember that every year since then, it was, you know, this is it, it's going to get big next year. So I think the first thing that kind of hit me was that a lot of these products that were supposed to ship in such a short timeline, the timelines were extended. We went through this crazy hype cycle, and then, you know, some of people's expectations were knocked down, and then there was a bit of a trough of disillusionment. I think my perspective now, though, is that from what I can see inside these big OEMs that are spending tens of billions of dollars a year, you know, relentlessly pushing forward displays, waveguides, eye tracking, and all the other sort of compute that's required for this technology, I would say I'm finally at a point where I think that, you know, with the Quest shipping 10 million units last year, people are starting to realize some value in the technology. So I think my pessimism has certainly been replaced by optimism for the more broad deployment and adoption of the technology. And also, I think, you know, the nature of the technology used to be that, like, this is for fun, to play video games, to be in an experience, or to do a 360 tour of an apartment. I think as the hardware gets lighter, as the displays get better, as the experiences get bigger, as the barrier to entry gets lower, I think you can start envisioning more productive scenarios in the metaverse, like conference room settings, or collaborating on 3D models, you know, CAD models and stuff like that.
Yeah, it definitely feels like we went through that hype cycle, and we've kind of been going down-ish, right? As you just noted, there have been all of these delays, and we've just started again on another batch of delays: Apple's going to push things out further, their VR with video pass-through device, and we heard from Meta, Facebook, whatever we want to call them these days, about a lot of reduction in the investment that they were making, specifically on the augmented reality side of things. But it does feel like from an adoption perspective, we're beginning to figure it out. Like, we're beginning to really see the real high-value points in enterprise, and beginning to really imagine some consumer use cases and how they can be valuable. It does feel like we're getting to the point where it's good and useful, without all the hype.
Yeah, I certainly think so. And I think I get a different perspective from working with some of these big customers on, you know, AR and VR projects than maybe most people get from what's happening on Wall Street and what's in the news, right. So I don't think that there's any sign, even in the presence of the market conditions of today, of any of these companies, at least some of the ones we've been working with, slowing down their investment into the technology they're developing. There's still, you know, money pouring in at an unprecedented pace, and technology is being developed by, you know, brilliant people at all of these organizations. So it's strange: you know, I really don't think that the pace of development of this technology and the commitment these companies have to it are affected as much as you might think by these macro factors.
Yeah, I agree with that. From inside the R&D teams, you just keep churning away; it's the market that's being overly reactive on the upside and the downside, the high points and the low points. Yeah, I agree. And as somebody who is working really deep in the weeds of deep tech, brand-new sensor technology in this case, what are the unique challenges of growing a business, and securing the sort of investment funding that you need, while running this sort of deep tech company like AdHawk?
Some of the challenges are also the most rewarding things. So, I mean, on the one hand, the product life cycles for, you know, the design wins that companies like us get involved in are quite lengthy. It used to be the case that you would release a cell phone every three or four years, but then eventually, when adoption sort of peaked, you know, companies like HTC, I think they were selling 14 different new models of cell phones a year. And so the supply chain, you know, the folks that make our chips, the folks that make our scanner modules, they definitely want to see volume. We are structured for volume; you know, we manufacture parts on eight-inch wafers, and we can crank out lots of them. And so I think one of the issues with being an innovative company working on futuristic technology is to secure supply chains, and have everyone be patient until the actual market takes off, which obviously served everyone well that was involved in the early days of the cell phone, and it will similarly do so, I think, in this market. And then, you know, another one of the challenges is that sometimes we're working on stuff that nobody's done before. For example, you know, there's this new emerging field in optics, I would call it, you know, five years old, or maybe a little older, of metasurfaces: you know, flat lenses that work by controlling the phase of light when it hits the surface, using nanostructures that are designed using recently developed tools. And so, you know, we have a program where we're developing metasurfaces that would go inside our scanner modules. And, you know, there really aren't that many suppliers out there that can manufacture these things; there really aren't that many tools for this, like the lens design tools, like Zemax, that you can use if you're making a lens for a VR headset. So we've had to become experts and kind of liaise with the folks that invented this technology years ago at Caltech. And so if you really want to be bleeding edge, you've got to take some of these risks. But then when they actually work, it's surreal to see. So we're pretty excited about some of the progress we're making on this more advanced R&D stuff.
How do you convince investors to go along for the ride, when it's such a long road?
So we're lucky to have some strategic investors on our board whose roadmaps are well aligned with, you know, the adoption of this type of technology. I mean, they're pouring billions into their own metaverse hardware, but they're not necessarily going to release it until maybe Apple releases theirs, so they can be fast followers in some cases; in other cases, you know, their ambitions are on, you know, longer sorts of timeline horizons. So I would say that, you know, they kind of knew what they were getting into, on the strategic investor side. And then on the more traditional investor base, I think, you know, we've just been fortunate to have a very supportive set of board members that are kind of walking with us through each of those setbacks and design wins, or, I guess, you know, NRE contract wins. And, you know, they've just been patient, because they realize that this is not a sausage factory, right? It's going to take time for advanced technology like this to get to scale.
As you look out over the next 12 to 18 months, what concerns you the most?
Yeah, that's a good question. I haven't really seen the effects of, you know, the kind of turmoil that we're living in work their way into our business, other than some supply chain issues. So if I'm looking at this, you know, selfishly from the perspective of AdHawk Microsystems, I would say that, you know, we're concerned that schedules are being pushed back, that conflicts are potentially jeopardizing our access to people and technology, and that inflation might reduce the likelihood that someone's going to buy a really expensive toy. So there are some market conditions that certainly, you know, concern us. But I would say that, you know, the biggest concern is probably timeline and our inability to control the timeline. So really, we're at the mercy of some of these big OEMs and their product roadmaps, when they're going to actually release products, and we kind of have to stay engaged, do the work, and just be prepared for them to say, okay, we're going to crank the dial now, you know, millions of devices. So that's kind of an uncertain state to be in, but I think it comes with the nature of the business.
Yeah, let's wrap with a few lightning round questions here. Sure. What commonly held belief about AR, VR, or spatial computing do you disagree with?
When you talk to the enthusiasts, right, they are all convinced that we're going to be spending big chunks of our day in AR and VR. And when you talk to the investment community, they'll tell you things like, well, this is a $0 trillion market, so, you know, it's not like a rich ecosystem of, you know, technology providers yet. Ultimately, I think if you're an investor, you should be trying to invest in the $0 trillion market so that when it becomes a $1 trillion market, you made some money, right. But I think one of the commonly held beliefs is that people are going to use VR and AR as a novelty, you know, maybe spend an hour or two a day in there, and the discomfort and the power envelopes and the stigma are going to prevent any sort of more meaningful, deep adoption. I disagree with that. I actually think that, from what I've seen in terms of where the display technology, the power efficiency, you know, eye tracking, and all the other sensor sets are headed in the next couple of years, I would much rather have a VR headset or AR glasses that have, you know, two 4K displays as my dual monitors in the glasses, and I can just kind of turn them on and off if I'm on a flight, than have to always be in front of, like, multiple monitors at a big desk in order to get any work done. So I think productivity certainly is going to be one of the focuses, which maybe is not the most contrarian thing to say, but I think it's something that I've had disagreements about.
Besides the one you're building, what tool or service do you wish existed in the AR market?
Yeah, so a couple of things that I wish I could do in AR actually require the tool that we're building, but that's not where it ends. One of them is, you know, not having to travel as much; that would be great, and it would probably also be good for my carbon footprint. Lots of these big meetings that we have really occur in a room that has fluorescent lighting, that doesn't have a huge dynamic range, and that doesn't have a lot of very rapidly moving parts to it, right? So, you know, I think this is one of the examples of a scene which could be rendered in such a way that it looks super realistic. But of course, you know, if you're sitting around a conference room with a bunch of people, you really want to be able to make eye contact, you really want to be able to feel presence, and of course, you want the visual experience to be super convincing. So I'm hoping that's one of the first experiences that actually is done well in VR, and I'm looking forward to that. And then another one, which also requires eye tracking, is coaching and improving in athletics. So it turns out that, you know, if you have an eye tracker on while you're playing soccer, or while you're trying to play tennis, or while you're trying to even play piano, you can tell the difference between somebody who is actually very skilled versus someone who's a novice by looking at their movement patterns, and things like reaction time, and what part of the scenery they're looking at. So for example, with soccer, you don't want to be staring at the ball; you actually want to be staring at your opponent's center of mass, so you can predict which way they're going to go next. And I think, you know, the combination of motion sensors, eye tracking, and fitness trackers can be used to really help amateur athletes like myself figure out small tweaks that they can use to get some gains.
Yeah, I love the fitness tracking, the fitness performance coaching aspect. What book have you read recently that you found to be deeply insightful or profound?
I recently read Talking to Strangers; that was an interesting book. I think I'm in the middle of Jocko Willink's book, but I've had to put it down because of travel a few times and things getting super busy.
I'll have to check out that book. Remind me, I remember hearing him on Tim Ferriss' podcast at one point.
Yeah, it's about leadership: Leadership Strategy and Tactics: Field Manual.
If you could sit down and have coffee with your 25 year old self, what advice would you share with 25 year old Neil?
Yeah, so 25-year-old Neil didn't go to class very often, even when the quality of the materials and the relevance of things were great. And most of the time, you know, the excuse was that I can pick up the textbook and cram right before the exam. So that's kind of something that I would say: if you're in an academic program and you don't actually take advantage of it, there will be very few opportunities where you have that kind of time and the camaraderie of a whole bunch of other people learning the same stuff, to kind of do that all over again. I guess the second part of the advice I would give to myself is there's no reason to rush off and begin your career and sort of put everything on a timeline. In fact, one of the things that I think I enjoyed the most, and should not have had any doubts about but did, was to stay in school. So when I say stay in school, I'm not talking to, you know, people in high school or elementary school. I'm talking about people who are in a university that has laboratory facilities, that has, you know, an entrepreneurial campus vibe, and that has professors that are subject matter experts. You don't really often have an opportunity to spend time thinking about startup ideas and using the resources of, you know, these billion-dollar facilities to kind of fail fast. You know, like, maybe you try something, you take it to some folks, and they tell you this didn't work; if you tried to do it out of your garage, it would take a lot longer, it'd be a lot more expensive, and it'd be a lot more heartbreaking when you fail. So staying in school a little bit longer than maybe you or your peers expected you to, if you're there for the right reasons, because you love learning, you love teaching, and you want to try new stuff out, I think is totally a reasonable thing to do.
That's great advice. Any closing thoughts you'd like to share?
I think, you know, having been through a number of successes and failures at a deep tech startup company, one of the thoughts that is probably kind of cliche from these motivational posters, about having to do the work and be persistent, takes on a new meaning when your entire life and sort of career, and that of your friends and those of your employees, is riding on a few contracts actually coming to fruition, or a customer designing you into a product. And I guess the thought I would leave you with is that, you know, we've failed a number of times, and there have been times when there's been despair. But I find that, like, the folks that are on our team that are the most optimistic: I used to think that they're just optimistic for the sake of being optimistic, but I think being optimistic actually changes your mindset and sets you up to actually put in the work even when, you know, others are feeling down, and it actually is kind of self-reinforcing. So I would say that having an optimistic state of mind, and recognizing that the world today is a lot better than it was, you know, 20 years ago, which was a lot better than it was 100 years ago, and it just continues to get better; I think it's important to remain optimistic about the future and about the exciting technology we're working on.
What a beautiful perspective; I love that. Where can people go to learn more about you and your work at AdHawk Microsystems?
I'm not really big on social media, so I'm not on Twitter or Facebook. But, you know, I have a LinkedIn profile; you can just look me up, Neil Sarkar. AdHawk Microsystems, however, does have a presence. You can visit our website, www.adhawk.io, that's AdHawk spelled A-D-H-A-W-K, dot io, and there's a whole bunch of literature on there about our products. And we also have a LinkedIn page. So please visit us.
Neil, thanks very much for this conversation.
Thanks, Jason, it was a pleasure.
Before you go, I want to tell you about the next episode, in which I speak with Jon Rodriguez, co-founder and CEO of Preamble, a company on a mission to provide ethical guardrails for AI systems by creating an AI-as-a-service with a focus on ethics and safety. Previously, Jon was a co-founder of Vergence Labs, which was Snap's first acquisition and became the foundation for Spectacles. He went on to be the project lead for the first version of Spectacles and the most recent version with a display. In this conversation, we talk about Jon's background in AR hardware, his perspective on the risks and challenges of creating and engaging with AI systems, and the path he's taking with Preamble. I think you'll really enjoy the conversation. Please follow or subscribe to the podcast so you don't miss this or other great episodes. Until next time.