The AR Show: Adam Davis (Amalgamated Vision) on Focusing on What Matters with Wearable Displays for Healthcare and Enterprise
3:41AM Feb 23, 2021
Speakers:
Jason McDowall
Adam Davis
Keywords:
image
people
stereoscopic
eyewear
spatial computing
technology
problem
pancake lens
field
laser beam
display
create
optical
thought
laser light
eye
head
describe
company
stereo
Welcome to The AR Show, where I dive deep into augmented reality with a focus on the technology and users of smart glasses, and the people behind them. I'm your host, Jason McDowall.
Today's conversation is with Dr. Adam Davis. Adam is the founder and CEO of Amalgamated Vision, the creator of wearable displays optimized for use as a reference display in healthcare and enterprise. Adam spent a career as a practicing physician focused on neuroradiology, and regularly explored and pushed the boundaries of using 3D imagery to better understand each patient's physiology. Most recently, he was a clinical associate professor and the director of the image processing lab at New York University's Medical Center. There he specialized in medical image post-processing, volume rendering, data visualization, and image-based surgical guidance. He's also done product development for Siemens Healthineers and Olea Medical. It was from this deep appreciation for 3D medical imaging that Adam began developing a new type of wearable display highly suitable to medical environments.
In this conversation, we dig into the technology and the unique perspective that is driving its development. The solution itself: the glasses lie on your cheekbones, just below the primary visual field. It uses a laser beam scanner with a novel pancake optic to project an image directly onto the retina when you look down. We start the conversation with some background on medical imaging and the impact that stereoscopy, that's seeing images in 3D, can have in medicine.
There's a large body of evidence that stereoscopic imaging improves eye-hand coordination and improves judgment for visual tasks. This has been well studied in multiple industries. Most familiar is in medicine: if you show stereoscopic images through an endoscope, naive trainees have a more rapid understanding and learning of a technique than people who are shown monoscopic images. So I think it has a very broad application and a very deep impact on more than healthcare, but on all of society, all tasks, all industries.
As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. Let's dive in.
Adam, you have a background as a neuroradiologist. First of all, what is that, exactly? And how did that ultimately lead you to augmented reality?
First of all, thank you, Jason, for having me on the show. I appreciate that. Well, I didn't start as a neuroradiologist, I started as a neurosurgeon; I was in training as a neurosurgeon at New York University Medical Center. And I then left that to enter interventional neuroradiology, which is the treatment of diseases of the brain, the spine, the face, the head and neck, through minimally invasive methods, usually percutaneous, going through the skin with a needle, or more often going through the blood vessels,
putting a catheter, a plastic tube, through the blood vessels to reach an area of the brain or the spine, much the same way that a cardiologist uses a catheter to go to the heart to do a cardiac catheterization. And as a neurosurgeon, I saw that there were people treating the same diseases by being minimally invasive, and I was really attracted to that. And it was done under fluoroscopic guidance, angiography, where you do a series of X-rays and it gives you a roadmap of the blood vessels: you inject contrast dye through the blood vessels, take a series of pictures, and it shows you where you are. And I was really attracted to that, because if you could treat a disease minimally invasively by going through the bloodstream instead of opening up the head, and have the same efficacious results, it seemed to me a better way to do it. As part of being an interventional neuroradiologist, I also trained to be a diagnostic neuroradiologist, which is radiography, MRI, and CT of the nervous system: the head, the brain, the neck, the spine, the spinal cord. I was in training in the 1990s, during my neuroradiology and interventional neuroradiology fellowships, when I was introduced to stereoscopy, and I had never encountered it before. Other than perhaps, you know, as a child, when you had the ViewMaster of the Taj Mahal and you would see a stereoscopic image when it presented the two different pictures. But I was introduced to the concept of performing X-rays, radiographs, angiograms from two slightly different perspectives, usually separated by six or seven degrees of rotation, and then viewing those images with the crossed-eye technique. You could also view them with prism glasses, anything that you needed in order to present each image individually to each eye, and that gave you this stereoscopic or depth perception within the picture.
And what I learned was, well, first of all, a small group of us were immediately attracted to this method. So much so that we were almost stereo junkies: we would film all of our cases in stereo, and then sit around the reading room looking at the cases, just staring at them with our eyes crossed, almost reminiscent of some bad B movie of stoners sitting around on a couch, just staring at these images for hours, because they were really captivating. And it's more than a parlor trick. What we soon realized was that stereoscopic technique gave us a more powerful way of looking at images. We were able to, of course, understand depth relationships, spatial relationships, which vessel, which structure was foreground, which structure was background. But there was more to it than that; there was some kind of additional cognitive benefit that we were getting from it. It's difficult to describe, but I think if you put it into categories, it allowed us to ignore background noise, to ignore complex structures that were unimportant to the object of interest, and to focus our attention on the region of interest that we wanted to look at. And that selective attention was really what attracted us to stereoscopy. And as an aside, I carried that technique with me through my entire career. And along my career path, which as a practicing physician spanned about 25 years, I would always meet a small group of doctors who used stereoscopic technique. It's almost like a secret society or a cult, you know, within medicine: you'd meet them, and you would have your secret handshake, and you would know that that other person liked to look at stereo images. So much so that later in my career, I was the director of the 3D imaging and post-processing imaging labs for radiology at New York University, responsible for doing all of the volumetric rendering of CTs, MRIs, et cetera.
And when I was in that position, I went through the entire medical center, resetting all of the reconstruction protocols for the CT scanners and MRI scanners so that their output would be in stereo. And I can tell you that 99% of the medical center had no idea why all of their images were coming out with six or seven degrees of separation. And I was doing it for the, like, probably five doctors in the entire medical center who did this. I guess it was my cosmic joke on medicine. But it's interesting that that one technique is so powerful and continued with not only me but everyone else who did that. And it's interesting to note that stereoscopy in medicine was present back in the 1940s and 1950s. It was routine for doctors to take X-rays, chest X-rays, with six or seven degrees of separation and then look at the images through prism glasses to see them in depth. And all of that disappeared, or mostly disappeared, in the 1970s, 80s, and 90s, when cross-sectional imaging, CT and MRI, came into the foreground.
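As a rough sketch of the stereo technique described here, the following toy example renders the same "volume" from two viewpoints six degrees apart in total, pivoting about the center of the scene, so that horizontal disparity between the two views encodes relative depth. The point positions, pivot, and focal length are illustrative assumptions, not values from the episode.

```python
import numpy as np

def rotate_about_pivot(points, degrees, pivot_z):
    """Rotate 3D points about a vertical axis through (0, 0, pivot_z)."""
    t = np.radians(degrees)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    shifted = points - np.array([0.0, 0.0, pivot_z])
    return shifted @ R.T + np.array([0.0, 0.0, pivot_z])

def project(points, focal=1.0):
    """Pinhole projection onto an image plane (camera at origin, looking along +z)."""
    x, y, z = points.T
    return np.stack([focal * x / z, focal * y / z], axis=1)

# Toy "anatomy": two structures at different depths (z, arbitrary units).
volume = np.array([[0.0, 0.0, 10.0],   # far structure
                   [0.0, 0.0,  5.0]])  # near structure

# Two perspectives, six degrees apart in total, as in stereo angiography.
left  = project(rotate_about_pivot(volume, -3.0, pivot_z=7.5))
right = project(rotate_about_pivot(volume, +3.0, pivot_z=7.5))

# Horizontal disparity between the views encodes relative depth: the near
# structure gets the larger disparity magnitude, with the opposite sign
# from the far structure (crossed vs. uncrossed disparity about fixation).
disparity = right[:, 0] - left[:, 0]
```

Presenting the `left` image to the left eye and `right` to the right eye (crossed-eye viewing, prism glasses, or a stereo display) is what recovers the depth percept.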
What was it about those technologies that made them less supportive, less capable of doing stereoscopy?
Well, they were supportive, but you had to post-process them and bring them there. But more importantly, by doing cross-sectional imaging, you were now imaging entire volumes of anatomy and slicing them like bread. So it's another way of seeing depth, of seeing volume, of seeing inside an organ, that stereoscopy just doesn't possess. And of course, CT and MRI are extraordinarily powerful, but they're very powerful in their ability to chop up data in space. They're not necessarily more powerful in their ability to give a visual display. And what I really enjoy is that, fast forward to 2020, I see the entire realm of extended reality, all of the buzz, all of the interest in extended reality, as simply the evolution of the same thing that had been happening decades ago. The joy of seeing stereoscopic imaging, the enlightenment of seeing stereoscopic imaging, the power of a stereoscopic image: I think that is what is driving all of the enthusiasm about extended reality, and certainly virtual, mixed, and augmented reality.
And so to kind of restate the benefits from your perspective: the benefit of stereoscopy is that you get a clearer understanding of the location of the information in the image. But more than that, you get a much greater awareness of, and focus on, the things of interest. And those two elements together allow you ultimately, I presume, to make better decisions as a surgeon, as a diagnostician, in trying to understand exactly what's going on inside the human body.
Or even as an airline pilot landing a plane. There's a large body of evidence that stereoscopic imaging improves eye-hand coordination and improves judgment for visual tasks. This has been well studied in multiple industries. Most familiar is in medicine: if you show stereoscopic images through an endoscope, naive trainees have a more rapid understanding and learning of a technique than people who are shown monoscopic images. So I think it has a very broad application and a very deep impact on more than healthcare, but on all of society, all tasks, all industries. And one interesting thing: many years ago, actually in the middle of the night, because I often wake up in the middle of the night and read, I came across an article. It was done in Japan, and it was a functional MRI experiment in which they had subjects in the scanners, and they were showing them pairs of images, color images. Some of the images were monoscopic, the same exact image, and in other cases they were stereoscopic, slightly different perspectives of the same picture. And the interesting thing was, the fMRI showed that when the viewers looked at the stereoscopic images, they had activation in the language centers of their brain that they didn't have when looking at monoscopic images. And the activation was in Broca's area, which is actually the production of language, on the left side of the brain for most of the subjects. And, you know, I was struck dumb by seeing that, because I thought to myself, you know, stereo is so powerful.
And I wondered, why are we so taken by this? And when I looked at this, I thought to myself, well, stereo reaches, you know, deep into your head, and it touches those areas of language, which is really what makes you a sentient human being. It's really what makes you human: that ability to have, you know, that consciousness, language, the understanding. And it just amazed me that stereoscopy specifically will touch upon language areas in the brain separate from the typical visual areas.
So this kind of indicates that this is a more foundational, fundamental understanding that the brain has of this type of data, as opposed to the more abstract understanding, I guess, of the 2D imagery that we're seeing. Is that kind of how you interpret it?
Exactly. It's how I interpret it, and I think it's how other people interpret it. I always find it interesting that when people discuss virtual reality, they speak about presence. They often talk about this difficult-to-describe feeling of being there, you know, presence. It's a greater feeling, a greater sensation. Some people describe virtual reality sometimes almost in religious terms of enlightenment. And I thought to myself, well, there they are, seeing the stereoscopic presentation of some visual display, and it's, you know, lighting up the language areas of their brain. And they are sensing that there is something more to this than just looking at a 2D picture.
With such a long and deep exposure to and appreciation of stereoscopy, and an appreciation that we're kind of now coming back to it with the, you know, VR and AR sorts of devices that the industry as a whole is constructing, what was the connection for you specifically? What drew you to your own ambition to create your own device?
So what was really the bridge between, you know, seeing these images stereoscopically and setting me on a path to pursue it more directly, to try to apply it? That's a good question. I think I saw more application to stereoscopy than just looking at a static image. I thought to myself, you know, why can't we apply this in more dynamic, live scenarios? And at the time, this is the early 2000s, I was becoming acquainted with 3D volume rendering, where you would do a CT scan or an MRI scan, and you would reconstruct the objects in a 3D format, and you would be able to rotate it and move it. And I looked at these images, and I could actually reformat these images so I could see the 3D stereoscopically, which was itself pretty cool. And then I thought to myself, well, that's one measure of spatial relationship. The other measure of spatial relationship is what you would see with your eyes, just seeing things from two different perspectives. So what would happen if I took a 3D volume rendering and imaged that in 3D, but superimposed on it the live radiographic imaging, also in 3D? And the interesting thing is that, if you think about it, stereoscopy is very powerful because it gives the relative spatial relationships, and there are many things besides stereoscopy that give you spatial relationships: parallax motion, the size of an object, the clarity of the object, one object occluding another. There are many different cues that you use visually to tell where something is. But I thought to myself, you know, if I can look at this person from two different perspectives in an angiography suite, two different X-ray machines, two different X-ray tubes, I will know the relative spatial relationships. And if I fuse that with the 3D object, now, because it's a CT or an MRI and it has the actual distances, I will have both the actual distances and the relative distances fused within the same scene.
And I thought to myself, well, that's actually the same thing as real life. That's mimicking the actual human experience of walking around, seeing things spatially, and knowing how large they are and the distance between things. And I thought this would be an amazing technique in order to understand everything about a spatial relationship in one very complex, fused radiographic image. So I wrote a white paper about it. I never submitted it for publication; I wrote a white paper and circulated it to friends and industry, and people found it fairly interesting. But I thought to myself, okay, this is kind of the holy grail of visualization. Now, let's step back a little. Let's think for a second: how can people see things stereoscopically? How can people use things stereoscopically while they're working in healthcare? You know, why can't the surgeon see 3D images of something that they're doing, or even just a reference, you know, the preoperative MRI or CT, in 3D while they're working? What a benefit that would be. So I think that's what started me down this path. The problem was that the methods that people were using to see things in stereoscopic mode were fairly unsuccessful. So there was, you know, for movies, people would use colored glasses, you know, like the red-blue colored glasses. And for stereo on monitors, people would use LCD shutter glasses, where each eye would alternate, and as the eyes alternated, it would show you the different perspective on the screen; they would run at 60 or 120 hertz. Or even prism glasses. And people hated them. People loved stereo, and people hated the way that you would look at it. And the other problem was the vergence-accommodation problem.
And people speak about that all the time: as you accommodate, or focus your eyes at a certain distance, and you converge your eyes to look at something, the two often don't match. So when you show things in stereo on a stereoscopic screen, or in a stereoscopic heads-up display, very often, yes, you see it, but it's an uncomfortable experience. And I thought to myself... now, you have to remember, this is around 2001, 2002. So, you know, if you wanted to understand virtual reality, you actually had to go to the literature and see what some academician was doing in some optics lab somewhere. You know, I thought to myself, well, stereo, and stereoscopic display, is really powerful. But do people really want to have this in front of their eyes? Do they want to have this vergence-accommodation problem? And I thought about that for quite a while. And just by happenstance, I came across an article in Wired magazine, of all places, I guess where all good ideas come from. And I saw an article on laser beam scanning, that there was a small company in Washington state that was producing laser beam scanning pico projectors. And, you know, I thought to myself, well, that's great. Why not just take that laser beam projector, and instead of putting it on the wall, if you use two of them, why not just shine it directly into the eyes? Then you'll have a display directly on your retina. And because it's a laser, it's in focus; you don't have the vergence-accommodation problem. This seemed to be the solution to the idea of stereoscopic display. And I guess that's what set me off on this path.
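The vergence-accommodation conflict mentioned above can be made concrete with a little geometry: on a stereo screen, crossed disparity pulls the eyes' convergence point in front of the screen, while focus must stay on the physical screen. The numbers below (63 mm interpupillary distance, a monitor at 0.6 m, 20 mm of crossed disparity) are illustrative assumptions, not figures from the episode.

```python
def vergence_distance(ipd_m, screen_dist_m, disparity_m):
    """Depth at which the two lines of sight cross, for a given on-screen
    horizontal disparity (positive = crossed, i.e. object in front of screen).
    Derived from similar triangles between the eye baseline and the screen."""
    return screen_dist_m * ipd_m / (ipd_m + disparity_m)

ipd, screen = 0.063, 0.60          # assumed: 63 mm IPD, screen at 0.6 m
z = vergence_distance(ipd, screen, 0.020)   # eyes converge at ~0.46 m

# The eyes must still accommodate (focus) on the physical screen at 0.6 m,
# so the visual system is driven to two different distances at once:
conflict_diopters = abs(1.0 / z - 1.0 / screen)   # ~0.53 D of mismatch
```

A retinal-scan laser display sidesteps this particular demand because, as described above, the beam arrives essentially in focus regardless of where the eye accommodates.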
And so, I'm guessing that company was MicroVision, which was the work out of the University of Washington, Tom Furness.
Exactly.
That technology, for you at the time: what especially set lasers apart is that it was small, and it was always in focus, and, you know, bright and relatively power efficient, I guess, maybe compared to some of the other technologies. Certainly volume efficient compared to some of these other technologies people were trying to pursue. As you began to pursue this after discovering this technology, what did you learn about laser scanning in practice? What are some of the challenges that you began to see as you began to implement something around it, or some of the benefits you saw in practice?
Well, I found out it was really tough to squeeze a laser beam scanning output through the pupil of your eye. So when I first looked at this, and I first started thinking about it, I reached out to MicroVision, actually, and spoke with them. And they had some early prototypes, such as the Nomad, things of that nature, which reflected the output of a laser beam scanner to the eye from a mirror. And it was a form of virtual retinal display, in which there is no intervening image: the image is painted directly on the retina of the eye. The laser beam scanner goes back and forth, and up and down, and it paints a picture in a raster-like fashion, similar to a cathode ray tube. But the problem was that in order to have this low profile, in order to really get a true virtual retinal display, you needed to bring this close to the eye, if you wanted, of course, to have eyewear. And when you did that, all of a sudden, you could no longer see the image. So MicroVision, like everyone else, pursued a different path. Instead of trying to project directly to the eye, they went to waveguide technology. And they had hooked up with the US defense forces, with DARPA, and they had created laser beam scanning through a waveguide, which was used for helicopter pilots. And this is what they had developed, and it was very expensive. And with permission, I was able to see this prototype, to see what they were developing. And it was about the size of a pair of ski goggles, and the image quality was horrendous: ghost images, you couldn't see through it, the resolution was terrible. But regardless, you know, I was being told that this was the solution to converting laser beam scanning to an image that was usable and practical. So we actually pursued that avenue for a little while. And it was expensive to produce at that time; you needed to have all of these elements use total internal reflection, of course, like modern waveguides do now. And it was just problematic.
And at some point, we thought to ourselves... actually, it was my wife who looked at me when we were encountering all these problems, and we just could not get traction with this methodology. She looked at me and she said, well, why don't you just take the damn thing and shine it in your eyes like you wanted to at the very beginning? So we went and pursued that, and took commercial laser projectors, a Sony pico projector, and we put it through lenses in order to knock down the power of the laser output. And I put the laser projector right up to my eye, right up against my cornea, and we turned it on. And lo and behold, it was an unbelievably gorgeous image. It was brilliant, it was sharp, the color saturation was deep. It was breathtaking. The only problem was, I only saw about 2% of the field of view, because most of the laser beam scanning output was not going through my pupil. It was going all over my eye, but not through my pupil. And you would think that that one sentence would take optical engineers 30 minutes to solve. And really, that problem took about seven years. We went through so many permutations to try to do this. And the optical engineers that I was working with at the time kept saying to me, are you sure you want to use laser beam scanning? Why don't you just use an LED, and let's get this over with. But for the obvious advantages of laser beam scanning that you mentioned earlier, we pursued this avenue. And eventually, just as we were beginning to think that this was going to be hopeless, we happened upon an engineer who had a tremendous amount of experience in this and was able to solve the problem for us. And at that point, which was around 2010, everything took a complete turn in a different direction.
So the hard problem, just to restate the hard problem you were trying to solve: even though you can look directly at a laser and get some brilliant, colorful images, you didn't see very much of the image. What was it about the laser light specifically that made it so challenging to get more of the image visible to your eye? Because it was being scanned from such a narrow beam, such a small place, and the way that scanning beam was hitting your eye just didn't allow for very much of it to end up there.
Correct. The laser beam scanner disperses the scan. You need, of course, to have some distance between the output of the laser beam scanner and your pupil. So just that back-and-forth raster fashion means that only a very small part of it will actually enter through the pupil and end up on the retina. So it's that dispersion of the laser beam scanning that's really the problem.
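A back-of-the-envelope sketch of that dispersion problem: treat the scanner as painting a rectangular patch of light at the plane of the eye, and ask what fraction of that patch the pupil can intercept. The scan angle, eye distance, and pupil size below are assumptions for illustration, chosen only to show that the result lands in the same few-percent ballpark as the roughly 2% described above; a real analysis would also account for eye rotation, beam profile, and diffraction.

```python
import math

def visible_fraction(scan_angle_deg, distance_m, pupil_mm=4.0):
    """Rough fraction of a raster-scanned field whose rays pass through the
    pupil, for a scanner held at a given distance from the eye. Purely
    geometric: ratio of pupil area to the area of the scanned patch."""
    half = math.radians(scan_angle_deg / 2.0)
    patch = 2.0 * distance_m * math.tan(half)   # patch width at the eye
    pupil = pupil_mm / 1000.0
    area_ratio = (math.pi * (pupil / 2.0) ** 2) / (patch * patch)
    return min(1.0, area_ratio)

# Assumed: a 40-degree scanner about 3 cm from a 4 mm pupil.
f = visible_fraction(40.0, 0.03)   # only a few percent of the image gets in
```

The collection optic described later in the conversation exists precisely to fold that dispersed patch back through the pupil, pushing this fraction toward one.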
I'm just trying to visualize the solution. In a movie theater, when we see something projected on the screen, which could be done by DLP or by some sort of laser scanning system, that system, while it might be small, is projecting on a relatively large screen, and then that light bounces off that screen into our eye. But here you're trying to create the same sort of effect. Rather than having it bounce off a screen, you're having it pass through a lens. So you're effectively transmitting against this lens, which allows that laser light to be spread out to reveal its full image, but then redirects it into your eye in such a way that the eye gets to consume the entirety of that image. Is that fair?
That's perfectly fair; that's a perfect description. You hit the nail on the head. In a movie theater, you look at a screen. So you project on that screen, and every one of those points then projects in a multitude of directions; you have multiple angles of view. On your TV, for instance, you have an LCD screen, and it has multiple angles of view from each LED in your TV. When you have laser beam scanning, that laser light is only going in one direction, and you have to be in the right orientation for that direction. So as it spreads out, it no longer meets the geometry of your eye or of your retina. So the question is, how do you take that dispersed beam and now recollect it to put it all back through the pupil, so that you can project it on the retina and grab the entire field of view? Well, that's actually problematic. And we started with refractive lenses, and we were able to take the laser light, you'd go through a refractive lens, it would refocus the light through the pupil, and it would end up on the retina. And using standard laser beam scanning MEMS, you know, commercially available ones, we were able on paper to produce an approximately 16-degree field of view. So the obvious question became, well, why not make bigger refractive lenses, collect more of the light, and get a bigger field of view? Well, the problem is that as that laser beam scanner angle increases, and as your refraction of that light into the eye needs to increase, you start to lose the quality of your laser light at the margins. So you have excellent resolution and excellent color reproduction at the chief ray, the optical axis, the center of it. And as you progressively go out, it's not bad. But when you start to get to the edges, you start to have a loss of resolution, you start to have chromatic aberration, meaning the red, green, and blue colors start to separate because of their wavelengths.
And the image quality deteriorates rapidly at the margins. And I think that that problem was really what stopped laser beam scanning in its tracks as what people thought to be a viable solution for eyewear. The point being that, as you mentioned, there is no screen. When you have a screen that forms the image, you have a real image in your eyewear. When you do virtual retinal display, there is no real image; the first real image is the one on your retina.
Now, one of the key implications here is that you discovered that the see-through characteristics that would be enabled by a waveguide really did not mesh well with laser scanning. I think Microsoft's HoloLens 2 is maybe appreciating the practicality of that right now. But instead, you're going with this direct view sort of model: you're not looking through the display into the real world, you're looking directly at the display. From your perspective, especially as you think about utility in a healthcare setting, what are the sort of trade-offs that come with a see-through versus a direct view sort of wearable display?
So it's interesting. One of my original thoughts when we were looking at eyewear for extended reality, even though we didn't know that's what it was called early on, one of my concerns was that people don't want to be bothered with busyness in their visual field. And I thought of the interventionalists and the cardiologists and the surgeons that I knew, and I thought they would never tolerate this, you know, all sorts of data being in front of them. And what's interesting is that when we started to take this virtual retinal display pathway, what it defined for us was a different paradigm of eyewear, a different head-mounted display paradigm. And that was: can we give all of this information in a non-obstructive format? So for us, the goal became: can we present a virtual retinal display in a form factor that is so small that you can wear it on your face and look down to it, but when you look out, you see the real world around you? And virtual retinal display, the way that we're describing it, lends itself to that format, that format of: let's not obstruct the vision, let's not use a waveguide, let's try this different format, this different paradigm. And that's really what took us to the point where we are right now. And it's why we continued our development, to try to see, can we optimize virtual retinal display to get better image quality, so that it matches the expectations of industry and the consumer, and still do it in a format that's small enough that it will be adopted by consumers? Because we were well aware, as is everyone, that if a head-mounted display is large, if it's cumbersome, if it's simply uncool, people won't wear it. And that's, I think, what brought us to that differentiation between see-through and what we like to call a look-down reference head-mounted display.
So can you help me kind of visualize exactly how a surgeon, for example, would utilize this in the operating room, whether it's pre-op or actually during the operation itself?
Sure, and more than a surgeon: an interventional radiologist in the suite, working with their hands, looking at the screens. Physicians during procedures need reference data. Very often they want to refer to the X-rays; they want to refer to some kind of pre-surgical planning that they have already done. And while they are working, let's say if they're working in the abdomen or working in the head, they will most likely want an unobstructed view of the anatomic region, or they may be wearing loupes for magnification, something of that nature. And what I believe would happen is that surgeons would be working, and as they were working, they would say, you know, where exactly is the relationship of that anterior cerebral artery relative to the aneurysm in the middle cerebral artery? And they would look down into their eyewear in order to get a 3D representation of that angiographic image, so that they would have a better idea of where they are and what they're doing. You could also, you know, do fused images in order to have the surgical approach mapped out in a 3D stereoscopic way. And there are content providers for that; there are several companies that do that kind of imaging. So I think as they work, they would be continuously looking down at the reference data, whether it's anatomical, or whether it's some kind of graphical assistance that they have designed beforehand, in order to help them move through the surgery. Another way of doing it would be remote expert telemedicine. You can imagine someone in the field doing a procedure; they want to see their hands, they want to see the patient. Maybe it's an emergency medical technician, maybe it's a surgeon in a remote area doing a procedure.
And as they're doing the work, a remote expert could be showing them certain things as they're working, you know, to show them the relationship of two things, or to show them a diagram, or just to show them something that will aid them during their procedure, to make them more informed, to make them smarter.
Great, great examples. So going back to this notion of obstruction, what are the practical implications of the standard see-through model that might face a healthcare worker?
Well, you know, if you've seen pictures of healthcare workers, or anyone for that matter, using a HoloLens, or even the older versions, an ODG, or an Nreal, I mean, it doesn't really matter what the manufacturer is: their field of view, their primary visual field, is littered with information. And the problem with that is that it is distracting. It takes the user away from the object of interest, it takes the user away from seeing small details in their field. And that is the kind of thing that I think all high-performing professionals try to avoid. There's enough going on that you need to concentrate on the task at hand, and having that additional information directly in your field of view is undesirable. And I think we know that, but we tend to ignore it. It always amazes me that people don't talk about this more often. Much in the same way, if you're concentrating deeply, you don't want a radio on in the background. You don't want to hear music in the background, especially music with lyrics, because it's distracting; there are only so many tasks that you can attend to at a given time. And in the visual field, when you're distracted, it limits the ability to work. I think we have been seeing that over and over again in the industry. We're seeing see-through head-mounted displays fail spectacularly on multiple levels, and the industry always concentrates on the technology of it. Well, what was it about the see-through display? Well, the field of view was not large enough. The image resolution wasn't good enough. It was the interface with the technology that was problematic, and that's why this device failed. And I often think to myself, well, maybe it failed because people really don't like to look through a see-through display. Are there instances where a see-through display will be useful? Absolutely, of course there are. It's a big world.
And there are lots of tasks. If I was driving a sports car at 180 miles an hour, I might not want to look down for a fraction of a second, so there certainly will be use cases. But I think in general, people don't want to be encumbered with distracting vision.
The new paradigm that you are creating: describe how that works in practice. Describe a scenario where a surgeon, a doctor, or some other healthcare worker would utilize this different paradigm.
Sure. So, the look-down reference display, or for that matter look-down extended reality, or look-down spatial computing. Because what we're describing here is a display, and I like to remind people that our solution is really a display solution that you can use in this particular form factor. We're not describing an entire ecosystem; we're describing an actual optical solution and display. So how would this be used? Well, it's small enough that it sits snugly along the face, beneath the primary visual field. It sits in your inferior vertical peripheral field, low enough down along the eye, beneath the infraorbital rim, that lower bony boundary of your orbit, and it sits on your face along the cheek. So when you look forward, it is in your peripheral field, your vertical peripheral field, but that actually has such a small contribution to your visual field, to your attention, that you habituate, you ignore it. When you look out, you see nothing. But when you look down, you're now focusing your primary visual field directly into the eyewear. And depending upon the construct of our optical solution, practically speaking, we've developed optical designs in which the field of view would be 45 by 25 degrees, or 45 by 12 and a half. The smallest, 45 by 12 and a half, is pretty much the equivalent of two smartphone screens held horizontally, side by side, about 15 inches from your face. So that gives you a general idea of how much information we're talking about. So what we see is that people would be interacting in their environment, and when they look down, they would get that just-in-time information that they want. They could look down, see what they need, and then look back up to attend to the task at hand. I think for most people, that's a better way of interacting with digital data. It's less intrusive, and it is also hands-free and immediately available. And I think that's really what people want.
They want just in time data.
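A quick sanity check on that field-of-view comparison: the angle subtended by two phone-sized screens at 15 inches can be computed directly. The screen dimensions below are illustrative assumptions, not Amalgamated Vision specifications.

```python
import math

def angular_size_deg(extent_in, distance_in):
    """Full angle (in degrees) subtended by an object of the given
    physical extent viewed from the given distance."""
    return math.degrees(2 * math.atan((extent_in / 2) / distance_in))

# Assumed: two ~6" x 3" landscape phone screens side by side,
# viewed from 15 inches away (values chosen for illustration).
width_in, height_in, dist_in = 12.0, 3.0, 15.0

h_fov = angular_size_deg(width_in, dist_in)
v_fov = angular_size_deg(height_in, dist_in)
print(f"{h_fov:.1f} x {v_fov:.1f} degrees")  # roughly 43.6 x 11.4
```

Which lands close to the 45 by 12.5 degree field quoted above.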
So, this notion of just-in-time, hands-free, contextually relevant data. One of the differences, at least at some level, is that maybe you don't get a superposition of the data on a real-world object? Well, you certainly could. You could describe that.
And you could do that with pass-through video. There's no reason that you can't use video to survey the environment around you, put that image through the eyewear that you're looking at, and superimpose digital data upon it. And I would suggest that that might even be a better solution than see-through, because when you are able to control the video pass-through, you're able to co-register, to fuse your constructed digital content with the real-world view in such a way that they match better. You can combine that field of view, you can combine the brightness and the dynamics of what you're looking at. And I think it would be a superior experience because of the quality of these blended video inputs. So video pass-through would be a way of doing this. I often wonder why the virtual reality companies that have, you know, large headsets that are occlusive, why couldn't they do a video pass-through as well? It almost would be, I think, a more pleasing experience for the user to have that pass-through to see the world around them on a large headset than doing a see-through headset.
We've seen products like Varjo's, a video pass-through system with pretty low latency in its ability to process that pass-through video. And their primary use case for that is to create more realistic training scenarios. So if you're a pilot or a surgeon, there's a benefit from mixing some sort of real-world construct, whether it's a surgical center or a pilot cockpit, along with digital data, like the scene that's outside the window of that cockpit. That's a training tool. That's their initial application of this, and they seem to have found some really nice traction. So there are definitely a couple of companies out there that are pursuing this notion of video pass-through and beginning to explore the practical benefits and trade-offs in creating that. But you're right, definitely one of the huge benefits is that you get a much cleaner integration of the real world and the digital data. Occlusion is a heck of a lot easier to solve for; you don't have to worry as much about the translucency of the image and getting it washed out against the real-world background. You can do so much more in terms of creating a higher-fidelity version of that integrated image. So, in this device that you've created so far, you described that it sits down below the bony part of the orbital socket, here on your cheek. How much mass, how much bulk, how much weight are we talking about in this sort of device that you've created?
Well, I can only give you estimated answers, because I feel a great degree of notoriety in that I am probably one of the few people you've interviewed with traction in the extended reality world whose entire product sits on paper. So it's an interesting question, but it is certainly answerable. And the beauty of this paradigm is that the components that sit on the face beneath the eye are limited in number. The only things that need to be there are the optics, which can be made of plastic, the laser MEMS driver, which is extraordinarily small, and the end of the optical fiber that takes the laser light from the laser diodes. All of the heavy components, the battery, the ASICs, the junctions, the laser diodes themselves, they all can sit somewhere else, because the laser light can be taken from those components that sit on the side or the back of the head via a fiber optic cable to the front of the head. So the components that lie in front of the face would be extraordinarily light, and I'm speaking in the neighborhood of 10 or 15 grams. Wow. It's almost nothing up front. And if you had a very light plastic frame or titanium frame, probably the majority of the weight would be the enclosing frame itself, not the components; the heavier components would sit to the side of the head or behind the head. Now, having said that, what we're describing here is a display, and we are an optical design company. We're not an entire eyewear company or, you know, an electronics company. So our assumption is that the data would come from some other source, something like your cell phone, or something that you would have in your pocket or on your belt. That would be the source of the digital information that you would simply display in your eyewear.
Got it. So ultimately you are creating a direct retinal projection that's passing through this lens, and there's so much innovation and engineering in trying to perfect this lens in a way that works well in this new paradigm. What was the technology that you ended up with? I think when we chatted before, you described it as a pancake lens. What does that mean, and what are the implications of using that type of technology?
So, this is the work of our chief optical designer, our chief optical engineer, David Kessler. And before I describe the solution, I think there is a lesson to be learned here. And the lesson I learned was that there is something to be said for older, experienced engineers in technology, in Silicon Valley. We are always enthusiastic about, you know, young engineers and young entrepreneurs and the next wunderkind to change the world. But you know, there is no substitute for experience. And Dave is an extraordinarily experienced optical engineer. He spent many years at Kodak, was a distinguished scientist for Kodak, and has been around the block more than once. He is most likely on people's shortlist of leading optical engineers in the world. And finding Dave was really what pivoted our company, because I struck up a relationship with him and a friendship with him, and I convinced him over the years of the merits of a look-down eyewear. And once I was able to convince him of that, I was able to keep pushing him and motivating him to try to find a better solution. What was interesting is that I said before that using a refractive lens had all of these difficulties with the image reproduction on the retina. And Dave's solution for this was a pancake lens. The pancake lens is not a new invention; it was actually patented decades ago by Raytheon, and that patent has expired. What a pancake lens does is receive the light and polarize it, so that it passes forward, and then reflect the light internally. And as it reflects it internally, it bounces it off of a concave mirror, so that it then reflects again outward, in the same direction, towards the exit pupil. And by polarizing the light at different points, you keep changing the polarization of the light so that you always keep it moving forward.
And by folding the light path, and by changing the direction of the light path, you're able to reproduce the image in a very high-fidelity fashion. The interesting thing is that we use a pancake lens as a relay. We're not forming an image with the pancake lens; it's simply relaying the light created at the mirror to the pupil of the eye. So the novelty of our design was using a pancake lens as a relay of the original image, the original light source, in the light path. What that enabled us to do was to create a field of view that had excellent image quality across the entire field of view. Our resolution was superb; it met the limits of vision of the human eye, one to two arc minutes, and that image quality continued across the entire field of view. There was very little degradation at the margins. Secondly, we had almost no chromatic aberration; all of the colors had very little dispersion at each of the points of light on the retina. And there was no field distortion. Very often, you know, the margins of the field become warped, but a rectangle stayed a rectangle with our field of view.
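Those acuity numbers imply a pixel budget you can work out on the back of an envelope. A sketch, assuming the commonly cited one arc minute limit of 20/20 vision and the 45 by 12.5 degree field mentioned earlier; the figures are illustrative, not company specifications.

```python
# Pixel count needed for a display to match ~1 arc-minute acuity
# over a 45 x 12.5 degree field of view (illustrative figures).
acuity_arcmin = 1.0              # commonly cited 20/20 acuity limit
h_fov_deg, v_fov_deg = 45.0, 12.5

# 60 arc minutes per degree, one pixel per arc minute of acuity.
h_pixels = h_fov_deg * 60 / acuity_arcmin
v_pixels = v_fov_deg * 60 / acuity_arcmin
print(f"~{h_pixels:.0f} x {v_pixels:.0f} pixels")  # ~2700 x 750
```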
You might ask, so why didn't everybody use a pancake lens, if this thing has been around for decades? And the reason is because a pancake lens, with all of those polarizations you do, is extraordinarily light-inefficient. Theoretically, the best you could do with a pancake lens is 25% light efficiency. In real life, it's actually closer to about 6% light efficiency. So if you were making a head-mounted display using an LED, and you used a pancake lens, you would be in trouble, because with an LED source you are doing your best to get brightness, you are doing your best to get battery life, and what you don't want to do is expend all of that energy and lose about 95% of it in your optical configuration. The beauty of our design was that we were using laser light. We were doing everything we could to get rid of the energy of our laser light; you know, we only need to use on the order of one ten-thousandth of our output. So having about a 95% loss of light at our pancake lens, that was great. It solved, you know, part of the problem for us. So the beauty is, we used a really beautiful optical component that had a serious drawback for most traditional optical designs and applied to it a light source that obviated that problem.
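The power-budget argument can be made concrete with rough numbers. Everything below is an illustrative assumption, not an Amalgamated Vision specification: the point is only that a scanned-laser display needs so little light at the pupil that a ~94% optical loss leaves ample margin.

```python
# Illustrative light budget for a laser-scanned display behind a
# pancake-lens relay. All figures are assumptions for illustration.
laser_output_mw = 1.0        # modest laser-diode output, in milliwatts
pancake_efficiency = 0.06    # ~6% real-world pancake-lens throughput

delivered_uw = laser_output_mw * 1000 * pancake_efficiency  # microwatts
print(f"light reaching the eye: ~{delivered_uw:.0f} microwatts")
# A retinal-scanning display needs only microwatts at the pupil, so
# tens of microwatts is still plenty; the same loss would be crippling
# for a brightness- and battery-limited LED design.
```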
The laser combined with the pancake lens is really the perfect combination here. Absolutely. And this innovation ultimately enabled you to win the recent NASA iTech competition. Can you describe that win, and the opportunity that NASA sees with Project Artemis and human spaceflight for a product like this?
Sure. We were invited to compete in the NASA iTech competition, and I strongly suggest any entrepreneurs, any innovators out there, look into this program, because it's really quite fantastic and a great opportunity. They put out requests for submissions to enter this contest, which happens in several cycles during the year, and what they look for is technology that can benefit the space program. It needs to be a technology that not only will benefit their needs but is practical, and they like technologies that have a commercial application as well, because they understand that their best chance to gain products will be if somebody wants to step in and help these startups, these entrepreneurs, these innovators get their product to market. So it's a very smart way of having a competition. We entered this competition, and we moved through various stages to come to the final competition, in which you present your technology to the NASA judges. They consist of the chief technologists at the different NASA centers, as well as members of the entrepreneurial community, VC firms, technology companies, et cetera. And what we presented to them was the following: we have an eyewear design in a form factor that we think is dramatically different from the type of digital display that you have been looking at in the past. NASA, like many industries, like the entire aerospace industry, has the need to get information to their personnel, whether that personnel is an astronaut, a pilot, or someone working on the ground, you know, working on their aircraft. They want to be able to get digital data to people quickly, so that they can respond in a hands-free manner.
So when astronauts have an extravehicular activity, you know, they'll be working on something outside the space station, and they need to look at a display, typically something like the equivalent of an iPad strapped to their wrist. And to do that, you lose the use of your hands. So they have been interested in heads-up displays. But the problem with heads-up displays, for not only them but for the Air Force as well, is that heads-up displays are fraught with problems. The transmission of light through them is suboptimal, and in very bright, stark, high-contrast working environments, the reflection of bright sunlight and dark shadows creates havoc with that kind of a display. So it underperformed for them to have that kind of large, helmet-mounted, head-mounted display, and they have been looking at different solutions to this problem. So when we presented to them this solution, something that is extremely low-profile, that lies along the face, that leaves your hands free, that doesn't need to connect to the helmet, the astronaut will just put it on, and the source to it can be anywhere; it could be within the spacesuit itself. They were very enthused about that. So, you know, we were able to progress through, and we were very gratified that we had won an award from them for promising technology. And it's the kind of thing that really motivated us as well. I personally think it would be extraordinarily cool to put our eyewear on an astronaut. If someone made me the offer of getting that done and doing nothing else with the company, I would actually take that, even though everyone in the company always says to me, no, no, no, no. I just think that would be extraordinarily cool, and it would really be an excellent demonstration of the strength of this paradigm.
Where do you go from here? You have worked through the design challenges. So describe where you are ultimately in developing your vision, and where you go from here.
I'm a fairly practical and conservative guy, and maybe it's because of my long years of medical training, but I always feel like you should, you know, prove something before you take the next step. I'm not a big believer in planning huge programs and paradigms without having yet proven the underlying components that make them possible, particularly the critical components. So I asked, looking at Amalgamated Vision, what is really the most practical, pragmatic step for us to take? And that would be a proof of concept, proving the technology that I just described to you. Do we have high confidence in this design? Absolutely. Our engineers are very conservative, and they are certainly 99% confident that this design will work, because we have done large numbers of simulations to show that the optical design, at least, is successful. Having said that, there is of course a large leap between having an optical design and fabricating it. But we're fairly confident about it, and interestingly, so was NASA, because the components that we're discussing are from industries that have matured over decades: laser MEMS, laser diodes, fiber optics, pancake lenses. These are all well-known, tried-and-true technological components that make up our eyewear. We don't need to develop something that no one has seen before; what we really need to do is just to assemble these components in a proof of concept. And the reason to do that is, if I was a large electronics company or a large HMD manufacturer, and I was approached by an optical design company, I would say to them, you know, the proof of the pudding is in the eating. Can you show this to me? Can you show me that this actually works? So I think to myself, that's what I should do. That's what we should do at Amalgamated: the proof of concept that definitively and absolutely proves that laser beam scanning via a pancake relay is a viable solution for a head-mounted display.
What are the hurdles, ultimately, in delivering that new prototype?
Well, the hurdles are financial. And interestingly enough, this is an interesting paradox: our technology is in many ways boring. And I know it's almost odd to say that, because I've described to you a new way of seeing things, we've talked about putting it on astronauts, we've talked about how this is a completely different breakthrough design. But when we approach people who are interested in investing in display and in digital data, they're much more taken with these almost grandiose plans to create something, using words such as, you know, disruptive and culture-changing. And when we approach this, we tell people we have an excellent design and we obviate the problems of see-through technologies. And that, for some reason, doesn't capture the imagination of venture capital investors. They're much more interested in a really fantastic story. And for that reason, I find we have difficulty getting traction among investors. Having said that, winning a NASA iTech competition certainly brought us notoriety. Being vetted by them, receiving that nod of approval, said to many people, oh, you know, this is seemingly boring, but maybe there's something to it, maybe I should listen. So that was somewhat gratifying. The other thing is that I cannot tell you how many people we have presented to who have said to us, well, that's all well and good, but everyone is doing see-through. Microsoft is doing see-through. Google invested in Magic Leap; they're doing see-through. North Focals, you know, Kura; there was ODG; there was Meta; they were all see-through. So obviously that's the direction we need to move towards. And I found that venture capital decisions are actually extraordinarily conservative. They want to go in the direction of the mass market. And if the direction of the mass market is wrong, all of that investment is going to go in that direction,
whether it's wrong or right. So that, to date, has been a major obstacle for us. Yeah,
that's definitely the case. We imagine venture capitalists as a whole as risk takers, but in reality, they're relatively conservative. They have certain metrics they need to meet on behalf of their investors. And just like the old days in, you know, corporate IT, you never get fired for choosing Microsoft. In VC land, you never get fired, whatever that means in VC land, by going with the consensus, going with the mainstream thinking around things. And I can imagine it's quite challenging to present a paradigm that's quite different from what the mainstream is pursuing at the moment. I guess the counterargument is that none of these major companies who are pursuing their current approach have actually found any financial success yet; something needs to be different.
I agree with you; that actually has been a major argument for us. And it's something that, when people approach us, they're starting to realize: despite billions of dollars of investment, there is no market leader. And that really speaks volumes. So up till now, when people would look at our paradigm and our technology, they would think to themselves, that's really simple, you know, that's a little boring. Now I find people are coming back to us, and they're saying, well, maybe that's simply a really good idea, and we should pursue that. What's interesting is Karl Guttag, who writes a blog on technology, is quite a curmudgeon. I enjoy watching him look at all the different technology in the field and give his critique. You know, he once said in a blog, you would think that these large companies would do due diligence on the feasibility of the optical design before they invested $500 million. It sounds almost impossible, but I tend to agree with him. I think that there are large amounts of money being spent, and large amounts of technologies being chased, that are certainly interesting, and certainly have technological spin-offs that are amazing, but are maybe a little too far in the future, or maybe a little misguided at this stage of the game.
Yeah. Let's wrap up with a few lightning round questions. Sure. What commonly held belief about spatial computing do you disagree with? We've covered one at length, in some ways, throughout the whole conversation.
I think the belief that spatial computing will lead to some kind of sensory overload and be detrimental to people is not true. I think spatial computing will ultimately be used in a very sparing way to deliver the information that people need at that moment, in a very relevant fashion. I think spatial computing will be extraordinarily successful, and I think it will be maybe a little more plain vanilla than people are thinking it will be. I completely
agree with you. We imagine these Marvel Cinematic Universe-inspired sorts of experiences with these devices, but I think, especially in the early days, it'll be very utilitarian in nature, and that's its value. That utility will come in part from the fact that it's not there all the time. It's out of the way, and that data is only there when you need it, which is not all of the time. Besides the one you're building, what tool or service do you wish existed in support of, or based on, spatial computing?
Well, this is slightly tangential to your question, but I think it answers it. I think there is not enough attention to the content of extended reality, or the correct applicability of extended reality. I'm often reminded of the HoloLens videos in which medical students are standing in a conference room, and they are studying the anatomy of the heart. Now, let's set aside for a second the fact that those were actually manufactured graphics for those videos and not actual learning demonstrations. But I think to myself, well, here are medical students studying the heart. Why do they need this projected in the middle of a conference room, where there are tables and chairs and people and reflections from glass? Isn't studying the kind of thing that's done better on a quiet, opaque background, so that there is less distraction and you can focus upon the details? Interactive learning with other people is certainly a great thing, but for many people and for many tasks, the extended reality learning experience needs to be quieter. It would be as if someone proposed reading a book by projecting the pages of the book on a piece of glass and putting that in the middle of Times Square. Don't you want your experience to be more centered on you, so that you can understand it better and absorb the information in a more focused and personal way? So I think that's one part of the content, the applications of extended reality and spatial computing, that really needs to be understood. And it will come; I am very confident that once head-mounted displays and spatial computing take hold, and people start generating content, the industry will quickly filter out what needs to be interactive and what needs to be quiet and personal.
Yeah. What book have you read recently that you found to be deeply insightful or profound?
About a year ago, I started reading Carlos Castaneda, his novels, his nonfictional accounts of his experiences. I don't know if you're familiar with Carlos Castaneda. These books describe his interactions with Don Juan Matus and Don Genaro, who are Yaqui Indians, who are shamans; they are very spiritual men. And they expose Carlos to an alternate reality. They teach him that there is more in the world around us to be perceived than we think there is in our daily living. And it's a really fascinating body of work; it causes you to think about and question the reality of the world that you live in, about alternate realities, things that are happening around you that you simply don't perceive. And what's interesting is that in order to get to this place, you have to unlearn what has been put into you since childhood. You need to free yourself of the suppositions that the world is a certain way. And when you start to free yourself of those burdens of narrow perception, that's when it starts to open up the world around you, and you start to see forces and meaning and beings in the world around you that you normally wouldn't perceive. Whether you believe it on a literal basis or on a metaphorical basis, it's really interesting and eye-opening and mind-expanding. And it fits nicely with ideas in science, those very far-out ideas in physics of multiverses, or string theory, that we live in multiple simultaneous realities. These books are a very spiritual look at the same exact question, which is: what is the reality of our life and the world around us?
Fascinating. Last one here: if you could sit down and have coffee with your 25-year-old self, what advice would you share with 25-year-old Adam?
That's an interesting question. Well, I would certainly tell myself to take a step back and look around a little bit more, to be a little more accepting of the people and situations around me. I spent my medical career as a fairly hard-edged doctor, and I'm sure you know the type, and I think I would tell myself, whoa, you know, chill out a little bit. Keep that curiosity, keep that compulsiveness that you have to always, you know, be sure that's the right answer, be sure you're doing the right thing, be sure you're doing your best for the patient. But I would say, be a little kinder, be a little nicer, be a little more open to all of the people around you. I think it's a good message not only for myself but for a lot of physicians. I would also remind myself of what the truly important things in life are, and it sounds almost hackneyed, but your family and your friends, and appreciating what you have in life. Those are really the things that ultimately will matter to you. When I look back on my career in the past and my career going forward, that's the way I want to lead my life, the way I want to teach my kids. And it's even the way that we conduct ourselves at Amalgamated. You know, we try to be very open. We treat business relationships almost as people; we are very upfront with them, very exposed. And I think it's good. I think it's a more humane way of conducting your life, conducting your business, and better ultimately for everyone, for all of society. Yeah,
that's great. Any closing thoughts you'd like to share?
Well, we've spent a good amount of time discussing the paradigm that Amalgamated Vision feels will be successful. But it's important to remember that there is really no one format that will solve all of the problems for all of the different use cases. We feel pretty strongly that there will be use of many different form factors for the many different problems within the different verticals: see-through, contact lens, our look-down paradigm, pass-through, occlusive. I think all of those different form factors will come together; they'll be sorted out by the marketplace. And I think there'll be an entire palette of choices for people looking to use an HMD to take care of a problem or enhance some facet of their work or personal life.
Where can people go to learn more about you and your efforts at Amalgamated Vision?
They can go to www.amalgamatedvision.com, or they can email me at adam.davis@amalgamatedvision.com, or our business development person, Paula Kakin, at pkakin@amalgamatedvision.com. We love hearing from people. I love hearing from young engineers, and we would be more than happy to have discussions with people and open up a dialogue. I very much enjoyed this interview, I really did. I thank you for the opportunity to speak, and I thank you for the format.
Yeah, it's my pleasure, and thank you. Before you go, I'm going to tell you about the next episode. In it, I speak with Julia Brown. Julia is the co-founder and CEO of MindX, the creator of a novel brain-computer interface that combines neurotechnology, augmented reality, and artificial intelligence to create a look-and-think interface for next-generation spatial computing applications. Julia has experience spinning digital health companies out of academic research labs, and she has an academic background in computational biology, engineering, and human-centered design. In this conversation, we get into the potential and alternative approaches for brain-computer interfaces, and of course, we dig into what Julia and her team are creating at MindX. I think you'll really enjoy the conversation. Please subscribe to the podcast so you don't miss this or other great episodes. Until next time.