The AR Show: Karl Guttag (KGOnTech) on the Attack of the Clones and Magic Leap’s Wasted Opportunity (Part 1)
4:50PM Mar 22, 2021
Speakers:
Jason McDowall
Karl Guttag
Keywords:
hololens
light
ar
display
problem
big
pixels
waveguide
put
mirror
lens
image
move
optics
eye
sprites
laser
clones
degrees
field
Welcome to The AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases, and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Karl Guttag, who you may know from his technology blog KGOnTech at kguttag.com. Karl has 40 years of experience in graphics and image processors, digital signal processing, memory architecture, and microdisplays for use in both heads-up displays and AR glasses. He's got 150 patents to his name related to these technologies and many billions of dollars of revenue attributed to those inventions. Karl spent nearly 20 years at TI, Texas Instruments, where he was named a TI Fellow, the youngest in the company's history. And in the 20 years since, he's been the CTO at three microdisplay system startups, in two of which he was also a co-founder. These days, he's also the Chief Science Officer at Raven, a company developing a hardware and software platform to deliver mission-critical intelligence to military and first responders when they need it most. While he is dedicated to creating successful AR glasses, he's also the industry's resident skeptic.
I sometimes say I'm the resident skeptic of the AR industry. Everything we're doing has a list of pros and a list of cons. And what's going to happen is the marketing department's going to tell you all the pros, and there's not many guys out there who tell you the cons. I guess that's the purpose of my blog some days, to tell you what's wrong with things, because they're already going to tell you what's right about it. So, what's not being told?
Like my first interview with Karl several years ago, this was a long and wide-ranging conversation that I split into multiple parts. In this first part, we touch on cloning, both of microprocessors and AR devices. We also talk about why see-through AR is 10 times harder than VR, the importance of field of view in AR versus VR, the poor visual quality of the HoloLens 2, the challenges of diffractive waveguides and laser scanning displays, Magic Leap's wasted opportunity, and more. Some of it gets technical, but Karl does a good job of making it accessible. If you've ever wondered what it's like to sit down and have a drink with Karl, it goes something like this. Let's dive in.
Karl, cloning seems to be part of tech culture, and maybe more visible recently than ever: we've seen Facebook really actively leverage the innovation at Snap and clone features from them on the software side. And you've seen it firsthand from the early days of the PC. I remember you telling me once that Sega and Nintendo were ultimately cloning your early chip designs. Why were they so eager to take what you had done and put it into their products? What was the mindset behind this cloning within the tech industry?
Well, I don't know, I just think there's been a lot of success with it. Probably the most famous clone back in that day, and the younger people here probably haven't heard of it, was the Z80. Intel did a chip called the 8080, and then Zilog, who's long since gone, did a very successful clone of that chip where they enhanced the feature set. They basically took the 8080 instruction set and added some new instructions, and it turned out that chip was extremely popular. That was where you saw CP/M; the cards plugged into Apples were using Z80 chips, and that chip with the enhanced instructions became more famous. I think people knew the Z80 better than they did the 8080 by the late 70s, and this was all in the late 70s era. So actually, even though Intel kind of invented the instruction set and everything, most people knew the Z80. A similar thing happened in video games I was involved with. My very first chip at TI was the 9918. That was the first chip to have sprites; we called them sprites, we named them sprites, and I know the guy who came up with the name. So we did the first sprite chip, and I was one of about six engineers originally on that program, so I had a lot to do with it, and I was very heavily involved in the design of how the sprites work. Well, it turned out that chip was pretty successful. It got designed into the ColecoVision. It was originally for the TI home computer, which wasn't so successful but had a certain following. And it also got designed into the MSX computer in Japan. There was a company called ASCII Microsoft, which was kind of a renegade operation back when Microsoft was tiny; I met the guy a few times. They basically got a license, and it was kind of a weird relationship, it wasn't quite part of Microsoft, but they called themselves ASCII Microsoft. They backed this thing called the MSX computer, and it used the 9918 in Japan, so it became really popular, and two companies, both Sega and Nintendo, developed games for that platform. They were also involved with the ColecoVision, which also used the 9918. So when Coleco went out of business, Sega and Nintendo, who were both developing games for that and for the MSX computer, decided to go out and get different clones. I'm not sure how they did it, whether they funded it or just used it, but they both got different chips that were sort of the Z80s of the 9918, so to speak. The sprites worked very similarly; as a matter of fact, many of them had many of the same registers. They were not quite register-level clones, but they did have many of the same registers that were in the 9918. If you go back, you can dig this up through the archives and you'll see it. And you talk about why we only have four sprites on a line, and we won't get into that now, but I know all the reasons behind that and what caused it; it was basically a memory bandwidth issue more than a hardware issue. But anyway, they copied the way they work because they were already developing software around the 9918. So it's really expediency at work: both companies knew how to use it and had developed software for two different platforms. And in some ways, critically, the Famicom, which was the console that came out of Nintendo, was like a next-generation MSX computer. So the genesis is all there; you can see how it wasn't just a random accident.
They were already developing for it. So anyway, that's kind of how that came about.
What about the IP ownership issue? Didn't TI kind of have all the intellectual property, the patents behind the work that you were doing there at TI?
Yeah, I never understood that. Back then I was a beginner, and actually I was spending a lot of time in England while a lot of this went on; I was working on my new chips. I don't know how that all worked. I don't know if there were cross-license agreements. I know Yamaha was one of the companies who cloned it, and I never understood why TI, who was getting pretty litigious at the time with respect to IP, never dealt with it. But it wasn't my department, and I do not know why they didn't pursue it more. It is kind of ironic that only the real aficionados of antique computers even know what the 9918 is, but they all know about Nintendo and Sega. Well, Nintendo more than Sega these days. But they know about Nintendo, and you never see them reference back where they got their ideas for those chips.
For the uninitiated, what is a sprite? And why was it so important?
Okay, well, up until that time we had what are called playfield graphics, and I've got some documents if you go back. Basically, everything was hardwired at that point. You had the playfield, which was static and might scroll a little bit left or right or up or down, but it was like your background. And sprites were the players, the player graphics. What they did is they could be freely positioned in X and Y. So they had this idea that you have this teeny tiny bitmap that you could place anywhere, almost basically a cursor; a hardware cursor would be considered one of these guys, only they're a little bigger and you may have a little bit of color with them. The 9918 was also the first chip, the first consumer chip in the world, to use dynamic RAM as its memory, because we were already thinking we were going to need a lot of memory for this thing. That was the other reason why it was pretty successful early on, because everybody else up to that point was using static RAM, so RAM was really precious. We could do a little more because we had it. Anyway, we did processing. For our sprites, we had a little tiny, kind of hardwired processor that would go out and grab information about the sprites during the scanline, when you couldn't draw, and pull information in. We called it pre-processing: we'd go through and figure out where they were going to be on the screen, and then during blanking we grabbed just the information for those sprites. And because of the way our sprite processing worked, we could handle four on a line and up to 32 on a screen. So people know about sprites blinking and stuff. Well, the problem was not actually hardware; we probably could have squeezed a fifth sprite in there, but we didn't have enough bandwidth. The DRAMs were literally too slow at that time, and we couldn't afford really fancy page mode or fancier addressing modes. So we were limited by memory cycles to do everything we had to do, and that's what kept us from getting that fifth sprite. I think the later chips started using faster RAMs and faster interfaces so they could pull more in, and plus, technology was really progressing at that time, so they could put more hardware on and handle more sprites per line. But this whole thing of how many sprites on a line, that's me, I'm one of the ones who came up with that, good or bad. And I can totally justify why we did what we did, because there was not a memory cycle to spare; we used every memory cycle we had. But we were using DRAMs rather than SRAMs, and that limited our bandwidth a bit too.
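To make the memory-cycle argument concrete, here is a rough back-of-the-envelope sketch in Python. The cycle counts below are illustrative assumptions, not the actual TMS9918 timings, but they show how a fixed budget of DRAM accesses per scanline caps the number of sprites per line.

```python
# Illustrative sketch of a sprites-per-line limit set by memory bandwidth.
# All numbers are assumptions for illustration, NOT real TMS9918 timings.

ACCESS_SLOTS_PER_LINE = 171   # assumed DRAM access slots available per scanline
PLAYFIELD_FETCHES = 120       # assumed name/pattern/color fetches for the background
REFRESH_AND_CPU_SLOTS = 23    # assumed slots reserved for DRAM refresh and CPU access
FETCHES_PER_SPRITE = 6        # assumed fetches per sprite (position, name, pattern bytes)

spare_slots = ACCESS_SLOTS_PER_LINE - PLAYFIELD_FETCHES - REFRESH_AND_CPU_SLOTS
sprites_per_line = spare_slots // FETCHES_PER_SPRITE

print(f"spare access slots per line: {spare_slots}")      # 28 with these numbers
print(f"sprites that fit on a line:  {sprites_per_line}")  # 4; a 5th would need 30 slots
# The point: the limit falls out of counting memory cycles, not out of how much
# sprite logic you could put on the chip -- exactly the trade-off Karl describes.
```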
Some of the optimization challenges that you faced at that time are similar, on some level, to the optimization challenges that we're facing today in augmented reality head-worn devices, right? I hear this observation around sprites, and I think about the optimization for motion-to-photon latency, and trying to use whatever trick in the book we can to anticipate where we need to place content within the user's field of view, so that it looks as seamless and as up-to-date, as low latency, as possible.
Yeah, a lot of it's about managing bandwidth. Really, my whole career at TI I was looking at bandwidth. I was also involved in the video RAM, which is all about this bandwidth problem: how do we get pixels in and out? Back then we had to refresh the display; we had to display information while we were putting the display out to a CRT. But you have the same thing today: you have this one memory, and do you want to be writing to it to put new stuff in, or do you want to be displaying what you've already got in there with frame buffers? That changed with the video RAM, which was all about giving a separate port for that. Not too many people know this, but the video RAM also led to the synchronous DRAM; it's exactly the same people who were involved, and I was involved in that. The same people who worked on the video RAM also led the synchronous DRAM. If you look at the later video RAMs, what they basically did was take the serial port, which at first was a shift register and later became a static RAM, and start using that serial port as the buffer for the fast access modes. So today's synchronous DRAM basically works that way, and the graphics RAM then went back and took across some of the video RAM features and the features from the synchronous DRAM. But it was all about bandwidth. And by the way, I see this with the new Apple M1 processor. I used to design CPUs; after I did the 9918, I spent the next 18 years working on CPUs, and I was a CPU architect for TI for about 18 years. It turns out the processing gets to be almost infinitely fast. People are always surprised, in processors, at how little area the ALU and the adders, anything that does math, take up, and how much of the thing is about getting data in and out, getting instructions in and out. It's really traffic copping, it's really getting data moved around. I heard one guy theorize, and it's basically true today, that processing becomes infinitely fast; what takes all the time is moving the data around. You have the same thing with displays: it's managing how you get stuff from the ins to the outs, and it gets into display devices as well. If you have to build up, say, field sequential color, and you can't display your first pixel until you get the red, green, and blue all in, put them in a buffer, and then separate them back out, well, that's going to be a delay. So yeah, many, many things. After I did the 9918, I did some CPUs, then I did graphics processors, which were CPUs aimed at doing graphics and moving pixels around, and it's all about managing the bits coming in and out. Back in the days before the Mac came out, I started working on bitmap graphics and user interfaces, and back then we couldn't afford the floating point processors and whatnot. Now, today, a graphics chip is a floating point processor, and ironically, probably the number one app for those is Bitcoin. We never had that in our business plan, that one day graphics would be so big that it would be bought like crazy to mine Bitcoin.
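Stepping back to the field-sequential-color point Karl made above, here's a quick, hedged illustration of that buffering delay (the refresh rate and line count are example values only):

```python
# Sketch: latency cost of buffering a full frame before a field-sequential display
# can emit its first color field. Example numbers only; real systems differ.

frame_rate_hz = 60
lines_per_frame = 1080
frame_time_ms = 1000 / frame_rate_hz            # ~16.7 ms
line_time_ms = frame_time_ms / lines_per_frame

# If red, green, and blue for the whole frame must arrive and be separated into
# fields before anything is shown, you add roughly one frame time of latency:
full_frame_buffer_ms = frame_time_ms

# A display that can scan out as data streams in might only buffer a few lines:
streaming_buffer_ms = 20 * line_time_ms

print(f"full-frame buffering adds ~{full_frame_buffer_ms:.1f} ms")  # ~16.7 ms
print(f"line-level streaming adds ~{streaming_buffer_ms:.2f} ms")   # ~0.31 ms
```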
the unexpected consequences of technology. Yeah.
Personally, I think Bitcoin's a little bit of a Ponzi scheme. We invent a fake coin, a thing, and then we create a method that makes it really, really hard to create more of it. So it seems like a total waste of effort. We're spending tons of electricity and manpower, and buying up hardware and whatnot, to basically produce a fake currency, a marginal Ponzi scheme. It seems like the world's greatest Ponzi scheme to me. But I'm old school; I think you should be building products for users to use, I should say.
Yeah, there's a lot to be said about the gross inefficiency of the Bitcoin mining side of that sort of currency, independent of, you know, the belief in its store of value or its use in transactions. The notion that you have to waste energy; there's no real-world value being created with all that electricity and all those computers.
Right. And you're spending real power, real energy, it's global warming and all that, to create a fake product; it's fake scarcity. You create a scarcity you can sell. It's a little bit like, if you go way back in history, I think it was the tulips, right, the big tulip thing back in...
The Netherlands
I forget what century it was. But it was centuries ago, this big thing where they were selling tulip bulbs for outrageous fees until the whole market collapsed. People have taken that analogy and applied it to Bitcoin many times. But I think the interesting thing here, though, is the M1. Remember that when I joined TI, the 8086 was already out; I think the 8086 came out in 1977 or thereabouts, within a year or two, '76, '77, somewhere in there. And we've been living with that really crappy architecture ever since. Now, part of it is people made too big a deal about RISC, and we can get into that debate. But a large part of it is that it doesn't really matter too much what the CPU is; it matters how you manage the data, the caches, how you get data in and out. And if you look at the M1, I haven't looked at it in gross detail, but from my 20-years-ago history I took a quick look at it and said, yeah, what they're doing is trying to integrate more and more on the chip, integrate more memory on the chip, so that they can control the bandwidth better. Because once you go on-chip, bus width is kind of trivial; you can go extremely wide with your buses on-chip. You pay a penalty when you transfer from on-chip to off. So it'll be interesting to see how it scales. But it looks to me like the whole thing about RISC versus CISC was for the large part grossly overblown. It was way overstated. As somebody said, it's really about pipelining, about pipeline depth. What happens is you design based on the pipeline depth of the memory and the bandwidth and the architecture you have, and once you get beyond that, then you have to start faking it. I always say, like with superscalar, they had to fake it; they had to leave themselves notes on all the lies they'd told, saying I've done this operation when you really haven't, so you've got to keep a note to yourself in case there's an interrupt or something happens and you have to back out. Anyway, we're getting pretty far afield from displays and graphics. We'll probably get to Apple and their AR efforts, but I do think the M1 is more interesting than usual. It's not exactly a breakthrough; it's really just a company big enough to pull it off, plus the fact that they're able to use fabs that even Intel can't match; Intel hasn't been able to keep up with the fabs at TSMC.
I think that's been part of Apple's magic over the years, this integration, right? The integration within hardware, and between hardware and software, and hardware, software, and services, has often been where they've found a lot of success.
Yeah. And they have the ability to put it all together. Everybody has had these ideas, but they didn't have the ability to pull it together and get people to believe it, because it's a little bit like Bitcoin, it's a bit of a confidence game. You may be able to put it all together, you may be the small startup with the best idea in the world, but if you can't get people to believe in it, or believe you're going to pull it off, they're not going to step out and jump into your airplane with you. They feel like, well, I'll go with something safe, and Apple's big enough to be safe.
Yep, that's very true. I think a lot of startups appreciate the challenge of just getting people to believe. Speaking of startups and going back to this notion of clones, we've been seeing a similar type of behavior now with AR glasses over the last, I guess it's been 18 months, right? We started talking about the Nreal Light in 2019, I may be off by a little bit there. But the Nreal Light was announced a while back and came to market in 2020; they started to release in Asia, they have agreements to release in Europe in the first part of this year, and maybe we'll see something from Nreal in the US at some point this year. And then there's Lenovo with the A3, which they just announced. Both of these devices take a lot of cues from the old ODG R-9.
Well, Lenovo is not an accident. There are at least two people I know of who worked on the Lenovo product who are formerly from ODG. And it was a joke last year at CES; I think people found at least six clones of the Nreal at last year's CES. I'm told there's even an optics company that just builds the birdbath optics. So there must be a dozen or more clones now. Some are using LCoS, most are using OLEDs. But yeah, it's as common as coin of the realm. By the way, if you look inside, almost all of them are birdbaths: a birdbath is basically using a curved mirror as your optical element, and the problem you always have is getting the light in and back out, so you use a mirror and a beam splitter to route the light in, hit the mirror square, what we call on axis, and get it back out. That structure is used inside optics all over the place; if you dig inside almost any optics, you'll often find a birdbath in there. The reason why is that if you know about lenses, whenever you put in a refractive lens, where you go through glass, you always have chroma problems; the colors separate. A mirror always reflects all the colors in the same direction. So if you go through a glass lens, you almost immediately need to correct for it; if you don't, you're going to see color break-up. Now, on a camera, modern cameras actually fix it in post; they literally have processors. I mean, I've gotten some new digital equipment, and you don't see chroma aberrations anymore, because they go in with the computer and say, I know how that lens behaves, I'll pull the colors back together again. The problem is, when you're doing it in real time into the eye, your eye doesn't do that; your eye says, hey, the colors are broken up. You see a lot of this with a simple single lens in a VR headset; you're going to see the color break up. Most of the complication, and this part people don't get, most of the complication in refractive optics is in controlling color. Back in the good old days of film, you couldn't do all this in post; the film recorded kind of like your eye, and once the light hit the film, that was it. You can see in lens design how that has changed, because they know now that they can fix the colors in post. If you look at a raw image from a lens before it's been corrected, it looks terrible; they're depending on that correction. But when we do VR or AR, we can't do that. If it comes out of your display and your optics tend to break the colors up, then you're going to see red, green, and blue broken up chromatically by the lens. Controlling color is probably the hardest thing in optics, because with most everything you do, be it diffractive optics like HoloLens uses, or glass lenses, colors tend to separate. The beauty of a mirror is that you can get extremely good optical quality mirrors extremely cheaply, and that's what makes the birdbath such a popular option; then you just need a beam splitter to get it on axis. Otherwise it distorts, which you can see in the really cheap, what I call the Google Cardboard equivalent of AR. And by the way, I'll give a shout out to Mira; Mira just got their design into the Universal Studios Mario game.
Yeah, in Japan, for Super Mario; it's going to come out in February, assuming, you know, whatever happens with when the parks open, but it's already designed into the game. Now, Mira's design is really simple, but it's off axis, and because of that you're going to get distortion. People think, well, we can fix distortion by pre-correcting it with a computer. Yes, you can, but you can't correct the focus. If you had a high enough resolution display and you corrected it, it would look okay; squares would look square and not some distorted thing. But you still can't correct focus; you can't correct focus digitally. So when you go to a birdbath, you put a beam splitter in there, which creates its own problems with blocking light, an extra surface, space, and other things, but it does mean that you hit the mirror on axis. There's actually a secondary thing called spherical aberration, which is a little more detailed; you can make the mirror a little fancier, or you can put a little coating, a little bit of glass or another optical surface, on the mirror, which makes it what they call a Mangin mirror, to correct for that aberration. But by and large, mirrors are just phenomenally cheap and don't have that color problem, so you see them all over. The problem with all these clones
is that when you do a birdbath, you get your own set of trade-offs. As I sometimes say, I'm the resident skeptic of the AR industry, but everything we're doing has a list of pros and a list of cons. What's going to happen is the marketing department is going to tell you all the pros, and there are not many guys out there who tell you the cons. I guess that's the purpose of my blog some days, to tell you what's wrong with things, because they're already going to tell you what's right about it. So, what's not being told? Well, when you do a birdbath designed like that, it looks pretty good straight on, but if you look at it from the side, it's sticking out about an inch from where your glasses would normally be. So in a side view, it looks pretty funky. Also, it blocks a lot of light; almost every birdbath is going to block at least 70% of the real-world light. That's every one I've looked at. A lot of times you see pictures of them and they're not really working optics, but if you see a working one, you always see a mirror-like reflection. That's because the outer curved mirror, the birdbath mirror, is usually a semi-mirror, a partially coated mirror, and therefore when you look at it you see your reflection in it. What gets to your eye is about 30% at best of the light from the real world. When you do that, you've lost a lot of real-world light; it's like wearing dark sunglasses. So it works indoors, yeah, but then it's like you're wearing dark sunglasses all the time. And I always ask: would you turn the lights down in all your rooms by 70% because you think it would be better to light your room that way? That's what you've done; you've walked into the room, and now your room is really too dark for normal use, and there's no way to get back from that. On the see-through side you can do a little better. There's what I call an A-type and a B-type birdbath. In the type they're using, you look through the curved mirror. There's a similar one used by some people where the curved mirror is on the bottom and you look through just the beam splitter; that's a little better on light pass-through, but, and I don't know all the physics of it, almost always those are encased in a solid piece of material; I imagine you can't do it right in air. Whereas usually the first type, the ones you see from Nreal and the ODG R-9 copies, are all done in air. Yes, there's a surface there, but the fill between the elements is all air, so those are much lighter. One famous bottom-mirror one, turned sideways, is Google Glass. Google Glass is a birdbath, but it's turned sideways relative to these other ones. The birdbath limits where you can put things, because you've got to have a fairly straight path; you'll see that any stereo birdbath is going to put it on the top, they've got to do top-in, and a lot of people don't like that weight up there. You know, this is where we go back to pros and cons. I can tell you what the pros are: the image quality is almost invariably very good, particularly if they use an OLED. It's almost like looking at an OLED in a mirror.
There are some losses because you've got some surfaces there; every surface you have degrades the image quality. And the beam splitter has two surfaces; people don't think about it this way, but you have one going in and one going out. The mirror only has one surface because it's reflecting, but then you've got to come back through. So think about it: you go through the beam splitter usually twice. You have a reflective path off the beam splitter to the curved mirror, and then you have a pass-through path back through it. So you're really stressing the quality of that beam splitter, and it's going to show up; you'll probably see some secondary reflections. The other big thing you'll sometimes see is a cup under it. Most of these have that 45-degree beam splitter, and I've had the problem where, at a conference, I was wearing a badge, and what I'd see was the badge I was wearing reflected in the image. Some people put a little cup under there to block light from the bottom, but that looks even funkier, and this is always the trade-off: do we go for looks or do we go for image quality? Every one of these has a trade-off. I sometimes talk about my dirty list; right now it's at about 23, but it's around 20 major, major problems with AR, and for every advantage, when you say, well, I'm going to solve this problem, you almost invariably hurt something else. If you want to make it more transparent, you've got to give up something; when you try to improve one aspect, you hurt another. Generally, when guys talk about their great new technology, their great new thing, they don't talk about the problems it causes; they only talk about the advantages. Every one of these trade-offs you make, almost invariably, every optical structure, every method, has pros and cons to it. They're not all pros.
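To put rough numbers on the light losses described above, here is a simplified birdbath light budget. The split ratios are assumptions for illustration; real designs vary, but the same multiplication is why Karl sees 70-plus percent of the real world blocked and most of the display light thrown away.

```python
# Simplified birdbath light budget (illustrative ratios, not any specific product).

BS_REFLECT = 0.5       # beam splitter: assume 50% reflected / 50% transmitted
BS_TRANSMIT = 0.5
MIRROR_REFLECT = 0.5   # curved combiner: assume a 50% partial mirror coating
MIRROR_TRANSMIT = 0.5

# Display path: display -> beam splitter (reflect) -> curved mirror (reflect)
#               -> back through the beam splitter (transmit) -> eye
display_to_eye = BS_REFLECT * MIRROR_REFLECT * BS_TRANSMIT
print(f"display light reaching the eye:    {display_to_eye:.0%}")   # ~12%

# Real-world path: world -> through the curved partial mirror
#                  -> through the beam splitter -> eye
world_to_eye = MIRROR_TRANSMIT * BS_TRANSMIT
print(f"real-world light reaching the eye: {world_to_eye:.0%}")     # 25%, i.e. ~75% blocked
```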
As it relates to the evolution of the ODG R-9 design, which I think was, what was that, 2016, 2017-ish, somewhere in there? Yeah.
Right, somewhere in there. I know I wrote an article on it when it came out, and the design actually dates back further. It's interesting to watch ODG: if you go back, the R-6 had a birdbath, the R-7 went with just a simple beam splitter and refractive optics, and they sold quite a few of those, and then they went back to the birdbath for the R-8 and R-9. The R-8 and R-9 are basically the same design, just with different resolution OLED displays; the cost of OLED displays is coming down, and everybody's going OLED there. But they kind of went back and forth. I don't know exactly, but the design probably goes back to the early 20-teens.
With the work being done at Lenovo, as you noted, there are some legacy team members there, including one who was the head, I think, of the R-9 project at ODG and is now leading efforts at Lenovo. Are there any improvements? Is there an evolution in the capabilities of the devices themselves, whether it's the overall volume they take up, or the visual quality they're able to deliver, or some of the other challenges on your list of 23 problems with AR?
Well, they're not doing every improvement I think they know about. There are things they can do in terms of coatings and how they treat things and how they deal with the light. They're still within that birdbath design; you can use a better beam splitter, better coatings to cut down on reflections, other things to try to make it cleaner. But once again, it's a pretty simple design, because the mirror is doing double duty. Most people don't realize this, and I try to make a big point of it, but most of the so-called magnification, when people think magnification they think, oh, you put a lens on and it makes the image bigger. Actually, most of the magnification comes from moving your eye closer; almost everything in AR and VR works that way. I did a really simple test: I looked at a cheap VR headset, the kind that clips onto a phone, and it magnifies by about 5x. By the way, the way we measure magnification is at 25 centimeters: you compare the size of the image at 25 centimeters away versus its size at whatever distance you're actually using. And I found that the VR headset I was looking at was making the image 5x bigger. I measured it with a camera; I took a picture of both, and it was 5x bigger. But three and a half x of that came from moving the camera closer. In other words, I just moved the camera closer, which is one thing you can do with a camera that you can't do with an eye. And actually I stopped it down, so I didn't even change the focus; I locked the focus, but I was able to stop down to about f/22, which gave a very large depth of field, so I could bring the camera right in without moving any of the focusing motors. I found that three and a half x of the magnification came from the distance change. So only a small percentage of the magnification was actually the lens; most of what a lens does in an AR or VR system is allow you to focus the image, not make it bigger. Now, this does get to be a problem in AR, because, and people don't get this, a VR pixel, depending on the resolution of the device, is on the order of 50 microns across, while an AR pixel is typically around six to ten microns. So you're looking at probably 20 to 70 microns for VR, and anywhere from three to ten microns for AR pixels. At some point you do have to magnify; you have to start making those pixels bigger, because they get to be too small. And so you see this thing where, with the early VR headsets, everyone was complaining about the screen door effect or the chunkiness; all they needed was to get their pixels a little smaller, but it's really hard on a flat panel to get them that small. There's this gap, I call it the pixel gap. When you go to silicon, now I can make the pixels really, really small, and it's actually about the transistor technology. When you do a television set, you use one kind of thin film transistor; when you build a cell phone display, you use a different kind of thin film transistor, laser annealed and all that; and when you go to a microdisplay on silicon, you're using semiconductor substrates to make your pixels. Well, it turns out there's this gap area that they haven't quite crossed yet. They're trying to make the thin film transistors smaller and smaller, but at some point you can only make them so small.
And so there's this little gap, between about 10 and 20 microns, that nobody fills very well. If you try to put big pixels on silicon, the pixels get expensive; on silicon you tend to pay for area, so the silicon guys want to give you lots of teeny tiny pixels, because they can sell you more pixels for the same amount of money. The flat panel guys doing VR keep inching in that direction, but their technology costs go way up, because it starts to get really hard to make really tiny transistors to control the pixels on glass. So that's just one of those common trade-offs we see technology-wise.
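A hedged sketch of the two numbers games above: how much apparent "magnification" comes just from distance, and why microdisplay pixels force real magnifying optics. Karl's measured 5x and 3.5x figures are his; everything else here is an illustrative assumption.

```python
# Sketch: magnification from viewing distance, and the optics a tiny pixel demands.
# Illustrative values only.
import math

REFERENCE_DISTANCE_MM = 250.0   # magnification is conventionally quoted at 25 cm

def magnification_from_distance(viewing_distance_mm: float) -> float:
    """Apparent size gain purely from viewing closer than the 25 cm reference."""
    return REFERENCE_DISTANCE_MM / viewing_distance_mm

# Example: a phone-in-headset display sitting (optically) ~70 mm from the eye
print(f"{magnification_from_distance(70):.1f}x just from distance")   # ~3.6x of a ~5x total

def focal_length_for_pixel_mm(pixel_pitch_um: float, target_arcmin: float = 1.5) -> float:
    """Effective focal length needed so one pixel subtends `target_arcmin`,
    assuming the display sits at the focal plane of a collimating optic."""
    angle_rad = math.radians(target_arcmin / 60.0)
    return (pixel_pitch_um * 1e-3) / angle_rad

print(f"50 um VR-panel pixel:    f ~ {focal_length_for_pixel_mm(50):.0f} mm")  # ~115 mm
print(f"7 um microdisplay pixel: f ~ {focal_length_for_pixel_mm(7):.0f} mm")   # ~16 mm
# The tiny microdisplay pixel demands short-focal-length optics folded around the
# temple while still looking sharp -- part of why AR is so much harder than VR.
```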
You noted that there are some similarities, but some very key differences, between VR and AR. What's so hard about AR?
Yeah, and by the way, if you go on my blog, there are some videos where I try to explain this, but very quickly: basically, in VR, you get to put the display right where you want it, which is right in front of the eye. Oculus's big miracle was figuring out that by putting about a dollar of optics in each eye, you could make basically a cell-phone-type display work pretty well for VR. When we do AR, it's not just a little harder, it's not 2x as one person described it, it is 10x more difficult, and that might be an understatement. With AR, you can't position the display in front of the eye because you'd block your view out, so you've now got to position it further away. Also, in VR you gave up every pretext of looking good, of trying to make it small and light; you've basically got swim goggles, right? With AR, you now want it small and light, so that tends to push you toward a really tiny pixel. And as I said before, you'd like to move the display closer to the eye to make it bigger, but you can't get that free advantage. The free advantage is to move it up close to the eye and just put in some optics to let it focus. Instead we have to move it around the corner, around your temple, or usually up on your forehead. That display has to be moved farther away, even though you're using a small microdisplay where the pixels are probably too tiny in the first place, and you would actually like to get it closer to the eye and just put in a little optics to focus it. So you've put it in a mechanically bad place, and you've optically moved it away from the eye.
So now you're forced to not only change the focus of it, but also to start figuring out how to magnify it, how to make the pixels bigger, because the pixels are actually too small. But if you made the display bigger, well, you're not going to put a cell-phone-size display around the temple of your head. So you've done that, and then you want to make it transparent. And I always say that as soon as you go to that combiner, or whatever we call it, and there are lots of combiner structures, it could be waveguides, it could be your birdbath mirror, whatever you're going to use, it's going to do some damage to the image and some damage to the real world; we're only arguing about the amount. For example, with the birdbath, even if you had ideal optics, which don't exist, you're still going to lose a lot of light, because you're still going to block light to route the light from the display to your eye. At the same time, you're usually also losing a lot more light from the display itself, so you're wasting a lot of light coming from the display, and you now have to look through whatever it was that redirected the light to your eye. Invariably, anything that redirects light from the display to your eye is also going to mess up light from the real world. We see this with HoloLens or any diffractive waveguide: you see diffraction effects. You are looking through a diffraction grating. Now, the grating is designed to bend light in the visible spectrum by about, not quite, 45 degrees, to redirect it from bouncing around inside the waveguide so it comes out to your eye. But that also means that if you have a light source about 45 degrees off angle, from the same direction the display light is coming from, it will take that light and direct it into your eye too. It works both ways. You're basically looking at the world through a diffraction grating. It's detuned a bit, the grating pitch is about the square root of two further apart, but it does mean that if you have light sources coming from certain angles, they will get into the range where the grating directs them into your eye. So you're always trading off the damage that the combiner does, and you've now got to make something that's optically see-through. There's another type of optics called freeform optics; you can build a freeform wedge, and it's not too bad, you can use it for VR, and people do. But if you want to use it for AR, that wedge would totally distort your view of the real world, so now you have to put another lens, bigger than the original, glued to it to compensate. It's called a compensator. So now you've got about an inch of glass or optical material, it can be plastic too, this really huge thing, just because you've got to make up for what you screwed up in the real world. There's another little trick that's really kind of funny too, and you see this with HoloLens; I finally realized what HoloLens was doing. People don't realize that HoloLens, both 1 and 2, actually has a lens built into the shield; the back surface of the shield, between the waveguide and your eye, has a little lens in it, and there's actually another lens glued onto the waveguide that compensates for it.
What they're doing is trying to change the focus of the virtual image without changing the focus of the real world. So, just like the freeform case, they have a lens that changes the focus of the virtual image coming out of the waveguide. It's focused at infinity; anything in a pupil-expanding waveguide like that is going to end up focused at infinity. They don't want it focused at infinity, they want it focused at about two meters. So that lens has a diopter adjustment that's going to move the focus a little bit, but then they have to put a second lens on to correct for it. So you've actually got a two-lens sandwich. Now, this gets interesting when you start talking about how you do vision correction, building things into the optics and whatnot. I was looking at this for another purpose and said, well, let's say we want a stylish curved shield and we're going to embed some optics in it. You have to realize that if you had a curved piece of plastic, say, and you were embedding something in it, the virtual image is only going to go through one of those two surfaces. If it were curved but both surfaces matched each other, it would be neutral, and, assuming you don't have other problems, you wouldn't notice any distortion. But if you only go through one of those two surfaces, because you put your combiner in the middle, now you're going to get optical power out of it. And, I'm sorry, you say a few words and then I just ride along.
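The two-lens sandwich Karl describes is easy to put in rough numbers. A minimal sketch, assuming the virtual image comes out of the waveguide focused at infinity and the target focus is about two meters (the exact HoloLens prescription isn't stated here):

```python
# Sketch: the focus-shift-plus-compensator trick for a waveguide whose virtual
# image is collimated (focused at infinity). Illustrative values.

def lens_power_for_apparent_distance(distance_m: float) -> float:
    """Diopters needed to make collimated light appear to come from `distance_m`:
    a negative lens of power -1/distance."""
    return -1.0 / distance_m

eye_side_lens_D = lens_power_for_apparent_distance(2.0)   # pull focus in to ~2 m
world_side_compensator_D = -eye_side_lens_D               # cancel it for the real world

print(f"eye-side lens:          {eye_side_lens_D:+.2f} D")           # -0.50 D
print(f"world-side compensator: {world_side_compensator_D:+.2f} D")  # +0.50 D
# Net power looking through both at the real world is ~0 D, while the virtual
# image's focus moves from infinity to about 2 m.
```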
So one of the challenges, right, is this really ridiculously hard problem of delivering both a high-quality virtual image and an undistorted view of the real world. Doing either one of those things independently is a much easier problem, but doing both simultaneously is a real challenge. And one reason is that we're working with really tiny microdisplays for AR, because the thing has to sit on our nose, it has to be lightweight, it has to be stylish; so it's a smaller display moved further away from our eyes, and we don't get the benefit of the magnification of just moving the display closer to the eye. There's also an expectation, when I read or hear comments from people in the community, that a 40-degree field of view isn't big enough. There's this notion that we need a large, fully immersive sort of alternative reality recreated within these AR glasses. I guess if the goal is that fully immersive alternative reality, then they're not wrong in this commentary that we need a larger field of view than 40 degrees. But maybe that's not the goal, at least not in the near term.
Yeah, I think we've been polluted by VR. What's happening now is we're starting to set impossible goals for everybody. People have latched onto field of view because that's the term they know about; I've seen this in photography, when I was doing that 20-something years ago, and I see it in all kinds of fields. You have a bunch of people clamoring for something because that's the spec they know. Field of view was trivial and easy in VR; they had the opposite problem in VR, which is why everything looked chunky. If you take a cell phone and hold it up to your face, you get a damn big field of view, and like I said, all the optics is really doing is changing the focus, not magnifying much. When we go to AR, we've got the opposite problem: we've got this teeny tiny display that we're trying to make big enough, so our problem is just massively worse. And I give credit to Thad Starner, who gave an excellent presentation a couple of years ago at Photonics West, at the AR/VR/MR exhibition and conference that Bernard Kress of Microsoft puts on
or organizes. He makes this point very well, and he puts it up as a poll question, which I do now too, thanks to him: if you go to a normal movie theater, what's considered the ideal seat in the house, the one they design for? It's about a 30 to 35 degree field of view. That's a movie theater. You go to IMAX; IMAX, I believe, is 40 to 45 degrees. That's an IMAX. So people are miscalibrated, thanks to VR. What the 110 degrees gives you is immersion, so it can be useful for making you totally forget about the real world. But to your point, in AR the question comes back: what are you trying to do? If you're trying to be immersive, then forget AR, put the VR headset on. Really, your working vision, with any visual acuity, is about 30 degrees. You'll see these charts of visual acuity, the fovea and the macula and whatnot; the fovea is like three degrees, the macula like five degrees, and beyond that you really don't have sharp vision. But your eye doesn't work like a camera; your eye is jittering around all over the place. It only likes to wander over about 30 degrees, though. It can move further than that, but it gets sore. And when it's reading text, and Thad Starner has an excellent presentation on this, if you look at newspaper print, newspaper print held at reading distance is about an eight-degree field of view. If you read a book, most of the time the book is probably 15 to 20 degrees. That's it. If you want optimum reading speed, you do not want the field of view of the text to get too large; what you're reading needs to stay where your eyes don't have to move too far. Once you get past 30 degrees, like when you watch a computer monitor, this is the big mistake everyone makes: they think about a computer monitor and how big they are, but what they don't realize is that past about 30 degrees, and some people they call head-turners and some people they call eye-movers, but past about 30 degrees, most people start turning their head. They do not look with their eyes anymore if something's more than 30 degrees off axis. And studies have shown that when you get past 30 degrees, your eyes get sore really fast; you can't sustain it, and really, your eye likes to sit within more like 15 degrees for long periods of time. So if you're reading a book or whatnot, you're not going to read it that way. Try this at home, folks: take your big computer monitor, take your text and blow it up so it fills the entire screen, width to width, and see how fast you can read. You can't read very fast that way. This is something that gets lost. It's also lost when you start talking about the sweet spot; we talk about arc-minutes per pixel and visual acuity, and I always find that the sweet spot is around one and a half arc-minutes per pixel. With anything coarser than that you see kind of a graininess and you can't put text up well, and if you're trying to put text up, trying to read something like directions or whatnot, that's kind of the optimum. If you get much finer than that, yeah, it's a little bit better,
but you're kind of using fine sandpaper on the fonts. If you go much bigger than that, it actually slows your reading down, because you're now having to make the font bigger to make it readable. So you kind of want a certain angular size for the fonts. I mean, the number one complaint I hear all the time about HoloLens 2 is that the text is horrible, and you can't get back from that, because the only way to get around it is to make the text big, but if you make it big, it's actually getting harder to read. There really is an optimum size for readability, this sweet spot between the field of view, the size of the font, and the size of the pixels, and to me it revolves around about one and a half arc-minutes; that's always the sweet spot for readability. So are you trying to immerse? Are you trying to read?

That's 40, 45 pixels per degree.
Well, pretty close. Yeah,
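The arithmetic behind that exchange, as a quick sketch (the field-of-view values are just examples):

```python
# Pixels per degree from arc-minutes per pixel, and the resolution a given
# field of view then demands. Example FOVs only.

ARCMIN_PER_DEGREE = 60

def pixels_per_degree(arcmin_per_pixel: float) -> float:
    return ARCMIN_PER_DEGREE / arcmin_per_pixel

ppd = pixels_per_degree(1.5)
print(f"1.5 arcmin/pixel = {ppd:.0f} pixels per degree")   # 40 ppd

for fov_deg in (30, 40, 110):   # roughly theater, IMAX, and VR-like fields of view
    print(f"{fov_deg:>3} deg FOV at {ppd:.0f} ppd -> ~{fov_deg * ppd:.0f} pixels across")
# 30 deg -> ~1200 px, 40 deg -> ~1600 px, 110 deg -> ~4400 px per eye:
# wide field of view AND readable text is a very expensive combination.
```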
Yeah, about that, okay. And so this notion that a massive, large field of view is required for AR; what should we be aiming for? Maybe there's a nice sweet spot around 30 degrees, and 40 degrees should be good enough for a lot of things, at least given the way we are comfortable using our eyes and our head and consuming text.
I come back to the use model. What are you trying to do? If your goal is immersion, well, you definitely don't want to be wearing this out on the street. I was at a conference in Japan, back in, whoa, it's got to be around 2010 or so, and I remember we were looking at these things that had, like, no eyebox, so they literally had to have somebody fit them to you. And then they put us in this little corral. We were supposed to be walking down the street, but they didn't let us walk; they put us in the corral, and they had minders, people around us to mind us, and they showed video screens with the illusion of us moving through the city, so that we would see things in our view. And it summarizes AR on so many levels. It wasn't, I don't think, really AR, or we'll say it was AR in a way; it was like a teeny tiny prick of a display that crept in. They had the minders there, and I looked at that, because I was next in line, and my thought was, yeah, what's the half-life of their customer base? If they let people out on the streets with these, they will be run over by cars instantly; the half-life of their customer base would be nothing. There's both a cognitive issue and a field of view issue, and Thad Starner talks about this in his presentation: you put all this stuff up there, this really pretty picture, and it covers up your hands. You're supposed to be working on something, and it's going to show you exactly how to do the task and pop up a nice bright image, but when we look at your hands, you can't see them, because the image is covering them up. You've got to start thinking about why we are doing AR, and I think this is just so missed in the industry, it's huge: people are not thinking about what you're trying to do with this thing. They're not asking question number one, which is what problem are you trying to solve, not what type of technology are you trying to deliver. I got into bitmap computer graphics back around 1980, '81, not because I wanted to move hardware around; I was trying to solve a problem. I really got into the Xerox PARC stuff because you could see how that user interface was really going to improve things. So you've got to be user-centric as opposed to technology-centric. Even though I'm a technology guy, I love technology. I also love light fields, but I don't think they're practical, even though I think they're technically extremely interesting. This is a problem we also see in a lot of the research done in AR. I see a lot of stuff where I say, that's a really good PhD paper, it's really good research. I like to see people doing all that fun stuff; I think it's good, I think it's mind-expanding, and some of that mind expansion may filter into better products or better things. But none of that stuff is going to get to a mass audience in the next 10 or 20 or 30 years. It's just not; it's so far beyond what we can do. We can't take baby steps; we can't even put a decent image up on a guy's face, and you want to do all this stuff. But anyway, back to it: what are you trying to do with AR? Are you trying to fill the field of view? If I'm trying to show movies, do you really want to be walking down the street looking at a movie? You'd kind of get killed. So the question then becomes, what are you trying to do?
I think if you look at what's really successful in AR, there's one thing I'll give HoloLens 2: HoloLens 2 has an absolutely crap display. I put it on my website; look on my website, I've taken, as far as I know, the best, most accurate pictures ever taken of HoloLens 2, to show you as close as I can what a person sees in it. Okay, and it's crap. And what I don't show there, what I only talk about, is flicker; it flickers, it has tearing. Any way you want to measure it, the HoloLens 2 has a crap display. But what they got right is they said, okay, it's going to look ugly, we're giving up on the glasses look, and we're going to say what this is for is industrial applications, where I want to lock something in the real world to, say, turn the knob here, turn the screw here. And now I can tell you a value proposition. You're paying your factory worker probably a loaded factory cost of over $100,000 a year, with salary, benefits, facilities, retirement, and any equipment he's got to use, and some of these guys may be using half-million-dollar pieces of equipment in addition to their salary. So you've got at least $100,000 a year tied up in a person working on an assembly line.
Heck, I can sell you a $5,000 thing, loaded with software and everything, not just the headset, $5,000 a year, and if I can make that guy a few percent more productive, like 5% more productive, it pays for itself in a year. If I can make him 10% more productive, it pays for itself in six months. Every businessman in the world will jump all over that. And what do they say? I'm not going to worry about whether it messes up your hair; the guy's getting paid to do it, I don't care if he looks ugly, who cares? What they did get right is several key things, and I think it's a good object lesson in what to do and what not to do. They said, okay, we're going to forget about how it looks, whereas Magic Leap totally screwed that up; they made every mistake they could make. It's insane how much money they got for how badly they did with that product, and it shows in the sales. What did they sell, maybe 6,000 units? After spending $3 billion, they spent half a million dollars for every unit they sold, give or take. It's insane. But HoloLens, what they said is, forget all this stuff. Besides which, we're not really in the headset business; we're really in the Azure cloud business, I think it's even in a division or part of Azure cloud now, so it's really a loss leader for that. But they did find a value proposition; there are some real uses if you go back and ask, what are we trying to do here? And you say, well, I'm not going to worry about looks, I'm not going to worry about whether it looks like glasses, I'm not going to worry about this. And the other thing is, I'm going to make it more comfortable, I'm going to make it self-contained so I don't have cords that snag on things like Magic Leap; I'm just going to give up all that. I don't care that the image looks like total crap, which it does, but I'm going to lock it to the real world reasonably well, and whatnot. By the way, the other big thing, one I get almost irate about, is this whole thing where a guy says it's going to replace your monitor. They are insane; I'll challenge them to a debate anytime. You're insane if you think you're replacing your monitor anytime in the next 10 years with one of these things. That's not even the right use model, and you're not going to do it with AR, because among my list of 20 things, the one that's probably impossible to solve is what they call hard edge occlusion. You can't stop light properly. Hard edge occlusion means that I can block light in the real world behind a pixel of my image; to have a hard edge, I have to exactly block that light. That's why everything in AR looks ephemeral, kind of translucent: what we have is a ratio of light. Now, if you block more light from the real world, you make the image seem more solid. That was the big trick with Magic Leap: people said Magic Leap looked more solid. Well, it looked more solid because they blocked more light; it wasn't because the image was more solid or the image was better, it's actually dimmer. If Magic Leap had the same amount of light blocking as HoloLens, it would look even more translucent than HoloLens, because it's a simple light ratio.
The problem is the light in the real world. If I put a dot on your glasses to try to block a single pixel, a pixel-sized dot on your glasses, you couldn't even see it; it would be so far out of focus that you'd block maybe 1/1,000th of the light and not even notice. The light from the real world is getting focused by your eye, so the rays that came from something out there go right around that dot, your eye brings them into focus, and you won't have blocked anything. That may be unsolvable; I think it may violate the laws of physics. If that's true, then you're always playing this game of how bright to make your image versus what you see in the real world. That's the big thing people always lament: a VR display brings its own black with it, it gets its own black level, whereas in AR, black is whatever you see. Take a look out there, look straight in front of you; that's your new black level, whatever you see, and you're asking a manufacturer to try to create black on top of that. Yeah, we can make glasses that dim, but now you're dimming the real world as well, and do you really want to do that? There's a limit to how far you can go.
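A rough way to see the "your black is whatever the world gives you" point is to treat the apparent solidity of an AR pixel as the ratio of display light plus see-through background to the background alone. The sketch below is a simplification with made-up nit values and a hypothetical transmission factor standing in for a tinted visor; it illustrates the ratio Karl describes, not the measured behavior of any product.

```python
# Minimal sketch: AR "contrast" of a bright pixel over a see-through background.
def ar_contrast(display_nits: float, ambient_nits: float, transmission: float = 1.0) -> float:
    """Ratio of (virtual image + transmitted real world) to the real world alone.

    transmission is the fraction of real-world light the optics pass; dimming
    the world (lower transmission) raises contrast without a brighter display.
    """
    background = ambient_nits * transmission
    return (display_nits + background) / background

print(ar_contrast(500, 5_000))        # ~1.1 : bright outdoor scene washes out a 500-nit image
print(ar_contrast(500, 200))          # ~3.5 : dim indoor scene, the image looks more solid
print(ar_contrast(500, 5_000, 0.15))  # ~1.7 : block most of the world with a dark tint
```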
So enterprise use cases can make sense, right? At least from Microsoft; they have enough sensors that they're locking some of the information into the real world.
Yeah, and by the way, one other thing they did that's really important is they went for eye relief. We can argue back and forth; I think you even talked last week about making different size units. They made some compromises, and they basically went for ease of use and an ease-of-distribution model, where they said, okay, we're going to give up field of view, we're going to give up some things, but one of the big things they went for was eye relief, so you can wear them with glasses. That's a very big thing to me. It's really a great contrast study to look at what Magic Leap did versus what HoloLens did. And HoloLens 2 went even further in the direction of making it easy to distribute. If you have to have custom-fitted insert lenses and custom sizes, you've got to stock lots of different things. And say you're going to a conference; I swear 90% of the original HoloLens units went to conferences for demos and such. Well, imagine on a conference floor, I've got to run five different demos, and oh, I've got to fit your little lenses in there. This is a big problem, by the way, with the birdbath designs: almost all of them have to sit close to the eye because of the magnification problem. If they moved the optics further from the eye, that mirror would get really big and distorting, so they end up shoving them close to the eyes and then requiring inserts. And a lot of people don't just have simple diopter corrections; they have astigmatism, which distorts the image in more than one direction. So when you go to these conferences, they sometimes have this pack of lens correctors you can kind of put in there, but it never works particularly well, and it's always kludgy to stick those correctors in. HoloLens went the other way. They said, the hell with that, we'll just give you big eye relief. Now, it is a lot harder to support a wide field of view if you're going to give that much eye relief and do all these things, and this is the compounding problem I keep talking about. You say, well, I want to wear my glasses underneath, and as soon as you say that, bing bing bing, all these other things get a lot harder. But they got that right. I always said HoloLens 1 was basically a research project that escaped the lab; HoloLens 2, except for the display image quality, did a lot of things where they clearly learned from HoloLens 1 and made it more ergonomic and more practical. It's got a lot of practicality in it that Magic Leap didn't give a rat's about. Magic Leap was just horrible. I don't know how they're going to pivot now, because their starting point was horrible. If they're really pivoting to industrial, the Magic Leap One did everything wrong.
What do you think version two of that looks like? If they're really focused on enterprise, are they just throwing out the old first-version design and starting completely fresh? What are your expectations there?
If they're going to go industrial, it's going to have to look a lot like HoloLens 2. The one thing they kind of got semi-right: you know, they went on about the whole light field thing, and that was all a bunch of lies; it wasn't a light field, it was a multi-focus thing. But here's the point, because I've dealt with some doctors and others on this: there are two kind of important focus ranges. Once you get to about two meters, you're in what I call far focus. If you make your virtual image focus at two meters, you're pretty good all the way to infinity; anything from two meters on out is covered pretty well. That's one of the reasons they set it at two meters. Two meters, about six feet, is also a pretty good video-gaming distance, so almost everything in VR and AR tries to move the focus to about two meters. And here's the irony: the problem the VR guys have is that they're trying to move the focus further away, because the display is too close. The AR guys, if they use waveguides, whether the diffractive ones or even Lumus, all these pupil-expanding waveguides, end up with collimated light, and collimated light appears to focus at infinity. So the VR guys are trying to move the focus from an inch away out to two meters, while the AR guys using diffractive waveguides are trying to move the focus in from infinity. The birdbath guys, by the way, have the same problem as the VR guys, because the display is too close; but once you go through the optics they put in to make a waveguide work, the image gets collimated, and once you collimate the image its focus is at infinity, so you have to move it back in. Hopefully that all makes some sense. So the one big thing Magic Leap got right: if you think about it, there are two very important focus ranges. One is at about two meters, which covers much of the real world and much of what people do in gaming. But if you're working with your hands, and I'm dealing with some doctors and people like that, you're working at arm's length, which is probably more like half a meter, maybe a foot or two, really a relaxed distance with your elbows bent. So you'd like to have a close focus range for that. I know people who want to use HoloLens, and one of their problems is that if your eyes are focused on your hands, then the image you're dealing with is out of focus. That's the vergence-accommodation conflict. You could argue that's one thing Magic Leap got right, the idea of having two different focus planes, but they paid a big price for it: you have to pay for two sets of waveguides, you have to look through two sets of waveguides, which craps up the image even more, you're looking at the real world through six layers of waveguide, and it's more expensive. I think that's also part of why they had to shove it closer to the face, and you get all those problems.
So it goes back to: they told you about the two focus planes, but then there's this whole list of other problems they created along with it. Still, there is a reason to want, at a minimum, those two focus ranges, though you could argue you could get there with a flip-down lens or something. And there is a problem with HoloLens that I've heard a lot of complaints about when you have to work with your hands. A lot of their use model is: yeah, you walk around the shop floor and find the motor. But when you're working on the motor, turning the wrench, it's at hand's length. So there is some issue with that, and it's a little uncomfortable not to have the focus be right where your hands are.
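The two focus ranges Karl keeps coming back to are easiest to see in diopters (the reciprocal of the distance in meters), since the eye's focus error is roughly linear in diopters. A small illustrative sketch using the two-meter and arm's-length figures from the conversation:

```python
# Focus error in diopters for a virtual image fixed at 2 m.
def diopters(distance_m: float) -> float:
    return 1.0 / distance_m

virtual_image = diopters(2.0)    # typical AR/VR virtual image focus: 0.5 D
far_scene     = 0.0              # infinity: 0 D
arms_length   = diopters(0.5)    # working with your hands: ~2 D

print(f"2 m image vs infinity : {virtual_image - far_scene:+.2f} D")   # +0.50 D, usually tolerable
print(f"2 m image vs 0.5 m    : {virtual_image - arms_length:+.2f} D") # -1.50 D, clearly blurred
```

This is why a single two-meter focus covers "two meters to infinity" reasonably well but leaves hand-distance work noticeably out of focus.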
One of the things going back to Microsoft: you've documented on your blog that the visual quality took a step back with HoloLens 2. In HoloLens 1 they were using an LCoS-based display, and in HoloLens 2 they're using a laser scanning approach, a laser beam scanning display. What do you think Microsoft was trying to accomplish with the display and optics in HoloLens 2? Why laser beam scanning in that mix?
Yeah, well, I don't know; that's always puzzled me, and you can't get a straight answer on it. I've heard it's about brightness, although with LCoS you can get quite a bit brighter than HoloLens. There are a lot of companies out there using LCoS, and DLP for that matter, that are much, much brighter. HoloLens 2, I believe, gets to around 500 nits, and I think that's a best-case, center-of-the-display number; I don't think you get that across the whole display, and it's kind of hard to tell because the colors are all over the place. I know they made some improvements in the waveguides, but still, if you went to a store and bought a TV that looked like that, you'd return it. So: the laser has, in theory, better étendue, to explain it. Étendue is a really complicated, nasty concept in this field, and I'm not sure I fully understand it myself, but I've learned a lot about it; as long as I've been in displays, you keep hearing this word bandied about. The basic concept is that light gets more random. Light is like energy: you may know from physics that energy always tends toward heat, which is a more random form of energy. Well, light always tends toward diffuseness, randomness. Look at the product of the angular diversity of a bundle of light and its size: as you increase the size, you can decrease the angular diversity. If I put light through a lens and semi-collimate it, the area of the light increases but the rays become straighter. But the product of the angular diversity and the area stays constant or gets worse, because everything you do damages it a little. Just as anything you do with energy creates more heat, more random energy, anything you do with light tends to randomize the light a bit more. And one of the problems you have with any waveguide display, and this is a huge factor today, probably one of the dominant issues people are struggling with, is that with a pupil-expanding waveguide, which may not even be the right answer (I'm not convinced yet that pupil-expanding waveguides, particularly the diffractive ones, are the long-term winner), to keep the projector small, and because of the physics of the thickness of the glass, you end up with a very small entrance area. So you have to take the light from the display and cram it into a very small hole. Well, when you do that, the laws of étendue say you can only couple so much of that light, collimated, into that hole. My understanding is that on a typical design, and these are all typical numbers with lots of variables, only about 1/50th of the light from, say, an OLED or microLED is going to make it into that little tiny entrance area of the waveguide. And then you're going to take that 1/50th of the light and expand it over the exit area, which is maybe another 50-to-1 loss. So right off the bat, you're at
one in 2,500. You've gone from nits in to nits out dropped by over 2,000x. And then you have to add in the other losses, the diffraction losses, the reflection losses, and so on, so typically, I'm told, you're looking at maybe 1/5,000th or less of the light coming out, in nits, relative to what went in. Now, the one thing laser scanning does is avoid that coupling loss. The actual generation of the light isn't all that efficient with lasers, though; people just think "laser." With an OLED or a microLED, when you turn a pixel off, it's pretty well off; it consumes extremely little power. One of the problems with LCoS is that you illuminate the whole display, so even black pixels take the same amount of power as white pixels. All LCoS does is redirect the light somewhere else; it doesn't absorb it, it sends it somewhere that does absorb it. It just changes which pixels get reflected to the eye, but the light input is constant for a given image. With an OLED, a micro-OLED, or a microLED, when you turn a pixel off, it's pretty close to off; a little bit of power, but very little. With a laser, you consume about 30 to 50% of the power in the laser whether it's on or off, because with scanning lasers they have to keep them at what's called sub-threshold. They're actually powering that laser all the time, because if it had to come all the way from off to fully on it couldn't respond fast enough; plus there's a lot of what we call inductive power. When you turn that laser on and increase the current into it, it's really reluctant to change, so you have to drive it hard. So laser scanning engines are actually extremely inefficient. If you try to put up a white screen with lasers, the efficiency is terrible; if you put up a black screen, it's still lousy. Matter of fact, I did a study, golly, ten years ago comparing a laser scanning projector with an LCoS projector, and the laser scanning projector consumed more power to put up black than the LCoS projector did to put up white. The LCoS projector took the same power for a white image or a black image, and on the laser scanner there was actually only a small increase in power for white versus black. But the one big advantage lasers have is étendue: a laser beam's étendue is essentially perfect. So in theory, almost all of that light would couple in. Now let's add the rest of it. Laser scanning engines are one of those things people don't think about very much; they work great in theory and suck in practice, and you kind of see it. The problem you now have with the laser is that, yes, in theory it would couple in almost perfectly, but the light is not behaving correctly, and you have to condition it to get it in, because remember, we said before that we have to collimate not only the light, we have to collimate the image, to make the image look like it's focused at infinity. And with laser scanning, that's not perfect, particularly with a dual scanning laser.
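A back-of-the-envelope version of the nits-in versus nits-out bookkeeping Karl just walked through, using his "typical" round numbers (roughly 1/50 in-coupling and a 50-to-1 exit-area expansion) plus an assumed factor standing in for the other losses. The panel brightness at the end is purely illustrative.

```python
# Rough waveguide throughput sketch using the round numbers from the conversation.
in_coupling    = 1 / 50     # étendue-limited fraction of panel light into the small entrance area
exit_expansion = 1 / 50     # light spread over an exit area ~50x the entrance area
other_losses   = 0.5        # assumed lump for diffraction, reflection, absorption, etc.

throughput = in_coupling * exit_expansion * other_losses
print(f"geometric throughput : 1/{1 / (in_coupling * exit_expansion):,.0f}")  # 1/2,500
print(f"with other losses    : 1/{1 / throughput:,.0f}")                      # ~1/5,000

panel_nits = 1_000_000      # an illustrative "million nit" emissive panel
print(f"nits at the eye      : {panel_nits * throughput:,.0f}")               # ~200 nits
```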
And if you look inside HoloLens 2, and I showed this on my blog and there have been some teardowns of it, I showed how big it is: there's this huge optical assembly with a bunch of mirrors, actually slightly off-axis optics, a bunch of curved mirrors, and they're conditioning and shaping that light so it's shaped properly to get into the waveguide. Now, you can do it another way with what's called an exit pupil expander. An exit pupil expander is like a really miniature rear-projection screen; it has to be extremely fine-grained, but it's like a miniature rear-projection screen, and what it does is slightly randomize the light rays, because those laser rays are almost too perfect. So you basically make a miniature rear-projection TV out of it, and then you put a lens in to collimate the image. But now you're back to LCoS territory: do that and the efficiency sucks, it's worse than LCoS, because now you've got the diffuser losses on top. You can make that EPE, that pupil expander, what's called high gain, so it only slightly randomizes the light, but you have to randomize it enough that you can gather it back together. What HoloLens 2 did instead is that really complicated set of mirrors and such; that engine is huge. It's funny, people talk about how tiny all these laser things are, but no: the engine actually got bigger when they went to lasers, not smaller. A lot of it is air, but there are a lot of mirrors, and they had to do that to make it work. You still have to take the laser light and condition it; a lot of these things come back to the fact that whatever the display puts out, you can't just shove it into the eye. Now, there was a company that tried that: North, no longer with us, at least as a company, bought by Google last year. They were trying to go direct into the eye, but they had essentially no eye box.
So you were talking a bit about the challenges of laser scanning: in overall display volume and in energy consumption, laser beam scanning doesn't really get you much advantage over something like LCoS. But the big advantage, as you noted, is that it's easier to cram all of that light into the small hole required by the waveguide. The trade-off that étendue enforces is easier with the laser because the light comes out collimated, from a single point, so a larger percentage of it ends up in the waveguide. You're not losing 98% of the light there; you're losing far less. The scanning side of it, though, imposes some additional challenges in terms of creating a decent field of view. Can you describe some of those added challenges of scanning that laser around?
I don't know that it's the field of view so much; it's the resolution. It's really hard to control the beam accurately. I've described this on my website many times, but the scanning process is just total crap. I'm old enough to remember CRTs; most people nowadays wouldn't know a CRT if it bit them. So I'm used to the whole scanning process and the issues with scanning, and laser scanning is a really crappy way to do it. Once you have pixel-registered devices, it's very easy to get the image into those pixels. But with a scanning process, you're always rescaling the image; the scan is no longer rectilinear, it's a very distorted scanning process that has holes in it, and you either make the lines overlap or you make it blurry. The resolution of laser beam scanning is not very good. And people forget it's an accuracy problem. First of all, the mirrors are too slow. I've never seen anybody debate this; I'll debate anybody on stage anywhere, I'll throw the gauntlet down, certainly after COVID, or I'll do it online. There are things that are absolutely, fundamentally wrong with HoloLens 2, and they haven't really been able to solve them. The scanning is something like 4x too slow for the resolution they claim to have; it's at least 4x lower than it should be. There are problems with mechanically moving a mirror and trying to steer it around accurately. What you've really done is change domains; it's a grass-is-greener thing. Your problem used to be something like étendue, and now you've moved it into another domain where you can't accurately control where that beam is going to scan, and how fast you have to switch those lasers: those lasers have to turn on and off in about a nanosecond to hit a given pixel. And now you can't control color depth very well, because the velocity of that beam is highly variable: as it gets to the center of the screen it's moving at maximum velocity, and out at the edges it's moving really, really slowly. So you end up using a huge part of your color depth just to dim the image toward the outside, because the beam is moving slower there, and speed equals brightness: the slower the beam moves, the brighter that part of the image gets, relatively. In optics, most of the time, the center is the brightest and the outside tends to be dimmer; it almost invariably works out that way unless you do something to correct for it. With laser scanning it's the opposite: you have to heavily correct the outsides. The outsides tend to be bright and the center tends to be dim, because that's where the beam is moving at its fastest. Then you have another problem. One thing LEDs do that lasers don't: you can dim an LED almost infinitely; you can turn an LED way, way down and get an extremely dim amount of light. Lasers don't lase until they get above a certain current. So as I said, they keep the laser at sub-threshold, just below where it starts lasing, and once it turns on, it tends to turn on with a kick. There's no slow ramp with lasers; they go from medium to very bright. They don't really have a dim.
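The speed-equals-brightness problem can be sketched in a few lines: if the fast scan axis sweeps roughly sinusoidally, the beam's dwell time at each position is the inverse of its speed, so the edges of the scan are inherently "brighter" than the center and have to be dimmed digitally, which spends color depth. The sinusoidal model below is a simplification for illustration, not the actual scan profile of any product.

```python
# Relative dwell time (proportional to uncorrected brightness) across a sinusoidal scan.
import math

def relative_dwell(x: float) -> float:
    """Dwell time at horizontal position x in [-1, 1] relative to screen center.

    For a resonant mirror, position ~ sin(phase), so beam speed ~ sqrt(1 - x^2);
    dwell is the inverse of speed (clipped near the turnaround points).
    """
    speed = max(math.sqrt(1.0 - x * x), 0.05)
    return 1.0 / speed

for x in (0.0, 0.5, 0.9, 0.99):
    print(f"x = {x:4.2f}  dwell ≈ {relative_dwell(x):.2f}x the center value")
```

The edge positions come out several times "brighter" than the center, and every bit of digital dimming used to flatten that comes straight out of the available gray levels.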
And so what happens is that your color depth is now being traded away inside this fairly narrow band between basically medium brightness and super bright, because they don't have a good way to dim, and that also uses up color depth. If you measure these displays by any rational means, they are absolutely horrible, plus they flicker. And that, to me, is a long-term problem. This is funny; history repeats itself. If you go back and study it, there were ISO studies back in the '80s complaining about refresh rates. You may remember computer monitors going to 75 and 80 hertz; in Europe, I believe they effectively made it a health and safety requirement that monitors would run at
at least 80 hertz or thereabouts. And that was for old CRTs: if you're going to look at a static image on a computer monitor, they wanted 80 hertz because they considered it a health and safety issue. HoloLens 2, and I can prove it, has components of flicker down below 30 hertz. That is horrible by any measure, and nobody ever addresses it. I don't see anybody other than myself even talking about it; so much stuff gets overlooked. One of the things we have is confirmation bias. If you look at the AR community, I think I'm the only one out there saying, what about all this stuff?
Most people in the AR community have this confirmation bias. They all want to promote it, they all want it to succeed, which is fine; that's okay. I like Bernard Kress; he's a good guy, and we get along well when we're not discussing technical issues. But you have to realize there's confirmation bias: the group of people you're talking to in the AR community are all believers. You're talking to the converted, you're converting the converted; you're not talking to people who look at this stuff objectively and analytically and say, well, here's what's really wrong. Or I get this Moore's Law argument, which is basically, oh, there'll be a miracle somehow that solves all of this. Well, some things won't be solved, and this is where étendue gets you; étendue brings you back to Earth. I call étendue the killer of dreams, because you can't get around the physics of it. Same with laser scanning; it's got a lot of physics problems, and I think in some ways it was a grass-is-greener choice. I do not understand why Microsoft chose it. I think they knew the devils of LCoS, the biggest one being field sequential color and color breakup. Because LCoS produces the image as sequential red, green, and blue fields, it also introduces lag; there's more photon-to-electron-back-to-photon delay because you've got to go red, green, blue. You can address that by speeding up the field rate, so there are ways to mitigate it, but it's always going to be somewhat worse. Although there's a lot of delay in laser scanning too, because you've got to take the image and recompute it to pre-correct the distortion; the native output of the laser scan is a hugely distorted image. So yeah, I don't understand why. I know Compound Photonics has had, in some ways, the greatest technology that nobody's ever seen. Compound Photonics had really great specs, and I've heard really good things about it. I've never seen one, but people I trust have told me their display looks really, really good. As far as I know, they never got it to market, but they had a really good 0.26-inch 1080p device with really good contrast and a super-high field rate that should really address the field sequential issue. They also claim to have some neat technology to reduce the latencies: instead of building the image in your graphics controller and then converting it to fields, you build the image directly in red, green, and blue, so you cut down on some of that latency. But I've never seen them go to market with anything. Now they're working on microLEDs, but they supposedly had a really good LCoS device, and I don't know why it never came to market, whether it was a yield problem or a business problem; I don't know what it was. But I've heard good things about the samples that went out. And Microsoft, I mean, it's unimaginable. I'm guessing Microsoft took something that already had, what, Microvision's half a billion dollars sunk into it; Microvision has lost over half a billion dollars. And Microsoft probably more than doubled that to do all the things they had to do to get it production-worthy. And it's still kind of crap. Look at all the problems they had when they introduced it.
I hear they have to do some really extreme compensation as part of it. That's the devil you don't know: you might think it's just laser scanning and it's really simple, but you have to dig down below that and get into the physics. I understand the precision required to couple that beam into the waveguide is ungodly, and that was part of the problem originally, why they had so many issues; unimaginably small errors would cause problems. I know they've got some kind of blue filter and they're doing some other things now that reduce the problem. But even on its best day, you're still going into a diffractive waveguide that's going to end up with color uniformity problems. White is their enemy; they're not going to want to show you a white image. So anyway, it's got a lot of other devils in it. LCoS has an advantage in étendue, going back to étendue. You'll notice that most of the waveguides, your diffractive ones and Lumus's as well, have used LCoS, or some use DLP. Although DLP is apparently getting used less and less; I'm seeing houses that used to use DLP switching to LCoS, and "you can get more resolution for less money" is the refrain I hear. But the beauty of LCoS goes back to this étendue thing: with LCoS, the étendue is set by the size of the illumination source. Optically, in the étendue equation, the LCoS panel looks a lot like a mirror. You lose something to polarization, but étendue-wise it's a mirror. The illuminating LED sets the étendue, and the smaller you make that, the better. By the way, you can also use lasers as the illuminator, and DigiLens has a really interesting white paper on this. It's a little biased; the conclusion of their white paper is that they are the best, and somehow it always comes out that way. But it's a really good white paper that Jonathan Waldern put out. He's now kind of stepped down as CTO, but on his way out he wrote a really good paper on it, and it shows all this, so I do recommend reading it; it's a really interesting paper, just remember it has an inherent bias. They're going to use laser illumination. Now, if you illuminate LCoS with a laser, you actually have the same étendue as you would with a laser scanning engine. You don't see it too much, but laser-illuminated LCoS is focus-free; you need a lens to set it up, but you get a huge depth of focus, what they call Maxwellian. Which basically means that once the light comes out of that projector, you can put a screen at almost any distance and it will be in focus; past a certain point, it's in focus from a fairly short distance all the way to infinity. That's all because of étendue; focus-free operation, étendue, collimated light, they're all interrelated, and the illumination source is what sets it. Now, if we look at, say, a micro-OLED or a microLED, you're back to the size of the display: you have all these little tiny emitters in there, all these little tiny LEDs.
Unfortunately, you can't collimate each of those individual emitters very well. People put microlenses on them, build optics in, but you don't get much collimation gain. As I said, you can trade collimation for area: with LCoS, they take the illuminating LED and collimate its light going into the LCoS panel, so you've improved the collimation of that light by the same factor you've increased the area. Then you turn around and cram that into the little entrance hole of the waveguide. So what's really setting the étendue, with the other losses factored in, is the size of the LED versus the size of the entrance area of your waveguide. At least there you're down to something in the millimeter range, whereas a typical OLED panel might be in the 20-millimeter range, which is huge. And by the way, area is what counts here: if you're 10x bigger in one dimension, you're 100x bigger in area, so your étendue is hugely worse if you start with a panel like that. That's why, going back to the birdbath designs at the beginning, you see birdbaths used a lot with OLED displays: they're not what we call pupil-expanding, they don't have this little tiny entrance port that all the light has to funnel into, so they don't have the étendue issue you have with waveguides. The waveguides, be they diffractive or even Lumus's reflective ones, once you get into these pupil-expanding designs, have really small entrance areas, and those entrance areas turn into an étendue funnel, a blockage that backs the light up; you can't get the light in, and then they expand it back out, so they have these huge, huge losses. A big point I'm making these days is that you've got to match up the various display technologies with the various optics solutions; they don't all line up. When you're talking about a diffractive pupil-expanding waveguide, or a Lumus-style one, you're talking about something that needs a highly collimated light source, and that tends to throw you into LCoS and DLP. It's an issue for microLEDs too. It's kind of interesting; we've seen Vuzix come out combining a microLED with a waveguide, working with Jade Bird Display, but they're still talking a fairly small display and a fairly small exit area. You can get light through it; it's not impossible. The problem is that it's still very, very inefficient, and that power shows up. You can kind of get away with it if you're showing sparse information. Remember, we said earlier that with an OLED or microLED your black pixels don't consume much energy, so you drive the heck out of the lit pixels and you can live with the average power that comes from that. And a lot of people focus on the battery. I'm not as worried about the battery; I'm worried about getting rid of the heat. People know how some of these headsets get pretty hot up against their foreheads. Battery technology is improving, they're coming up with new batteries and all that, but the heat is still there.
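A rough sketch of the area bookkeeping behind the LCoS-versus-panel point above: what matters for étendue is the emitting area relative to the waveguide's entrance area, and area grows with the square of the linear size. The dimensions below are assumptions chosen only to illustrate the scaling, not the specs of any real part.

```python
# Illustrative area comparison: illumination LED vs. self-emitting panel vs. waveguide entrance.
def area_mm2(width_mm: float, height_mm: float) -> float:
    return width_mm * height_mm

entrance   = area_mm2(1.0, 1.0)     # assumed millimeter-scale waveguide in-coupling area
led_source = area_mm2(1.0, 1.0)     # assumed small illumination LED feeding an LCoS panel
oled_panel = area_mm2(12.0, 9.0)    # assumed self-emitting panel roughly in the "tens of mm" class

print(f"LED area  / entrance area  : {led_source / entrance:.0f}x")   # ~1x, couples comparatively well
print(f"panel area / entrance area : {oled_panel / entrance:.0f}x")   # ~100x, most light cannot fit
```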
If you're putting the power in and only a very small percentage of the light ends up in your eye, most of it ends up as heat somewhere up around your face, and that becomes a big problem. People hear about microLEDs and these millions of nits, right? I've been trying to work out the numbers, and the best I can figure: a micro-OLED is, in a sense, a not-so-good LED, but the microLED emitter is smaller, and that means when they put little microlenses on it they can collimate the light better. What that collimation is really doing is increasing the nits by reducing the angular spread. This is another big difference from televisions. With a television set, everyone wants something very, very diffused; they want a very diffused light, so that when you move your head, the image doesn't change. People used to complain about the older LCD TVs and laptops: move your head a little and the colors shift, the brightness shifts, everything shifts. They've gotten better about that by making it more diffuse, but what you're actually giving up is brightness. If I put lenslets on there and compress the light so you can't see it from as wide an angle, I make the center brighter; I haven't changed the amount of light coming out, I've just concentrated it. In the case of AR headsets, and VR headsets too, we only care about the light going to your eye. So we actually like that light being collimated, crunched down, concentrated, because light that sprays all over the place is actually making your image quality worse. That light bounces around, and some of it makes it back to your eye by a path you didn't want, which reduces contrast; contrast loss is basically due to light bouncing around and coming out where it's not supposed to. So when we compare micro-OLEDs and microLEDs, the inorganic, newer, brighter ones: in collimation, the microLEDs maybe get 5 to 10x over OLEDs; in photon generation, it's maybe another 5 to 10x. Put it all together and a microLED versus a micro-OLED is maybe 20 times more efficient, give or take. In other words, electron to photon, if I put a watt in, I might get 20 times the nits. And that's nits, not lumens, because I'm able to collimate the microLED light a little better too, since the emitter area is smaller and I can put a lens on it and collimate it a little better. That's very, very rough. But look at the brightness claims: a typical micro-OLED is around 1,000 nits, and with a typical microLED they talk about millions of nits. How do you get from A to B? Well, a lot of that is power.
You've got a 1,000x difference in brightness but only a 20x difference in efficiency; all the rest comes from power. So that gets to be an issue: when somebody is trying to crank a million nits out of a microLED, if you put up a bright display, that sucker is going to be consuming a lot of power, probably more power than you want to pull out of the battery, and more power than you want to try to dissipate.
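The "where do the millions of nits come from?" arithmetic in the last two answers reduces to one line: if the brightness gap is about 1,000x but the efficiency gap is only about 20x, the remainder has to be drive power, and nearly all of that power comes back as heat near the face. A minimal sketch with those round numbers:

```python
# Implied drive-power ratio from the round brightness and efficiency figures in the conversation.
brightness_ratio = 1_000_000 / 1_000   # quoted microLED nits vs. a ~1,000-nit micro-OLED
efficiency_ratio = 20                  # rough nits-per-watt advantage, give or take

power_ratio = brightness_ratio / efficiency_ratio
print(f"implied drive-power ratio: ~{power_ratio:.0f}x")   # ~50x the power for a full bright image
```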
You're going to need heat spreaders, and even then you don't solve it. I mean, they're using the carbon-nanotube-type flexible ones, which are pretty marvelous, but even they can't get rid of it. Besides which, all heat sinks and spreaders do, if they don't get rid of the heat, is move it. So you're going to move it someplace, but you still have to get rid of it; otherwise it literally backs up. If you put it into a heat pipe or a heat spreader or whatever, and you don't let the other end of that spreader pump the heat out with a fan or radiators or fins or something, the heat stops moving and the whole thing just heats up. So power is heat, and power becomes a big part of this whole equation; it's not enough just to get brightness. So there are a lot of issues with these diffractive waveguides when we treat them as the standard. People like them because they look like a piece of glass, but then it is a piece of glass and you've got to protect it, so you put it in a big plastic shell. I always say that everybody's design on the drawing board starts with Ray-Ban sunglasses, and they always come out looking like HoloLens. Why? Because they started with the lens. And you can argue the same thing happened with the laser scanning.
I think a lot of times you readily see that list of advantages, and you haven't checked carefully enough, or you only discover later all the problems, all the nightmares you then have to solve because you went for the advantages. That's a story we see repeated again and again. You talked about the way you're beginning to think about some of these challenges: we have these various waveguides, which are appealing because in theory they can be quite thin and small, right, this huge volume advantage, but they're also hugely inefficient. Part of the challenge is getting the light in; part of the challenge is getting the light back out, doing the pupil expansion, and trying to make a decent eye box, a viewable area, for that sort of display. But diffractive waveguides are, in large part, besides the birdbath optics we talked about earlier, the other de facto industry standard: Microsoft is using a diffractive waveguide, Magic Leap is using a diffractive waveguide, and a lot of these companies are using, or aspire to use, diffractive waveguides.
It's a problem of physics that you also have to remember, one of the things you can't avoid: if I want to get light to your eye, I have to have light rays arriving at the right angle to illuminate that part of your eye. And because light rays move in straight lines, unless acted on by something like a black hole or a strong gravitational field, which we don't know how to build into glasses either, that light has to come from somewhere. Normal, conventional optics tend to get big in three dimensions, meaning if I want a lens or a mirror big enough to give that big field of view, it has to be big. Most optical structures, lenses, curved mirrors, all that stuff, don't just get bigger in diameter; they also get thicker and heavier. One of the tricks these pupil-expanding waveguides pull off, conceptually, is routing the light out across a big surface that's still flat; it doesn't grow three-dimensionally. It's a way to cheat the physics a little: you pay a price, and the image quality tends to go down, but you can make a flat surface and route the light out far enough that it arrives at your eye at a fairly big angle, and therefore you increase the field of view. Because the first principle is: take the light ray that illuminates a given corner of your eye, follow it back through the lens of your eye, and it has to come from somewhere on the combiner, so you've got to have a big surface. That's why these things look attractive: it's a way to get a larger field of view with something small. Now, one of my new favorites, and I don't know how it's all going to work out, is a company called Oorym. They're doing something that's kind of in between. They're not doing pupil expansion, although I hear now they may be looking at some form of it; I don't know all the details yet. But I've looked at some of their stuff, and they're doing what I call a super-slanted splitter. They're kind of doing the TIR thing of bouncing light through the glass, but then they have these very slanted beamsplitters. Normally, beamsplitters are at something like 45 degrees; they're doing one where they have two of them in pairs, slanted at an extreme angle, I'm guessing maybe 10 or 15 degrees, I should probably look it up, but very slanted compared to other beamsplitters, and used in pairs. By doing that they get a fairly thin structure. It has some disadvantages too; it's not perfect. But because they're not doing pupil replication, it's like looking in a mirror: you get an almost perfect image out. It's like looking into a mirror where the mirror near the display is a partial mirror. Now, they're not home free, because they still have the problem that if you want a bigger image, you have to make the projector bigger. But they do have a way of routing the light to your eye, and there's TIR in there, not as much as in a diffractive pupil-expanding waveguide, but they are using TIR to route the light out to your eye; it bounces quite a few times.
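The "light has to come from somewhere" argument has a simple geometric core: a flat combiner sitting a distance d in front of the eye has to be at least roughly eyebox + 2·d·tan(FOV/2) wide to deliver the edge rays of the field of view. The sketch below uses assumed eye-relief and eyebox numbers purely to show how the required width grows with field of view; it is not the design math of any particular product.

```python
# Minimum combiner width for a given field of view, eye relief, and eyebox (all assumed values).
import math

def combiner_width_mm(fov_deg: float, eye_relief_mm: float, eyebox_mm: float = 10.0) -> float:
    """Approximate width a flat combiner needs to pass the edge rays of the field of view."""
    return eyebox_mm + 2.0 * eye_relief_mm * math.tan(math.radians(fov_deg / 2.0))

for fov in (25, 40, 52):   # example fields of view, from ~25 deg up to roughly HoloLens 2's quoted diagonal
    print(f"{fov:2d} deg FOV at 20 mm eye relief -> ~{combiner_width_mm(fov, 20.0):.0f} mm wide")
```

The point is not the exact numbers but the trend: every extra degree of field of view, and every extra millimeter of eye relief, forces the optic to get bigger, which is exactly the three-dimensional growth Karl describes for conventional combiners.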
You also have a company called tooz out there doing something a little different. They're like the ultimate in freeform, but they have a Fresnel-like segmented mirror in there, and you really don't want to look straight through that, because your eye can never perfectly line up with the segments, so they always put the image a little off to the side. It's nicely molded and sealed up, though. And Kress's book, by the way: I recommend everybody get Bernard Kress's book; it's on the SPIE website, and it's an absolutely wonderful compendium of all the optics out there. One of the things in his book is his comment that designs like tooz tend to be limited to about 25 degrees. It's not an actual hard limit, not a physics limit; it's a practical limit. Once you get much past 25 degrees with the techniques they're using, things start getting big, thick, and bulky, and it spirals out of control. And going back to HoloLens: like I say, they got their technology from Nokia. Nokia developed the diffractive waveguides, and Microsoft bought them from Nokia. So they kind of started with a waveguide and said, oh, this looks like glasses, but then they had to finish the problem. That's why I talk about starting with Ray-Bans: they started with the lens. When you take those waveguides and you don't protect them, you don't put all the other crud around them, the batteries, the processors, the light sensors, they look kind of like glasses. But once you finish the job and run around that spiral of design, where you say, well, I've got to protect the glasses, so they get bigger, I've got to put shields around them, I've got to put batteries in here, this is getting too heavy to hang on your nose, so now I've got to put a headband around it, and now I've got to route the power, that's what happens to you. The reality is, what they started with was a waveguide, a piece of glass, and by the time they got done it was maybe 5% or less of the volume and weight of that headset, probably a few percent, and the rest is what finishing all the real problems costs you. And I think the same thing happened with the laser scanning. They started with "this is theoretically great," but then they had to finish it, actually build it, not just a prototype sitting on a lab bench but something you put on your head. Every time you do that, boom, it ends up as big or worse. If it were me on that program, I certainly would have checked out something else, and maybe they had other requirements we don't know about, but they said it was for brightness, and heck, I can get much brighter displays with LCoS than what they're doing.
anyway.
It's funny how these things go. But I think a lot of the time it's because many designers, and even more so the user community, the people who don't really understand it, and sometimes even the scientists, sit there and say, we know there are all these advantages, without knowing what the disadvantages are yet, and they take off after a decision on which direction to go.
The conversation with Karl continues in the next episode. We talk about matching display technologies to the right combiner optics technologies, and Karl talks about which of these he thinks has the best chance of being successful. He goes on to discuss the importance of matching what the device can do well to the user and the use cases, and we get into some of those use cases. Please follow or subscribe to the podcast so you don't miss this or other great episodes. Until next time.