The AR Show: Tomas Sluka (CREAL) on the Power of Light Field Displays for AR Glasses
5:04PM May 9, 2022
Speakers:
Jason McDowall
Tomas Sluka
Keywords:
display
eyes
image
field
pixels
light
ar
glasses
content
device
focus
vision
problem
started
technology
big
approach
3d
pandemics
deliver
Welcome to the AR Show, where I dive deep into augmented reality with a focus on the technology, the use cases and the people behind them. I'm your host, Jason McDowall. Today's conversation is with Tomas Sluka. Tomas is the CEO and co-founder of CREAL, a Swiss technology startup that has developed and is commercializing a radically new type of display that brings natural focal depth to a truly 3D visual experience within augmented reality. The inspiration for CREAL originated with the first commercial VR and AR headsets back in 2014. Like many others, Tomas suffered from very strong eyestrain and lack of immersion, and he realized that the problem came from the non-existent focal depth in the 3D imagery. Since then, he and his growing team have been working to solve these problems and dramatically improve the visual experience. Prior to founding CREAL, Tomas worked as a research engineer at CERN on particle detectors, and as a researcher at EPFL on the development of electronic nano devices. He earned a PhD in mechatronics, and is the author of over 50 scientific publications. In this conversation, we talk about the visual discomfort many of us feel, called vergence accommodation conflict, in today's AR and VR devices. Tomas describes how our eyes work and the difference between natural 3D and the stereo 3D we get in today's devices. We discuss light fields, how CREAL is able to achieve them, and the implications for display technology.
The eyes work basically like a 3D scanner, and the brain is building the perception from it. And this is very important for VR and AR, because everybody's speaking about, you know, the 8K and 16K displays which you need to have. That's wrong, a wrong direction, a dead end, because the eye actually needs very little. And to make AR glasses in the end, this must be exploited, you know, 90% ideally, or 100%, of course: we provide only the minimum necessary that the eye needs, and the brain will still perceive it as a fantastic experience. And that little is actually really little; it's probably way less than a smartphone screen.
We go on to discuss early applications for the technology, the path to light field AR glasses, and Tomas's journey as a founder. As a reminder, you can find the show notes for this and other episodes at our website, thearshow.com. Let's dive in.
Tomas, when did you discover your passion for technology?
I have one very clear moment in childhood which I really think determined very much my future. I guess everyone has such a moment, but I made a really specific decision at this point, which I kept later. And it came — you know, I'm a bit older, so I grew up in the 1980s, in Czechoslovakia. And at the time, the pinnacle of communication technology was the TV, radio and phone. And it was a fascinating magic. I just had no idea how it worked, and people were talking from it, and I wanted to understand how it works. And it was out of the question to start opening the devices which we had at home. But there was one moment when I was 10 years old. My parents took me to church, which was not normal; I was there only at funerals of my grandparents. And the priest took me somewhere to the front, gave me a candle, and started saying something and sprinkling water on me. I had no idea what it was — it was my baptism. But at the end, I got a present: a book and a small pocket radio made in Yugoslavia. And it didn't survive a single day, I think, or the next day I just took pliers, hammer, whatever I could, and I opened every single component inside. And I realized that I still had no idea how it works. And that was the moment I just decided I will do everything that is needed to understand how it works and to be able to make such devices, because this was obviously made by people. So they had to understand it; they had to create the device. And it was fascinating, right? Just pieces of material inside making some sound, or in TVs also the images, and so on. And I kept this decision later, each time I needed to decide what to study or what to do. And I think without it, I wouldn't be doing what I'm doing today.
So it really was a guiding light, this constant desire to know how it works, and then to be able to make it yourself, and ultimately to be at the cutting edge of making the new things yourself.
Yes, that's the key part, right? Radio — I mean, to make a new radio is still making something which is known, but once you can make it, you can make other things too, right?
Absolutely. What was your first experience with VR?
Well, this was gradual, of course. I consider VR mostly as something which provides 3D visual perception in the first place. And of course I saw all these toys where you have static images presented in 3D through stereo, or even some antique gadgets you can find in tourist places showing really 100-year-old images. Or autostereograms, you know — it's like a wallpaper with some pattern, and you cross your eyes in a specific way and some 3D shape pops up. Or anaglyphs, you know, the color filters on your eyes, and you are looking at some specifically made image. And you know, it was the only experience where you could see digital information in 3D, or somehow artificially recorded information in 3D. And it was really cool; it was just unique, right? This is how reality looks, so this is how the digital content should look as well. You probably mean the actual VR headsets, the first commercial headsets. Well, the first VR headset which I really saw with moving content was a headset which I made by myself. That was really the Cardboard before Cardboard: I bought lenses and used the phone. First I maybe even made some stereoscopic images myself before I found some, you know, and you can start looking at them. Then you find even videos which are 3D, and it was not bad. But of course, no interaction, no positional tracking, and tricky head tracking. And the commercial headsets — I think the first one was probably the HTC Vive, because I could go to a shop and just try anything which was there. And surprisingly enough, it was not really better than the headset which I built at home, in some way. The weak point was really the image. So truly 3D, but far from what it could be. Far from reality, of course.
What was the biggest gap for you in terms of that visual experience? Was it the quality of the pixels, the quality of the display, or some other attribute of the 3D visual experience?
I think the first one is really that unpleasant feeling in the eyes. So I felt like you have a pressure in the eyes, especially when I start looking at objects close — I think we will get to it; that's really the vergence accommodation conflict, still present in VR headsets today. And I'm probably more sensitive to it than other people. But there was also one more aspect which I really didn't like, and which made it feel like it's not really truly 3D, and that is no depth of field. Around that time I had actually bought a DSLR camera, and I noticed that I was taking a lot of photos with very shallow depth of field, which means a big aperture of the camera, focusing somewhere close and making everything far blurred, and vice versa. I just feel that this gives a certain depth to the image even when it is flat. And this was not there at all, or it was wrong. Right? You know, the focus was fixed, and what you focused on was still blurred. So I knew that this was missing entirely.
So both of these things stood out for you: the vergence accommodation conflict issue, as well as this true depth of focus possibility within the 3D image. It felt not quite realistic and uncomfortable, both at the same time. Would you mind actually getting into and explaining vergence accommodation conflict? Sure. And why is it, do you think, that maybe you experienced it more than some others?
First, these two things, the vergence accommodation conflict and depth of field, are related — they come from the same source. Vergence accommodation conflict is a fancy term for something which is quite simple to explain. So vergence is the behavior of our eyes: they converge when you want to look at something closer, and they are more or less parallel, looking to infinity, when you're looking far. And this convergence, or vergence, drives the eye focus at the same time. So when you converge close, the brain tells the eyes also to focus at the same distance. And because in the virtual reality headset you don't have the image at that same distance — it's actually a flat image somewhere far, optically — the eyes have a tendency to go out of focus on the image itself. And this breaks the natural harmony between vergence and accommodation, the focus of our eyes. We don't do this normally; in reality, the eyes behave correctly all the time, until you put on a VR headset and the eyes really start searching for that focal plane of the image. And we feel it, or many people feel it. I hear that some 15% of people feel it immediately — that's probably me — and the rest feel at least something, usually within 20 minutes.
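(To put rough numbers on the mismatch Tomas describes, here is a minimal sketch in Python. The 63 mm interpupillary distance and the distances are illustrative assumptions, not CREAL's figures.)

```python
# Minimal sketch of the vergence-accommodation conflict (VAC).
# The 63 mm IPD and all distances are illustrative assumptions.
import math

IPD_M = 0.063  # assumed interpupillary distance, meters

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the eyes' lines of sight when converged at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def diopters(distance_m: float) -> float:
    """Accommodation demand in diopters (1 / distance in meters)."""
    return 1.0 / distance_m

# Stereo headset with a fixed focal plane at 2 m, content rendered at 0.3 m:
content_m, focal_plane_m = 0.3, 2.0
print(f"vergence angle: {vergence_angle_deg(content_m):.1f} deg")
print(f"VAC magnitude: {abs(diopters(content_m) - diopters(focal_plane_m)):.2f} D")
# ~12 deg of convergence, but a ~2.8 D focus error: vergence says "near,"
# the focal plane says "far," and the eyes hunt for a plane that isn't there.
```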
So it's pervasive. This is one of those things that we kind of talk about within the industry as a known problem, but nobody's really, truly come up with a good solution to resolve that problem. As you look across the industry, what are some of the approaches that others have been taking to solve this problem, if there are any?
The first solution, which is actually not a solution, is to avoid displaying things close, because the problem happens when you want to see images close. So the farther you look, the smaller the problem. You know, even with a camera, refocusing between two meters and five meters or 20 meters, the difference is very small. But when you want to focus between 50 centimeters and 20 centimeters, the difference is huge. So the problem is up close. And if you really pay attention, the content which you are enjoying, or not enjoying, in VR is often far: you have some controllers, and you are rarely touching things up close. Or if you see your own hand, hand-tracked, it often doesn't have any defined texture which you would try to focus on — so, for example, the hand looks very much the same whether you focus on it or it is blurred. So the first solution is really to avoid displaying things in the problematic space. For example, you can look at the documentation of Microsoft HoloLens on their website, and there is a whole page dedicated to it. And the recommendation is: put the content at least one meter away, or ideally two meters away, to avoid the problem; otherwise people will feel unpleasant. So that's not a solution; that's a way to avoid the problem. And then there has been a lot of effort, which I would split between two types. One I would call the engineering effort, usually dependent on eye tracking. So you try to understand where the user is looking from eye tracking, and then move the image plane, somehow optically, to the same distance. So for example Facebook Reality Labs, when they were still Facebook Reality Labs, had prototypes really mechanically moving the screen closer and farther — the change of distance is massively magnified by the lens — and then simulating digitally the sharpness and blur of the content. But of course, this approach requires pretty reliable eye tracking in the first place. And typically you don't really mimic all the focus cues correctly anyway. And supposedly it's also not easy to always calculate the focus and blur really correctly, but it may somehow work. Magic Leap then tried to make two depth planes, with a, I would say, complicated approach. It's not VR, it's AR, but the problem is the same. So they had two sets of waveguides: one was putting the image at 50 centimeters, and the second at 1.5 meters, which reduces the problem. Each distance was used at different moments: when you are looking close, everything was at the close plane; when you look far, on the far plane. But they were still clipping the content at 37 centimeters. And the reason, from what I hear from them, is because it's too unpleasant for too many people.
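(The "keep content far" guidance falls out of simple diopter arithmetic — a quick sketch, with illustrative distances:)

```python
# Why "keep content beyond 1-2 m" works: accommodation error, measured in
# diopters (1/distance), shrinks quickly with distance. Illustrative values.
def d(meters: float) -> float:
    return 1.0 / meters  # diopters

print(f"2 m vs 5 m:      {abs(d(2.0) - d(5.0)):.2f} D")   # 0.30 D, barely felt
print(f"2 m vs infinity: {d(2.0):.2f} D")                  # 0.50 D
print(f"0.5 m vs 0.2 m:  {abs(d(0.5) - d(0.2)):.2f} D")   # 3.00 D, the hard case
```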
So this I'll call the engineering solutions. I hear that Avegant also had — I have never seen it — a solution with three depth planes. And today there is Lightspace Technologies; they have, if I am not wrong, four depth planes which they sweep, so you see kind of all four planes all the time. Of course, the problem is that four depth planes is not continuous, right? I've seen it; it works really well, but it still feels like a pretty complex solution. And then there is something I would call the full solutions, which create the light with all the depth already, and then you don't care about who is looking where. And this started usually with the lens-array-on-display approach. So you take a classical display — a phone or laptop screen, or actually a postcard; most people may know it from postcards — and then you have some small lenses on it, which basically project different pixels in different directions, and it feels 3D at the end. But here the problem is that you very much reduce the resolution of the display behind it. From different angles you see different pixels, but you don't see the other pixels. So, say, you see one of 20, or even one of 60, for example. Nvidia even showed the original images, and you see that to create the virtual pixel, the image is repeating — I counted around 60 times. So you need 60 pixels to create one virtual pixel, which means you reduce the resolution of the display 60 times, and then you have unnecessarily many copies, because, you know, you could have each pixel different. So it's a very inefficient approach. And then there is a lot of effort invested into holography. This is based on so-called phase spatial light modulators: you flash a beam — a plane light wave — on a reflective display which is somehow shifting the phase of the light, and it creates a wave bouncing from it. And this actually can create a really 100% true light field, indistinguishable from reality — in theory. But in practice, it is not that good. So it is, I would say, the ultimate solution, but very difficult to really achieve, from many points of view. Computationally, the speed of these modulators has been, at least so far, pretty low, and the results which I have seen didn't look perfect. And I think there may be a lot of other kinds of unique approaches to do that — okay, I know some of them; of others I may not even be aware, of course.
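(A quick sketch of the lens-array tradeoff Tomas counts — the panel size is a hypothetical example; 60 views is his count:)

```python
# The lens-array tradeoff: each virtual pixel is assembled from one physical
# pixel per viewing direction, so native panel resolution is divided by the
# view count. Panel size is a hypothetical example; 60 views is Tomas's count.
panel_w, panel_h = 3840, 2160
views = 60
physical_px = panel_w * panel_h
virtual_px = physical_px // views
print(f"{physical_px:,} physical px -> {virtual_px:,} virtual px")
# 8,294,400 -> 138,240 (roughly 500 x 280 at 16:9): a 60x resolution loss.
```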
Yeah, there's actually one that I remember reading about at one point — I'm sure there are many others — but there's one specifically that comes to mind, which was an attempt to change the shape of the lens itself. Again, kind of similar to physically moving the lens or the display forward or backward behind the lens, but changing the shape of the lens itself, which effectively changes the focal distance. Kind of the bottom line is: lots of approaches that people are taking to try to solve this problem. It turns out to be a very hard problem to solve well, especially when you talk about the level of complexity, or the level of speed at which some of these adjustments must be made within a system that can actually respond to how quickly the human eye can shift focus — which, as it turns out, is not that fast — and a bunch of others. I'm actually curious: what were you doing prior to deciding to tackle this particular problem set? What had you been doing leading up to this exposure to VR, and this deep appreciation for the vergence accommodation conflict problem?
Yeah, many different things. But I would still like to mention at least one more approach, because it feels like it will be in a product soon. It's very similar to the first one I mentioned, based on eye tracking, where you are moving the display. But now Meta, together with ImagineOptix, which is now part of Meta, is doing the same optically — not by moving the lens, or really changing the shape of the lens, but by using a different refractive index of the lens based on polarization. So it's a very fancy approach: you have an electronically controlled tunable lens without moving parts. But if you look at the bottom of it, the approach is still the same: you are moving the flat display, optically, to the correct distance — just in a more fancy way.
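(One reading of how such a polarization-switched stack can reach many focal states — the element powers below are hypothetical, not Meta's actual design:)

```python
# One way a polarization-switched lens stack can reach many focal states:
# each liquid-crystal element contributes +p or -p diopters depending on the
# polarization it sees, so k binary elements give up to 2**k focal states.
# The element powers below are hypothetical, not Meta's actual design.
from itertools import product

element_powers_d = [0.2, 0.4, 0.8]  # assumed per-element optical powers
states = sorted({round(sum(s * p for s, p in zip(signs, element_powers_d)), 2)
                 for signs in product((-1, +1), repeat=len(element_powers_d))})
print(states)  # [-1.4, -1.0, -0.6, -0.2, 0.2, 0.6, 1.0, 1.4] diopters
```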
And so there, the implication is that you might be able to solve vergence accommodation, but you don't actually solve the true depth of field, the true higher-quality 3D perception.
Yes — you don't solve it optically; there is really no depth of field, but you can do it digitally. So you really blur, digitally, part of the image so it appears to be out of focus. But if there are big light sources, you will still see the big sources, and the eye will be attracted to them. And there are other depth cues — the monocular depth cues, for each eye — which this cannot handle very well. One of them is called ocular parallax. It's the kind of relative movement between close and far objects when the eye is moving, and the eye is moving all the time. We don't normally perceive it consciously at all, but it's supposedly as important as the focus cues. And to mimic this in the classical engineering approach, when you are really tricking the eye, is pretty tricky. You need a very fast and precise response to cover it. And again, Facebook Reality Labs were presenting that they tested it, they work on it, and so on. But again, it doesn't show the reliability; it doesn't show the actual experience, until you can try it.
Right. Yeah, very true. So what were you working on before you decided to approach this problem?
I did many different things. And that's probably one of the reasons why I do a startup in the end — my career didn't create a specialist. So I did my PhD in mechatronics, which was kind of a modern term at that time — basically control systems. But I focused a lot on the physics classes and on materials science. So I started the PhD on basically mathematical modeling of very specific materials, so-called ferroelectric materials, and then I switched to a more practical path. Still during the PhD, I was doing active control of materials to absorb or reflect vibrations or waves, like 100%, or appear totally transparent to the waves. I did part of it also during my stay in Japan, which was an absolutely fantastic experience — short, but like traveling to a different planet. Still during my PhD, I went to CERN, which is the particle accelerator with the detectors here in Switzerland, in Geneva. I was a member of the Prague Academy of Sciences physics group, which was taking care of, let's say, half of one of the detectors. And this is how I got to Switzerland, basically. You know, at the time the project was already delayed; I actually arrived the moment there should have been the first collisions and detection of the particles, but the detector was not finished. And I spent two years there commissioning the detector — really putting it together, making it work, and at the end also making the supervision system on top of the control system. You can imagine there is some system which is taking care of all the small elements, you know, checking whether everything works fine. But if the control system itself doesn't work, you don't know it; it just appears like everything is okay. So I did kind of a layer on top of it. But when the detector was finished — and soon after, actually, there was the detection of the Higgs boson, the only predicted particle which was still not confirmed, and big news and so on — there was not that much work on the detectors anymore. So I moved to Lausanne, to the Swiss Federal Institute of Technology, EPFL, and started working again on the materials I had not touched for the last 10 years. And this was a six-year period, working on many projects. But I would say the most fascinating one was when we discovered that we can make a conductive path in an insulating material, and we can move it within it. And it's a few nanometers big. So you can imagine that you can put some electrodes on it and create something like a transistor, which can be written inside, work as a transistor, and then be deleted again. We also patented the idea and so on. But it's totally impractical, you know — very expensive materials, almost impossible to make — so practically no way to get it anywhere into industry. But a fascinating experience. And already during this time, I started missing that kind of practical engineering — really doing some electronics, doing some optics, and so on. And I had a problem with virtual reality, right? I wanted VR, and ultimately AR, to really become the next communication platform. I knew this is inevitable: the world is three-dimensional, so the digital content has to be presented in 3D as well, and as a part of reality. This will happen. But I knew that, come on, without the correct image it will never really work — or maybe it will work for some time, but if you bring a better image, it will just storm the industry, right? Everyone will start using it.
And during that time, I just noticed that, okay, people are working on solutions, but I didn't like any of them. It was either too complicated or impractical, or the quality of the image was poor. So I dedicated more and more time to thinking about how it could be done more efficiently and practically. And I had a moment — one of around three eureka moments I had in my life — where I just saw the concept. I drew it, started simulating it, put all the numbers together, you know, how much is needed to have the image, and so on and so forth. Okay, that's it. That's it. And to prove that it is actually a practical approach, I decided to build it at home. So I actually converted one of the rooms in our apartment — I took my son to the bedroom, so we were all sleeping in one room, and converted my son's room into a workshop, because I couldn't do it elsewhere. And I started really ordering components, putting it all together, and made the first prototype right there. I left the job, filed a patent, and had a prototype, which I then started showing around. There was an Intel team at EPFL at the time, working on the Vaunt project. And I could have expected any feedback, you know — actually I expected "nice, but here we have better" — but no, it was like, "wow, we have actually never seen this." There was a true light field image with the correct focal depth, which the microscope-like prototype was showing. I started showing it to more and more people — it was easy to get people to come and talk. And the feedback was overwhelmingly positive, even though there was just a loop of some very simple content: a Mario, a hand and a flying snake. I just felt good, you know; I was definitely happy with it, and I was hoping that others would be as well. And that was the moment to start a startup, right? I didn't have the qualification to ask someone, you know, to hire me to do that. So I said, well, let's do it — on my own, or on our own; there started to be more people involved at the time.
You made the commitment to exploring the problem before you decided to start a company around it. So you left your work in the lab, and you decided to really focus on exploring what the solution might be. Yes. And then, based on the feedback, you decided to start CREAL around it. Yes.
I had, I would say, a level of irrational conviction that this would be good, and a very simple demonstration — first to myself — that it was okay, that it would work fine. And, you know, all the logic: even though the prototype was pretty big and didn't do much, I knew that it could be improved with just a kind of standard effort. I assumed a standard effort for the R&D — and it's way more than that.
We learn that sometimes all the work is hiding behind that initial bit of work; you realize how complex the mountain is that you need to climb behind that first peak. You'd mentioned light field. Can you describe the concept of a light field, and how it's different, maybe, from the sort of images created by some of the other approaches?
Yeah. First, maybe: light field is quite a broad term, and it is used in engineering and in business for many different things. In physics, it is basically a description of the light around us: at every point, light is traveling in all possible directions. But in the engineering sense, typically it's like a multi-view display — you have many different viewpoints looking at some 3D scene from different perspectives. And this is done in many, many different ways. We do light field as well, so that's nothing really new — light field really means creating different viewpoints. We just do it very differently from the other concepts, and more practically and efficiently.
Maybe to help us understand the different, unique approach that you're taking: can you share more about what you've learned about how the eye perceives visual information?
Yeah, I should maybe have said up front, of course, the most important thing: that a light field provides an image with the correct focal depth. So when you are looking at some digital object up close, the light really is shining from that distance, not from some display farther away. So you can really focus, and it becomes sharp, close. And you can change the focus anywhere, continuously, and it will be as correct as in reality, if it is done well. And this is one of the things, the focal depth, which the eye really needs, and I think it is still very much underestimated. So, what have I learned about eyes — that's the question — or about vision? Well, a lot — to the point that we are actually selling a vision test device now. But there are, I would say, two quite shocking aspects which people often underestimate about the eyes and vision. The first one is how important an input it is for us. If you just take some numbers: 70% of all the receptors we have on our body are on the retina, inside the eyes. And among all the receptors in total, the brain dedicates something like 90% of its sensory detection or analysis to vision. So the vast majority of everything we have ever received into our brain, and are still receiving every day, comes through the eyes. It's literally the high-bandwidth input to the brain, the connection point. And the second shocking thing is how bad the eyes actually are at the same time — how little is needed to create the sensation which we have. And you may really experience it with your own eyes if you do a small test of your vision. For example, when you are reading, you're always moving the eyes. That's actually the reason why the eyes move: they're not moving for 3D perception, they are moving because they need to point the only small high-resolution part of the vision at the word which you want to read. So try to look, for example, at some word and read the sentence around it without moving the eye. You will notice that often you can't read even the next word. So the eye is good only in an extremely small field of view — like a coin, half a meter away from you. And the rest of the vision gets worse and worse the farther you go to the periphery, and at the periphery we're practically colorblind and see, you know, very little. And we don't feel that it is actually so bad; it's the brain which creates that visual sensation, and fantastically. But the eye doesn't need much data. It's way less than even the display we are looking at has, because each single moment — as I mentioned, I'm right now looking at some words, at some text, and I don't see you at all, just a little bit in the periphery, and you are not there. So the eye works basically like a 3D scanner, and the brain is building the perception from it. And this is very important for VR and AR, because everybody's speaking about, you know, the 8K and 16K displays which you need to have. That's wrong, a wrong direction, a dead end, because the eye actually needs very little. And to make AR glasses in the end, this must be exploited, you know, 90% ideally, or 100%, of course: we provide only the minimum necessary that the eye needs, and the brain will still perceive it as a fantastic experience. And that little is actually really little; it's probably way less than what the smartphone screen needs.
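(A back-of-the-envelope sketch of the "eye needs very little" claim. The acuity falloff model and field of view are crude assumptions; only the orders of magnitude matter.)

```python
# Back-of-the-envelope: pixels needed with uniform resolution vs. an ideal
# gaze-tracked, foveated delivery. All constants are assumptions.
import math

FOV_DEG = 100.0     # assumed per-eye field of view
PEAK_PPD = 60.0     # ~retinal resolution at the fovea, pixels per degree

uniform_px = (FOV_DEG * PEAK_PPD) ** 2
print(f"uniform:  {uniform_px / 1e6:.0f} Mpx per eye")      # 36 Mpx

def acuity_ppd(ecc_deg: float) -> float:
    """Crude falloff: full acuity within ~2 deg, then ~1/eccentricity."""
    return PEAK_PPD * min(1.0, 2.0 / max(ecc_deg, 1e-6))

foveated_px, ecc, step = 0.0, 0.0, 0.5
while ecc < FOV_DEG / 2:  # integrate pixel density over rings of eccentricity
    ring_deg2 = math.pi * ((ecc + step) ** 2 - ecc ** 2)
    foveated_px += ring_deg2 * acuity_ppd(ecc + step / 2) ** 2
    ecc += step
print(f"foveated: {foveated_px / 1e6:.1f} Mpx per eye")     # ~0.3 Mpx
# Two orders of magnitude less than "16K everywhere" -- and indeed less
# than a smartphone screen.
```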
The industry has some rudimentary understanding of this. There's some basic appreciation of it when we look at concepts like foveated rendering, which tries to take into account this idea that you don't need to compute absolutely full resolution everywhere in an image. You can gain a little bit of efficiency on the GPU side of things, on the image calculation side of things: if we know where the eye is looking in that moment, we can create higher resolution only in the center. But that doesn't really solve the problem at the display side, because the display still has however many pixels are in that display, every one of them producing whatever color it's been assigned in that moment; there's no real benefit. And I've heard of only one company publicly working on a foveated display solution, with a steering technique for steering a higher-resolution image to wherever your eye is looking at that moment. I remember Varjo actually starting off with that intention, but then they realized that, you know, it's probably simpler to just keep that high-resolution area only in the center, and not worry as much about anything outside of the center. Anyway, this is a problem that doesn't seem to fit well with today's flat panel displays, and the way that we're cramming as many pixels as possible into the flat panel display. So what is the alternative approach that you are pursuing through CREAL?
Well, I fully agree on foveation, the importance of foveation — that's basically where I started — and will only repeat that there are two approaches. One is fixed foveation: there is just a high-resolution picture within some 30 degrees field of view, and less around it. And we in fact have a VR headset which has exactly this: a high-resolution light field in the central 30 degrees, and just a classical flat screen around it. And for most situations it's okay, because we actually spend like 95% of the time within this 30-degree field of view, and each time you want to look more to the side, you turn the head, and then the headset moves with you. But of course, there is huge room to improve it by really reacting to the eye movement. And there is a second problem you mentioned: even if you do it with a big display, you are still sending even the black pixels. So it's an enormous amount of data. But I think Compound Photonics, at least, announced a display which receives only the pixels which are being displayed. So that's one of the approaches which may also improve the foveation a lot. But if you do it with a flat screen, you will still not solve the problem fully. Because, you know, you still need to focus on that display to actually see the resolution of the display. For example, if the display is, I don't know, two meters away and you focus at 50 centimeters, you reduce the resolution to a barely acceptable level — to something like, if people understand the numbers, 20 pixels per degree. Then you go to 20 centimeters, and you get something like three pixels per degree, which is a terrible image, just unusable. So even if you have foveation, if you don't focus at that image, you don't have the resolution, and then the foveation becomes useless. So to satisfy the fovea, the only good part of the eye, we need to also satisfy the lens in the eye. Why would we have a lens in the eye if it was not useful for something, right? The lens is really responsible only for the quality on the fovea, and flat images simply don't serve it. So foveation: fine, that's a big improvement, but as soon as you ignore the focus, it's basically useless. And we solve that second problem, the focal problem: when the eye is actually trying to really focus on something in your hands, we actually display the image in a way that it is kind of shining from that distance. And this is what all the light field displays or holographic displays do; the difference is basically in the practicality and efficiency of the approach. So I dare say that we provide a high-resolution light field image — the sensation is really good — at practically no extra cost compared to rendering a flat image. And by the way, this is something which many people in the industry consider impossible: that it is simply impossible to provide a high-quality image without more and more data. But that's not true. The bottleneck in the whole visualization pipeline is, in fact, the eye itself.
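(A crude geometric-optics sketch of why panel resolution collapses when you focus away from the panel. The pupil size and the one-over-blur criterion are assumptions, so the exact pixels-per-degree differ from Tomas's figures; the steep falloff is the point.)

```python
# Crude model: when the eye focuses away from a fixed panel, the defocus
# blur disc, not the panel, caps effective resolution.
import math

PUPIL_M = 0.003  # assumed pupil diameter, meters

def effective_ppd(native_ppd: float, panel_m: float, focus_m: float) -> float:
    defocus_d = abs(1 / focus_m - 1 / panel_m)     # focus error, diopters
    blur_deg = math.degrees(PUPIL_M * defocus_d)   # blur disc angular size
    return native_ppd if blur_deg == 0 else min(native_ppd, 1.0 / blur_deg)

for focus_m in (2.0, 0.5, 0.2):
    print(f"focus at {focus_m} m: ~{effective_ppd(60, 2.0, focus_m):.0f} ppd")
# Sharp only at the panel's own 2 m plane; near focus collapses into blur.
```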
So then the advantage of having a deep appreciation for exactly how the eye perceives information allows you to come up with a more elegant compression solution, if we might call it that — to be able to compute efficiently the amount of data that you need in order to present to the eye, and still have the eye perceive it as high quality.
Yes. And you mentioned that classical displays can be driven to do the job more efficiently as well. But in our display — or better to say, in our system — we have practically every ray of the light under control. So where we want more rays and more pixels, we have them; for the parts where we don't need them, we don't have them.
You're suggesting, you're implying, that you're not using a flat panel display — you're using some other way of generating and controlling the light. Can you describe the specifics, the details there?
Sure, this is not a secret. But maybe I will start from what the eye sees, and then say how the device is actually making it. So we are actually delivering the image in a sequence, through slightly different viewpoints around your pupil. The first thing to remember is that the pupil has a certain size, a few millimeters, right? And we see slightly different things from the left part of the pupil and the right part of the pupil, or up or down, and continuously. So what we do artificially is provide a slightly different image through, let's say, a very small area on the left side of the pupil, projected into the eye, and a different image through the right side of the eye. And we have there, let's say, 10 to 20 viewpoints which enter your eye in a very fast sequence — so you don't really see that there is any sequence — and they paint the image on your retina. And this is enough to already feel and sense the focal depth. And how the projector is doing it: it is basically a light-source-and-modulator type of system. The light source is just flashing light beams on the modulator, and the modulator is something like a mirror on which we paint pictures. So we always flash the light on the modulator, and the modulator — the mirror — flashes a kind of shadow of whatever was on it to the eye, through a specific viewpoint, providing a specific perspective of what you would like to see in front of you. So in some way, it's similar to making a small hole in a foil or paper and trying to look through it. First you will notice that, looking through a small hole, you see everything kind of always sharp; it doesn't really react to the eye focus. That's the pinhole camera effect. But if you make, let's say, two holes, you will actually see that the image is kind of doubling: if you focus on your hand, it will appear once, but something farther will be there two times. And then if you make many holes, like an array of holes in that foil, and look through it, you will see more or less normally — it will be a bit affected by the edges of that foil and so on, but you will see more or less normally. And we basically do this without the foil. So we provide the images which would normally pass through those holes to your eye, and create the image in front of you.
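(A structural sketch of the time-sequential approach as described — the class and the render/flash calls are hypothetical placeholders, not CREAL's API.)

```python
# Structural sketch of time-sequential light field projection: per-viewpoint
# images flashed on a fast modulator, each routed through a different
# sub-aperture of the pupil. Names and method calls are hypothetical.
from dataclasses import dataclass

@dataclass
class Viewpoint:
    offset_mm: tuple  # (x, y) position within the eyebox / pupil area

VIEWPOINTS = [Viewpoint((x, y)) for x in (-2, -1, 0, 1, 2) for y in (-1, 0, 1)]

def show_frame(scene, modulator, light_source):
    """Paint one perceived frame as a rapid sequence of perspective views."""
    for vp in VIEWPOINTS:                        # e.g. 15 views per frame
        image = scene.render_from(vp.offset_mm)  # hypothetical renderer call
        modulator.load(image)                    # put this view on the SLM
        light_source.flash(vp.offset_mm)         # aim it through one
                                                 # sub-aperture of the pupil
# Run the sequence fast enough (15 views x 60 Hz -> ~1 kHz modulation) and
# the eye integrates it into a single image with true focal depth.
```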
So what are the implications of this approach? Since you're targeting, you've noted, 10 to 20 different views — perceived as simultaneous even though the mechanism is sequential — through the pupil: is the implication also that you are moving the direction of that set of beams as the eye moves? Are you incorporating eye tracking into this mechanism?
Not at this point. So we have an array which is bigger than the eye pupil, to cover even the eye movement. But we have already installed some eye trackers and are working on the implementation of this type of — pupil foveation, I would maybe call it — so that we shine only the frames which actually enter the eye. And when I said 10 to 20, these are actually the numbers of viewpoints which enter the eye. We have more than that, just to have a certain — it's not redundancy, it's more of a "what if."
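(Continuing the sketch above: with eye tracking, "pupil foveation" reduces to a simple filter over the viewpoint array — simplified, illustrative geometry.)

```python
# "Pupil foveation" as a filter over the viewpoint array: the array spans
# the whole eyebox, but with eye tracking only viewpoints landing inside
# the current pupil need to be drawn. Simplified, illustrative geometry.
def views_to_draw(all_views, pupil_center_mm, pupil_radius_mm=2.0):
    cx, cy = pupil_center_mm
    return [v for v in all_views
            if (v.offset_mm[0] - cx) ** 2 + (v.offset_mm[1] - cy) ** 2
               <= pupil_radius_mm ** 2]
# Of the 30+ viewpoints covering the eyebox, only the 10-20 inside the
# pupil are flashed each frame, saving modulator time and light.
```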
So I had a chance to see CREAL — I think I saw it for the first time maybe at Photonics West in 2020. There was a demo that you had set up there. The last one before the pandemic. Yeah, right before the pandemic. And then I saw you again at AWE, and in between the two you had reduced the size of your overall system to something that can be worn on the head. And I have to say that the visual experience, the 3D experience of your light field display, is excellent. It just feels so comfortable. I just love the depth-of-field effect that comes naturally when projecting light in this way into the eyes; it's just such a wonderful and delightful visual experience. One of the challenges that I observed is that it's still a bit of a contraption, right? It's still kind of a bulky thing on the head; there's still a tether from it to some compute device. What's the path — you're already clearly on this path; between those two points that I saw, you went from something not even wearable to something you can put on your head — but what is the path, ultimately, to a smaller form factor, to something that we will be wearing day to day?
First of all, I'm really glad to hear that you enjoyed the experience, because that's what it is about. And I was expecting the "but" about the size. So maybe I will also walk through the history between before the pandemic and after. What you saw at SPIE in 2020 — I think February; we had shown it also a month earlier at CES — was a freshly built prototype which started working like a day before we packed it and brought it to CES. And it actually burnt out on the way; we had to fix it backstage, and so on. And it was a tabletop device — actually two: one VR version, one AR version. And during the pandemic, luckily with additional reinforcement by the original guys from Magic Leap, we scaled everything down to kind of a ring around your head, which is still big. And the difference was mostly the scaled-down electronics and how to get it farther. So first of all, I would say there is no essential difference between our hardware and other display hardware which is already way smaller, in terms of which types of components are there. The difference is that we cannot buy the drivers, we cannot buy the modulators, we cannot buy the optics or the combiner for it; we have to make them specifically for us. In the end it is the same system — you have some display driver, some type of display, some light source, and you put it together — but for the other devices you can really buy these already made, or you implement, you know, existing image processing and so on. And this doesn't exist for us, from the software all the way up to the eye. And here is the main challenge of scaling it down: to really design all the drivers, all the chips, by ourselves, the modulator by ourselves, the optics by ourselves, and all the software likewise. And even though this type of progress is not visible, we already have more than a year, maybe a year and a half, of pure work on designing the small version of the glasses. There will be just a huge step change between what we have now and what you will see then. So for example, right now we already have a design of our own modulator. Clearly we have prototypes which do the light field, right? So we must have some components — but we basically abuse some existing components for a very different purpose. Then there is the electronics: we have to make some bypasses, use FPGAs to really format the data and get it there in the right way, and use modulators which are very suboptimal — either too slow, or too big and too expensive and too power hungry. So there is no way to get farther with the existing hardware. So we have already designed our own modulator, which is like seven by seven millimeters and can be smaller — but we started with seven by seven millimeters, running at six kilohertz. The design is practically finished, ready for tape-out, and the first samples should be around in August. And we will not start putting the drivers on silicon before we have this. But we have all the, let's say, schematics — the functions, the guts of the device — already tested on FPGAs, field programmable gate arrays. So we kind of know that it is working, but making it small is pretty costly, beyond the relatively standard process of getting all this processing onto our unique ASIC. And the same is with the software — okay, the software is a different thing; it's not related to the size. And then there is the optics, which is also pretty challenging to do, but nothing totally unique.
And we also cannot use the existing waveguides or existing combiners, because most of them are so-called pupil replicating, which means that each ray kind of arrives at your eye through many different points, and this kills the light field — it basically still creates a flat image. So we have to make our own combiner. But fortunately, the approach which we have seems to be really good in many other ways too. I think you have seen the demo of it at SPIE this year. It's the hologram on the glass. So it's a film with a recorded hologram — it's simply a film which is doing some optics job. And it's beautifully transparent; it really looks like clear glass compared to most of the waveguides. And at the same time, it's very efficient at redirecting the light, so it really gets a lot of light back to your eyes — actually 50% in the prototypes which we have, while the waveguides have way less than 1%, if I'm not wrong. And it's also very cheap. And we can put it on a curved surface. So we can take a lens — basically, if you send us your glasses, if you have any, we can put the film there, record the hologram, and it will work like a combiner. But again, no one else is doing it for us. This is a very similar approach to what Intel was doing for their round glasses and what the Canadian company North was doing for the North Focals. But they had a very small eyebox. That is different on our side: we have a big one. So we really cover a much bigger area around your pupil with it. And why do we do it? Well, because this is the only combiner today which can really deliver the light field image to your eye and doesn't look terribly ugly. The size is really about, first, the electronic drivers and the modulator itself. So we have to make the light source, the drivers and the modulator basically as small as they can be, but then it will be definitely acceptably small and even power hungry.
And sorry, even power hungry or power efficient.
It will be sufficiently power hungry in terms of like very little.
Oh, I see. So not very hungry. Not very hungry. Yeah,
I mean, it will always be more than the concepts which are really optimized for power efficiency, like laser beam steering and maybe micro LEDs, but it will not be much more. And especially if we are better with the efficiency of the optics getting light into the eyes, we may actually buy a lot back.
Okay. One of the things that was certainly true of the demo that I saw in the fall was that it's a tethered device. Do you anticipate that that tether is going to be a required element to be able to deliver the type of content that you need to the glasses efficiently, effectively?
Yeah, well, this is actually a broader question — it's not only about our display — because of course the ultimate consumer glasses should be as light as possible. So we will try to avoid having a big battery or anything unnecessary. And this would be the argument for having it tethered, at least to some necklace, or like a horseshoe on your neck, or a pocket device or something like that, or quickly rechargeable. But you're probably asking about where we compute the content, and whether light field may need more than the other types of content. I think I've already touched on it: we don't actually need more computing effort to generate the light field than a flat image. We have the laptop there as the computer mostly because light field deserves richer content. If you are showing just some arrows and simple text, then of course we don't need a computer for it; but we want to render nice images, so we put it on the laptop at least. And we already managed to really extract the light field from any rendering engine, basically. Currently we use mostly Unity — Unity just talks to our so-called compositor, which creates the light field out of it almost for free. There is an overhead, but it's very small. So we really receive, like, this one viewpoint plus depth map, and create the light field out of it. So this means that we are basically on the same level as other types of displays; on this side we don't need more of anything — or a little bit more, yes, but not much. So it doesn't have to be tethered if we are displaying the same kind of simple content, which probably can be squeezed into an Android device. But because, you know, it doesn't make sense to use light field if you are showing just some hands-free smartwatch type of image in front of you, we will kind of always want to be connected, either by wire or wirelessly, to some more beefy source of content — at least the smartphone.
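(A simplified sketch of the compositor idea — one rendered color image plus its depth map, reprojected into many viewpoints. The naive forward warp below ignores disocclusion handling and is an illustration, not CREAL's implementation.)

```python
# One viewpoint + depth map -> N light field viewpoints via per-pixel
# parallax. The engine renders once; every extra view is a cheap warp.
import numpy as np

def reproject(color: np.ndarray, depth_m: np.ndarray,
              offset_m: tuple, focal_px: float) -> np.ndarray:
    """Shift each pixel by parallax = focal_px * viewpoint_offset / depth."""
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx = np.round(focal_px * offset_m[0] / depth_m).astype(int)
    dy = np.round(focal_px * offset_m[1] / depth_m).astype(int)
    nx = np.clip(xs + dx, 0, w - 1)
    ny = np.clip(ys + dy, 0, h - 1)
    out = np.zeros_like(color)
    out[ny, nx] = color[ys, xs]  # naive forward warp; a real compositor
    return out                   # must also fill disocclusion holes

def compose_light_field(color, depth_m, offsets_m, focal_px=1000.0):
    return [reproject(color, depth_m, o, focal_px) for o in offsets_m]
```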
As you think through all of the wonderful benefits of light field as it applies to the types of content you imagine, what do you project will be the key driving use cases that will motivate people to seek out a light field based solution?
Well, it's definitely truly 3D, rich content, and especially when you want to see something in your personal space, in your own hands, because there is a huge difference. If you take basically any other AR headset today and try to display something in your hand, it's kind of not really there — the eyes keep changing between the hand and the digital content, and, though different eyes behave differently, for my eyes, for example, changing the focus, it really doesn't feel like they are next to each other. If you use a light field and put the content in your hands, it kind of feels that it is physically there. This is a pretty big difference. But you know, people may not recognize it immediately as such a huge benefit until they really spend a longer time in it. But then there are applications where you really need it. And this is more industry or healthcare — especially healthcare, where, you know, the patients and organs or teeth or bones, everything is usually, in reality, rather close to the doctor. And if you want to augment some information on top of it, you really want to see it in focus. You don't want to, you know, drive your vision wrongly, because if you do it every day, the whole day, it may actually be a real problem for your vision, permanently. It is not known today whether there is a long-term negative effect on human vision, but I would bet there is.
The long-term negative effect on vision of doing VR, or just in general of forcing your eyes to look unnaturally at 3D content?
Well, kind of both, right? If the VR is wrong, it is driving your eyes wrongly.
Yeah, I've heard some stories about the military, and how some pilots, or folks in intelligence, have to stare at imagery — and there's benefit to having that imagery presented in 3D, in terms of comprehension and the sorts of decisions that can be made. But the effort to do so, to consume this sort of 3D content, requires a very unnatural way of looking at that content. And I've heard stories that folks will end up with very long-term, if not permanent, effects as a result of staring at this content in a very strange way. So it's interesting that you bring that up.
I should maybe mention one thing where light field is irreplaceable — or where, let's say, the focus must be truly correct — and that's a vision test device, which we are actually making. We are really simulating the lenses, but digitally, and we can display multiple corrections at the same time, which makes, for example, the vision test way more practical, because you just see the difference right away. It's not like, you know, the interrogation by the optometrist asking you what was better, now or before. So you can actually take this vision test alone, without the optometrist. But it also means that we can implement the vision correction into the headset — and we have it implemented. I just remember we had a demo for a team of engineers, and there were a few people with really strong glasses, including like minus 10 diopters. And even when you wear these glasses inside the headset, physically, you actually affect the optical path — I don't know what you actually see with strong glasses; you can try to mimic it. But without the glasses, we can just, you know, pull a slider to minus 10 diopters, and the person was like, wow — the image really pops up within the glasses, digitally, without affecting the optical path. So the eyebox is really where the eyes are, which is not the case when wearing glasses. We can apply even astigmatism and all these additional corrections on top of it. So maybe this will actually be the first selling point: you will click there, or even actually take the test to learn the correction, then we put in the correction, and the headset is made for you.
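(A one-line view of digital prescription correction: shift the presentation distance in diopter space. Sign conventions simplified; illustrative only.)

```python
# Digital prescription correction as a shift in diopter space: present the
# content at the optical distance that compensates the user's spherical
# error. Simplified sign conventions; illustrative only.
def corrected_distance_m(intended_m: float, prescription_d: float) -> float:
    d = 1.0 / intended_m - prescription_d  # adds |Rx| for a myope
    return float('inf') if d <= 0 else 1.0 / d  # inf: needs converging optics

# Content meant to appear at 2 m, viewer with -10 D myopia:
print(f"{corrected_distance_m(2.0, -10.0):.3f} m")  # ~0.095 m optically
```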
I think about this as it relates to VR. So in VR, you can have an amazing VR experience without having to wear glasses, even if you have extreme vision correction needs. Yes. That's one direct implication. In AR, would you still need to correct light from the real world on the way to your eye? Yes, that's still necessary. But all the digital content would also be visible, clearly, to you, regardless of whatever your vision correction needs are.
Yes — of course we cannot correct the real-world light; that's simply not what we are doing. But this can be corrected by the lens in front of you, right, your glasses. And here is the benefit of that combiner which I mentioned earlier: we can really place the film on any existing lens. It's of course nice to protect it and so on, but basically we can use existing lenses, put the film there and record the hologram; it actually includes that correction and puts it into the digital path as well. And even if not, we can still do a correction with the display to match the reality and the digital content. And I think this is a pretty strong proposition of the light field as well: it simply really takes care of your vision way better.
Yeah, that's phenomenal. You mentioned that you now have a product that is being created as a result of all of your research and all of your work — a product that doesn't depend on being super small. You can take it basically as it is today and project these light fields in a way that allows for vision assessment more effectively than it is being done otherwise. Whenever I go to the doctor, I find a surprisingly old-school experience, where they're flipping: okay, is this one better, is this one better? And you spend 15 minutes flipping between all these variations where they're both clear — one of them looks squat, the other one looks taller, I can see them both, I'm not quite sure — you have this sort of strange dance as you get closer and closer to what is appropriate for your particular eyes. But what you're describing is that the doctor doesn't need to be there; this system can shine a variety, a full light field set of information, at me, and be able to assess my vision in that process. Is that fair?
Yeah, exactly. Well, you can look at it like this: we can generate any 3D content, so we can recreate the room of the optometrist, and we can then simulate any transformation — so the lenses themselves. Well, that's the basics. And the second thing, which is impossible to do with classical lenses, is that we can display many corrections at the same time, or even some kind of gradient, and you just say, well, this looks best, or this type of image pops up. And you don't need to tell it to any optometrist, because with digital content you can click on it, or have a very simple interface just to choose or highlight something which is there. You can play a game, in fact — it can be, you know, entertaining content which at the end will give you a prescription. And, well, it may sound like: why are we doing it? It's not really augmented reality or virtual reality, right? We should focus on one target. Well, we had a huge discussion about it. Because the AR which we are mostly presenting — and not even the VR so much — is a moonshot; there is basically no stepping stone on the way. Otherwise, if you do the moonshot, typically it takes like, you know, 300 million, or 150 million, to really get it to a product. And the technology itself is usually pretty expensive before creating volume production and so on. And we could follow this moonshot with the risks, you know, that we will not convince enough people to invest, and so on — not making money on the way, or really failing, which I don't think will happen. But these kinds of secondary markets found us. We were not really searching much for them, but people simply said, come on, this would be really good to have. And, in fact, all our prototypes were basically mature for these applications — the vision care, but also the VR headset. It's a similar problem: it can also be a little bit bigger, since the high-end VR headsets are some really huge and still quite heavy devices, and we were very close to that already. So, just because we already had it, we basically put some packaging around it, modified the parameters a little bit, and started offering it elsewhere — basically created different units in the company which are focused in different directions, just, you know, not to have too big a mess in the company. And it goes really well. And you are right: especially the vision care business is like a textbook example of a market ready for disruption. Not much has happened in the last 100 years; you are still doing the same thing. Even though the devices may be fancier and have some screen, you know, some control display next to them, it's still the same. It still requires the guesswork of the optometrist and your answers. And we can do it better, and digitally. So it's something like bringing the digital camera when there is only Kodak, right?
It's a great analogy. And so you'd kind of hinted at one of the big struggles that every startup has, which is this challenge of focus. How do you stay focused as you continue to pursue your grand vision? How do you describe your grand vision for CREAL?
Well, the grand vision is, of course, that AR is the main communication platform of the future. And it's not our vision; I think it's a vision of the whole industry, or at least of people who accept it as inevitable. I may get deeper into why I think it's inevitable — and I think we may agree very much on that, because I read the description on The AR Show about you, and we have a very similar understanding — that, you know, the world is three-dimensional, we are used to seeing objects around us, and this is how the digital information should be presented to our brain as well; the brain was built for that. And how we do it today is, you know, taking a phone from a pocket, looking at it, sometimes typing something, Googling something, and then receiving it from that screen in a hand. So this is not the final station, right? It's the same as thinking that the telegraph or telephone was the final communication device. There will be something better, and augmented reality is exactly that case. So we have lightweight glasses — which, in fact, in some places in the world most people have anyway — and they do all this job on top: simply, first, understand the world around you, probably even better than you yourself, and help to provide the information which we want, which we need, and basically upskill us. So the glasses are ultimately powered by artificial intelligence, and so on — maybe the bigger part of our brain, in the same way that Wikipedia is already a bigger part of our memory than our brain itself, right? You just search for the information. So this is the grand vision. I'm really sure, very much convinced, that this will happen, and not that far in the future. I think we touched a little bit on the reasoning why: because the eyes actually don't need so much information, it's not so difficult to deliver it to the eyes. It's just a very new type of device, which still needs a lot of work and exploration to make it right. But we don't need to discover new physics or anything like that. We have it; let's put it together. And while this is happening today, our grand vision is of course to be part of that, from one or the other direction. So, as I mentioned, light field is of course good for more rich content; that's probably not the first smart glasses for everyday use. The first smart glasses will be more like the hands-free smartwatch type of device, which will pop up some notification, maybe understand a little bit what is in front of us, maybe read some QR codes and provide some information about it. But there will be, or already is, a different path with more rich content, like Microsoft HoloLens, Magic Leap, and now many others. And I see it like the difference between big headphones and earbuds in audio: the big ones are bigger, but higher quality, and people still buy them, right? It has its own use. So I think we will go in that direction: we will provide richer content in slightly bulkier and more costly hardware for a specific purpose — first industry, then healthcare, where we already have a number of projects — before it really gets to all-day wearable glasses. Yeah, so that's the grand vision. And the realistic vision of how we get there is, of course, that we are a technology company. So we are trying to do what startups are best at: really doing the early exploration, R&D, product development, but not really volumes and marketing around it.
So we expect to sell the IP and know-how and work with others on the final product. And at the same time, because we focus on a few different markets, maybe split the company on the way: something like CREAL Medical, making vision care devices; then the more high-end AR in the middle, where we have running projects; and then the moonshot, you know, the small projects that ultimately deliver maybe two more types of devices made by others.
So in this plan on how to get there, you anticipate that ultimately you'll be working with another partner in order to manufacture at scale, based on all of your IP and your insights into creating the set of components and everything else, the software, the algorithms, everything necessary? Yes, yes.
Currently, I mean, you know, we had to make some business plan presented to investors and so on, and we are not doing the end product delivered to consumers. We are the know-how company, ultimately an equivalent of something like ARM, Qualcomm, or Dolby, you know, providing the know-how in the background. Of course, we're hoping to maybe switch and start working on our own end product ourselves, but for this we don't have the budget.
So you kind of hinted at this, but there's this challenge that entrepreneurs face, especially folks like you who have this deep conviction about where the market is going, an appreciation for one of the key missing components needed to deliver truly comfortable, high-quality 3D experiences, and a path to get there. But you're asking investors to go on a very long term ride with you. How do you manage these investors' expectations along that ride?
Yeah, well, first is to have the right investors, right. So we have to find the match, which means that we have investors who understand we are running towards a very high potential market, but with very high risk as well. And how we show that we are on the path is by delivering on the promises along the way. So we always have milestones, initially only technical ones. I can give an example. When we raised the first round, I showed a simulation of the images which we might deliver before the end of the year. It was a mistake to say before the end of the year, because of course the closing of the round was delayed by months and months, and this shortened the time for us. But at the end we managed, and it was really between Christmas and New Year. We still didn't have the image which we had promised to show. It was in fact that kind of creature, the frog, which we often show; you may see it on our YouTube channel and so on. So I basically promised something like that, and just, you know, on December 31st the image was there. I took the photos, sent them over, and so the milestone was delivered, the promise was delivered. And we did this several times, basically every year. We said next year we will have the AR version, and we had it, and they actually saw it. And we delivered even more than what we promised, in fact, because I didn't think it would be real-time rendered content, and it was. So yeah, when you ask how we convince investors that we are on the right way: by promising realistic targets and then really delivering them. But I must say that we are also terribly lucky, because this would not be possible without the people we managed to get on the team. And that was total coincidence, because Intel at the time basically closed a whole project and the whole team was available. And we were fighting with Magic Leap for the people; Magic Leap basically got there first, I think they heard about the availability of the team, but we managed to get, I would say, really the dream team out of there. And later we got people even from the Magic Leap group itself; they actually gained even more experience in the next two years with Magic Leap, still our neighbors, and joined us later. So yeah, it was pure luck that we managed to deliver on the promises, also because we had the people who were able to do it.
Yeah, that's definitely one of the big challenges that a lot of startups face, finding the right talent. And especially for a company like yours, where you're really at the cutting edge, finding the appropriate talent that can help accelerate your time to market and deliver on your promises is really a coup for you as a founder. And as a founder, how has this experience been? You went from being a researcher to now being the head of an advanced technology company. How have you evolved as a leader during this process?
I don't think I changed much, but maybe the first thing which is really related to it is that realistic estimation of, you know, what we promise and what we can deliver. I can imagine that if I didn't have that background, really being close to the technology, close to the hardware, the estimations would be wrong. I mean, these kinds of promises of many startups are wrong, and the person who is making them knows it. But fortunately, especially in Switzerland, the atmosphere is a bit different, and it's really important to stay closer to that reality. And we managed. What changed, of course, from the research career to the startup is the relation to the team, because in research it's usually very much peers. You are working with peers; even when I was leading projects, the guys were my peers, and we were working more or less in parallel. While at a startup, I bring people together and need to put the team on the same path. And this required, I think, more true leadership, in the sense that I really try to do more, to be the servant, you know: stay longer, do the weekends, do the evenings, do the nights. At the end, whenever there was a critical moment, and we had one every year when, as I mentioned, we needed to deliver on the promises, it was always extremely tight. And I remember once we had a trip around the world, and we needed to pack everything in the morning and fly first to Japan, then to the US, and around. And the device still didn't work in the late afternoon, so we had to stay until late at night to put everything together. The guys stayed with me, we put it into the box, into the suitcase, and we were at the airport with a really working device. As it happened, the plane was delayed because there was a typhoon in Japan, so we didn't need to rush after all. But
we were ready.
It's really a different animal, being an effective founder and leading the team in the way that you're describing, as a servant leader, as a leader who's as committed as the rest of the team. It's awesome. Yeah,
and maybe one of the challenges is that when we hire good people from bigger corporations, they are used to more transactional relations: there are some standards, and you basically work for a salary. But here we try to be like a sports team, you know, playing together, a totally flat hierarchy. The hierarchy is here only when we need to make a sharp decision, and then it needs to be basically total in that sense. But otherwise we try to be as flat as possible, and so far it seems to really work well. We are like a group of friends, even during the pandemic, when we should have had more home office. Of course, you know, we gave the option of home office all the time: if you can work at home, do it. But the guys were still coming to work anyway, because we play babyfoot, you know, table football, or just have a coffee together.
Yeah. So yeah, this I think was very important.
That's fantastic. Let's wrap up with a few lightning round questions. First up: what commonly held belief about AR or spatial computing do you disagree with?
The question is whether it is still a commonly held belief. It's the idea about the number of pixels and the resolution which the experience needs. I think in popular media it's still about, you know, 8K, 4K, 16K displays are coming, and most people probably still think this is necessary, that the eye simply is so good that you need this many pixels. And this, I think, is objectively wrong. The eye needs way, way less than even the displays which we have now, or at least a comparable amount of pixels distributed better. And this is one of the differences between what's mainstream and what I think will be the reality at the end: the minimum amount of data delivering the maximum experience. So the experience will be better than what we think it can be, and the amount of data will be lower than we think today it can be. And the second commonly held belief, I think, is the timing of full-scale AR. I think there are still many people who think that this is science fiction, that it's very, very far away, when we will have really lightweight glasses which will provide hyper-realistic content in front of us. And here I think it will be significantly sooner, that it is really feasible within this decade to have glasses which are doing all the job we need. And the reasoning comes from the fact that the eyes actually don't need so much; it's really about the efficiency of how the data is delivered to the eye, and also how important it is. I still know many people who didn't have a smartphone, supposedly saying, I will never have a smartphone, but the utility of the smartphone simply beat all or many of these people, because, you know, it's just your external memory at least. And the AR glasses, I really see them becoming part of us. The glasses at the same time will be creating something like a gigantic digital beast, which is the common knowledge, the common shared information. So this will understand what we see, what we want, what we like, and will be constantly helping us, not only in the professional world but also in the private world. And probably we cannot even imagine what it will be doing. You know, if I just walked out and thought about what the glasses would be doing for me now: maybe I don't want anything, so they would do nothing for me. A small thing, they can remind me that I should go somewhere. But they could also tell me that there is something available which I like, because they know me, and then present it in front of me. Or we could have the communication like we have right now over Zoom; okay, people don't see us, but we could see each other much more realistically in true 3D, really standing next to each other. And this all is somehow feasible much sooner than people normally say. But of course, this is just the bet.
I like it, that vision of the future. Going back to your first point, this notion that measuring the quality of the visual experience, especially within AR 3D, based on the number of pixels on the panel, or the number of pixels in the display system, is ineffectual, that it isn't the right sort of metric. What would you replace it with? How would you help people judge the quality of the experience on paper before they make a purchase decision? Or is it just going to be the case that you have to see it to really understand it?
Yeah, I don't have an answer to the first part of the question. So I would say yes, seeing it will probably be the decisive point. You will put it on your eyes and just see how it really looks. But otherwise, the ideal is that you distribute the image information, in terms of pixel resolution and colors, on the retina only according to how the retina actually sees it. So where it doesn't see colors, you don't need many colors; where it doesn't see the resolution, you don't need the resolution. And this is probably very little, again. I would suggest people really look at some text on a display, for example, and notice that you read only one word, and everything else is basically a waste of pixels; it could be very low resolution. The problem is that the display doesn't know where we are looking, of course, right? So all the redundant pixels are there just in case we were looking at them. And I guess this can be presented in numbers: you would just know how high the resolution is in this part of the field of view, how good the eye tracking is, and so on. But it may also be a bit too complicated to describe it that way. So it will be better just to see it.
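To put rough numbers on that intuition, here is a minimal back-of-the-envelope sketch in Python. It assumes an illustrative acuity model (a peak density of about 60 pixels per degree over the central couple of degrees, falling roughly inversely with eccentricity) and a circular 60 degree field; these are textbook-style approximations for illustration, not CREAL's figures.

import math

FOV_DEG = 60    # assumed circular field of view, in degrees
PEAK_PPD = 60   # assumed peak acuity at the fovea, pixels per degree

def acuity_ppd(ecc_deg):
    # Crude falloff model: full density in the central ~2 degrees,
    # then roughly inversely proportional to eccentricity.
    return PEAK_PPD if ecc_deg <= 2.0 else PEAK_PPD * 2.0 / ecc_deg

# Uniform display: peak pixel density across the whole circular field.
uniform_px = math.pi * (FOV_DEG / 2) ** 2 * PEAK_PPD ** 2

# Gaze-matched display: integrate the needed density over
# 1-degree-wide rings (annuli) around the fixation point.
foveated_px = sum(
    2 * math.pi * (r + 0.5) * acuity_ppd(r + 0.5) ** 2
    for r in range(FOV_DEG // 2)
)

print(f"uniform:  {uniform_px / 1e6:.1f} Mpx")   # ~10.2 Mpx
print(f"foveated: {foveated_px / 1e6:.2f} Mpx")  # ~0.29 Mpx

With these assumed numbers the gaze-matched pixel budget comes out roughly thirty-five times smaller than the uniform one, which is the spirit of the "way less than a smartphone screen" point; the real gain depends on eye-tracking quality and on the actual acuity falloff.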
What is the field of view that you anticipate being able to create with your wearable system?
We are targeting now around 60 degrees field of view for AR, but not fully light field, and we could do a bit more. The point is that we don't really have a physical limit on the field of view in the combiner. The waveguides, for example, typically have a physical limit, because the light simply cannot bounce inside at angles bigger than some maximum. But we don't have that, right? We just shine on the hologram, and the hologram can cover the full lens. Then we are limited basically by the pixel resolution. Of course, the more pixels we spread, the more pixels per degree we have. So we are targeting around 60 pixels per degree, but we could do more. And we are working on basically a 30 degree field of view with light field in the center, so something like Varjo, but for AR. We have it in the VR headset and it feels reasonably good.
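For context on the waveguide ceiling being contrasted here, this is a hedged sketch of the standard textbook bound: in a diffractive waveguide, guided rays must travel between the total-internal-reflection critical angle and a practical grazing limit, and the grating equation maps that angular band back into air. The refractive index and grazing angle below are assumed example values, not figures for any specific product.

import math

n = 1.8                         # assumed refractive index of the guide glass
theta_c = math.asin(1.0 / n)    # total-internal-reflection critical angle
theta_max = math.radians(75.0)  # assumed practical upper bounce angle

# Grating-equation bound for a field of view symmetric about the normal:
#   2 * sin(half_fov_in_air) = n * (sin(theta_max) - sin(theta_c))
band = n * (math.sin(theta_max) - math.sin(theta_c))
half_fov = math.asin(band / 2.0)

print(f"max supported FOV ~ {math.degrees(2 * half_fov):.0f} degrees")  # ~43

Under these assumptions the bounce-angle band caps the waveguide at roughly 43 degrees, which is why going much wider pushes designers toward higher-index glass, or toward free-space holographic combiners of the kind described above, where no such bounce-angle ceiling applies.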
Excellent. Besides the one you're building, what tool or service do you wish existed in the AR market?
Well, I would really like to already have the first hands-free, smartwatch-type AR glasses for productivity. This was my main motivation while I was waiting for the first Android phones to appear, because I really wanted to have the calendar, the email, the notifications, and a notebook in my pocket. And now it exists a little bit with the smartwatch, but you still kind of need to look at it. So I would be happy if, for example, the North Focals were really around and working fine. I have tried them, but they were not my size, and I never bought any. But I can imagine I would be one of those who would like to have them. I was waiting for the Google Glass when it was coming, but in the end it didn't reach us in Switzerland. Or maybe I would not be wearing the Google Glass for other reasons, right? Just a bit awkward. But the North Focals, yes, and they are not really around; just to have the pop-up notifications.
Yeah, we'll see what Google does with North now. It's been, what, almost two years since the acquisition. And maybe we'll see something soon from that combined North and Google team that is an evolution of Google Glass. I look forward to that too.
I may try to speculate about it, right, because they switched from the free-space combiner with the hologram, as I mentioned, to a waveguide, still with the same projection engine, the laser beam steering. And now Google also bought Raxium, the micro-LED company, so I can imagine this coming together as well. But obviously, Google's business is data, right, information. So anything which just gets the information to the eye and understands a little bit of what's outside will do. I'm a Google guy, you know, all my stuff is from Google, so I will definitely go for it.
Yeah, yeah. The combination of those technologies is quite interesting. The implication is also that they were ultimately not satisfied with what North had created and had planned for v2 of the North Focals; otherwise we would see that in the commercial product, right? They went with a different sort of combiner optic solution. And as you noted, they are reported to have bought Raxium, which is a micro-LED technology company that actually has a lot of patents behind light field displays, kind of dragonfly light field displays. And we'll see how all that technology comes together. I want to see a product that brings these technologies together; it'd be amazing. Me too. What book have you read recently that you found to be deeply insightful or profound?
Yeah, okay. During the pandemic I actually started reading maybe five times more than before, so I should mention some books. The one in the last two years I've definitely liked the most is a classic: Guns, Germs, and Steel by Jared Diamond. But this is really well known. Maybe among the less known books, I really enjoyed the biography of George Westinghouse. That's super interesting. The biography doesn't have the best rating if you find the book, but that's mostly because of the editing. The story of the guy: he was very innovative, you know, how he approached the people, how he managed to get the better ones. He got Tesla from Edison, right? And he was actually responsible for a massive revolution in electrification. Amazing guy. And there was one more interesting book, which I think is not very well known, and most of the book was not so interesting, but it was The Righteous Mind by Jonathan Haidt. There is one later chapter about why people became different from, let's say, the other apes. And it's that kind of bee in us, you know, that we became more social animals than, for example, chimpanzees. They will never collaborate in our way: one starts shaking a branch and brings down the fruits, and the others will eat them and won't share with the individual which did the job. Humans started doing it together, right, sharing tasks, distributing tasks. And there was a very interesting insight into what creates the bee behavior in us. A lot of it is synchronized movement, described through the mirror brain: for example, when I do some movement with my hands, the same part of the brain in your head will be triggered. And this actually explains the dancing, the marching, the singing; this is all behavior where we become more and more like bees. And you can see it a lot in the Asian cultures. In China, you often can see, every evening, people doing synchronized dances on the street, right. And this, in my opinion and according to the book, really explains that the cultures in Asia are way more communal, way more like bees, than in the West; we are maybe dancing too little, marching too little here. But that's where people usually focus on the wrong things, in the end,
so it's super interesting. Last one for you here: if you could sit down and have coffee with your 25-year-old self, what advice would you share with 25-year-old Tomas?
I would definitely recommend myself to do what I did anyway. You know, first, I know where it ended, and it worked. Basically, it means: learn, learn, learn, as long as you feel you still need to learn. So don't choose what to do because it's better paid or something like that, but because you learn more there. And this is exactly what I was doing. Even though it was maybe not obviously the best path, I knew I would learn more there. Working at CERN was definitely one of those things, and here at the university as well. But one thing I would recommend myself to do differently, and this is to look at the business world more positively. You know, I was the geeky researcher guy, into technical projects and devices, and business was some greedy fight for money. But in reality, business is actually a very practical way to deliver useful things, and ideally totally new things: not to try just to, you know, enter someone else's market and sell them the same thing, but to do an entirely new thing which will really make at least something better. And here, communication technology is one of the tools which we as people have that makes us extremely more efficient, and, you know, makes life really better in the end, because we can organize better; we can distribute food, tools, energy, tasks way better than before. And that's probably also the reason why communication technology became such total science fiction, right, why these things became so fascinating. If you look at your smartphone, it's full of so many amazing things, and that's because it's so needed, right? It wouldn't have gotten to this level if it was not so useful in the end. And I see that augmented reality will be exactly that as well, more than the smartphone and internet together. It will really understand the world, it will understand us better, it will merge the digital world with reality, and in the end it will really improve the life of most of the people on the planet, in my opinion. So that would be the recommendation: to look at the business in this way.
Yeah, very nice. As you say this, I'm imagining what it is in sci-fi books and sci-fi movies that we project forward the most. And three things come to mind as I'm reflecting on this right now. One is communication technology; we always have this kind of future imagination of what communication technology is and how it will evolve. One is transportation technology, the ability to move from one point to another. And the third, to a lesser extent, is the way that society interacts, how humans interact with each other. Is it a post-apocalyptic sort of world in which we have one particular organization? Or is it some interaction with some other alien race, and the different sorts of organizations of the beings and the different races, and how they compare? It seems to be those three things: this notion of how we communicate, how we transport, and then how we relate. Anyway, now that it has come to mind, I'm going to spend more time thinking about this.
I definitely agree that a lot of this is evolving. And often, especially in communication technology, I think we got farther than the science fiction writers, right? Not in transportation; we should have been on Mars already, you know, generations ago, and we are not, or at least on the Moon. But in communication technology, the science fiction books and movies were very often behind what we have today, right? Like when they show some smart device which was gigantic, in old films and so on. It got faster, it got farther, right. Yeah,
You're absolutely right. And I
would say maybe even the social relations. Okay, maybe today is really not exactly the right moment to say it. But if you look at the sci-fi books and so on, the future civilizations are still very much in conflict, like in the 19th century. I say today is not the right time because we have Russia smashing Ukraine, absolutely without any sense. But until now the world was really peaceful, right? Not like in those books. And I really think this will evolve so that conflict will not make any sense. It doesn't make sense anymore; it doesn't make a winner, only losers. And it will be more and more like that. So the world will really become less violent and more peaceful, which is what was actually happening for a long time, almost all the time; you just don't feel it. But communication technology, this is probably where we are getting farther than the science fiction writers, in my opinion. And I actually liked one of the descriptions of what was driving science fiction and fantasy as well. It starts, you know, with steampunk, because you had the steam engines and everything was mechanical; then there was atompunk, so everything was nuclear. And I don't see any solarpunk, I must say. It feels like, you know, we will forever stay in the atompunk era, because there is no, or at least we don't know about, any more fundamental source of energy than the nuclear.
That's fair, that's true. Maybe there'll be some sort of photonic punk at some point, as we become able to better master light and evolve towards this more immersive, augmented-reality-enhanced vision of the future. We'll see. Any closing thoughts you'd like to share?
I don't really have anything in mind. Okay, thanks, thanks a lot. I really appreciate that you do this service to the community, because I think AR still needs a lot of evangelism, or evangelization, in the public, to get support, right, to funnel the investment into these efforts and ultimately really find the right solution.
Yeah, I appreciate that. Thank you. Where can people go to learn more about you and your work at CREAL?
We will have a new web very soon, so maybe the podcast will already be released when we have the new web launched. Otherwise, YouTube, and not only our channel; other guys have made videos about us. And of course, you can always visit us in Lausanne, or we can meet at exhibitions. We plan to go to Display Week, which is in May,
for example. And will you be at AWE at the end of May?
No, we had to decide, or didn't have to, but kind of needed to decide, between one or the other. And because we are a bit more of a technology company, more on the hardware side, we go to Display Week; we cannot do AWE.
Well, I look forward to seeing you at the next conference that we're both at. I don't think I'm going to make Display Week this time. Tomas, thank you so much for this conversation. I really appreciate it.
Thanks a lot.
Before you go, I want to tell you about the next episode. In it, I speak with Nils Pihl, CEO and co-founder of Auki Labs, a company solving the limitations of GPS to accurately position virtual content, particularly for shared experiences. Nils and the team at Auki are building advanced peer-to-peer positioning protocols and AR cloud infrastructure to enable a new era of spatial computing. Prior to founding Auki Labs, Nils was a behavioral engineer who studied meme theory. Between the exploration of meme theory and the deep dive into a unique positioning technology, I think you'll really enjoy the conversation. Please follow or subscribe to the podcast so you don't miss this or other great episodes. Until next time.