My definition of AR, or augmented reality, is broader than some. For me, AR refers to the ability to directly observe the real world while incorporating digital content relevant to that moment in space or time. This content could be basic 2D content, or more immersive 3D experiences tied to the real world, which some call mixed reality.

As you know, we can experience AR through a handheld device such as a smartphone that captures the real world with its camera and then incorporates digital elements for display on screen. We call this video pass-through. The same concept applies to a head-worn virtual reality rig that has cameras to perform the same type of data mix. But the real goal is to create a fashionable head-worn device that allows us to directly view the real world while also projecting digital content into our field of view, rather than video pass-through. We call this see-through. Video pass-through is an easier problem to solve, but see-through optics allow us to be and feel more natural in our interaction with the real world and with other humans.

But why do we need either one? Why does Tim Cook of Apple think AR is as big of an opportunity as smartphones? Why are Facebook, Microsoft, Google, Snap, and a bunch of startups spending many billions of dollars a year trying to create a head-worn device and the complementary software and experiences? The cynical view is they have to do something new because smartphone sales have plateaued. But I think there's something more fundamental about AR glasses. If you're listening to this, you probably already buy into the potential of AR, but I'm going to lay out the argument again in case it helps clarify something for you or triggers a new idea.

We are fundamentally visual creatures who think in three dimensions and live in a physical world. Right now, we are either engaging with that physical world, or our eyes are glued to a 2D screen.
But there are times when our engagement with the real world, including the humans within it, would be made more efficient, more effective, more safe, more human, if we could receive the right information relevant to that moment without disengaging from the real world. And there are times when there's even more value if that information can be directly tied or locked to the real world.

With the computers and smartphones we use today, the process of gathering and applying the relevant info in these moments is haphazard and inefficient, and often ineffective or unsafe. This is because we have to physically switch between the real world and the device. We often switch back and forth many times as we slowly comprehend and make progress. Before we can act on the information, we have to understand it. And before we can understand it, we have to find it. A system that has enough context to help us quickly find the right information, a system that delivers the information step by step and ties that info to the real world, a system that allows us to keep our hands and our eyes available for engagement, is a system that could help in a number of situations.

Let's take a look at a couple of examples. Let's say we're trying to assemble something or go through a set of steps in a process. This could be to assemble a subcomponent on an assembly line in a manufacturing plant, to cook a new recipe in the kitchen, to complete a new exercise routine, to repair a lawnmower or maintain an aircraft engine, to navigate to an unfamiliar location, or to find the right item in a warehouse. If you're already an expert, maybe the availability of hands-free, heads-up contextual information is less helpful. But we don't start out as experts, and many of the things we do are done infrequently enough, or are different enough, that we benefit from some guidance. Although even for the experts, like airline pilots, a checklist ensures higher safety and quality.
And that guidance is substantially more helpful if presented in small increments, at the right moment, with our hands free to take action.

The same goes for more complex situations when we need to diagnose and resolve a problem. These are cases where an expert needs to be brought in to assess the problem and guide or help execute a solution. This could be to fix an oil rig, to solve a problem in a manufacturing line, or to save a loved one's life after a tragic accident. I found myself in a situation once where I was attempting to perform CPR and receive instruction on other life-saving procedures while on the phone with a 911 dispatcher. It was challenging to juggle everything in that moment. Getting timely access to the right information, while simultaneously being able to act on the information, can make a big difference.

Another batch of situations where mixing the real world with digital info is beneficial is when we want to see something that doesn't currently exist or is hidden from view. This could be to plan and execute a home remodel project with an understanding of the electrical, plumbing, and framing that sits within the wall, or during brain surgery to precisely identify the location, angle, and depth to drill a hole. It could also be to see an individual product, or a bigger development project, in the right location in the real world before we buy it or before we build it.

There are many other examples of the value of visualizing data around art or social expression or entertainment. But generally speaking, these concepts around guided instruction, remote collaboration, and data visualization are the areas that are most ripe for augmented reality. More pragmatically, the value that AR brings in these situations comes primarily from a reduction in the time from intention to action, meaning smart glasses will improve the time between our desire to know something or do something and when that action is complete, and completed correctly.
They will do this by more than an order of magnitude, more than 10 times better than today's computers and smartphones. Even beyond these core use cases, I can imagine the glasses becoming a window into the information or value provided by all of the computing devices around us: not just our phones, but also cars, set-top boxes, and IoT devices. I also see an opportunity to rethink how personal computing fits into our lives, making it more human-centric, conforming better to how we move and engage with the world and relate to each other.

As we've heard on this podcast, there's a lot that goes into making AR glasses a reality. On the hardware side, we need a good enough visual experience within a comfortable and socially acceptable form factor. This combination has proven to be exceedingly difficult. I've written a fair amount about this at AR Insider, and I'll include the links in the show notes. The device also needs to understand enough context to serve up relevant info at the right moment and the right spot within our field of view. It will do this with some local compute, which could be our smartphone or some integrated processing. And it will increasingly rely on computing in the cloud, especially resources at the edge that are accessible with high bandwidth and low latency. Some experiences are dependent upon a precise understanding of where we are in the world and what's around us, so we'll need a new kind of map of the world, including a map of our private spaces.

Privacy concerns abound, and there are about 100 other big and small challenges to solve before these devices become mainstream. But I still believe the biggest hurdle is the challenge of getting the display and optics for AR glasses to be good enough to deliver decent visual quality and both physical and social comfort. There's no Moore's Law when it comes to displays or optics. They progress in fits and starts over relatively long iteration cycles.
And sometimes it feels like we're moving backwards more than we're moving forward. For example, while Microsoft announced and began to produce HoloLens 2 in 2019, they didn't really distribute it in any quantity until 2020. That's when we got an objective look, and the results were disappointing. While there were some significant improvements in the ergonomics and capabilities of the device, the visual quality took a notable step backward. The engineering accomplishment of getting their laser beam scanning display to work with their fancy diffractive waveguides was really impressive, and the field of view did improve, but just about everything else about the visual quality got worse. It was a commendable and worthy experiment, though, and I can't wait to see what they do with the HoloLens 3. If we extrapolate from the release dates of HoloLens 1 and 2, we might expect v3 in late 2022 or early 2023. Until then, I think we'll see continued focus on expanding the Microsoft Azure cloud and a lot of learnings around ergonomics and user experience.

Do you remember that big $500 million US Army contract that Microsoft won, beating out Magic Leap? After seeing some of the work done for the contract, I don't think Microsoft's solution is suitable for soldiers in the field. The current generation of HoloLens technology, even when modified for the military, is unwieldy at best, and a dangerous distraction and safety risk at worst. I don't know what that means for the military contract itself. But I don't think HoloLens 2 is the answer for soldiers in the field. I could see the device finding a home for use in simulation training, though.

Speaking of Magic Leap, their 2020 went pretty much as expected: they were not able to complete an acquisition or raise more money while staying on the trajectory they were on. Instead, they shed a lot of jobs, whole teams of employees really. Rony Abovitz stepped down as CEO, and they hired Peggy Johnson, a former Microsoft executive.
Now they are firmly focused on the enterprise. With this new focus, they announced a new $350 million round of financing. I foresee another interesting year for Magic Leap, one that could surprise with a new product announcement, or an announcement that they are getting acquired.

During last year's annual update, I read Magic Leap's woes, and the mood of the investment community, as a sign that we were approaching the bottom of the trough in the hype cycle as it relates to AR glasses. I think we hit bottom in the first half of 2020, with the struggles of Magic Leap, the abandonment of the audio-only Bose glasses, and the disappointing acquisition of North by Google, a sign that North's strategy or technology was not working well enough. But heading into 2021, the hype is building again, fueled by the hope that Facebook and Apple will release amazing products in the not-too-distant future. But it's hard to build a venture-backed software company dependent upon hardware that hasn't yet been released. Just because COVID happened, and now everybody is aware of and excited by the potential for VR and AR, doesn't mean the bleeding edge of displays and optics moves any faster. I don't believe the plans or realities of any of the major tech players were significantly altered by COVID. If anything, it temporarily slowed down some research or development.

While progress remains slow at the bleeding edge, the more mature technologies around AR had an amazing 2020, fueled by the changes in circumstances and mindsets as a result of COVID. For many businesses, a persistent resistance to change was replaced by a desire and need to change. Some of the software and hardware companies that were struggling to get some traction in late 2019 saw an explosion in adoption during 2020. Recent guests on the podcast, Justin Barad of Osso VR and Angelo Stracquatanio of Apprentice.io.
Both spoke to this: the frictions or inadequacies of current hardware were overcome by the real impact that the software solutions can have. Other past guests that had a good 2020 include a team that leverages the Microsoft HoloLens to enable teams to rapidly co-create in AR for use in product development or training for things like manufacturing. Spatial, the collaboration tool, built out cross-platform extensions for web, mobile, and VR to complement its AR collaboration experience. 8th Wall saw a lot of adoption of its WebAR technologies. And RealWear has seen continued growth in the adoption of its head-mounted display for enterprise; it has become a platform of choice for many software vendors. In the next year, we'll see smaller increments of innovation, with a focus on better integration into existing workflows and infrastructure. The result should be a significant acceleration of revenues.

One of the companies that continues to make strides is Lenovo. I started paying more attention to them about a year and a half ago, when they started hiring several very experienced engineers, product managers, and business development executives from ODG, Epson, and elsewhere. Lenovo has the perfect playground within which to make a lot of progress. And they are: they have a bunch of enterprise customers who already buy their hardware and software offerings for data centers and corporate computers. Now they can deliver AR-focused hardware and software solutions to those existing customers in a way that meshes perfectly with support, maintenance, deployment, and other activities they're already doing with Lenovo hardware. For example, when a rack of computers in a data center experiences a problem, a Lenovo customer can put on a Lenovo AR headset and consult a remote expert, using Lenovo or some authorized third-party software, to resolve the problem.
I got a chance to chat with Mike Leach and Nathan Pettijohn of Lenovo about their efforts in the episode coming out later this month. When preparing for that interview, I was struck by a slide that I think Bridgestone, the tire company, had put together. On it, they described a full enterprise lifecycle of VR and AR, and it went something like this. Early in the process, VR is used for design review, where they collaborate on a concept before it's built for the factory. Then AR is used for design review to visualize the product or modification on the factory floor before they build it or install it. Then back to VR for operator training in the classroom before they get to the factory floor. Then back to AR again for operator training using guided instruction with the actual equipment. And then more AR use in the steady flow of operations, as a way to visualize real-time data from various IoT devices, and possibly as guidance in completing work tasks.

One takeaway from that: in 2021, enterprise customers will be more sophisticated on the whole. They'll need less education, and more product integration and differentiation. Another takeaway: it's not one or the other. VR and AR are both important in the enterprise. VR is useful early in the product development process and in classroom training. AR is useful later in the product development process and in in-field training, as well as in real-time operations. I think about it as AR is focused on here, and VR is focused on there.

Which reminds me: putting video cameras on VR devices does not make them AR glasses. It does make them better VR devices, though. They become safer to use in our homes or offices, and they're able to bring a bit of the real world into the virtual experience, which can make them better for some training scenarios, or for some gameplay, or for personal productivity.