And actually, I was thinking about it when you mentioned that — I don't think I did any AR/VR work in my time at SCI-Arc. But I did stuff adjacent to it. It was all very new, because I graduated in 2016, before the consumer Rift and Vive came out: the Oculus DK1 and DK2 were public, but none of the non-developer-kit headsets were out yet. Shortly after I graduated, though, I fell into it. I'll start with my background in architecture — it took me a few years to transition out of it. I studied architecture in undergrad in Chicago, worked in architecture for a couple of years, then moved to LA to do my master's at SCI-Arc. And it was really while I was at SCI-Arc that I realized I didn't want to do architecture anymore. The reason I went to SCI-Arc was that I had seen all the work coming out of the school — it's very experimental, it's exciting, they do really weird stuff for architecture — and I thought that was what I wanted: to get away from just the making of buildings. It was in my first year there that I realized my problem with architecture wasn't architecture, it was me: I didn't want to practice architecture. But I also realized at the same time that the hard and soft skills I had gained through architecture — 3D modeling, rendering, texturing — were things I could use in other industries out here in LA. I was really inspired by The Collective Podcast by Ash Thorp — big fan of Ash Thorp's work. He's now more of a director, but he started as a graphic designer, and at the time he did something very similar to what you do here, Alessio: he would interview other people in the creative industry in LA. And I was sitting in school at SCI-Arc doing experimental things with 3D software — Maya, rendering in V-Ray, animation, getting into things like Processing, p5.js, Unreal, and Unity — and realizing that the people Ash was interviewing, who worked in media and entertainment, in Hollywood, on films and television and video games, all used the same software. That really inspired me to try to break out of architecture and start pivoting my career toward something I was more personally interested in. That was years ago, and since then I've worked on anything from 3D art to what I do now, which is much more focused on building cloud software and systems. So that's a little bit about me. Is it okay if I drop some links in the chat as I go? — Yeah, absolutely.
Okay — in my time at SCI-Arc I was interested in 3D, AR, visual effects, things like that. So I got into animation and rendering, trying to push what architecture is, what a building really means, and that ranged from offline rendering in Maya and V-Ray to real-time work in Unity. Shortly after SCI-Arc I joined an indie game studio called Plethora Project to help them finish their first video game, Block'hood. That's really where, through my experience with 3D art, I got the opportunity to go full time with Unity, and through that I learned to program video games. And I happened to be there at the time the Oculus Rift came out, the Vive came out, ARKit came out — these really pivotal packages and pieces of hardware for XR. While there, we were exploring what to do after we released Block'hood, and began exploring what we could do with VR. We ended up porting Block'hood to Rift and Vive in 2017 and released it as a VR video game back then. It's one of those things — we released it so long ago that there was almost no one on the market yet. So few people had headsets and were playing VR games that it didn't attract much of an audience. But it was a great experience to get that time learning to develop for headsets in Unity back then — which really isn't that long ago, but in XR, six years ago is a long time. After Plethora Project, I went to a company called Kilograph that's known for its architectural visualization. While I was there they were transitioning from archviz to a branding company, and we did a lot of VR and AR projects, developing visualizations for real estate around LA and then the rest of the country. It was a good experience to work with 3D artists at the top of their game — some of the best archviz artists, I think, in LA and in the US. After that, I jumped into more creative work and began working for a creative agency called Buck, based here in LA, and did all sorts of wild VR, AR, and real-time 3D apps. When I started there, we released an app called Slapstick, a unique little AR app that lets you attach animated stickers to the environment around you. We also invented a record-and-playback feature for the app back in 2019, which was fun — a unique feature I hadn't really seen in AR yet, and now I think it's part of ARCore. The idea was that you could place these animated stickers, created by Buck's 2D artists, around your environment in 3D in AR, record a little video, and share it. But with the record-and-playback feature, maybe you're at an event and don't want to spend the time placing stickers while you're there — you just want to record a video. So we created a feature that recorded the AR space every time you recorded a video. Then when you recalled that video — went to your gallery and pulled it up — you had that spatial understanding and could place stickers in the video, in the AR space.
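As a rough illustration of how such a pairing could be structured — the real app's internals aren't public, so every name here is hypothetical — recording a clip would mean saving the video together with the serialized spatial map and per-frame camera poses, so the gallery can restore the AR space later:

```python
from dataclasses import dataclass, field
import json

@dataclass
class ARRecording:
    """Pairs a captured video with the AR session's serialized spatial data
    (e.g. an ARWorldMap-style blob) so stickers can still be anchored when
    the clip is reopened from the gallery. Hypothetical structure."""
    video_path: str
    world_map_path: str                                # serialized spatial understanding
    camera_poses: list = field(default_factory=list)   # (timestamp, 4x4 pose) per frame

    def save_manifest(self, path: str) -> None:
        # The gallery reloads this manifest to restore the AR space
        # alongside the video during playback.
        with open(path, "w") as f:
            json.dump({"video": self.video_path,
                       "world_map": self.world_map_path,
                       "poses": self.camera_poses}, f)
```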
I think the second video on that link explains that feature. I spent a couple of years there working on AR apps. We did a pretty cool AR app with Ally Bank, where they sent out piggy banks to the first 10,000 users; little environments popped up in AR, and everything was interactable. I worked on real-time virtual productions on LED volumes. We built VR tools to help big tech companies understand space when they were planning their conferences and events. And then I jumped ship and went to Amazon — that was about two years ago. At Amazon, I work on the AWS side, Amazon Web Services. If you're not familiar with AWS, it's kind of the tech brains behind Amazon, but we're also the division that builds the cloud: we offer close to 300 services now that you can use for building cloud applications and systems. At AWS, I work on a prototyping team that works with third-party companies that use AWS services. My position is one that explores emerging technologies like AR, VR, and real-time 3D, and helps these companies figure out what they can do with these emerging technologies and how they connect to the cloud. We build everything from 3D asset-management systems and 3D pipelines to VR real-time translation applications — a pretty broad swath of stuff. We've done a lot of work with NVIDIA on Omniverse, if you're familiar, and specifically Omniverse Nucleus. And much, much more. So that's kind of me. I know it was a lot, so if you have any questions, go ahead.
We're going to leave space for everyone if you guys want to come up with anything. But yeah — Kevin just mentioned it to me — I feel like for the future of AR and VR, or really many technologies, this matters: you mentioned Omniverse, and microservices like Amazon's. There are a lot of similar solutions, but of course Amazon is one of the biggest. You can just grab a small microservice and drop it into your application. In the past, as a freelancer or just doing things for fun, I've looked into sign-in — so login and authorization — and S3, which is basically the one for creating a bucket and hosting. And there are so many others for analytics and so on; they also have a Unity SDK. There's a lot that can go into apps once you start making an app that's a product.
Yeah, and the number of companies that work with AWS services is absolutely mind-boggling. It's companies from Netflix to Disney to NVIDIA, but also startups. There's a good chance Discord uses AWS — I don't know that specifically, but there's a good chance. We work with many, many XR startups in this space, trying to figure out what they're building. But we also work with, like, governments; Nasdaq recently announced it's migrating to AWS data centers; Uber and Lyft both use AWS. The breadth of companies that run on AWS is pretty crazy. And as Alessio was mentioning, there are services for everything. You can integrate AWS into your Unity application or your Unreal application and add user sign-in; you can have the 3D assets for your VR or AR application or your video game stored in an S3 bucket and downloaded dynamically. Databases — there are numerous types of databases AWS supports and hosts — but also big-data analytics services, AI/ML services, IoT services. We've got a couple of new ones, like a new service called SimSpace Weaver, a simulation-scaling service that lets you run massive-scale simulations on AWS. If you wanted to simulate, say, a million people in Las Vegas at a convention, there's a service for that — and it has an Unreal plugin so you can tie your simulation into your Unreal application and visualize it. So it's pretty crazy. As for what my team is working on: I'm specifically focused on spatial computing, and spatial computing is rather new territory for Amazon. We don't have a headset, we don't make an AR library — but we support all of them. We don't have our own engine — wait, I take that back, we do have our own engine; we open-sourced it, it's now Open 3D Engine — but we also support Unity and Unreal. Epic Games is a big customer of AWS; Fortnite all runs on AWS, and they talk regularly at our conferences. What was I going to say? Oh yeah — spatial computing is new at AWS. So what my team is trying to figure out right now is what we need to build first. We're working with all sorts of companies, trying to figure out where the challenges are — what the pain points are right now in XR — and how we can address them. We work with a company, identify a pain point, and build a prototype around it, with the goal that, hopefully, we can open-source that prototype, get it up on GitHub, write a blog about it, and more people can start using it. Then we can push it toward an AWS service, so that eventually AWS will have spatial computing services that address these pain points in XR.
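As a concrete example of that dynamic-download pattern — a minimal boto3 sketch, with a made-up bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Download a model at runtime instead of shipping it inside the build.
s3.download_file("my-xr-assets", "models/showroom.glb", "/tmp/showroom.glb")

# Or hand the client a short-lived URL so the headset fetches it directly.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-xr-assets", "Key": "models/showroom.glb"},
    ExpiresIn=3600,  # seconds the URL stays valid
)
```

The presigned-URL route is the usual choice for headsets and mobile clients, since they never need AWS credentials of their own.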
How long have you guys been working on that — the AWS SDK for .NET?
The SDK for .NET — I don't know how long it's been around, but it's fully robust; it supports anything our other SDKs support. I would say there has been some confusion in the past, because we did have a Unity SDK that was specific to developing in Unity, but we've since moved away from that and now just recommend the full .NET SDK, because that gives you everything. Some of that old language is still public — you can still find the Unity SDK — which can be confusing.
I think what I came across initially was the Mobile SDK for Unity. Yeah — I was comparing a lot of different database systems at the time, and I ended up going with a different one on DigitalOcean, but that was more because of the game server that came with it. Now I might try this.
If you're specifically looking for game servers, I'd take a look — depending on what you need, how much power you need — at a service we have called Amazon GameLift, which is specifically for hosting game servers. There's a whole section of AWS focused on building game technology and a number of services maintained by that group. Recently they released a cool plugin called AWS GameKit that streamlines some of the more common things you might build into a game — I think user auth, leaderboards, things like that. It has a plugin for Unreal, and I think the Unity one is public now. Yeah.
Yeah, interesting. Does it deal with user auth — like user login and account management for a bunch of users in the game? That would be good.
Yeah, you can do that. For user auth, we have a service called Cognito, which is like our backbone user-auth service. You can manage user accounts in Cognito, but you can also link it to Google or Facebook or other third-party logins, if you have Okta or something.
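A minimal sketch of that Cognito sign-in from Python with boto3 — the client ID is a placeholder, and the USER_PASSWORD_AUTH flow has to be enabled on the user pool's app client:

```python
import boto3

cognito = boto3.client("cognito-idp")

resp = cognito.initiate_auth(
    ClientId="YOUR_APP_CLIENT_ID",   # placeholder app client
    AuthFlow="USER_PASSWORD_AUTH",   # must be enabled on the app client
    AuthParameters={"USERNAME": "player1", "PASSWORD": "correct-horse"},
)

tokens = resp["AuthenticationResult"]
# tokens["IdToken"], tokens["AccessToken"], tokens["RefreshToken"] —
# the access token is what the game client attaches to later API calls.
```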
Okay. Yeah, that's been an issue, because I've been building this silly project for a long time — it's a VR calendar, and it's getting close. The biggest pain has been auth, because you have to have a browser in the game engine, or something weird like that — that's the way I eventually got it working. If you send an auth request to the browser from a Unity compiled binary or executable, it will go to the browser, but there's no way to take that token and bring it back into the game engine. So yeah, that was something.
Okay — it's not public yet, but it's close enough that I'm fine mentioning it: I'm working on getting a sample project published that shows exactly how to do user auth in Unity with AWS. This specific thing you're talking about — user auth and auth flows for VR, for AR, for Unity — is something that many, many people struggle with, and something I don't know why we don't already have. It's a pain point, and it's a complex thing: the amount of communication that goes back and forth between the client and the backend just for auth is intense, and expecting an XR developer to handle that themselves is, I think, insane. So hopefully that comes out sooner rather than later — but I can't actually commit to a date either. Okay, never mind.
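For the browser round-trip problem above, one common pattern — not necessarily what the AWS sample will do — is a loopback redirect: the app opens the system browser, runs a tiny local HTTP server, and catches the authorization code when the identity provider redirects back to localhost. A sketch, with all URLs and IDs as placeholders:

```python
import http.server
import urllib.parse
import webbrowser

AUTH_URL = ("https://YOUR-DOMAIN.auth.us-east-1.amazoncognito.com/oauth2/authorize"
            "?client_id=YOUR_APP_CLIENT_ID&response_type=code"
            "&redirect_uri=http://127.0.0.1:8400/callback")  # all placeholders

class CallbackHandler(http.server.BaseHTTPRequestHandler):
    auth_code = None
    def do_GET(self):
        # The identity provider redirects here with ?code=... in the URL.
        query = urllib.parse.urlparse(self.path).query
        CallbackHandler.auth_code = urllib.parse.parse_qs(query).get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Signed in - you can return to the app.")

server = http.server.HTTPServer(("127.0.0.1", 8400), CallbackHandler)
webbrowser.open(AUTH_URL)   # hand the login off to the real browser
server.handle_request()      # blocks until the redirect lands

# Exchange CallbackHandler.auth_code for tokens at the /oauth2/token endpoint,
# then pass the tokens back into the engine.
print("authorization code:", CallbackHandler.auth_code)
```

The same idea ports to a Unity executable: the engine listens on a localhost port while the OS browser does the sign-in, which is how the token finds its way back in.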
How do I follow you, so I see when this stuff comes out?
I mean, LinkedIn — or I'll have it on my GitHub. Anything I publish goes to the AWS Samples GitHub, which is that link right there. Oh, and actually, I should give you this link — this one's better. I also maintain the spatial computing index at AWS, which is just a public list of all of our sample projects related to spatial computing. I actually need to update it, so I'll update it with this. Feel free to connect with me on GitHub or LinkedIn and I can message you directly when it comes out. But this is the sort of thing my team tries to identify: these building blocks that are pain points, challenges for XR developers. Because it's not just individual developers who are new to XR that struggle with this — it's everybody in the space, no matter where they work. That's what we try to address so that XR can grow and scale. And then the goal is, once you have user sign-in, we can start doing cool things, right? Once user auth is simple and you can just integrate it, we can start doing things like using AI/ML services for real-time translation in XR. Alessio, I think I told you about this back at AWE last year — I was working on it at the time, and it ended up taking longer than I anticipated to get public. But we got the blog post out in December, and we have the repo staged on GitHub, so it's almost public. Hopefully, in the coming weeks, you'll be able to just download the project and play around with real-time translation for — I think it's 12 languages — in VR.
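The cloud half of a pipeline like that is surprisingly small. A minimal sketch of the translate-and-speak step with boto3 — an illustration, not the actual sample project, and the speech-to-text step that would feed text in from the player's microphone is omitted:

```python
import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

def translate_and_speak(text: str, source: str = "en", target: str = "es") -> bytes:
    """Translate a caption, then synthesize audio in the listener's language."""
    out = translate.translate_text(
        Text=text,
        SourceLanguageCode=source,
        TargetLanguageCode=target,
    )
    speech = polly.synthesize_speech(
        Text=out["TranslatedText"],
        OutputFormat="mp3",
        VoiceId="Lucia",   # a Spanish voice; pick one per target language
    )
    return speech["AudioStream"].read()  # hand these bytes to an audio source

audio = translate_and_speak("Where is the next session?")
```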
That's amazing.
Yeah — I'm sure the translations still struggle like any other real-time translation: sometimes the words aren't right, or the meaning isn't right. It's also pretty dependent on your network connection. I tested it at a cafe here in LA and it wasn't great — the latency was pretty bad. But at home, on my home network, the latency is less than a second. So yeah, it's pretty cool. These are the sorts of things I see as the next step for XR. I think the next generation of XR is going to be dependent on the cloud. We'll always have standalone games and experiences, just like video games have standalone single-player games. But for us to grow XR — for us to hopefully, someday, realize the metaverse — we need the cloud. The end hardware alone isn't powerful enough to power the scale we're talking about.
I wonder — since you mentioned realizing the metaverse. I'll ask you this, but I don't want you to say anything you can't say, like visions or unreleased Amazon products — just what you think personally. What, for you, would be the point of arrival where you say, okay, this is really scaling up? Because I agree there's a problem of scale in adopting these things. Recently I saw a lot of tweets saying: first there was the boom of XR, then the boom of crypto, and now AI is the boom — and it's more successful because it's way easier to access. Setting aside that these are totally different technologies, AI is more flexible; it adapts to everything, almost like a plugin you can use everywhere rather than a way of visualizing things. But there's definitely a way of accessing it that's easy, intuitive, fun, rewarding — almost addicting, because you're talking to something that answers you nicely and you're curious what it's going to say. What about the cloud? How, in your opinion, can this metaverse vision become real — and what's your version of it? Because we hear a lot of versions around, but what's yours?
That's a tough one. I think I've always been more interested in the idea of the AR cloud than in the strict, kind of Neal Stephenson metaverse. For it to happen — for us to realize it — it needs to be more available and more social; more open, less isolating. So the idea of having contextual data on hand — I think that's more my vision of the metaverse. More like Hyper-Reality, but less dystopian. Like, I want Google Glass. The hardware needs to shrink, obviously — in physical size and in price — so that it's more consumer-ready. But so does our ability to stream data, to stay connected, and to have data in context. Right now the hardware isn't consumer-ready, which I think we've seen from Magic Leap's and Microsoft's shift to focusing on industry. The hardware isn't there; our ability to get data in context isn't there either. And our ability to stay connected is there in major cities — 5G has enabled that — but it's still pretty limited. So I think all of these things need to grow for us to realize where I think the metaverse is going, or needs to go.
Yeah, so going off of that — it's a really interesting idea you mentioned with the AR cloud concept. For these hardware devices — where maybe the actual device isn't quite small enough yet, for an AR device, for example — do you see a world where these highly connected devices, with 5G like you mentioned, can connect up to the cloud and get greater computational resources sent down, so you don't have to have so much hardware on the device itself?
Yeah, definitely — you're starting to see some of that. I actually think Magic Leap is doing some of it with the Magic Leap 2 — I forget what it's called, their shared spatial understanding. But yeah, it's still very new. AWS actually has a service called Wavelength: we've been partnering with telcos — Verizon, Vodafone, KDDI in Asia — and moving our servers into the telecommunication companies' data centers, which lets you run your application at the very edge of the cloud, physically closer to the device of the person on the 5G network. By doing this, you can get latencies that are sub-10 milliseconds. And for XR to stream from the cloud to an end user, you need it to be around sub-10 milliseconds, right? You have to hit that 90 frames per second. But this is very, very new and still really limited in scale and availability. I think we'll get there, but I don't know how long it will take. Also, right now I think the only devices that support XR and 5G are mobile phones — I could be wrong about that. Once we have headsets that support 5G or 6G, I think we'll start to see some really interesting things. But we'll see.
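The arithmetic behind that sub-10-millisecond figure is worth spelling out: at 90 fps you only get about 11 ms per frame, and the network round trip has to fit inside that with room left over for encoding, decoding, and rendering. The numbers below are rough illustrations, not measured values:

```python
target_fps = 90
frame_budget_ms = 1000 / target_fps   # ~11.1 ms available per frame

edge_rtt_ms = 8       # illustrative sub-10 ms round trip via a 5G edge zone
encode_decode_ms = 2  # illustrative cost of video encode + decode

remaining_ms = frame_budget_ms - edge_rtt_ms - encode_decode_ms
print(f"{remaining_ms:.1f} ms left per frame for rendering")  # ~1.1 ms
```

Which is why the server has to sit at the network edge at all: a conventional data center 40 ms away blows the entire frame budget before a single pixel is drawn.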
That's really cool — you've got to get the Magic Leap guys on that, for 5G.
I was having some issues with my audio. Yeah — I don't know if you guys saw, and I'm really not here promoting again, but it was mind-blowing for me to see what they did. I didn't even work on that project, but what they showed is this kind of remote-rendering thing. It was mind-blowing to see AR that way — I've never seen it like that in my life, so sharp. The shadows were so realistic — definitely a quality I've never experienced before in my work. I was like, okay, this is very much a big step. And yeah, the goal in XR generally is reducing the hardware. We're seeing a lot of new hardware coming out that's way lighter than before. I mean, the new Vive is pretty light compared to the old HTC Vive. I remember when I was working at Morphosis, we were using the Vive Cosmos, and it was huge — and now you have an HTC Vive that looks like little glasses, almost aviator style. Something I still wouldn't wear to go grab a coffee, but it's definitely different. So yeah, it's getting there.
The difference in weight between the Magic Leap 2 and the Magic Leap 1 was huge.
Have you had a chance to try it?
I used it at — I think they had it at one of the conferences, I don't remember which one. The difference between it and, like, a Quest Pro is insane.
I always tend to — I mean, this is more of a general conversation — but comparing passthrough and see-through is always tough for me; those are two different things. You're shooting for different goals. In see-through AR you're trying to really track the thing and make it look good, with the shadows, even with the reflections of your lenses; in passthrough you're probably trying to get the fastest latency — zero latency, with the frame rate. It's a different mission to me. There's a lot of overlap, but it feels a little different.
Going back to streaming: if you haven't checked out NVIDIA's CloudXR, it's a pretty cool library built specifically for streaming XR applications. It still has its rough edges, but again, it's only a couple of years old — this whole notion of streaming XR from the cloud, and really streaming anything 3D from the cloud, is still very new. But I think we'll get there. There's a cool startup in Japan called Mawari.
Mawari — yeah, definitely, this one: "we stream extended reality content."
Their streaming software — I find it really interesting because it does a kind of multi-pass thing, where it lets you render specific content in different locations, as far as I understand it. So if you have low-fidelity, low-poly content that can run on the headset or the device, you render that on the device, and you stream just the high-fidelity content from the cloud. I think this split-rendering technique is really what we're going to see in the future. The phones and the headsets can render content, so why not take advantage of that — but the cloud is really where you want to render the high-fidelity content that struggles on the lower-end hardware. Being able to split that, stream fewer pixels, and still get high-fidelity content into the headset is, I think, what's eventually going to win. Yeah.
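A toy sketch of that split decision, assuming — as a simplification — that render cost is measured in triangles and the device has a fixed budget; Mawari's actual heuristics aren't public, so all of this is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    name: str
    triangles: int

LOCAL_TRIANGLE_BUDGET = 200_000  # made-up on-device budget

def split_render_targets(meshes):
    """Cheap content renders on the headset; everything over budget is
    requested from the cloud renderer and composited as a pixel stream."""
    local, remote, used = [], [], 0
    for mesh in sorted(meshes, key=lambda m: m.triangles):
        if used + mesh.triangles <= LOCAL_TRIANGLE_BUDGET:
            local.append(mesh)
            used += mesh.triangles
        else:
            remote.append(mesh)
    return local, remote

local, remote = split_render_targets([
    Mesh("ui_panel", 2_000),
    Mesh("room_shell", 80_000),
    Mesh("hero_statue", 4_000_000),
])
# local  -> ui_panel, room_shell   (rendered on the device)
# remote -> hero_statue            (streamed from the cloud)
```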
And you can push the less time-sensitive computation to the cloud as well.
Yeah, yeah.
Very interesting. See, this is why we do these things — you get to know things you didn't know.
Yes — there's a lot of crazy stuff happening at the moment. And on the other end of things, we've also been looking at how to make it easier to create 3D content. One of the main challenges in 3D is that it's still relatively difficult to create: being a 3D artist is a rather niche skill. But to have 3D content everywhere, you need to be able to produce it quickly, and at scale. So we've been looking at photogrammetry solutions — how you can use photogrammetry to quickly generate 3D models, using the cloud to scale that up, so you can upload any number of photos and generate any number of 3D models in a matter of minutes. That's something we've worked on. We've also looked at using drones — a number of companies now have fleets of drones for tasks like surveying or emergency response. How can you use the data captured from your drones to generate 3D models? Sometimes these are really big models, like a whole factory. How can you take the photos from the drones, upload them while you're still in the field — over 5G — turn them into a 3D model, and have it ready by the time you're back at your laptop? These sorts of things — making it easier to create 3D content, and also making it easier to render it — at some point there's a convergence where everything becomes much easier for everyone to have 3D, and XR can grow. What else was I going to say? Oh, right — one thing I wanted to make sure to bring up, Alessio: I've been really interested recently in some new 3D file formats. This is slightly boring, because it's about file formats and specifications, but there are some new ones I think are really interesting. There's glTF and USD — well, that's a different conversation; those have their merits and use cases, and they're going to help the future as well. But there are newer formats we're seeing: 3D tile formats. Cesium, if you're familiar with them, has a 3D Tiles format, and there's an open-source version called Entwine. The real power is looking at Entwine in a viewer — there's an open-source viewer called Potree. There we go, that's what you want to look at. These are specifically intended for large-scale point clouds. And here's something that blows my mind: the USGS — the United States Geological Survey — has 3D-scanned the entire United States, and all of the scan files are made public on AWS. There's a link, I'll find it at some point. You can just download it — there's one point-cloud file that's all of Los Angeles. It's insane, and it's public. What these formats do is take point clouds and split them into tiles, which lets you dynamically render the point cloud. So it's very much like an LOD.
Like in gaming, in Unity or Unreal: the chunks the camera is closer to are rendered in more detail, whereas the chunks that are further away are rendered with fewer points. This lets you render these really large point clouds in the browser — you can have a point cloud of all of Los Angeles running in your browser on your MacBook. That kind of thing blows my mind. What I want to see is: how do I import this to my Quest? How can I import it to the Magic Leap? How can I port it to ARKit? I want to be able to see an entire model of Los Angeles on my table through my iPhone. So in addition to what we were talking about — making it easier to create 3D models, making it easier to stream 3D — I think making it easier to render 3D, with these types of file formats, is going to change a lot of things. To get to the scale of a metaverse, we need to move toward file formats that natively support dynamic scaling. Because what we don't want for the future is the graphic quality of Horizon Worlds, right? We want the graphic quality of real life. But to get there we need different file formats, and so Entwine, and the work Cesium has been doing, are super fascinating, because they're starting to get there. But these things, like everything else, are new — they're only a couple of years old. Yeah.
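The core trick in these tiled formats is a screen-space-error test: a tile is refined into its children only while its coarseness would still be visible from the current camera. A simplified sketch of that traversal — loosely modeled on how 3D Tiles-style viewers work, with made-up parameter values:

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class Tile:
    center: tuple            # (x, y, z) world-space bounding-sphere center
    radius: float
    geometric_error: float   # how coarse this tile's points are, in meters
    children: list = field(default_factory=list)

def select_tiles(tile, camera, screen_height_px=1080, fov_scale=1.0, max_sse_px=16.0):
    """Return the set of tiles to draw for this frame."""
    d = max(dist(tile.center, camera) - tile.radius, 1e-6)
    # Project the tile's geometric error into pixels on screen.
    sse_px = tile.geometric_error * screen_height_px / (d * fov_scale)
    if sse_px <= max_sse_px or not tile.children:
        return [tile]                  # coarse enough from here, or a leaf
    drawn = []
    for child in tile.children:        # refine: draw the finer children instead
        drawn.extend(select_tiles(child, camera,
                                  screen_height_px, fov_scale, max_sse_px))
    return drawn
```

Move the camera closer and distant coarse tiles get swapped for their finer children; move away and they collapse back — which is exactly the city-scale-point-cloud-in-a-browser behavior described above.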
Since you mentioned wanting stuff that looks good in VR — I just want to mention our last guest: they're trying different techniques to get very high-quality VR. Nothing to do with file formats, though. What you mentioned is mind-blowing, and I hope that will be the way to do things — to see things how they really look in real life.
There are techniques already that allow really high-fidelity computer graphics to run on low-end devices — techniques XR has adopted from gaming, like texture atlasing, LODs, dynamically loading and unloading content. And I think all of that will still play a part. The question is: how do we simplify it? How do we make it easier, more foundational, so you don't even have to think about it? You don't have to make your LODs — the LODs are made for you. Yeah.
That's amazing, with the LiDAR and the drone stuff. I did a lot of drone work for power-grid inspections, and we looked through a lot of these tools. I've been a fan of Potree — I was surprised, because I'm pretty sure that one started as a PhD project, somebody's university research. And now you have the USGS using it and all this stuff. That's just crazy.
It's crazy — the group that seems to be developing this open-source geospatial software seems so small to me. My mind is blown by the amount of tech this small group has built, and the potential impact it can have.
And then you've got big legacy companies scrambling, like, how do we do this? It's like — these people already figured it out. I'm kind of curious, and I don't know if you know the answer to this, but I eventually want to find myself profitably doing those kinds of open-source projects. It seems really tricky to solve that problem without the initial funding to spend the time on it. I don't know if you've come across that.
No — I mean, not specifically. I think the way to do it is, obviously, you have to have another revenue stream, so you're able to fund your open-source development. And I think the key is dogfooding: build your open-source software, and use it in whatever your revenue stream is — the thing you actually make money on. "Dogfooding" is a tech term — everything we build at AWS, other parts of Amazon use. The game teams at AWS build services for running game servers, and then Amazon Games uses those services for building games. It's the same at Google and Facebook, and probably similar at Magic Leap: you build tools, your own teams use your tools, and that makes them better. So I think that's the key — you figure out a revenue stream, you bring in one-off projects with clients, you develop software for those clients, and in doing so you use the other software you're developing, the stuff you actually open-source. The key to that is bringing in clients, right?
Yeah. A question on the metaverse: one of the first ways I learned about the metaverse was as a digital twin, another-world kind of thing — a lot of the GIS and geospatial data all hosted somewhere. There's even a local author from Columbus — in one of his books, I think, the idea was that you're in this live stream that's constantly up to date: if a car drives into a building by accident, it's updated pretty quickly from a bunch of cameras that capture it, or a tree falls down, and they're able to update it really quickly. How do you see that? I know recently Meta and Microsoft and a few others announced the Overture Maps Foundation, to combine their efforts on maps. But it sounded like it's more of just a fancy base layer — roads and streets and paths and such. It doesn't even have the number of lanes on a road; it just has "road." They're trying to abstract it to the most basic stuff. But do you see what I mean? Google Earth has a lot of data; Esri has a lot of geospatial data. How do you see the big things coming together into a shared version of the world? Or do you see it as, there are always going to be multiple?
I mean, I think it would be nice if these companies could come to an agreement. Developing and maintaining a digital twin of the Earth — of all our cities, all our infrastructure — is going to be such a large task that it's not something a single company can do. I think it's going to be a legal challenge for an agreement to be struck between all these different parties. It's also something where, if you could be the one who had the digital twin of the Earth, of our cities and our infrastructure, it would be insanely profitable. So getting over that notion, and getting to the idea of sharing that profit, is going to be the challenge. But I don't think it's something a single company can maintain. Look at the amount of effort Google puts into just maintaining Street View — the fleet of vehicles, the frequency the vehicles have to be driving — it's crazy. And to have that update in any manner close to real time, even on a weekly basis — you can't do it alone. It would be really cool if we could build that, but I think it's going to be a legal battle. I'm sure it's something that Facebook — you said Facebook and Microsoft?
It was them — Facebook, Microsoft, and a couple of other groups. I think John Deere was there too? For Overture.
John Deere is a company I think is really fascinating — they actually do a lot with technology.
Or TomTom.
TomTom — okay, that makes more sense.
You guys are on it too, actually — I didn't realize that. I thought it was just Meta and Microsoft and some others.
Okay. We do?
Mm-hmm. I just went to the site. And — well, yeah, sorry, you're on there. Okay.
I'm surprised myself.
I know. All right. They had a talk at South by Southwest, and it sounded like — because Mapbox does something similar: Mapbox grooms and curates data from OpenStreetMap. The whole pitch was that this is going to be a fancy layer that does the same thing — basically built on top, like a Google-business layer, but owned by these groups instead of Google — and that greatly simplifies things. It sounds like they had reference frames or something like that. What they wanted to do was really basic: here's the centerline for the street, the center point for a building, stuff like that, and then almost a semantic connection between them, so that when you're building an app and styling everything, there's a much more intuitive logic flow to it. I didn't get all my answers from that talk, though. We'll see.
Sounds like they're trying to create structured geospatial data — how can you describe a map in a way that makes sense to a computer? It reminds me of a PDF from a talk by, I think, Tony Parisi, one of the early XR people — one of the creators of the Virtual Reality Modeling Language, VRML, which was an early thing in the '90s. There was a paper he worked on — I believe it was him; I'll have to look it up and drop it in the chat later. They were trying to come up with a language for modeling spaces, for describing spaces. Can you have a programming language that describes a room, so that you can procedurally create 3D spaces, and also transfer them? If you had a language that described space, you wouldn't necessarily need to transfer the pixels and the vert data of the 3D meshes — you could just transfer the space definition, and the end devices could generate the space from the definition, which might speed up file transfer for 3D. It's something that was never really realized, and I don't know if you can create a language that describes space in enough detail. But it's a thought — something I think about and find fascinating.
It makes me think of glTF — but that also doesn't have anything to do with longitude and latitude or any of that. There's a geopose you can add on to glTFs, but you can't build a world from the ground up with that. So that's an interesting way to look at it. I like that approach, and I hope that's what they're doing.
We'll see.
To switch topics a little bit, let's go into the AI realm, since it's been so hot lately. I mean, it's been around for years, but it's just popping now — like when the word "metaverse" started buzzing around, even though everyone had worked on XR for years, and there's still a long way to go. I think it's the same case for AI. What I always say is that the metaverse and all these XR-related topics deal with simulating reality — 3D environments, lights, things that aren't just images and text — so it's a totally different way to spread our technology. Of course AI is so relevant right now — it's doing so well because even if it gives you 70% accuracy, that's still a big help for your work. I think there's a lot to learn there. I've seen a lot of people starting to bring ChatGPT principles into, for example, Unity — I've seen some tutorials around, and Dilmer did a lot of ChatGPT exploration, where you have a prompt interface to your engine: you're kind of scraping words from your instructions and trying to interpret them with the Unity API. I don't think Amazon isn't seeing what's happening, so they probably have their own solutions — but these questions aren't Amazon-related. Since you're so into all of this, I wonder what your vision is. Have you tried products by OpenAI, or Bing? Any impressions or interesting episodes to share?
Yeah — the explosion of ChatGPT has been insane.
Oh, like I just mentioned — I used GPT to write a full S3 integration for Amazon.
Yeah, no, it's insane. You can use it for so many things, and it's so good. And it really boggles me personally, because my thesis project at SCI-Arc was a language model — a chatbot, a very early attempt at something like ChatGPT, but specifically for architecture.
Do you still have a website, or somewhere you can share the project?
Yeah — the old Instagram is probably the best. It was an attempt to train a neural net to write architectural theory and criticism. You could feed in an image, it would recognize stuff in the image, and feed that into the language model, which was trained on a dataset of architectural theory and criticism. The goal was to create something that could produce believable architecture criticism, and I think it did a pretty decent job — but it required a lot of editing. The quotes on the images there, on the Instagram, are what it said about each image, but I might have generated thirty quotes and taken one. It could produce any number of quotes you wanted, instantly, but some of them were really terrible. And it certainly couldn't do paragraphs, it certainly couldn't do code — it was trained on a very specific thing.
Which is actually great — this project is very current, and I wanted to mention it because what's happening now is that OpenAI and other solutions give you an API with pre-trained models, and then you fine-tune them on your specific use case. You started with the fine-tuning, let's say — because, of course, a thesis has a very limited budget of time, money, resources, and research; you can only do so much as a solo developer, and you were also transitioning at that moment. I think it's an incredible project, especially because if you've ever participated in art shows or architecture reviews, you know there's a certain specific language that kind of regurgitates itself — they always say the same thing — and you think, I can definitely make something that resembles that vibe. It's very AI-feasible, I have to say.
Yeah — the language of architecture criticism is very AI-feasible. It's a language that's highly structured and at the same time abstracted — I think as a kind of defense mechanism, but whatever. So seeing what ChatGPT can do, knowing what this could do seven years ago, is mind-boggling. I think it's insane how far natural-language processing has come in seven years. The stuff I was working with when I did this project was the same type of language model that was state of the art at the time — and it was not good. So, getting back to AI and where I think it will help: one of the first things I'm seeing and using recently at work is Amazon CodeWhisperer, which is kind of the same thing as GitHub Copilot. It predicts what you're writing: while you're writing code, it suggests, "I think you're trying to write this entire function" — which is pretty cool; you can have it write an entire function for you in your IDE. Using things like ChatGPT, GitHub Copilot, and Amazon CodeWhisperer to streamline the process of development is, obviously, a first step.
Question — sorry to interrupt. Do they make you use CodeWhisperer as a kind of mandatory plugin in your workflow? I don't even know if you can answer that — if you can't, don't.
I think it's totally optional. CodeWhisperer is a pretty new service — I think we announced it last year — so it's just now being adopted by teams. Things like Copilot I couldn't use, because GitHub is owned by Microsoft, and Microsoft and Amazon are competitors — the data analyzed by Copilot might be uploaded to a Microsoft server. But I see them as similar services: they're both trying to do the same thing, predicting and aiding in the development of software. ChatGPT obviously goes beyond that. Last night I was watching a demo of a guy in Unity who had that text prompt you were talking about, Alessio — with, like, six one-liner text prompts, he generated a hundred randomly placed cubes.
Keijiro? I think that's the guy.
Yeah. And I think that's insane. The next step for this stuff, at least for XR, is: can you use it to generate 3D models? You already can, but they're not great. So can we use it to generate 3D content that you'd actually want to use in your project? If I'm building a game and I don't want to 3D model — or I'm a software developer who doesn't know how to 3D model — how do I get content without hiring someone? Can I use a ChatGPT to create that for me? I think that would change a lot of things. Yeah.
Yeah — I think at that point the real contribution is how to interface these kinds of commands with something like Unity. Because with Copilot, or similar tools — just to be clear, if you've ever used them — you don't just say one little thing like "can you write a script that moves my cube and creates ten cubes" and then delete it. It's like a friend comes to you and says "can you make an app for me," describes a bunch of stuff, and then it goes and just creates it for you. But, for example, Unity is very much game-object-based scripting — you place a lot of scripts on different game objects. In that case you'd really need to say, "hey, I want the architecture of my project, of my scene, to look this way." I wonder about that gap; once bridging it is well optimized, I think it's going to work very well. But for some game managers, I can't imagine prompting a full project — or the kind of project I do. It could start to become a pain to say exactly what it is, because you sometimes get to such a level of specificity. Or maybe you need the equivalent of a code review — changing one little thing, and you need to find that little thing. It's definitely a helper, and if someone fills that gap — I think that's the startup right now. That gap between inputting a prompt and arriving at something is the real startup that's going to earn in this period of history.
Yeah. I think AI is going to be more of a helper — at least, that's my hope, and that's how I look at it. I think it's going to be there to aid us in our creation and hopefully streamline things. But I don't think AI is really going to take away our jobs, at least not in XR — or in creative industries generally, and I include software development in that; I think software development is an incredibly creative process. AI can create things, but to your point, Alessio, at the complexity of what we create, I think it will struggle. You always need to think about things in multiple ways, and I don't know if we'll get that with it.
One process I see happening now is kind of reverse-engineering the prompt. I saw on Twitter people saying "create this game," and it creates a full game for you — an exe file. Or the approach is literally "create a Unity project where there are these things," and then you open the project and see what's there — "oh, this is kind of working, but there's a gap here." Sometimes I feel the exercise is also in your mind: what am I going to ask for? I was doing that with Amazon the other night, just messing around with S3, and I couldn't find the documentation — the problem you mentioned, with the Unity SDK versus the .NET SDK. And I don't know if you guys know, but there's a conflict between ARKit and the AWS core DLL — it's a very well-known issue, I saw it in so many GitHub issues. So since there were so many issues, I just turned to ChatGPT. And it really took me a while to create a prompt that was effective, because every time I asked one thing at a time, it would say "oh, but you can do this," take the code I wrote above, and change it — so am I getting new code? Is the new code a bluff? I have to check it. What I noticed is that when you ask it for a finished product, it's more reliable — that's my assertion after using it for a bit. The more complete your ask, the better, because it knows where it starts and where it ends; it gives you the full product. If you only ask for a very specific part of the code, it starts to hallucinate and gives you some good stuff and some random things — with such confidence, like "oh yeah, I can do it" — and it's completely wrong, and you're like, you seem so sure of it.
I really like how you describe ChatGPT — the hallucinations and its confidence. You humanize it — you made it human, you know. I think that's what's fascinating.
One of the disclaimers they put at the top when they released ChatGPT-4 — or no, just GPT-4 — was that this thing hallucinates. And it was like, well, I guess.
Are we going in a circle? When Google released TensorFlow back in 2015, it came with DeepDream, and we got those crazy neural-net images — human faces built out of dogs. So I feel like we were at that point then — "oh, AI is crazy, it has these crazy hallucinations" — and now we're just going in a circle. Yeah.
It's generative AI again.
Those images were so big — I remember seeing them all over, especially in design school. Guys, thanks for joining, but it's getting late — it's getting late for everyone. Thanks for joining, everyone; we're going to wrap up now. We didn't really get to the questions I prepared, but what we did was much better — keeping things real and not speculating too much. Since we're still in the realm of XR: a part I get to work on a lot is interaction design — how you actually do things. I really haven't found one model of interaction that everyone says is the one; everyone seems to have their own preferences, and every product that comes out does it a different way. I did some episodes with ShapesXR — a totally different thing again. If you use the Oculus SDK, or some other SDK, everyone has their own "let's do it this way," and it works. The only really common layer is for mapping inputs; there's no common layer for mapping interactions, which is super interesting to me. So I wonder, from your experience working on different things — which one do you like the most? For example, Oculus just recently released some new stuff for direct interaction — it's been there for a while, but now there's more branding behind it; I've noticed more branding strength on interaction across the industry. And on Twitter, of course, people constantly post the best stuff I've ever seen — it's pretty inspiring to me. Of what you've tried, what would be your favorite so far?
I kind of wish Tarun hadn't dropped off before this question — he does UI/UX stuff, and has done UI/UX for XR, so I'm sure he has a strong opinion about it, and I would have been interested in hearing it. For me — my issue with interaction in XR has always been that I feel we never really transitioned, specifically in interacting with user interfaces. We carried over this understanding of user interface as two-dimensional — because we have computer screens, we have mobile devices, and it's all 2D, like paper — and we brought that into XR with us. So a lot of XR experiences are still heavily focused on 2D interaction: if you're interacting with an inventory or a menu, a lot of the time it's 2D. I don't see why that really needs to be the case. Some things should stay 2D — some things make more sense that way — but there's much more we can do in three dimensions that hasn't been widely adopted. Even our menus can be 2D, but for XR: if I'm looking at an inventory, why not present it as 3D objects on a table, or on a plane, or in a bucket, whatever? Why is it a list of icons or names? Give it to me as 3D objects on a palette that expands out of my hand. I want things I can interact with in three dimensions. And once we're at that point — why don't the items in my inventory have physics? Why can't I pick them up and throw them around? In terms of interaction, I think it's natural for us to want things to be spatial and physics-based — that's how our lives are outside of computers, outside of the internet. And I think there's a lot more we can do to mimic our actual lives in our XR experiences. I don't do as much with interaction as I used to these days, so I might need to get updated on the packages and offerings available — but those are generally my thoughts.