Getting to Level 5 - The Path to Autonomy by HERE Technologies | Disrupt SF (Day 1)
2:40AM Sep 6, 2018
Good morning. My name is Doug Newcomb. I'm the president and co-founder of the C3 Group, and we're going to have a great discussion today about automated vehicle technology, AVs, and mapping. It's a great topic for me because I'm a car technology geek as well as a map geek, so I'm very interested to hear what Neil Omen from HERE has to say. Now, everyone knows about AV technology, especially here in San Francisco — you see it on the streets, and if you're not from here, I'm sure you see it in the news. It's a very important technology, growing by leaps and bounds, and I believe it has the potential to transform so much of what we do — I think it's going to have an impact on par with the internet. At last count, close to 100 billion dollars has been invested in AV technology over the last six years. But one of the issues around this technology is how we make the jump from humans driving — and now humans driving with assistance, these driver assistance systems, as they're called — to full autonomy. We're going to discuss the importance of mapping data, how it relates to artificial intelligence and machine learning, and some of the challenges we face in getting from that partial autonomy to full autonomy. I'm very happy to have Neil from HERE join me on stage. Let me tell you a bit about him and what he does at HERE. HERE is the world's leading mapping and location data company. They had a huge investment a couple of years ago — in 2015, actually — by a trio of German luxury automakers: BMW, Audi, and Mercedes-Benz. They've also had investments from Continental, Bosch, and Pioneer, which gives you an idea of how important mapping is to automated vehicles. HERE is comprised of about 8,000 people and 200 offices in 54 countries, and their goal is to use the power of location data to transform lives. So Neil, tell us a bit about yourself and what you do at HERE.
Thanks, Doug. My role at HERE is that I currently lead a team of about 70 to 75 data scientists and machine learning engineers in a unit called HERE Research. It's a cross-cutting unit focused on researching how to apply advanced machine learning — AI technology, if you will — both to mapmaking and to automated systems for driver assistance, mostly from a navigation standpoint. We're ultimately a mapmaking company, so navigation-oriented assistance to drivers is what we do. My team focuses primarily on figuring out how to automate data extraction, both for driver assistance systems and for mapmaking and map updating, which we'll talk a little more about as we go.
Great. Now, I know there's been a lot of talk lately about why you need maps for automation, especially at the higher levels of automation. Can you tell us a bit, from your perspective, about the importance of maps for getting to these higher levels of autonomy?

Sure. If you think about maps from a human perspective, you basically use a map to tell you a little bit about where you are and the context of where you are, and to help you understand what to do to get where you want to go. For robots — for automated vehicles — it's no different. They need a reference system to use for over-the-horizon route planning: where am I, what's my context, what's in my immediate surroundings? And if you think about the concept of a live map — we call it a live HD map, or high-definition map — the notion is that the vehicle needs to know which lane it's in: do I need to be in lane one, two, or three as I plan for the exit, or plan to make a right or left turn at the next corner? All those things are beyond the sensor system of the car. An automated vehicle has a set of eyes and ears and all kinds of other sensing technologies, but they have a limited range, and a map provides a reference system that enables that sensing system to locate itself, make better sense of its surroundings, look over the horizon, plan routes, and so on. So pretty much all the automakers on the planet are, in one form or another, involved in HD mapmaking research for the purpose of automated vehicles. We work with over 20 companies in their R&D cycle as they plan the systems that enable automated vehicles. The debate is different for everyone: do you really need a map? In the immediate vicinity of the vehicle, the answer is probably not so much. But just beyond the horizon of the sensor system, you absolutely need the reference system.
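To make the "beyond the sensor horizon" idea concrete, here is a minimal sketch — all function names, the map fields, and the 250 m lane-change assumption are hypothetical, not HERE's API — of how a planner might consult an HD map to commit to a lane change long before the exit is visible to the sensors:

```python
# Hypothetical, simplified lookahead: a real HD map exposes far richer
# lane topology; this only illustrates over-the-horizon lane planning.

def lanes_to_exit(current_lane: int, exit_lane: int,
                  distance_to_exit_m: float,
                  lane_change_length_m: float = 250.0) -> int:
    """Return how many lane changes must begin now so the vehicle
    reaches exit_lane before the exit, assuming each change needs
    lane_change_length_m of roadway; 0 means there is still slack."""
    changes_needed = abs(exit_lane - current_lane)
    # Remaining distance limits how many changes are still possible.
    changes_possible = int(distance_to_exit_m // lane_change_length_m)
    return changes_needed if changes_needed >= changes_possible else 0

# The exit is 600 m ahead (past camera/LIDAR range in fog or on a curve),
# the vehicle is in lane 1, and the exit requires lane 3: the map, not
# the sensors, tells the planner it must start moving over now.
urgent = lanes_to_exit(current_lane=1, exit_lane=3, distance_to_exit_m=600.0)
```

The point of the sketch is only that the decision is driven entirely by map data: nothing here reads a sensor.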
Yeah. So, like, for me, driving in San Francisco — I visit here, I've driven here, but I still have a hard time getting around. Put me in another city I know — Los Angeles, for example — and I know where I'm going and how to get there. So the maps, in a sense, are giving that intelligence to the drivers now, but also to autonomous vehicles in the future.

Yeah. I mean, think of the map plus the vehicle control system as the analog of the human: as the human control system, you have a map in your head of where you typically go, and you make decisions with it. You already have the map in your head — that reference system. In the vehicle, for the robot, so to speak, that's what the map supplies: a reference system of where it's at, where it needs to go, and how to get there. And under a fully automated scenario, route planning and ETA become dramatically more important than they already are for you when you use your handheld for ETAs on maps; they become even more important for robots.

So I can do that pretty easily, having driven for years. Obviously, it's going to be a lot harder for a robot to do that. What are some of the key problems you're trying to solve?

Well, I think the tricky part — mapmaking in general, on a global scale, whether it's us, or Google, or OpenStreetMap — is that it's a massive data-mining operation. It's extracting details, over many different timescales, about the physical environment: about roads and the street network, about different businesses on various roads and streets. Certain parts of it are relatively static, but certain parts are very dynamic — traffic, for example, is a very dynamic problem. Interestingly enough, pop-up road closures are a relatively dynamic problem too; they change from day to day. And communicating all of that information — which really impacts how an automated vehicle, or even a driver on a standard-definition map, makes decisions about which route to take, whether using the guidance system of the route-planning or mapping software or some other way — is a very sophisticated problem: giving that planning system the immediate information it needs to make good routing decisions.

So there's the extraction of information from many sources — massive probe data sources, let's say. In our case, we build HD maps with what we call the HERE True car, a vehicle with a high-precision set of cameras, LIDAR, and a high-precision GPS system that drives the roads and streets of the world. A good-sized fleet collects massive amounts — petabytes — of image and LIDAR data, from which we extract detailed HD-level map information. We also use that same information to update our standard-definition maps for human consumption — you know, 85 percent of the automotive systems on the planet use HERE map data behind the scenes now.
The maps are always changing, and they have to change, especially when we get to AVs. What do you guys do? I mean, a HERE vehicle goes through, and then something changes after that.

Yeah — great question. As you just pointed out, mapmaking has a static component: the road and street network changes, but on relatively long timescales. And then there are elements of the map that change literally minute to minute, and those minute-to-minute changes are very important, particularly from an automated-vehicle perspective. Humans adapt pretty well to changing environments; robots probably don't, so much, without some sort of information to help them understand how to adapt. So we do HD mapmaking using, as I say, the HERE True vehicle to collect the base map details, and we're currently engaged in a couple of joint research projects with automakers, including our owners, to use the sensor systems on the cars — cameras, radar, whatever — to supply map-update information. We call it the self-healing map. Map-update information comes back off the vehicle — not raw images; you'll probably never see raw images off vehicles, for bandwidth reasons alone, or maybe not for a long time. But those details come off the vehicle, back to the cloud, update the map, and then get distributed to other vehicles around them. If you think about the timescale necessary to really do automated driving: I need to know about that accident that's up around the corner, because I'm going to need to turn right, go around, and take a detour within minutes of it happening. So the question is how quickly we can use the ecosystem of distributed vehicles to sense the environment, get those kinds of details out of it, feed them back to the cloud, and redistribute them to the vehicles in the vicinity of those changing map details, such that it makes an automated fleet effective.

Sounds like a lot of data to move back and forth between the vehicle and the cloud, and then you're talking about multiple vehicles too. I know there's been a lot of discussion, when I'm covering this, about edge computing versus cloud. How are you guys approaching that?

If you tried to do all of what I just described off the vehicle, you basically couldn't do it — you couldn't even come close today. So what already is going on — and we have research going on into various kinds of edge computing — is that the car's sensors collect raw information, and behind those sensors is a computing environment that extracts information from the images, the LIDAR, and the radars that are part of the car's sensor system. So if I see a construction sign, I can extract that it's a construction sign, reduce that information to just a few bytes — or maybe kilobytes — send it to the cloud, update the map, and then redistribute the map. On an individual-vehicle basis, it's not a ton of data. But if you start multiplying it by millions of vehicles, now we're starting to talk about massive systems to handle the data influx, the data processing, and the redistribution of map-update details to interested vehicles out there.

And is that something that's happening now?

That is something that is happening now. We are currently working on the problem of self-healing maps at a couple of different research labs around the country, in collaboration with our automotive partners — taking information off the vehicles as I just described, updating maps, and redistributing map information to the vehicles. So if I'm driving around San Francisco and there's a temporary traffic issue — let's say construction or something — that will be pretty quickly noted by this fleet of vehicles. That's the idea. It's very much a research and development problem at the moment. But you may have noticed, I think last year about this time, we acquired a company in Germany that does over-the-air updates to automotive systems.
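The loop described here — extract a feature at the edge, shrink it to a few bytes, aggregate independent reports in the cloud before committing a map change — can be sketched roughly as follows. The message fields, the three-confirmation threshold, and the count-based confidence rule are all illustrative assumptions, not HERE's actual protocol:

```python
import json
from collections import defaultdict

# Edge side: a detection is reduced to a tiny message, never raw imagery.
def encode_observation(feature: str, lat: float, lon: float) -> bytes:
    """Compress a detected road feature to a few dozen bytes for upload."""
    return json.dumps({"f": feature, "lat": round(lat, 5),
                       "lon": round(lon, 5)}).encode()

# Cloud side: confidence in a map change grows as more vehicles report it.
class MapUpdateAggregator:
    def __init__(self, confirmations_needed: int = 3):
        self.counts = defaultdict(int)
        self.confirmations_needed = confirmations_needed

    def report(self, msg: bytes) -> bool:
        """Record one vehicle's observation; return True once enough
        independent vehicles agree that the map should be updated."""
        obs = json.loads(msg)
        key = (obs["f"], obs["lat"], obs["lon"])
        self.counts[key] += 1
        return self.counts[key] >= self.confirmations_needed

agg = MapUpdateAggregator()
msg = encode_observation("construction_sign", 37.78331, -122.40055)
publish = [agg.report(msg) for _ in range(3)]  # third report confirms
```

The design choice the sketch mirrors is bandwidth asymmetry: each vehicle uploads almost nothing, while the cloud bears the cost of deduplicating and redistributing updates at fleet scale.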
So the notion is of starting to put the pieces in place to enable that ecosystem. But definitely the philosophy will be that vehicles collect information, send that sensed information back to the cloud, and it gets processed into map-update details. And actually, your certainty level about a particular event — a particular sign, a particular road closure — goes up as the number of vehicles that observe it and communicate back to the cloud increases, so that increases your confidence that that event, that feature of the map, if you will, has actually changed. But it is a massive data collection and processing operation, and there's a lot of machine learning going on along the way — or at least deployed models.

Now, you were telling me — we were talking a little before — that part of what you're doing is training data. I thought that was very fascinating. Explain a little more about what you guys are doing in that regard.

So, as I said, probably 80 percent of the scientists and engineers on my team are machine learning experts in one form or another. And for those of you who know anything about machine learning — or AI, if you will, the machine-learning flavor of AI — the secret sauce is the data, and really it's the training data. It's labeled data that tells the algorithms what patterns are associated with what outcomes, and those models then produce predictions of outcomes that control systems use — in a vehicle sense — to maneuver the vehicle or change how the vehicle is behaving in a certain way. But at the end of the day, it's a training-data problem. Most of the algorithms in use for machine learning — I mean, there's a fair amount of dynamic algorithm development and improvement in deep learning, but even there the architectures are relatively set — so what really makes effective models, and therefore effective AI, is the datasets you use to train them. And collecting datasets to train automated vehicles in particular is hard. Think about the human experience of learning how to drive: you do a lot of things automatically that you learned over the course of years and years, through trial and through observation. As a kid, you observed your parents and other people you rode with as they drove around, and they made certain decisions based on certain environmental factors. You observed that, and then you did the same thing as you began to learn how to drive. Now the question becomes: how do I create a dataset that teaches a machine all the stuff that you just learned?

Right. It's interesting — teaching my son to drive, I have a hard time explaining to him how to do something, because I don't even know how I do it; it's just from experience. So what you're doing is training machines, in a way, to have that. I mean, that's the mother's milk of machine learning and AI — that training data?

Right, right.
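The "labeled data tells the algorithm what patterns go with what outcomes" point can be illustrated with a deliberately tiny supervised learner — a nearest-neighbor toy with invented labels, not anything HERE actually uses:

```python
import math

# Toy training set: (speed_gap_mph, following_distance_m) -> action label.
# In real driver-assistance work these labels come from curated, human-
# or fleet-annotated datasets; here they are invented for illustration.
training_data = [
    ((-10.0, 15.0), "brake"),   # closing fast on a short gap
    ((-2.0, 40.0), "coast"),
    ((0.0, 60.0), "hold"),
    ((5.0, 80.0), "hold"),
]

def predict(sample):
    """1-nearest-neighbor: copy the label of the closest labeled example.
    The 'model' is nothing but the training data itself."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(training_data, key=lambda ex: dist(ex[0], sample))[1]

action = predict((-8.0, 18.0))  # resembles the "brake" example
```

However simple, the sketch makes the dependency visible: change the labels and the predictions change with them, which is exactly why curating the training set is the expensive part.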
I mean, the fantastic and amazing examples you read about — the Google DeepMind one; what's the name of that game they did? — it's a crazy, amazing example, but it required a lot of experts and hundreds of man-hours to curate the datasets. And in fact, they had the machine, in some sense, train itself by going through massive numbers of combinations of moves it could make and learning which ones were effective. Which brings me to a flavor of machine learning we also find interesting, called reinforcement learning, that we have research on. One way to hopefully get vehicles to learn faster, and not have to collect all these training datasets, is to enable them to learn using reinforcement-learning techniques.

So that's the end goal. And until we have fully autonomous vehicles — that interim period I was talking about — what is the best way to use machine learning for where we are now?

Yeah. I mean, from my perspective, in a general sense, what machine learning — or AI, if you will — really does is make humans more effective. If you just think about the driving experience as a metaphor for all of AI and machine learning: I just recently, within the last few years, bought a car with adaptive cruise control. Driving with a good adaptive cruise control system — there are some that aren't as good — in heavy traffic is an amazing experience in terms of how much it reduces the cognitive load, the number of decisions you have to make as a human operating a vehicle. And I think that's just the tiny leading edge of what is possible in terms of automation in the vehicle and taking the cognitive load off the driver. Ultimately, a fully automated vehicle takes all the cognitive load — you can do something else, you can read or whatever — but we're a ways away from that, probably. In the meantime, we can still dramatically reduce the cognitive load on the driver's decision making by automating a lot of the things that, as you described, you do without even thinking when you drive. And I think that's actually a good metaphor for how artificial intelligence and machine learning will impact our society, both in industrial and personal settings, over the next 30 to 50 years or whatever: it makes humans more and more productive because it takes over the repetitive things you do almost automatically, and frees up a ton of mental space to do other things.
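For the reinforcement-learning idea raised earlier — letting a system learn from trial and reward rather than from curated labels — a minimal tabular Q-learning loop looks like this. The toy "lane-keeping" environment, its rewards, and all hyperparameters are invented for illustration and bear no relation to any production system:

```python
import random
from collections import defaultdict

# Toy environment: states are lane offsets -2..2; actions steer left/straight/right.
ACTIONS = [-1, 0, 1]

def step(state, action):
    """Apply a steering action; reward is highest when centered (offset 0)."""
    new_state = max(-2, min(2, state + action))
    reward = 1.0 if new_state == 0 else -abs(new_state)
    return new_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: no labeled dataset, only trial, error, and reward."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        state = rng.choice(range(-2, 3))
        for _ in range(10):
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            new_state, reward = step(state, action)
            best_next = max(q[(new_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = new_state
    return q

q = train()
# After training, the greedy action from an off-center lane steers back toward center.
best_from_left = max(ACTIONS, key=lambda a: q[(-1, a)])
```

The contrast with the nearest-neighbor sketch above is the point of the interview passage: here nobody labeled any examples; the behavior emerges from the reward signal alone.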
I could talk about this all day, and I probably will later on at the HERE booth — one thing I didn't mention is that HERE has a booth here, so if you want to find out more about the company, go check them out. So, Neil, thank you so much for taking the time to discuss this, and feel free to come up with any questions afterward, or meet up over at the HERE booth.