So in 1885, Karl Benz invented the automobile. Later that year, he took it out for the first public test drive, and—true story—crashed into a wall. For the last 130 years, we've been working around that least reliable part of the car, the driver. We've made the car stronger. We've added seat belts, we've added air bags, and in the last decade, we've actually started trying to make the car smarter to fix that bug, the driver.
Now, today I'm going to talk to you a little bit about the difference between patching around the problem with driver assistance systems and actually having fully self-driving cars and what they can do for the world. I'm also going to talk to you a little bit about our car and allow you to see how it sees the world, how it reacts and what it does, but first I'm going to talk a little bit about the problem. And it's a big problem: 1.2 million people are killed on the world's roads every year. In America alone, 33,000 people are killed each year. To put that in perspective, that's the same as a 737 falling out of the sky every working day. It's kind of unbelievable. Cars are sold to us like this, but really, this is what driving is like. Right? It's not sunny, it's rainy, and you want to do anything other than drive. And the reason why is this: Traffic is getting worse. In America, between 1990 and 2010, the vehicle miles traveled increased by 38 percent, while our road capacity grew by only six percent, so it's not just in your head. Traffic really is substantially worse than it was not very long ago.
And all of this has a very human cost. So if you take the average commute time in America, which is about 50 minutes, you multiply that by the 120 million workers we have, that turns out to be about six billion minutes wasted in commuting every day. Now, that's a big number, so let's put it in perspective. You take that six billion minutes and you divide it by the average life expectancy of a person, that turns out to be 162 lifetimes spent every day, wasted, just getting from A to B. It's unbelievable. And then, there are those of us who don't have the privilege of sitting in traffic. So this is Steve. He's an incredibly capable guy, but he just happens to be blind, and that means instead of a 30-minute drive to work in the morning, it's a two-hour ordeal of piecing together bits of public transit or asking friends and family for a ride. He doesn't have that same freedom that you and I have to get around. We should do something about that.
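To make that arithmetic concrete, here's a minimal sketch of the calculation; the 70-year life expectancy is an assumed round figure for illustration, not a number from the talk.

```python
# Rough check of the commute arithmetic above.
avg_commute_minutes = 50        # cited average commute time
workers = 120_000_000           # cited number of American workers

minutes_per_day = avg_commute_minutes * workers
print(f"{minutes_per_day / 1e9:.0f} billion minutes spent commuting per day")  # ~6

life_expectancy_years = 70      # assumption: a round figure for illustration
minutes_per_lifetime = life_expectancy_years * 365.25 * 24 * 60
print(f"{minutes_per_day / minutes_per_lifetime:.0f} lifetimes per day")       # ~163, close to the 162 cited above
```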
Now, conventional wisdom would say that we'll just take these driver assistance systems and we'll kind of push them and incrementally improve them, and over time, they'll turn into self-driving cars. Well, I'm here to tell you that's like me saying that if I work really hard at jumping, one day I'll be able to fly. We actually need to do something a little different. And so I'm going to talk to you about three different ways that self-driving systems are different than driver assistance systems. And I'm going to start with some of our own experience.
So back in 2013, we had the first test of a self-driving car where we let regular people use it. Well, almost regular: they were 100 Googlers, but they weren't working on the project. And we gave them the car and we allowed them to use it in their daily lives. But unlike a real self-driving car, this one had a big asterisk with it: They had to pay attention, because this was an experimental vehicle. We tested it a lot, but it could still fail. And so we gave them two hours of training, we put them in the car, we let them use it, and what we heard back was awesome for someone trying to bring a product into the world. Every one of them told us they loved it. In fact, we had a Porsche driver who came in and told us on the first day, "This is completely stupid. What are we thinking?" But at the end of it, he said, "Not only should I have it, everyone else should have it, because people are terrible drivers." So this was music to our ears, but then we started to look at what the people inside the car were doing, and this was eye-opening. Now, my favorite story is this gentleman who looks down at his phone and realizes the battery is low, so he turns around like this in the car and digs around in his backpack, pulls out his laptop, puts it on the seat, goes in the back again, digs around, pulls out the charging cable for his phone, futzes around, puts it into the laptop, plugs it into the phone. Sure enough, the phone is charging. And all this time he's been doing 65 miles per hour down the freeway. Right? Unbelievable. So we thought about this and we said, it's kind of obvious, right? The better the technology gets, the less reliable the driver is going to get. So by just making the cars incrementally smarter, we're probably not going to see the wins we really need.
Let me talk about something a little technical for a moment here. So we're looking at this graph, and along the bottom is how often the car applies the brakes when it shouldn't. You can ignore most of that axis, because if you're driving around town and the car starts stopping randomly, you're never going to buy that car. And the vertical axis is how often the car is going to apply the brakes when it's supposed to, to help you avoid an accident. Now, if we look at the bottom left corner here, this is your classic car. It doesn't apply the brakes for you, it doesn't do anything goofy, but it also doesn't get you out of an accident. Now, if we want to bring a driver assistance system into a car, say with collision mitigation braking, we're going to put some package of technology on there, and that's this curve, and it's going to have some operating properties, but it's never going to avoid all of the accidents, because it doesn't have that capability. But we'll pick some place along the curve here, and maybe it avoids half of the accidents that the human driver misses, and that's amazing, right? We just reduced accidents on our roads by a factor of two. That would be 17,000 fewer people dying every year in America.
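Here is a toy sketch of that operating-point idea; the curve shape and the tolerance for false braking are purely assumptions for illustration, and only the 33,000 fatality figure comes from the talk.

```python
# Toy operating-point picture, not any real braking system.
us_road_deaths_per_year = 33_000   # cited above

def fraction_of_accidents_avoided(false_brakes_per_year: float) -> float:
    """Assumed tradeoff curve: tolerating more unnecessary braking lets the
    system catch a larger share of real accidents, with diminishing returns."""
    return false_brakes_per_year / (false_brakes_per_year + 1.0)

# A driver assistance product has to keep false braking rare, so it sits low
# on the curve; suppose we pick a point that avoids about half of accidents.
operating_point = 1.0   # assumption: roughly one unneeded brake event per year
lives_saved = us_road_deaths_per_year * fraction_of_accidents_avoided(operating_point)
print(f"roughly {lives_saved:,.0f} fewer deaths per year")  # ~16,500, i.e. about 17,000
```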
But if we want a self-driving car, we need a technology curve that looks like this. We're going to have to put more sensors in the vehicle, and we'll pick some operating point up here where it basically never gets into a crash. They'll happen, but very low frequency. Now you and I could look at this and we could argue about whether it's incremental, and I could say something like "80-20 rule," and it's really hard to move up to that new curve. But let's look at it from a different direction for a moment. So let's look at how often the technology has to do the right thing. And so this green dot up here is a driver assistance system. It turns out that human drivers make mistakes that lead to traffic accidents about once every 100,000 miles in America. In contrast, a self-driving system is probably making decisions about 10 times per second, so order of magnitude, that's about 1,000 times per mile. So if you compare the distance between these two, it's about 10 to the eighth, right? Eight orders of magnitude. That's like comparing how fast I run to the speed of light. It doesn't matter how hard I train, I'm never actually going to get there. So there's a pretty big gap there.
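That gap is easy to reproduce with back-of-the-envelope arithmetic; the average speed below is an assumption used to turn "10 decisions per second" into roughly 1,000 decisions per mile.

```python
import math

# The "eight orders of magnitude" comparison, worked out roughly.
human_miles_per_mistake = 100_000   # cited: one accident-causing error per 100,000 miles

decisions_per_second = 10           # cited order of magnitude
avg_speed_mph = 35                  # assumption, for a city/suburban mix
decisions_per_mile = decisions_per_second * 3600 / avg_speed_mph  # ~1,000

# Correct decisions a self-driving system must string together to match
# one human mistake:
decisions_per_human_mistake = human_miles_per_mistake * decisions_per_mile
print(f"about 10^{math.log10(decisions_per_human_mistake):.0f} decisions")  # 10^8
```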
And then finally, there's how the system can handle uncertainty. So this pedestrian here might be stepping into the road, might not be. I can't tell, nor can any of our algorithms, but in the case of a driver assistance system, that means it can't take action, because again, if it presses the brakes unexpectedly, that's completely unacceptable. Whereas a self-driving system can look at that pedestrian and say, I don't know what they're about to do, slow down, take a better look, and then react appropriately after that.
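A minimal sketch of that difference in how the two systems can treat uncertainty; the probabilities, thresholds, and speeds are illustrative assumptions, not values from the talk or any real system.

```python
def driver_assistance_brake(p_pedestrian_stepping_out: float) -> bool:
    # A driver assistance system may only intervene when it is nearly certain,
    # because braking unexpectedly under a human driver is unacceptable.
    return p_pedestrian_stepping_out > 0.99

def self_driving_target_speed(p_pedestrian_stepping_out: float, current_speed: float) -> float:
    # A self-driving system can hedge: shed speed while the intent is unclear,
    # keep observing, and commit to a stop only once the evidence is strong.
    if p_pedestrian_stepping_out > 0.8:
        return 0.0                        # stop
    if p_pedestrian_stepping_out > 0.3:
        return current_speed * 0.5        # slow down and take a better look
    return current_speed

print(driver_assistance_brake(0.5))           # False: it cannot act on "maybe"
print(self_driving_target_speed(0.5, 25.0))   # 12.5: slow down while uncertain
```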
So it can be much safer than a driver assistance system can ever be. So that's enough about the differences between the two. Let's spend some time talking about how the car sees the world.
So this is our vehicle. It starts by understanding where it is in the world, by taking a map and its sensor data and aligning the two, and then we layer on top of that what it sees in the moment. So here, all the purple boxes you can see are other vehicles on the road, and the red thing on the side over there is a cyclist, and up in the distance, if you look really closely, you can see some cones. Then we know where the car is in the moment, but we have to do better than that: we have to predict what's going to happen. So here the pickup truck in the top right is about to make a left lane change because the road in front of it is closed, so it needs to get out of the way. Knowing about that one pickup truck is great, but we really need to know what everybody's thinking, so it becomes quite a complicated problem. And then given that, we can figure out how the car should respond in the moment, so what trajectory it should follow, how quickly it should slow down or speed up. And then that all turns into just following a path: turning the steering wheel left or right, pressing the brake or gas. It's really just two numbers at the end of the day. So how hard can it really be?
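As a rough sketch of that pipeline, here is how the stages might fit together in code; every function and field name here is invented for illustration and is not the actual software described in the talk.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Controls:
    steering: float   # turn the wheel left or right
    throttle: float   # press the gas (positive) or the brake (negative)

# Stub stages standing in for the real localization, perception, prediction,
# and planning components; each just returns a placeholder.
def localize(map_data: Any, sensors: Any) -> Any: return {"x": 0.0, "y": 0.0}
def detect(sensors: Any) -> List[Any]: return [{"type": "cyclist"}, {"type": "cone"}]
def predict(obj: Any) -> Any: return {"object": obj, "intent": "continue straight"}
def plan(pose: Any, predictions: List[Any], map_data: Any) -> Any: return {"speed": 10.0}
def to_controls(trajectory: Any) -> Controls: return Controls(steering=0.0, throttle=0.2)

def drive_one_tick(map_data: Any, sensors: Any) -> Controls:
    pose = localize(map_data, sensors)               # align the map with sensor data
    objects = detect(sensors)                        # vehicles, cyclists, cones...
    predictions = [predict(o) for o in objects]      # what is each one about to do?
    trajectory = plan(pose, predictions, map_data)   # path and speed to follow
    return to_controls(trajectory)                   # "just two numbers" at the end

print(drive_one_tick(map_data=None, sensors=None))
```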
Back when we started in 2009, this is what our system looked like. So you can see our car in the middle and the other boxes on the road, driving down the highway. The car needs to understand where it is and roughly where the other vehicles are. It's really a geometric understanding of the world. Once we started driving on neighborhood and city streets, the problem became a whole new level of difficulty. You see pedestrians crossing in front of us, cars crossing in front of us, going every which way, the traffic lights, crosswalks. It's an incredibly complicated problem by comparison. And then once you have that problem solved, the vehicle has to be able to deal with construction. So here are the cones on the left forcing it to drive to the right, but not just construction in isolation, of course. It has to deal with other people moving through that construction zone as well. And of course, if anyone's breaking the rules, the police are there, and the car has to understand that the flashing light on top of that car means it's not just a car, it's actually a police officer. Similarly, the orange box on the side here is a school bus, and we have to treat that differently as well.
When we're out on the road, other people have expectations: So, when a cyclist puts up their arm, it means they're expecting the car to yield to them and make room for them to make a lane change. And when a police officer stands in the road, our vehicle should understand that this means stop, and when they signal to go, we should continue.
Now, the way we accomplish this is by sharing data between the vehicles. The simplest, most crude version of this is when one vehicle sees a construction zone and lets the others know about it, so they can be in the correct lane to avoid some of the difficulty. But we actually have a much deeper understanding of this. We can take all of the data that the cars have seen over time, the hundreds of thousands of pedestrians, cyclists, and vehicles that have been out there, understand what they look like, and use that to infer what other vehicles and other pedestrians should look like. And then, even more importantly, we can take from that a model of how we expect them to move through the world. So here the yellow box is a pedestrian crossing in front of us. Here the blue box is a cyclist, and we anticipate that they're going to nudge out and around the car to the right. Here there's a cyclist coming down the road, and we know they're going to continue to drive down the shape of the road. Here somebody makes a right turn, and in a moment here, somebody's going to make a U-turn in front of us, and we can anticipate that behavior and respond safely.
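As a toy sketch of that idea, the aggregate observations could be summarized into simple per-class motion models and used to anticipate movement; the classes, speeds, and prediction horizon below are assumptions for illustration only, not the learned models the talk describes.

```python
# Assumed typical speeds, standing in for behavior models learned from the
# hundreds of thousands of road users the fleet has observed.
TYPICAL_SPEED_M_PER_S = {
    "pedestrian": 1.4,
    "cyclist": 5.0,
    "vehicle": 12.0,
}

def anticipate_position(obj_class: str, position_m: float, horizon_s: float = 3.0) -> float:
    """Predict how far along its path an object will be a few seconds from now."""
    return position_m + TYPICAL_SPEED_M_PER_S[obj_class] * horizon_s

print(anticipate_position("cyclist", 0.0))     # ~15 m down the road in 3 seconds
print(anticipate_position("pedestrian", 0.0))  # ~4 m, likely still in the crosswalk
```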
Now, that's all well and good for things that we've seen, but of course, you encounter lots of things that you haven't seen in the world before. And so just a couple of months ago, our vehicles were driving through Mountain View, and this is what we encountered. This is a woman in an electric wheelchair chasing a duck in circles on the road. Now it turns out, there is nowhere in the DMV handbook that tells you how to deal with that, but our vehicles were able to encounter that, slow down, and drive safely. Now, we don't have to deal with just ducks. Watch this bird fly across in front of us. The car reacts to that. Here we're dealing with a cyclist that you would never expect to see anywhere other than Mountain View. And of course, we have to deal with drivers, even the very small ones. Watch to the right as someone jumps out of this truck at us. And now, watch the left as the car with the green box decides it needs to make a right turn at the last possible moment. Here, as we make a lane change, the car to our left decides it wants to as well. And here, we watch a car blow through a red light, and our vehicle yields to it. And similarly, here, a cyclist blowing through that light as well. And of course, the vehicle responds safely. And of course, we have people who do I don't know what sometimes on the road, like this guy pulling out between two self-driving cars. You have to ask, "What are you thinking?"
Now, I just fire-hosed you with a lot of stuff there, so I'm going to break one of these down pretty quickly. So what we're looking at is the scene with the cyclist again, and you might notice in the bottom, we can't actually see the cyclist yet, but the car can: it's that little blue box up there, and that comes from the laser data. And that's not actually really easy to understand, so what I'm going to do is turn that laser data around and look at it, and if you're really good at looking at laser data, you can see a few dots on the curve there, right there, and that blue box is that cyclist. Now, as our light is red, the cyclist's light has turned yellow already, and if you squint, you can see that in the imagery. But the cyclist, we see, is going to proceed through the intersection. Our light has now turned green, his is solidly red, and we now anticipate that this bike is going to come all the way across. Unfortunately, the other drivers next to us were not paying as much attention. They started to pull forward, and fortunately for everyone, this cyclist reacts, avoids, and makes it through the intersection. And off we go.
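For a sense of how a few laser dots can become a box like that, here is a minimal clustering sketch; the points, the distance threshold, and the greedy grouping approach are all assumptions for illustration, not the actual perception code.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def cluster(points: List[Point], max_gap: float = 1.0) -> List[List[Point]]:
    """Greedy grouping: a point joins the latest cluster if it is close to
    that cluster's most recent point, otherwise it starts a new cluster."""
    clusters: List[List[Point]] = []
    for p in sorted(points):
        if clusters and abs(p[0] - clusters[-1][-1][0]) <= max_gap \
                and abs(p[1] - clusters[-1][-1][1]) <= max_gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

def bounding_box(c: List[Point]) -> Tuple[Point, Point]:
    xs, ys = [p[0] for p in c], [p[1] for p in c]
    return (min(xs), min(ys)), (max(xs), max(ys))

laser_returns = [(20.1, 3.0), (20.3, 3.2), (20.5, 3.1), (45.0, -2.0)]  # made-up hits
for c in cluster(laser_returns):
    print(bounding_box(c))  # a tight box around the close-together dots, plus a stray point
```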
Now, as you can see, we've made some pretty exciting progress, and at this point we're pretty convinced this technology is going to come to market. We do three million miles of testing in our simulators every single day, so you can imagine the experience that our vehicles have. We are looking forward to having this technology on the road, and we think the right path is the self-driving approach rather than the driver assistance approach, because the urgency is so great. In the time I have given this talk today, 34 people have died on the world's roads.
How soon can we bring it out? Well, it's hard to say because it's a really complicated problem, but these are my two boys. My oldest son is 11, and that means in four and a half years, he's going to be able to get his driver's license. My team and I are committed to making sure that doesn't happen.
Thank you.
Chris, I've got a question for you.
Sure.
So certainly, the minds of your cars are pretty mind-boggling. On this debate between driver-assisted and fully driverless, there's a real debate going on out there right now. Some of the companies, for example, Tesla, are going the driver-assisted route. What you're saying is that that's kind of going to be a dead end, because you can't just keep improving that route and get to fully driverless; at some point, a driver is going to say, "This feels safe," and climb into the back,
Right.
and something ugly will happen.
No, that's exactly right, and it's not to say that the driver assistance systems aren't going to be incredibly valuable. They can save a lot of lives in the interim, but to see the transformative opportunity to help someone like Steve get around, to really get to the end case in safety, to have the opportunity to change our cities and move parking out and get rid of these urban craters we call parking lots, it's the only way to go.
We will be tracking your progress with huge interest. Thanks so much, Chris.
Thank you.