The Road to Self-driving Cars Is Full of Speed Bumps

Automakers are revving up for a very near future of fully autonomous vehicles. But the road ahead is rough.

By Hannah Fry
Oct 25, 2018 (updated Nov 14, 2019)
(Image credit: Neil Webb)

The sun was barely above the horizon on March 13, 2004, but the Slash X saloon bar, in the middle of the Mojave Desert, was already thronging with people.

The bar is on the outskirts of Barstow, a California town between Los Angeles and Las Vegas. It’s a place popular with cowboys and off-roaders, but on that spring day it had drawn the attention of another kind of crowd. A makeshift stadium had been built, and it was packed with engineers, excited spectators and foolhardy petrol heads who all shared a similar dream: to be the first people on Earth to witness a driverless car win a race.

The race had been organized by the U.S. Defense Advanced Research Projects Agency, or DARPA (nicknamed the Pentagon’s “mad science” division). The agency had been interested in unmanned vehicles for a while, and with good reason: Roadside bombs and targeted attacks on military vehicles were a major cause of death on the battlefield. Earlier that year, DARPA had announced its intention to make one-third of U.S. ground military vehicles autonomous by 2015.

Up to that point, progress had been slow and expensive. DARPA had spent around half a billion dollars over two decades funding research at universities and companies in the hope of achieving its ambition. But then came an ingenious idea: Why not create a competition? The agency would invite anyone in the country to design their own driverless cars and race them against one another on a long-distance track, with a prize of $1 million for the winner. It would be a quick and cheap way to give DARPA a head start in pursuing its goal.

On the morning of the 132-mile race, a ramshackle lineup of cars gathered at Slash X, along with a few thousand spectators. Things didn’t quite go as planned. One car flipped upside down in the starting area and had to be withdrawn. A self-driving motorbike barely cleared the start line before it rolled onto its side and was declared out of the race. One car hit a concrete wall 50 yards in. Another got tangled in a barbed wire fence. The scene around the saloon bar began to look like a robot graveyard.

The top-scoring vehicle, an entry by Carnegie Mellon University, managed an impressive 7 miles before misjudging a hill — at which point the tires started spinning and, without a human to help, carried on spinning until they caught fire. It was over by late morning. A DARPA organizer climbed into a helicopter and flew over to the finish line to inform the waiting journalists that none of the cars would be getting that far.

The race had been oily, dusty, noisy and destructive — and had ended without a winner. All those teams of people had worked for a year on a creation that had lasted, at best, a few minutes.

But the competition was anything but a disaster. The rivalry had led to an explosion of new ideas, and by the next DARPA Grand Challenge in 2005, the technology was vastly improved. An astonishing five driverless cars completed the race without any human intervention.

(Image credit: Neil Webb)

Now, more than a decade later, it’s widely accepted that the future of transportation is driverless. In late 2017, Philip Hammond, the British Chancellor of the Exchequer, announced the government’s intention to have fully driverless cars — without a safety attendant on board — on British roads by 2021. Daimler, a Germany-based auto manufacturer, has promised driverless cars by 2020, and Ford by 2021. Other manufacturers have made similar forecasts for their driverless vehicles.

On the surface, building a driverless car sounds as if it should be relatively easy. Most humans manage to master the requisite skills to drive. Plus, there are only two possible outputs: speed and direction. It’s a question of how much gas to apply and how much to turn the wheel. How hard can it be?

But, as the first DARPA Grand Challenge demonstrated, building an autonomous vehicle is a lot trickier than it looks. Things quickly get complicated when you’re trying to get an algorithm to control a great big hunk of metal traveling at 60 mph.

Beyond the Rules of the Road

Imagine you’ve got two vehicles approaching each other at speed, traveling in opposite directions down a gently curved county highway.

A human driver will be perfectly comfortable in that scenario, knowing the other car will stick to its own lane and pass safely a few feet to the side. “But for the longest time, it does look like you’re going to hit each other,” explains Paul Newman, professor of robotics at the University of Oxford and founder of Oxbotica, a company that builds driverless cars.

How do you teach a driverless car not to panic in that situation? You don’t want the vehicle to drive off the side of the road trying to avoid a collision that was never going to happen, says Newman. But, equally, you don’t want it to be complacent if you really do find yourself on the verge of a head-on crash. Remember, too, these cars are only ever making educated guesses about what to do.

How do you get it to guess right every single time? That, says Newman, “is a hard, hard problem.”

It’s a problem that puzzled experts for a long time, but it does have a solution. The trick is to build in a model for how other — sane — drivers will behave. Unfortunately, the same can’t be said of other nuanced driving scenarios.
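
Before getting to those messier cases, it’s worth seeing how simple the “sane driver” trick can be in principle. The sketch below is purely illustrative; the numbers, the function name and the assumption that the oncoming car holds its lane and speed are mine, not taken from any real self-driving system. Rather than panicking because two cars are closing head-on, the planner predicts where the other car will be, sideways-on, at the moment they pass, and only treats the encounter as a conflict if the predicted paths genuinely overlap.

```python
# A minimal, illustrative sketch, not any manufacturer's real code.
# Assume the oncoming car is a "sane driver": it keeps to its own lane
# and holds its speed. Only flag a conflict if, by the time the two
# cars pass, it is predicted to have drifted into our path.

LANE_WIDTH_M = 3.5   # typical lane width (assumed for illustration)
CAR_WIDTH_M = 1.9    # typical car width (assumed for illustration)

def paths_conflict(lane_offset_m: float,
                   lateral_speed_mps: float,
                   time_to_pass_s: float) -> bool:
    """Predict the oncoming car's sideways offset at the moment the cars pass.

    lane_offset_m: how far its center sits from ours (one lane over = 3.5 m)
    lateral_speed_mps: how fast it is drifting toward us (a sane driver: ~0)
    time_to_pass_s: seconds until the two cars draw level
    """
    predicted_offset = lane_offset_m + lateral_speed_mps * time_to_pass_s
    return abs(predicted_offset) < CAR_WIDTH_M  # centers closer than one car width

# Oncoming car one lane over, holding its lane, passing us in 3 seconds:
print(paths_conflict(LANE_WIDTH_M, 0.0, 3.0))   # False -> no need to swerve
# The same car drifting toward us at 1 meter per second:
print(paths_conflict(LANE_WIDTH_M, -1.0, 3.0))  # True  -> start planning evasion
```

In a real vehicle the prediction would be probabilistic and recomputed many times a second, but the underlying idea is the same: assume the other driver is sane, and keep checking that the assumption still holds.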

“What’s hard is all the problems with driving that have nothing to do with driving,” says Newman.

For instance, how do you teach a self-driving algorithm to understand that you need to be extra cautious upon hearing the jingle of an ice cream truck, or when passing a group of kids playing with a ball on the sidewalk?

Even harder, how do you teach a car that it should sometimes break the rules of the road? What if an ambulance with its lights on is trying to get past on a narrow street and you need to drive up on the sidewalk to let it through? Or if an oil tanker has jackknifed across a country lane and you need to get out of there by any means possible?

(Image credit: Zapp2Photo/Shutterstock)

“None of these are in the [U.K.] Highway Code,” Newman points out. And yet a truly autonomous car needs to know how to deal with all of these scenarios if it’s to exist without ever having any human intervention. Even in emergencies.

That’s not to say these are insurmountable problems. “I don’t believe there’s any level of intelligence that we won’t be able to get a machine to do,” says Newman. “The only question is when.”

Unfortunately, the answer to that question is probably not anytime soon. That driverless dream we’re all waiting for might be a lot further away than we think. That’s because there’s another layer of difficulty to contend with when trying to build that sci-fi fantasy of a go-anywhere, do-anything, steering-wheel-free driverless car, and it’s one that goes well beyond the technical challenge.

The People Factor

A fully autonomous car will also have to deal with the tricky problem of people. “People are mischievous,” says Jack Stilgoe, a sociologist at University College London who studies the social impact of technology. “They’re active agents, not just passive parts of the scenery.”

Imagine a world where truly, perfectly autonomous vehicles exist. The No. 1 rule in their onboard algorithms will be to avoid collisions wherever possible. And that changes the dynamics of the road. If you stand in front of a driverless car, it has to stop. If you pull out in front of one at a junction, it has to behave submissively.

“People who’ve been relatively powerless on roads up till now, like cyclists, may start cycling very slowly in front of self-driving cars knowing that there is never going to be any aggression,” says Stilgoe.

Getting around this problem might mean bringing in stricter rules to deal with people who abuse their position as cyclists or pedestrians. It’s been done before, of course: Think of jaywalking. Or it could mean forcing everything else off the roads, as happened with the introduction of the automobile. That’s why you don’t see bicycles, horses, carts, carriages or pedestrians on an expressway.

If we want fully autonomous cars, we’ll almost certainly have to do something similar again and limit the number of aggressive drivers, ice cream trucks, kids playing in the road, roadwork signs, difficult pedestrians, emergency vehicles, cyclists, mobility scooters and everything else that makes the problem of autonomy so difficult. That’s fine, but it’s a little different from the way the idea is currently being sold to us.

“The rhetoric of autonomy and transport is all about not changing the world,” says Stilgoe. “It’s about keeping the world as it is but making and allowing a robot to just be as good as and then better than a human at navigating it. And I think that’s stupid.”

But hang on, some of you may be thinking. Hasn’t this problem already been cracked? Hasn’t Waymo, Google’s self-driving car spinoff, driven millions of miles already? Aren’t Waymo’s fully autonomous cars (or at least, close-to-fully-autonomous cars) currently driving around the roads of Phoenix?

Well, yes. But not every mile of road is created equally. Most miles are so easy to drive, you can do it while daydreaming. Others are far more challenging.

(Image credit: Sundry Photography/Shutterstock)

At the time of writing, Waymo cars aren’t allowed to go just anywhere: They’re “geo-fenced” into a small, predetermined area. So, too, are the driverless cars Daimler and Ford propose to have on the roads by 2020 and 2021, respectively. They’re confined to a pre-decided go-zone. And that does make the problem of autonomy simpler.
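
In software terms, a geofence can be surprisingly mundane. Here is a toy sketch of the general idea; the polygon, the coordinates and the function are invented for illustration, and real operating areas are specified far more carefully, restricting things like weather and time of day as well as territory. The go-zone is just a polygon on the map, and a requested pickup or drop-off is accepted only if it falls inside it.

```python
# A toy sketch of geofencing with invented coordinates; this is not
# Waymo's actual service area or code. The "go-zone" is a polygon of
# map coordinates, and a requested destination is accepted only if it
# falls inside that polygon.

def inside_go_zone(point, polygon):
    """Standard ray-casting test: count how many polygon edges a ray from
    the point crosses; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# An invented rectangular service area (longitude, latitude pairs):
GO_ZONE = [(-112.10, 33.30), (-111.90, 33.30), (-111.90, 33.45), (-112.10, 33.45)]

print(inside_go_zone((-111.97, 33.38), GO_ZONE))  # True: trip accepted
print(inside_go_zone((-112.50, 33.60), GO_ZONE))  # False: outside the go-zone
```

Everything outside the polygon simply isn’t the car’s problem, which is exactly what makes the go-zone approach so attractive to manufacturers.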

Newman says the future of driverless cars will involve these types of go-zones.

“They’ll come out working in an area that’s very well known, where their owners are extremely confident that they’ll work,” says Newman. “So it could be part of a city, not in the middle of a place with unusual roads or where cows could wander into the path. Maybe they’ll work at certain times of day and in certain weather situations. They’re going to be operated as a transport service.”

Staying Focused

Lisanne Bainbridge, a psychologist at University College London, published a seminal essay in 1983 called “Ironies of Automation,” on the hidden dangers of relying too heavily on automated systems. A machine built to improve human performance, she explained, will lead — ironically — to a reduction in human ability.

By now, we’ve all borne witness to this in some small way. It’s why people can’t remember phone numbers anymore, why many of us struggle to read our own handwriting and why lots of us can’t navigate anywhere without GPS. With technology to do it all for us, there’s little opportunity to practice our skills.

There is some concern that this might happen with self-driving cars — where the stakes are a lot higher than with handwriting. Until we get to full autonomy, the car will still sometimes unexpectedly hand back control to the driver. Will we be able to remember instinctively what to do? And will teenage drivers of the future ever have the chance to master the requisite driving skills in the first place?

But even if all drivers manage to stay competent, there’s another issue we’ll still have to contend with: What level of awareness is being asked of the human driver before the car’s autopilot cuts out?

At one level, the driver is expected to pay careful attention to the road at all times. At the time of writing, Tesla’s Autopilot is an example of this approach. It’s essentially a fancy cruise control: It’ll steer, brake and accelerate on the motorway, but it expects the driver to be alert, attentive and ready to step in at all times. To make sure you’re paying attention, the system sounds an alarm if your hands are off the wheel for too long.
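
Mechanically, that kind of nag is not sophisticated. Here is a rough sketch of the general idea; the 15-second threshold, the function names and the sensor stub are invented for illustration and are not Tesla’s actual parameters or code. The system simply tracks how long it has been since it last felt torque on the steering wheel, and chimes when the gap grows too long.

```python
# A rough sketch of a hands-on-wheel nag. The threshold, the names and
# the sensor stub are invented for illustration; this is not Tesla's
# actual logic, parameters or API.

import time

HANDS_OFF_LIMIT_S = 15.0  # assumed limit before the system complains

def steering_torque_detected() -> bool:
    """Stand-in for a real steering-torque sensor reading."""
    return False  # pretend the driver's hands are off the wheel

def sound_alarm() -> None:
    print("Chime: please keep your hands on the wheel")

def monitor_attention(run_for_s: float = 1.0) -> None:
    """Loop briefly, nagging whenever the hands-off timer runs out."""
    start = time.monotonic()
    last_hands_on = start
    while time.monotonic() - start < run_for_s:
        if steering_torque_detected():
            last_hands_on = time.monotonic()  # reset the countdown
        elif time.monotonic() - last_hands_on > HANDS_OFF_LIMIT_S:
            sound_alarm()  # a real system would escalate, then disengage
        time.sleep(0.1)
```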

But that’s not an approach that’s going to end well. “It’s impossible for even a highly motivated human to maintain effective visual attention toward a source of information, on which very little happens, for more than about half an hour,” Bainbridge wrote in her essay.

Other autonomous car programs are finding the same issues. Although Uber’s driverless cars require human intervention every 13 miles, getting drivers to pay attention remains a struggle. In March 2018, an Uber self-driving vehicle fatally struck a pedestrian in Tempe, Arizona. Video footage from inside the car showed that the “human monitor” sitting behind the wheel was looking away from the road in the moments before the collision.

(Image credit: Neil Webb)

A Plan for the Inevitable

Though this is a serious problem, there is an alternative. The car companies could accept that humans will be humans and acknowledge that our minds will wander. After all, being able to read a book while driving is part of the appeal of self-driving cars.

Some manufacturers have already started to build their cars to accommodate our inattention. Audi’s Traffic Jam Pilot is one example. It can completely take over when you’re in slow-moving highway traffic, leaving you to sit back and enjoy the ride. Just be prepared to step in if something goes wrong. But there’s a reason why Audi has limited its system to slow-moving traffic on limited-access roads. The risks of catastrophe are lower in motorway congestion.

And that’s an important distinction. Because as soon as a human stops monitoring the road, you’re left with the worst possible combination of circumstances when an emergency happens. A driver who’s not paying attention will have very little time to assess their surroundings and decide what to do.

Imagine sitting in a self-driving car, hearing an alarm and looking up from your book to see a truck ahead shedding its load onto your path. In an instant, you’ll have to process all the information around you: the motorbike in the left lane, the van braking hard ahead, the car in the blind spot on your right. You’d be most unfamiliar with the road at precisely the moment you need to know it best.

Add in the lack of practice, and you’ll be as poorly equipped as you could be to deal with the situations demanding the highest level of skill.

A 2016 study had people ride as passengers in a simulated self-driving car, reading a book or playing on their cell phones. Researchers found that, after an alarm sounded for the passengers to regain control, it took them about 40 seconds to do it. (At 60 mph, 40 seconds is roughly two-thirds of a mile of road.)

Ironically, the better self-driving technology gets, the worse these problems become. A sloppy autopilot that sets off an alarm every 15 minutes will keep a driver continually engaged and in regular practice. It’s the smooth and sophisticated automatic systems that are almost always reliable that you’ve got to watch out for.

“The worst case is a car that will need driver intervention once every 200,000 miles,” Gill Pratt, head of Toyota’s research institute, told technology magazine IEEE Spectrum in 2017.

Pratt says someone who buys a new car every 100,000 miles might never need to take over control from the car. “But every once in a while, maybe once for every two cars that I own, there would be that one time where it suddenly goes ‘beep beep beep, now it’s your turn!’ ” Pratt told the magazine. “And the person, typically having not seen this for years and years, would . . . not be prepared when that happened.”

Adjusting Expectations

As is the case with much of the driverless technology that is so keenly discussed, we’ll have to wait and see how this turns out. But one thing is for sure: As time goes on, autonomous driving will have a few lessons to teach us that apply well beyond the world of motoring — not just about the messiness of handing over control, but about being realistic in our expectations of what algorithms can do.

If this is going to work, we’ll have to adjust our way of thinking. We’re going to need to throw away the idea that cars should work perfectly every time, and accept that, while mechanical failure might be a rare event, algorithmic failure almost certainly won’t be.

So, knowing that errors are inevitable, knowing that if we proceed we have no choice but to embrace uncertainty, the conundrums within the world of driverless cars will force us to decide how good something needs to be before we’re willing to let it loose on our streets. That’s an important question, and it applies elsewhere. How good is good enough? Once you’ve built a flawed algorithm that can calculate something, should you let it loose?
