Commentary by Mark P. Mills

Self-Driving Car Dreams

This year marks the 100th anniversary of the idea of the self-driving car. In 1918, Scientific American published a vision of the “car of the future,” in which “the steering wheel will be obsolete” and driving will be “done from a small control board.”

And 60 years ago, an issue of Electronic Age featured a story about the world’s first successful test of a self-driving car. Engineers from RCA — that era’s techno-wizard company — sat in the passenger seat as a car with a blacked-out windshield drove itself on a public highway in Nebraska. 

But recent tragedies involving self-driving cars highlight the fact that, despite what today’s enthusiasts believe, we still have a long way to go before auto-piloted cars become as common as automatic transmissions. 

Let’s be real: Autopilot in cars today really means a human is the test pilot. The test? Can a person, when instructed by a robot, quickly obey and seize control of a car hurtling towards disaster?

Referring to last month’s fatal crash of a Tesla Model X, a headline stated: “Tesla says driver ignored warnings from Autopilot.” This was the third confirmed death at the “hands” of an algorithm piloting a car, following one two weeks earlier involving an Uber in Arizona and another involving a Tesla in Florida in 2016.

After the Arizona crash, the state suspended permissions for robocars. Minnesota’s legislature is considering a ban. Sen. Dianne Feinstein (D-CA), who hails from tech central, put a hold on legislation intended to promote self-driving cars. Meanwhile, Uber, Toyota, and NVIDIA (a leading chipmaker for autopilots) have all halted testing of autonomous cars on public roads.

Engineers and regulators have long wrestled with developing sophisticated practices to ensure safety at human-machine interfaces. What tech aficionados ignore at their peril is that safety is not just statistical; it is also psychological. No matter how valuable and superior a new technology may be by objective measures, it has to be far safer than what it replaces in order to feel safe to its users.

This stubborn fact about human psychology is well researched. How much risk we accept is a function of both the perception and the reality of how much control we have. Tolerance for risk imposed or controlled by others tends to be much lower than risk more closely linked to an individual’s control or choices. By these measures, robot cars aren’t even close yet.

Consider that commercial aviation, which emerged in the 1930s, didn’t go mainstream until flying was more than 10 times safer than driving. Aviation has since had to improve continually to stay ahead of automotive safety. And it has. Or consider that in 1918 the Model T, primitive compared with today’s cocooned cars, was actually 10 times safer than traveling by horse and wagon. By contrast, today’s robot pilots are still demonstrably less safe than human drivers.

The Uber incident is the first fatality caused by a fully autonomous vehicle. And it comes after approximately 6 million cumulative autonomous robo-miles. While that may sound impressive, it is actually a statistical rate roughly 15 times worse than that of human drivers. The nation’s roads see just one fatality for every 100 million people-piloted miles — a figure that includes distracted, speeding, and inebriated drivers.
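
For readers who want to check that arithmetic, here is a minimal sketch in Python using only the rounded figures cited above (roughly 6 million autonomous miles with one fatality, versus one fatality per 100 million human-piloted miles); with these inputs the ratio comes out near 17, the same ballpark as the 15-fold figure:

    # Back-of-the-envelope fatality-rate comparison using the approximate
    # figures cited in this article (rounded inputs, not official statistics).
    ROBO_MILES = 6_000_000                  # cumulative fully autonomous miles (approx.)
    ROBO_FATALITIES = 1                     # the Uber incident
    HUMAN_MILES_PER_FATALITY = 100_000_000  # ~1 death per 100 million human-piloted miles

    robo_miles_per_fatality = ROBO_MILES / ROBO_FATALITIES
    ratio = HUMAN_MILES_PER_FATALITY / robo_miles_per_fatality

    print(f"Autonomous: ~1 fatality per {robo_miles_per_fatality:,.0f} miles")
    print(f"Humans:     ~1 fatality per {HUMAN_MILES_PER_FATALITY:,} miles")
    print(f"Autonomous fatality rate: ~{ratio:.0f}x worse")  # ~17x with these inputs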

In fact, robo’s record is even worse. The autonomous miles include a human minder ready to take over in ambiguous situations. Records show that robots have handed the wheel over to humans thousands of times, and there is no data on how often such “disconnects” avoided accidents. The Uber fatality was blamed on the minder failing to seize control. 

Some 20-fold more quasi-robo-miles have been accumulated with Tesla’s Autopilot, which, the company asserts, is not fully autonomous, meaning humans must stand ready to take control at a moment’s notice. With “only” two fatalities, that record does roughly match the human average.

But that just means those Tesla drivers properly obeyed the algorithm. Today’s robots are lousy pilots with poor vision. To become independent of people, they will need vision systems (sensors) superior to humans in all conditions, and algorithms that beat human reasoning in all circumstances. Neither yet exists. 

Many devices can outperform humans for specific functions. But no tech works well in all known conditions. Snow and rain are particularly challenging. And employing all classes of sensors together — cameras, radar, lasers, infrared, etc. — would increase a car’s cost 10-fold, while still achieving sub-human “situational awareness,” to use the military term.

The idea that algorithms may soon be as smart as or even smarter than people has been around at least since Alan Turing and the first computers of World War II. But today’s artificial intelligence remains confused by such mundane things as tunnels, bridges, and graffiti-marred road signs. And no software yet can emulate the complex choreography of behaviors and permissions that goes on driver-to-driver and between pedestrians and drivers (think eye contact, a nod of the head, a subtle gesture).

Software stability and reliability also remain unresolved. In so-called cyber-physical systems, where there is a risk of very real and often immediate human harm, there’s no room for sloppy code, continual updates to fix glitches, or the system “freezes” common in low-risk consumer apps.

True, sensors and software are getting better; costs are collapsing. However, there’s still a long way to go. Congress is being urged to move on legislation to accelerate development of self-driving cars, and perhaps even indemnify companies eager to deploy them. That would be a mistake. New regulations aren’t needed. Engineers still have work to do. And matching the human safety record won’t be good enough. Robocars will have to at least match aviation safety, because that’s what people will demand.

If Congress feels compelled to do something, it should consider emulating an aviation innovation and creating a Robocar Safety Reporting System. A neutral party with lots of relevant experience, say, the Defense Advanced Research Projects Agency (DARPA), could operate it.

A future in which the steering wheel is “obsolete” may finally be on the horizon. But glib promotions and over-eager deployments will impede rather than accelerate the path to such a new era of mobility. 

This piece originally appeared in RealClearPolicy

______________________

Mark P. Mills is a senior fellow at the Manhattan Institute and a faculty fellow at Northwestern University’s McCormick School of Engineering. In 2016, he was named “Energy Writer of the Year” by the American Energy Society.
