Some New Thoughts on Autonomous Vehicles

[By Bob Poole, Reason Foundation. From: Surface Transportation Innovations #145, November 2015. Reprinted with permission.]

Autonomous vehicles keep making news, with three October stories highlighting important questions. First, a Google self-driving car (the little pod without a steering wheel) was stopped by a traffic cop in Mountain View, CA, for driving too slowly (24 mph in a 35 mph zone). A Google employee on board this particular test vehicle was able to bring the car to a stop in response to the officer’s action. The company explained that this pod-car is designed for controlled environments and low speeds, not driving at the speed limit on ordinary streets. But the incident highlights an emerging question of how to design vehicle automation so that it operates effectively in the real world—e.g., not holding up traffic on the Interstate by driving the legal 65 mph limit when everyone else is doing 75 mph.

Second, a University of Michigan study compared the on-road safety record of conventional vehicles with that of AVs from three of the 10 firms licensed to test their vehicles in California, and found that the AVs were involved in accidents five times as often as conventional vehicles. In every case, however, the accident was caused by a conventional vehicle striking the AV. A similar finding emerged from a study by the California Department of Motor Vehicles. Once again, this raises the question of whether the automation algorithms produce AV driving patterns that confuse drivers of conventional vehicles, or fail to make the small adaptations that human drivers make without thinking.

The third news story concerned Tesla’s release of what it calls Autopilot as a $2,500 option for owners of its Model S produced in September 2014 or later. Despite its name, the system is a combination of adaptive cruise control, lane-keeping, and lane-changing—and it is designed for use on highways, not city streets (it cannot handle traffic lights or stop signs, for example). But Model S owners quickly produced videos of themselves doing things they are not supposed to do, such as reading a newspaper while the car drives itself, drawing considerable criticism.

Tesla’s approach of introducing automation features incrementally conflicts with what appears to be Google’s current approach. Tesla aims to move through what the industry calls five levels of automation, from Level 1 (conventional cars with some driver assists such as adaptive cruise control) through Level 5 (fully autonomous, with no driver controls). A growing number of human-factors and automation experts consider one or more of the intermediate levels dangerous, because of the time it takes to alert the driver of, say, a Level 3 AV (who may be sending text messages or holding a business meeting) that he or she must immediately regain control of the vehicle. The safest course, these experts maintain, is to skip the intermediate levels altogether and not introduce real automation until Level 4 or 5 is perfected. A recent Wired article (Nov. 10, 2015) quoted Ford officials as planning to go directly to Level 4, in which the automation can handle all situations but driver controls are provided as an option, for when the owner actually wants to drive. Google now plans to go straight to Level 5, with no driver controls provided (first for pod-cars in protected environments, and later for applications like robo-taxis). Most other companies are sticking with the evolutionary approach.
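
To make the five-level framing and the handover concern concrete, here is a minimal sketch in Python. The level names, the takeover-time figures, and the check itself are illustrative assumptions, not drawn from any manufacturer’s system or any standards body’s definitions:

    from enum import IntEnum

    class AutomationLevel(IntEnum):
        """The five-level framing described above (names are illustrative)."""
        DRIVER_ASSIST = 1   # some driver assists, e.g. adaptive cruise control
        PARTIAL = 2         # combined assists; driver must monitor at all times
        CONDITIONAL = 3     # system drives, but may demand the driver take over
        HIGH = 4            # automation handles all situations; controls optional
        FULL = 5            # fully autonomous, no driver controls

    # Hypothetical numbers: how long a distracted driver might need to regain
    # control, versus how much warning the system can realistically give.
    TAKEOVER_TIME_NEEDED_S = 10.0
    WARNING_TIME_AVAILABLE_S = 3.0

    def handover_is_risky(level: AutomationLevel) -> bool:
        """The experts' concern applies only to the conditional stage: below it
        the driver is already engaged, and above it the system never hands
        control back in an emergency."""
        return (level == AutomationLevel.CONDITIONAL
                and WARNING_TIME_AVAILABLE_S < TAKEOVER_TIME_NEEDED_S)

    for level in AutomationLevel:
        print(level.name, handover_is_risky(level))

The point of the sketch is simply that Level 3, unlike the levels above and below it, depends on a distracted human regaining control faster than the available warning time allows.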

MIT professor David Mindell (aeronautics/astronautics and history of engineering) is an expert on robotics. His new book, Our Robots, Ourselves, draws on decades of experience with automation in aircraft, spacecraft, underwater exploration, and other fields to argue that full automation is an unrealistic goal. The lesson of 40 years of automation technology in those fields, he says, is that “those systems are all imperfect, and the people are the glue that hold the system together. Airline pilots are constantly making small corrections, picking up mistakes, correcting the air traffic controllers.” Examining what he calls Google’s “utopian automation,” he argues that in order to work it must:

  1. Identify all nearby objects correctly;
  2. Have perfectly updated mapping systems; and
  3. Avoid all software glitches.

And that, he concludes, is unrealistic to expect.

Stanford engineering professor Chris Gerdes is one of a growing number of experts raising questions about the ethical decision algorithms that must be built into AVs’ automation systems to handle potential accidents. He and his students are developing and operating various AV prototypes, as well as holding workshops for engineers and researchers from academia, established auto companies, and would-be AV producers. As noted in an article about Gerdes’ work by Keith Naughton of Bloomberg, he is raising such questions as: “When an accident is unavoidable, should a driverless car be programmed to aim for the smallest object or to protect its occupant? If the car must choose between hitting a group of pedestrians and risking the life of its occupant, what is the moral choice?” On a similar theme, an MIT Technology Review article provocatively titled “Why Self-Driving Cars Must Be Programmed to Kill” profiled other researchers grappling with these questions.
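
To illustrate what “building an ethical decision algorithm into the vehicle” would actually require, here is a deliberately oversimplified sketch in Python. The harm scores, the occupant weighting, and the decision rule are hypothetical, invented purely for illustration, and do not describe any real AV system:

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        """One evasive option available in an unavoidable-crash situation."""
        name: str
        harm_to_occupants: float  # illustrative 0-to-1 scores, not real data
        harm_to_others: float

    # This weight IS the ethical judgment: how much the occupants' safety
    # counts relative to everyone else's. Someone has to choose this number.
    OCCUPANT_WEIGHT = 1.0

    def choose_maneuver(options):
        """Pick the option with the lowest weighted expected harm."""
        return min(options, key=lambda m: OCCUPANT_WEIGHT * m.harm_to_occupants
                                          + m.harm_to_others)

    options = [
        Maneuver("swerve into barrier", harm_to_occupants=0.6, harm_to_others=0.0),
        Maneuver("brake straight ahead", harm_to_occupants=0.1, harm_to_others=0.8),
    ]
    print("Chosen:", choose_maneuver(options).name)

However the weighting is set, the point Gerdes and others are making is that some such trade-off ends up encoded in the software, and there is as yet no accepted standard for what it should be.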

I raise these points not to oppose ongoing research on AVs, which on balance appear likely to offer large safety benefits, but only to illustrate that projections of huge fleets of Level 5 vehicles on our streets and highways five or even 10 years from now strike me as fanciful. There are many serious questions of technology, public acceptance, and ethics still to be worked on. Meanwhile, our streets and highways need major improvements, regardless of what vehicles may be like 30 or 40 years from now.