What is profoundly important about the Uber-Herzberg crash is that it repeats the same three lessons taught by the Tesla-Brown crash. Humans take risks. Human attention is unreliable. Robots are imperfect.

The driver of the truck that cut in front of Joshua Brown’s Tesla on May 7, 2016, took a calculated risk that his rig would clear the Tesla’s path or that the Tesla would slow. Elaine Herzberg would have had to make a similar calculation when she crossed a street in Tempe on March 18, 2018. She assumed the approaching car would slow or change lanes to avoid her. Several viewers of the crash video have suggested there was time for the Uber vehicle to brake and/or sufficient lane space behind Herzberg to avoid a collision.

It is part of human nature to take risks. A behavioral attribute called risk appetite says humans are designed to accept — and sometimes seek — some level of risk. Another behavioral attribute called risk compensation says that when we feel safer we display a riskier profile. The driver of a large vehicle judges himself a little safer when crossing in front of a smaller one, involuntarily trusting that the driver of the smaller vehicle will be highly invested in avoiding a crash. When one walks across a four-lane roadway with only a few cars, one tends to believe there is room for a car to get by, and that the driver of the approaching car will be strongly inclined to avoid a collision. These risk calculations are made at a barely conscious level. And they happen constantly. If they did not, we would be incapable of making many of the daily, physical decisions that we do now.

It is well understood that human attention is fragile. We hardly need another explanation of distracted driving at this point in the automobility narrative. We understand that the Tesla driver was otherwise occupied and had ignored several attention-recovery warnings from his vehicle before the crash. Risk compensation is complicit here as well: the videos Brown had made earlier show an exaggerated reliance on the technology, and he evidently saw little value in paying attention to the road or the car, given his high level of trust.

We can see in the Uber-crash video that Rafaela Vasquez, the Uber vehicle’s safety driver, was not attending to the road or the vehicle in the seconds leading up to the crash. This well-understood unreliability of human drivers, which is expected to worsen as automation becomes more reliable, is exactly why Waymo (Google) has skipped SAE Levels 2 and 3 of vehicle automation. Ford has indicated it would follow Google’s thinking regarding Level 3. To be fair, Uber has elected to do the same, but while testing a Level 4 wannabe, the safety driver is a weak link.

The third problem, of course, is that machines are not perfect either. Whether there was a sensor, algorithm or actuator fault in the Uber-Herzberg crash is not yet known, but we can be certain, without checking with any tester of these robots, that the specifications of an experimental Level 4 vehicle include detection and avoidance of a pedestrian — which according to Alain Kornhauser should be in AVs’ AI sweet spot. So regardless of where in the autonomous stack the failure occurred, the experimenter and, I would submit, its regulators have some culpability.

It is well documented that I am very much in favor of Level 4 development for the purpose of developing driverless taxi and shuttle fleets, and I fully agree that every pedestrian fatality is a tragedy of equal weight to the victim and to their family, regardless of the type or state of the vehicle driver. It is also sadly clear that when a human driver kills a pedestrian it is treated as commonplace, receiving nothing like the coverage the Uber-Herzberg crash will continue to receive.

I want Level 4 testing to continue, but not like this.

Although I do not know where the machine failure occurred in the Uber crash, there is an important aspect of machine intelligence (and human intelligence) that is not always well understood and that may have played a role here. (It did in the earlier Tesla crash.) Errors in decisions made by machines (and humans) are of two kinds. A false positive error for an AV is when something that is not an obstacle is detected as being an obstacle — say, a water puddle is assessed as a deep hole to be avoided. A false negative is when something that should be avoided is determined to be harmless, such as a person lying in the roadway being detected as a puddle to be ignored.
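To make the two error types concrete, here is a minimal sketch. The function name and scenarios are hypothetical, invented for illustration; this is not a description of any real perception stack. It simply shows the only two ways a detector's verdict can disagree with ground truth:

```python
# Toy illustration of the two error types for an AV obstacle detector.
# The function and scenarios are hypothetical, for illustration only.

def classify_error(ground_truth_is_obstacle: bool, detector_says_obstacle: bool) -> str:
    """Return which kind of error, if any, the detector made."""
    if detector_says_obstacle and not ground_truth_is_obstacle:
        return "false positive"   # e.g. a water puddle assessed as a deep hole
    if not detector_says_obstacle and ground_truth_is_obstacle:
        return "false negative"   # e.g. a person in the roadway dismissed as a puddle
    return "correct"

# The two examples from the paragraph above:
print(classify_error(ground_truth_is_obstacle=False, detector_says_obstacle=True))   # false positive
print(classify_error(ground_truth_is_obstacle=True, detector_says_obstacle=False))   # false negative
```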

False or False?

The balance between false positives and false negatives is very difficult to strike and cannot be proven to be error-free, except in trivial cases. Ideally one would like to have only false positives, and very few at that.

While the key critical behavior of a Level 4 vehicle is not to kill anyone, inside or outside the vehicle, it is also critically important that such vehicles, when in commercial service, not swerve and brake for dozens of false positives each trip. If the vehicle’s contribution to the Uber-Herzberg crash is determined to be that the pedestrian was not assessed as an obstacle to be avoided, then the decision boundary between false negative and false positive must be altered. A false-alarm-free ride cannot be guaranteed to Uber riders or safety drivers in an experimental vehicle. Experiments are not the time for rider comfort, and they are not the time for safety drivers to be attending to non-safety tasks.
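To see why that balance is so hard, here is a hedged sketch of how moving a single decision threshold trades one error type for the other. The confidence scores below are invented for illustration; they are not real sensor data or anyone's actual decision boundary.

```python
# Hypothetical detections: (detector confidence that something is an obstacle, ground truth).
detections = [
    (0.95, True),   # clearly an obstacle
    (0.60, False),  # a wind-blown plastic bag scored strongly
    (0.40, True),   # a pedestrian scored weakly -- the dangerous case
    (0.35, False),  # a shallow puddle scored weakly
    (0.10, False),  # clearly not an obstacle
]

def count_errors(threshold: float):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for score, is_obstacle in detections if score >= threshold and not is_obstacle)
    fn = sum(1 for score, is_obstacle in detections if score < threshold and is_obstacle)
    return fp, fn

for threshold in (0.2, 0.5, 0.8):
    fp, fn = count_errors(threshold)
    print(f"threshold={threshold}: {fp} false positive(s), {fn} false negative(s)")

# A low threshold brakes for plastic bags and puddles (false positives);
# a high threshold misses the weakly scored pedestrian (a false negative).
```

In this toy data, lowering the threshold buys safety at the cost of a jerky, alarm-filled ride, while raising it buys comfort at the cost of a missed pedestrian. That is the tradeoff an experimental fleet should resolve in favor of safety, not comfort.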

The safety driver should have been prodded into action much sooner. In the final seconds before the Uber-Herzberg crash, if the decision software saw even a glimmer of anything, there should have been serious attempts to demand the safety driver’s attention. Having “pay attention” be part of the job description is necessary but insufficient. Assuming that “humans are good enough at paying attention” is negligent on the part of the experiment’s operator. Continuing Level 4 tests while depending on the unmonitored and assumed attention competence of safety drivers should not be permitted.
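One plausible shape for such an attention demand, sketched purely as an assumption: the thresholds, timings and alert channels below are invented, not a description of Uber's actual test software.

```python
# Hypothetical escalation policy for demanding the safety driver's attention.
# All thresholds, timings and alert channels are assumptions for illustration.

def escalation_level(obstacle_likelihood: float, seconds_to_potential_impact: float) -> str:
    """Map even a weak detection (a 'glimmer of anything') to an attention demand."""
    if obstacle_likelihood < 0.05:
        return "none"
    if seconds_to_potential_impact > 6.0:
        return "visual cue on the dashboard"
    if seconds_to_potential_impact > 3.0:
        return "audible alert plus seat vibration"
    return "loud alarm plus precautionary braking"

print(escalation_level(obstacle_likelihood=0.2, seconds_to_potential_impact=4.0))
# -> audible alert plus seat vibration
```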

There is no evidence that anyone working on AV-AI has cracked the false-positive / false-negative problem. There is no proof that the problem is fully solvable. And if it is not fully solvable, then there will always be a non-zero error rate. This conundrum, as it becomes more widely understood, will dampen public acceptance and will require thoughtful and sensitive explanation. These are fatalities, after all.

As was the case for the Tesla-Brown crash, the Uber crash was a personal tragedy for Elaine Herzberg and her family. But it was also a regulatory tragedy for Level 4 testing and a social tragedy for the way in which the built environment is biased for people in automobiles over people not in automobiles.

For the remainder of this century, the fundamental nature of human risk and attention behavior is unlikely to change. As machines make fewer errors — which appears inevitable — humans will take more risks and pay less attention. These two human factors are well understood. The only way for there to be no crashes is for there to be perfect robots in managed driving environments, and no human drivers. And there is no reliable evidence that that will happen before 2068 — just some wishful thinking.

The argument that AVs just need to be a little better than humans to be unleashed on the public is wrong — they need to be a lot better. If they’re only a little better, our risk and attention behavior will overwhelm any modest improvement.

It gets worse

There is a further danger lurking for the AV industry: the negative public reaction to these crashes will escalate with each one.

Tesla-Brown garnered some criticism, but the fact that the truck driver’s behavior was called out as a triggering event, the fact that the vehicle in question was a consumer product, the fact that Joshua Brown chose to operate his vehicle recklessly, the fact that the fatality victim was the misbehaving owner of an expensive show-off vehicle, the fact that Tesla responded immediately to close the driver-attention gap, and the fact that public perception of the Tesla brand was net positive combined to have us write off the apparent AI failure “this time”.

Uber-Herzberg has drawn, and will continue to draw, much more criticism. The pedestrian’s crossing behavior will not seem as egregious as the truck driver’s; the vehicle in question was part of an experiment, so it should have been under far more diligent regulatory control; the victim, Herzberg, was an “innocent” collateral victim compared to Mr Brown in his Tesla; the non-attentive driver in the Herzberg case was being paid to pay attention; and the Kalanick-damaged public perception of Uber’s reputation had not fully recovered prior to this crash. There will be less forgiveness this time.

The bigger danger on the horizon is when one of the Level 3 “Conditional” vehicles — available now from a couple of manufacturers (not Tesla, Uber or Google) — while in self-drive mode and with its driver otherwise occupied, causes a fatality for a person not in the vehicle. Such an accident will seem yet more egregious than either of the other two. We all hope this never happens, but as the Level 3 vehicle population from several providers reaches into the tens of thousands over the next couple of years, operating in built environments that favor automobile rights-of-way, who would be able to say they were surprised?

Bern Grush

Subsequently…

Peter Els. 5 Setbacks to the future of mobility following the fatal Uber accident.

Alex Roy. The Half-Life Of Danger: The Truth Behind The Tesla Model X Crash