Really, it was just a matter of time.
On Sunday night, the tragic scenario that many cyclists and tech writers have been wringing their hands over for years played out in Tempe, Arizona, as an autonomous vehicle hit and killed a woman who was walking her bike across the street. A spokesperson with the Tempe Police Department identified the victim as 49-year-old Elaine Herzberg.
The vehicle involved in the deadly crash was a Volvo XC90, part of a fleet of self-driving vehicles that ride-share giant Uber has been testing in multiple Arizona municipalities. Uber quickly announced that it would suspend testing of these vehicles, and the company’s CEO, Dara Khosrowshahi, tweeted a statement of condolence.
Initial news stories reflected many of the institutional biases cyclists and pedestrians face after serious crashes with motor vehicles. Parroting initial reports from the Tempe Police Department, dozens of stories (like this and this) blamed the victim, saying that Herzberg was crossing Mill Avenue outside a crosswalk—while failing to give information about the speed the Volvo was traveling or why it did not detect her presence. Few stories mentioned that a human “backup” driver was in the vehicle or shared preliminary information about whether that individual had any role in the crash.
Tempe Police have promised a thorough investigation, but some of the most frightening questions raised by this case won’t be solved by simple forensics: Are these vehicles truly ready for prime time on public roads? What are the companies that are designing and deploying self-driving cars doing to prevent crashes like this? Why is Arizona allowed to be a hotbed for autonomous vehicle testing?
Self-driving cars appear to be quite good at detecting other motor vehicles, but numerous researchers and writers have expressed public concerns about how existing autonomous systems struggle to sense and understand cyclists. Part of the problem is the relatively small profile of a bike, and part of it is the difficulty programmers seem to face in predicting the movements of riders and pedestrians. For instance, a cyclist might shift left to avoid a giant pothole or a mound of glass. Just as such simple movements can surprise motorists, software systems have not yet proved adept at predicting or reacting to them.
Concerns over the safety of these vehicles were enough for California state officials to pull the registrations of Uber’s test fleet. But top officials in Arizona have openly taken steps to position the Grand Canyon State as a pro-business, regulation-free environment for the testing and use of driverless cars. Last November, a spokesperson for the non-profit Consumers for Auto Reliability and Safety expressed concerns about the state’s lack of safety, data-reporting, and liability provisions to The New York Times: “It’s open season on other Arizona drivers and pedestrians. There is a complete and utter vacuum on safety.”
Despite all the very real concerns cyclists should have about the rapid and unfettered deployment of autonomous vehicles that might not yet be safe enough for our streets, the technology, of course, holds promise to make roads less dangerous. After all, computer-driven cars can’t drive drunk or get distracted by a smartphone, and they are surely less likely to speed or run red lights than human drivers. In short, it has been hard to imagine that autonomous cars could be worse than the people behind the wheel today.
But events in Tempe this week suggest that it’s too soon for even such tempered optimism.