10 Lessons From Uber’s Fatal Self-Driving Car Crash

The fallout from the first autonomous car fatality continues to swirl a year later, as the once high-flying technology faces a “trough of disillusionment.”

By Edward Niedermeyer

The single most important moment in the world of self-driving cars came around 10 PM on a Sunday evening, one year ago today. On that fatal evening, Elaine Herzberg stepped out into Mill Avenue in Tempe, Arizona and was struck and killed by a Volvo XC90 that was testing Uber's self-driving car technology. Herzberg's death has cast a pall over what had been a white-hot race to develop world-changing technology that promised both to save lives and to create billions in value for the winners, changing public perceptions of autonomous vehicles and internal practices at the firms developing the technology alike. Though this shameful moment won't stop the march toward autonomous vehicles, it does provide an important opportunity to stop and reflect on how it happened and what lessons must be learned to prevent it from happening again.

Lesson #1: This Is Not A Race.

Like other fatal technological disasters, Elaine Herzberg's death was caused by cascading failures. These failures run from the poor quality of the software itself to the poor training of Uber's safety drivers, all the way down to the poor design of the street where she was struck (and many more besides). But underlying the failures that resulted in Herzberg's death was a single factor that played into almost all of them: the perception that autonomous vehicle technology is a "race" in which the "winner" takes the lion's share of the rewards. This "race to autonomy" is flawed in more ways than we have space to get into here, but on the most fundamental level it is the reason that technological corners were cut, safety drivers were undertrained and working long hours, and the Arizona testing program was running at breakneck speed with little oversight from Uber's Advanced Technology Group headquarters in Pittsburgh.

This "race to autonomy" was fueled by almost everyone: companies used it to raise more venture capital, venture capitalists used it to validate their investments, the media used it as an easy framework for telling exciting stories about the new technological space, and the narrative eventually took on a life of its own. It's a seductive narrative, giving one the sense of observing history in the making, and implying that anyone could understand who was most likely to become the next tech titan even if they didn't understand the immensely complex technology itself. Like so many dangerously flawed heuristics, it reduced immense complexity to an easily-grasped simplicity and glossed over questions as important and fundamental as "if this is a race, what precisely is the point that marks the finish line?"

But more importantly, the "race to autonomy" created the terrible incentives that led to Herzberg's tragic death. It rewarded speed at all costs, operating with a lack of oversight, working long hours in safety-critical situations to rack up mileage counts, inflating perceptions of one's development program, and glossing over seemingly minor flaws in order to get vehicles on the road and testing. Herzberg's death has dramatically illustrated the dangers of rushing AV development, but there are still pockets of people in the space who pursue the "move fast and break things" approach in order to secure more funding or stand out in a tough and crowded field. Until developers, regulators, politicians and the media all understand the horrible cost of this mentality, the risk of another fatality like Elaine Herzberg's will linger.

Lesson #2: Culture Matters.

The Elaine Herzberg death came hard on the heels of the infamous Waymo-Uber lawsuit, in which the Alphabet company sued Uber over allegations that its former engineer Anthony Levandowski stole lidar technology and brought it to Uber's self-driving car program. The lawsuit was eventually settled in Waymo's favor, but not before some of the ugliest elements of what was undeniably an ugly company culture at Uber were dragged out into the light. Even before the trial, Uber's "tech bro" culture had received unfavorable coverage, but the trial revealed the extent to which a "win at all costs" attitude and profound arrogance permeated the company, starting at the top with CEO Travis Kalanick.

Kalanick "made a decision that winning was more important than obeying the law," argued Waymo's lawyer Charles Veerhoven, and that approach seems to have been embedded deep in Uber's culture. Uber began testing in Arizona after its cars had been caught running at least six red lights in San Francisco, prompting California's DMV to pull its testing permits and pass what are now some of the nation's toughest rules on autonomous vehicle testing. Even after Kalanick was ousted in the wake of the Waymo lawsuit, Uber engineers pulled an emergency braking system that could have prevented Herzberg's death from its cars in hopes of impressing incoming CEO Dara Khosrowshahi with a smooth ride that wouldn't be interrupted by aggressive braking. Rushing, avoiding oversight and manipulating perceptions were all core elements of Uber's culture, and contributed to a fatality despite numerous opportunities to check them.

Some AV developers now say that Herzberg's death was a "wake-up call," but others say that if you needed that wake-up call, your problems were already extremely serious. Driving causes more than 35,000 deaths in the US every year, and so it goes without saying that autonomous drive systems need to be developed in a manner fundamentally distinct from the way mobile apps or social media networks are developed. The most conscientious AV developers realize that safety is the most fundamental aspect of their job: it doesn't take much to make a car "drive itself"... the hard part is making it drive itself safely. Safety is the beginning, middle and end of AV development, and there simply is no other goal. Any developer whose culture isn't fundamentally oriented this way is asking to cause another fatality.

Lesson #3: Humans Are Bad At Overseeing Imperfect Automation.

Autonomous drive technology is billed as an opportunity to relax instead of stressing out about the dangers of the road, but until it has been validated as consistently safe in any conditions it might find itself in, all automated driving requires human oversight. This is a major challenge because the vast majority of driving is incredibly boring, and safety drivers can easily be lulled into a false sense of security that the system isn't prepared to deliver. This is precisely what happened in the Herzberg incident: Uber's safety driver Rafaela Vasquez was watching The Voice on Hulu on her phone at the time of the crash, and she had looked down at her phone 166 times during the trip that evening.

The video of Vasquez looking up from her phone in horror, realizing that Herzberg had just been hit while she looked away from the road, is a glimpse of the terrors that the future may hold. Until full autonomy is completely validated, human-in-the-loop automated driving will encourage us to take ever more liberties with our attention while failing to protect the next Elaine Herzberg. This is a limited problem in the context of autonomous vehicle "safety drivers," who are now increasingly well trained and partnered up while testing, but it points to the same problem that seems to underlie the deaths of drivers using Tesla's Autopilot. Now, with Tesla's "Full Self-Driving" system apparently likely to be released to some customers before it has even been validated, another company seems ready to put its own customers in the position of untrained and unsupervised safety drivers.

Yes, Vasquez had a level of responsibility behind the wheel, but Uber also had a responsibility not to put a single individual in a highly automated car for 10 hours at a time with no supervision. Because full autonomy is taking longer to deliver than promised, the temptation to deliver nearly-full autonomy while telling drivers to maintain awareness will be hard to resist, but Vasquez's look of horror should be a reminder to everyone of the risks of such systems. It certainly makes the prospect of Tesla's customer testing downright terrifying.

Lesson #4: Vulnerable Road Users Require Extra Care.

Velodyne, which makes the lidar used in Uber's autonomous vehicle fleet, said it was "baffled" that the sensor didn't seem to identify the bicycle Elaine Herzberg was walking across Mill Avenue when she was struck. Still, most of an autonomous vehicle's sensors are best at identifying substantial objects that might damage the car itself if it were to run into them. Particularly at night, when cameras aren't at the peak of their functionality, radar and lidar can struggle to properly identify pedestrians and cyclists on their own. This is actually one of the reasons why autonomous vehicle developers come to places like the Phoenix, Arizona area: not only is the weather consistently good, but there aren't as many pedestrians or cyclists on the road as there are in other cities.

Generally speaking, the best AV developers say that you can't have too much redundancy or diversity in your sensor suite, simply to cover as many edge cases as possible. But even after Elaine Herzberg's death, we still don't see many AV developers adding the one sensor that is best at identifying vulnerable road users at night: the thermal imaging camera. Toyota Research Institute has said that its newest sensor suite will include a thermal camera, and surely others must follow. If AVs ever hope to be deployed in areas with high volumes of cyclists, pedestrians, scooters and other VRUs, thermal sensors will have to become a standard part of the AV stack. By picking up body heat, thermal sensors can pick out humans even when they are partially obscured by a pole or another vehicle, whether in broad daylight or in the middle of the night.

Frankly, it's surprising that more companies haven't adopted thermal sensors in the wake of the Herzberg crash.

Lesson #5: Regulation Beats The Alternative.

After having its testing license in California revoked, Uber fled to Arizona, where Governor Doug Ducey welcomed autonomous vehicle companies with open arms and taunted the Golden State for stifling innovation. For a free-market Republican, lowering regulatory barriers was an appealing way to bring business to Arizona and improve the state's high-tech image. It even worked well for a while, making Arizona one of the major hubs for AV testing... until the Herzberg crash.

In response to the fatality, Ducey went from one extreme to the other, banning Uber's cars from Arizona roads outright. Even the Maricopa County Attorney's Office had to recuse itself from the case due to a conflict of interest stemming from its participation in a public safety campaign in partnership with Uber. The entire situation showed that Arizona's public officials had been so anxious to curry favor with self-driving car companies that their eagerness contributed to the cascading failures that led to Herzberg's death. Both Ducey and the State of Arizona are now defendants in a $10 million lawsuit assigning them a portion of the blame for Herzberg's death.

Arizona's experience was an important reminder to public officials to avoid a regulatory "race to the bottom." The short-term advantages of lowering regulatory limitations on self-driving cars are tempting, but the long-term risks of a death are massive. The broader message: if you forgo regulation, you invite personal injury lawsuits. Either regulation can be done proactively, in a way that balances public safety with responsible public road testing, or it can be done reactively by ambulance-chasing lawyers and an angry public. Clearly the first option is preferable.

Lesson #6: Legal Liability Must Catch Up With Technology.

The Elaine Herzberg death brought up an extremely difficult question that society was not yet prepared to answer: who is responsible when an ostensibly self-driving car crashes? Is the software itself a "person," subject to all the same rights and restrictions as a human driver? Is the safety driver the responsible party? Is the company? Are the regulators? As the legal aftermath of the fatal Uber crash plays itself out, these questions are being answered piecemeal, in an emotionally loaded context.

For too long, autonomous drive technology has been treated as a purely technical problem when in fact driving is an intensely social activity governed by laws, customs and jurisprudence that have evolved around human drivers. The flip side of the newly sober attitude in the AV development space is that the rest of society now has some time to catch up with the technology and start forming rules and principles to assign responsibility and provide accountability when AVs go wrong. Yet, as with thermal imaging, we still don't see the urgency around these issues that one would have hoped for in the immediate wake of the first fatal AV crash. As great as the need for talented AV engineers is, the need for smart, technically well-versed lawyers and politicians is just as great... at least it is if we hope to better manage situations like the Herzberg fatality in the future.

Lesson #7: Leaders Need To Listen.

Less than a week before Herzberg's death, an Uber employee named Robbie Miller emailed top leaders at Uber's Advanced Technology Group and company lawyers, warning that the very factors that would soon contribute to Herzberg's death were already a problem. That email read, in part:

"ATG still has a bit of work on establishing a culture rooted in safety. The cars are routinely in accidents resulting in damage. This is usually the result of poor behavior of the operator or the AV technology. A car was damaged nearly every other day in February. We shouldn’t be hitting things every 15,000 miles. Repeated infractions for poor driving rarely results in termination. Several of the drivers appear to not have been properly vetted or trained."

Miller recommended a number of changes that are now standard practice for most major AV developers: putting multiple people in each testing vehicle, no longer piling up testing miles for their own sake, empowering employees to escalate issues and ground the fleet, instituting crash reviews, sharing data more widely among ATG's teams, and more. In retrospect, Miller's email shows just how preventable Herzberg's death really was.

Lesson #8: Infrastructure Matters.

AV developers are loath to develop their systems in a way that makes them depend on infrastructure, whether that's specific street design features or V2X communication technology. This is understandable, since technology moves fast and infrastructure rarely does. At the same time, it's becoming increasingly clear that infrastructure does play a role in AV safety and that local officials need to be sure that their streets aren't bringing AVs and other traffic into conflict.

The spot where Elaine Herzberg was struck had a feature that looked for all the world like an unpainted crosswalk, and yet it was not a crosswalk, nor was it located in a particularly safe place to cross Mill Avenue. We don't know what Herzberg was thinking when she stepped out into the road, but the mixed message that this design feature sent has been recognized as part of the problem, and the feature has since been filled in to prevent further confusion. Any city looking at inviting AVs onto its roads should remember this factor and perform a detailed inventory of the roads where AVs will be operating in order to prevent further fatal confusion.

Lesson #9: Autonomy Has Opponents.

The public has been suspicious of autonomous vehicles since day one, but in the wake of the Herzberg crash we've seen a decline in the public's trust in this new technology. Mistrust of AVs comes from many corners, and it extends as high up as the president himself, who was recently described in an Axios story as believing "they will never work" and expressing horror at the prospect of being in a car that couldn't be controlled by a human. Though often derided as "Luddism," this fear of autonomous vehicles is a fact of life that AV developers and fans can't simply mock until it disappears. Indeed, the very arrogance of some in the high-tech sector leads to situations like the Herzberg incident, which in turn fuels the skepticism and fear of AV opponents, as evidenced by subsequent attacks on Waymo vehicles in Arizona.

This is a major cultural stumbling block for the sector: high tech firms are used to providing such profound economic incentives that they are able to sweep aside dissent, confusion and resistance. With AVs still far away from delivering on their overhyped promises, public outreach, education and communication will become more important than ever to AVs and their supporters. This is a massive challenge given the complexity of the subject and the depth of confusion that exists around it, but even a good-faith effort at humble education would be a huge improvement on the typical tech sector arrogance. If nothing else, it's no longer enough to simply say "historical progress is on our side" and dismiss concerns out of hand. 

Lesson #10: Trust Is The Currency Of Autonomous Drive Technology.

Trust, at its most fundamental level, is about long-term relationships. Trusting someone or something means knowing that it will be there for the long haul, understanding what it wants and how it operates, and having assurances that it will respond to concerns. This is why the situation in Arizona is such a mess: lowering all barriers brought in a flood of Uber AVs, which then left again as soon as Herzberg was killed. Cycling from one extreme to another does the opposite of building trust, just as cutting corners to achieve a barely tangible (outside of funding rounds) advantage creates inconsistency that undermines trust.

Bad outcomes are bad regardless of intent: Elaine Herzberg's death was the result of a lot of irresponsible behavior, but we may one day face a situation in which someone is killed by an AV without all the obviously bad inputs we see in this case. Anticipating the possibility of a bad outcome, no matter how hard everyone tries to prevent it, requires that good players be as honest and transparent as possible, not hyping up the technology but communicating about it realistically. It requires an acknowledgement that some risk is inevitable in any public road testing, but that the risk is clearly understood, mitigated to the extent possible, and not something that will overturn everything you thought you knew about the developer or its public-sector partners. It requires that AV developers have a sense of sharing incentives with the public rather than acting as if they exist in an alternate reality, making millions by creating safety risks that they can walk away from.

Developing this trust is as important to an AV developer's future as creating strong prediction models or reliable path-planning algorithms. That's because autonomous vehicles should ultimately be like airplanes: whether you step into a Boeing or an Airbus, you want to feel like every effort has gone into keeping your flight as safe as possible. Since AVs won't be differentiated by handling or performance, the only marketable attribute an AV developer really needs is the trust a customer feels getting into one of its vehicles. If AV developers aren't integrated into the communities in which they operate, if they don't share in the risks of the road, if they look for the least oversight possible, if they cut corners in pursuit of an ever-higher valuation, they are ultimately hurting themselves as much as the people their vehicles share the road with.
