Silicon Valley’s Anti-Autonomy Backlash Is Afraid Of The Wrong Things

Autonomous test vehicles may stand out to Silicon Valley’s anti-autonomy NIMBYs, but the real risk passes them unnoticed every day.

By Edward Niedermeyer

Humans are good at a lot of things, but when it comes to assessing risk in the modern world we have some serious limitations. It's not uncommon to be plagued with fear and anxiety while flying, for example, but the same people who quake at the thought of trusting their life to an airliner will often treat the far more dangerous task of driving with baffling nonchalance. It should be no surprise, then, that people are also wildly off the mark when it comes to assessing the risks presented by public road testing of autonomous vehicles.

This misperception of risk is dramatically illustrated in a recent story by Washington Post reporter Faiz Siddiqui, which uncovers a kind of NIMBY (Not In My Back Yard) backlash against AVs in the heart of Silicon Valley. Siddiqui spoke with a number of Valley residents, most of whom work in the tech sector and believe in the long-term potential of self-driving cars, who object to being what one terms "the guinea pig" for this new technology. By comparing this backlash to tech workers who limit their children's screen time because they understand technology's potential negative impacts, Siddiqui suggests that familiarity with autonomous vehicles breeds not understanding but fear.

At a time when a decades-long tech boom seems to be deflating, with unprofitable companies being foisted on public markets and the negative consequences of social media and facial recognition fueling a cultural "techlash," Siddiqui's reporting strikes a resonant tone. Everywhere you look you can find evidence that the tech sector has lost the ability to anticipate the negative consequences of its innovations, and is heedlessly "blitzscaling" its way toward a technological dystopia. Parts of the autonomous drive sector provide plenty of fuel for these concerns as well, but Siddiqui's subjects struggle to correctly identify the real sources of danger among the robocars sharing their local streets.

Misperceptions of risk are often rooted in aesthetic novelty more than anything else, drawing our attention to things that look startlingly unfamiliar while allowing more immediate but familiar-seeming risks to fade into the background. Unsurprisingly, these concerned residents of Silicon Valley seem to have latched onto the Alphabet company Waymo's unusual-looking autonomous test vehicles, which bulge with a variety of sensors and immediately stand out as members of an experimental test fleet. Meanwhile, the "sheer volume of Teslas on the streets," which Siddiqui mentions only in passing as evidence of the Valley's willingness to adopt new technologies, passes by with the quiet anonymity of any other consumer vehicle.

This dichotomy shows how badly risk can be misperceived, given the profound differences between how Waymo and Tesla approach risk and safety. This contrast extends from their overall approaches to autonomy and internal safety cultures to the designs of their technology stacks and on-road testing protocols. Once the full facts are in view, the ubiquitous and anonymous Teslas turn out to be embodiments of the toxic culture that fuels "techlash" anxieties, while the eye-catchingly unfamiliar Waymos reflect a reassuring culture of cautious safety.

The contrast between these two firms goes back to Google's decision to pass on a highway-only driver assistance system due to safety concerns, and Tesla's decision to pursue a near-identical system, Autopilot, without implementing driver monitoring or other measures that might mitigate the risk Google had found. Tesla's need to keep up with the Google-led automated driving trend fueled a frantic development program that saw Tesla CEO Elon Musk approve the release of "beta" software to customers, dismiss his own engineers' concerns about safety, and reject their recommendation that the system include driver monitoring. This allowed Tesla to deploy a system that provides the look and feel of a self-driving vehicle at a relatively affordable price point, inviting the very risk of driver inattention that Google had identified while blaming drivers for the inevitable crashes (some of which have been fatal).

Tesla continues to push new updates and features to Autopilot, including a suite of features that it says will evolve into a Level 5 "Full Self-Driving" system sometime next year, effectively turning its customers into untrained and untested public-road testers of automated driving software. Musk has repeatedly expressed the desire to push automated driving software onto public roads as fast as possible in order to reduce the number of road fatalities caused by human error, and has argued that anyone critical of the resulting crashes is "killing people" by dissuading them from adopting the technology. This case for putting as many vehicles on public roads as possible to speed up neural net learning was condensed into a breathtakingly crude utilitarian argument by his acolyte and defender Lex Fridman, who argued that "we're going to have to be more forgiving of a car causing a fatality" and said that even an AV accelerating into a crowd of people would be justified if it led to a decrease in human-caused fatalities.

This is precisely the kind of toxic tech culture that the "techlash" is rightly focused on: pushing immature technology onto public roads as fast as possible, and cavalierly endangering lives based on the hope that it might someday save more lives than were lost during development. It's precisely the attitude that led to the fatal crash of an Uber autonomous test vehicle in Tempe, AZ last year, prompting the entire sector to rethink the whole notion of a "race to autonomy" and beef up their safety cultures, particularly in the context of public road testing. In the "trough of disillusionment" that has followed this tragedy, many AV developers have pushed back their timelines and doubled down on safety with the understanding that another fatality could prompt precisely the kind of backlash we see in Siddiqui's story.

The major Level 4 developers, which include Waymo, Aptiv, Argo, Aurora, Toyota Research Institute, Cruise, Intel/Mobileye, the BMW/Daimler/Bosch coalition and Uber's Advanced Technology Group, form a rough consensus around certain fundamental questions. In terms of the technology itself, all of these companies are focused on geofenced Level 4 robotaxis, because they believe truly autonomous vehicles require 360 degrees of radar, lidar and camera coverage, which demands a sensor suite too expensive for a consumer-grade vehicle. They also forgo the massive amount of data that Tesla can (theoretically) harvest from customer vehicles, preferring instead to test and gather data using their own smaller fleets of test vehicles with highly-trained "safety drivers" ready to take over. And because the computer vision algorithms Tesla relies on are hard to make completely reliable and can't be fully audited, the industry consensus emphasizes the safety net that additional sensor modalities provide, rather than maximizing data collection in hopes of a deep-learning breakthrough as Tesla does.

In short, Tesla is a dramatic outlier in an industry that has realized that a slower, safer approach will prevent the kinds of tragedies that could make self-driving cars on public roads the focal point for the simmering techlash. As AV developers dial back expectations in order to focus on the safety of both their systems and the processes through which they develop them, Tesla is charging ahead, releasing features like "Smart Summon" and claiming it will have a million robotaxis on the road by the end of next year. But because its cars can be bought by anyone, and because we assume consumer vehicles are strictly regulated when in fact automated driving technology like Tesla's faces no real regulation, these rolling embodiments of the toxic culture that inspired the techlash go largely unnoticed.

Inside the AV business, Musk's approach inspires the proverbial fear and loathing. People who pioneered autonomous drive technology long before the public ever heard of it either roll their eyes dismissively at Musk's high-risk approach or (increasingly) worry that it will bring the entire sector into disrepute. Their frustration at the misinformation that he spreads about things like lidar, safe development practices and the distinction between autonomy and driver assistance has been mounting for years, yet they can't seem to break through his stranglehold on public perception. 

Now their worst fears are coming true, as their small fleets of professionally-operated test vehicles are becoming the focus of the techlash while Musk's flamboyant risk taking passes largely unremarked upon. Having myself tried to sound the alarm about Tesla's dangerous approach to automated driving for some time, I don't have much advice to give them. All I can recommend is that they start being a lot more aggressive about naming and shaming the players and practices that threaten to tar their life's work with the tech sector's worst attitudes. After all, Siddiqui's reporting makes it clear that people aren't going to just figure it out for themselves.
