Dr AutoPilot and Mr Autopilot: Why We Should Want Waymo’s Go-Slow Approach To Win

It’s cool to bash Waymo for moving slowly, but in the life-and-death world of autonomous vehicles, taking the hard road should be celebrated.

By Edward Niedermeyer

If you listened to Waymo CEO John Krafcik's comments at the Frankfurt Auto Show, you might have picked up on some subtle shade at Tesla and other big names in the automated driving space. Between his highlighting the depth of experience that the Alphabet company's unrivaled testing has yielded, and his explanation of why its goal remains mastering the challenges of Level 4 autonomy, it might be easy to feel like you've heard comments like Krafcik's before. But with the benefit of historical context, some of which I have gleaned from the research that went into my book LUDICROUS: The Unvarnished Story of Tesla Motors, we can draw some important lessons from Krafcik's speech.

Back in early 2013, nobody knew how close Tesla came to becoming an Alphabet company itself. It wasn't until Ashlee Vance's kinda-authorized biography of Elon Musk came out in 2015 that the public learned Tesla CEO Elon Musk had negotiated a deal with his friends and Google co-founders Larry Page and Sergey Brin that would have seen Tesla bought by the search giant at a healthy premium, with Musk staying in charge. This generous deal was ultimately rejected by Tesla after Musk engineered a miracle March 2013 sales turnaround, borrowed money to pay back government loans, and began a promotional campaign that sent Tesla's stock into Ludicrous Mode.

Part of Musk's promotional blitz, starting in the second quarter of 2013, involved talking about automated driving for the first time. Musk started by saying Tesla might use Google's technology to make its cars driverless, but by the second half of the year he was talking up a Tesla system developed independently of the search giant's effort, inspiring headlines that had his company "[moving] ahead of Google." Musk said Tesla's system would offer automated driving for "90% of the miles driven within three years," while dismissing full autonomy as "a bridge too far."

With the benefit of hindsight, it's clear that Musk was, at a minimum, inspired (if not scared) into this direction by his peek behind the curtain at Google's surprisingly advanced autonomous technology. But based on the latest information from Krafcik, Musk seems to have been more than just inspired: prior to that point, Google had extensively tested a freeway-only "driver in the loop" system called... "AutoPilot." According to Google/Waymo consultant Larry Burns's book Autonomy, Google developed "AutoPilot" [Burns doesn't use this name] through 2011, tested it in 2012, and decided by the end of that year that it wouldn't pursue the product.

In short, it seems that Musk must have had a peek at (or possibly even a demonstration of) AutoPilot, and decided that if Google wouldn't take it to market he would, right down to Google's internal name for the product. Not everyone would make the same decision with regard to a friend's company's product, particularly after that company had made an attractive bailout offer for one's own firm, but it's not hard to understand why Musk did what he did. In trend-obsessed Silicon Valley, automated driving was about to make Tesla's electric vehicle technology old news, and here was a fully scoped and demonstrated product that could get Tesla back into the game... one that was otherwise destined to become "abandonware."

The problem, of course, was simply that Google had abandoned AutoPilot for good reasons. The video of test "drivers" using AutoPilot, which Krafcik showed publicly for the first time in Frankfurt, shows drivers becoming profoundly inattentive: putting on makeup, plugging in phones and even falling asleep. The leaders at Google's self-driving operation rightly realized that partial automation created a thorny human-machine interaction problem that was, in some ways, almost harder to fully manage than Level 4 autonomous drive technology itself. Without an incredible amount of work on driver monitoring, operational design domain limits and other human-machine interface (HMI) safeguards, AutoPilot was an irresponsible product to foist on the public... and one that didn't even provide the main benefits of autonomy.

It's hard to imagine that Musk learned of AutoPilot in the first quarter of 2013 without also learning Google's reasons for abandoning the product, but if he did learn about these risks he's been playing dumb about them ever since. He did play up the challenges of Google's new direction, though, telling the media about the "incredible" difficulty presented by "the last few per cent" of miles driven and claiming that Google's lidar technology was "too expensive." Ever since, Musk has regularly made lidar a whipping post that focuses public attention on the challenge of Level 4 autonomy and away from the major issues with Tesla's Autopilot approach.

In the years since 2013, Waymo has quietly and safely made steady iterative progress on its Level 4 technology without breaking into the consumer mass market. Tesla, on the other hand, has garnered billions in market valuation and established itself as a household consumer brand on the strength of an Autopilot system that has now been involved in numerous crashes and deaths. The very scenario that Google's leadership feared, a fatal crash involving an inattentive Autopilot user, has now happened multiple times... and yet, rather than destroying trust in the broader technology, it has somehow not even hurt Tesla's perceived position as a leader in automated driving.

On the one hand, this seems like a validation of Musk's notoriously ruthless and risk-tolerant approach to entrepreneurship (at PayPal, he once gave away credit cards to basically anyone who wanted one). On the other hand, Musk's decision to either ignore or dismiss Google's concerns, despite its unprecedented research and subject-area knowledge, casts subsequent Autopilot deaths, which occurred under precisely the circumstances Google worried about, in a troubling light. After all, Tesla's own engineers shared those concerns and pressed Musk to adopt driver monitoring, which he dismissed over either its cost or the inability to make the technology work.

At a certain point it becomes impossible to deny that Musk could have foreseen the deaths of Gao Yaning, Josh Brown, Walter Huang, Jeremy Banner and possibly others (not to mention the countless non-fatal Autopilot crashes). One is forced to conclude that he risked these crashes because he judged the benefits to outweigh them, and without question the subsequent hype, headlines and stock value that accrued to Tesla and Musk were worth billions. The public is outraged by the possibility of automakers making recall decisions by weighing the cost of a few cents per part against the inevitability of a certain number of human deaths, a trope made popular in Fight Club and evidenced in scandals like the Ford Pinto, GM ignition switch and Takata airbag defects, and yet Musk's cold-blooded calculus has yet to become a public morality tale.

This is yet another example, alongside that of Anthony Levandowski, of a certain amoral and self-enriching attitude that is bafflingly well-tolerated in Silicon Valley. Waymo is continuously derided or tut-tutted over its inability thus far to widely deploy its Level 4 robotaxis in a viable business, yet criticizing Tesla's decision to deploy Autopilot without the safeguards that Google's testing proved it needed to be safe, a decision that has since contributed to several deaths, is itself derided as the domain of anti-Tesla "haters" and kooks. Surely we can now see, as the NTSB stacks up case after case of "predictable abuse" of Autopilot, that rewarding Musk's willingness to sacrifice human lives for his own aggrandizement and enrichment creates a set of incentives that leads directly to dystopia.

Of course, there are reasons why Musk's amoral gambit hasn't been seen for what it is. Despite years of academic research backing up Google's findings, the human-in-the-loop nature of Autopilot (and AutoPilot) makes it possible to blame the human, even though all this research shows these systems will always lead drivers toward inattention (especially when there are one or two easily-discredited studies from major institutions showing the opposite). Even the US safety regulator, NHTSA, isn't equipped to establish something like "predictable abuse" (which is very different from the kinds of defects it is used to hunting), requiring the NTSB to build up a body of evidence before acting. And Tesla's opaque data-management system makes it harder for Tesla owners, their loved ones, the media and regulators to establish that the problems identified by Google and countless academic researchers are really killing people.

Because so many participants in the public "debate" about safety problems with Tesla's Autopilot have a financial interest in the company's stock, or simply enjoy using the system (or just like other aspects of Tesla's brand), there will always be someone defending Tesla. But the more important discussion here goes beyond Tesla itself: if one major automaker determined that a particular system wasn't safe and another one deployed it anyway, would anyone call the latter company a brave innovator even as people died due to its decision? What if they were aircraft manufacturers?

Whatever one might think of Elon Musk or Waymo specifically, or of any individual, business or sector, what they do and how it is received creates incentives that the rest of us have to live with. Letting Musk's Autopilot decision slide creates a deeply worrying precedent that will in turn justify someone else's decision to put your life at risk for the sake of their greater glory. And ignoring the facts promulgated by academic researchers, Waymo and the NTSB contributes to the erosion of fact- and science-based discourse.

Even if you believe that the Tesla drivers who have died made a conscious choice (and Tesla certainly did not disclose the research showing that "predictable abuse" of "Level 2+" systems is all but inevitable), cars and their drivers endanger plenty of people on the road who made no such choice. Elon Musk, on the other hand, did make a conscious choice: he deployed a system that he knew had life-or-death safety issues, and he didn't disable it or pull it from the market even after people started dying.

Next to that, Waymo's slow (sometimes excruciatingly slow!) march toward truly driverless technology should be celebrated. The company may not be living up to the toxic expectations of Silicon Valley hype culture, but it is living up to the most fundamental common norms of human society. If that means we have to wait a little longer to feel like we are living in an epic future, so be it. At least when that future comes, it will have a shot at being more utopia than dystopia.
