Just when you thought the self-driving car lobby was already drowning in its own Kool-Aid, a Waymo engineer tweeted something very, very foolish. It's hilarious on its face, but it conceals a dark secret that isn't really a secret at all: the majority of those working on self-driving cars don't really understand what they're working on.
Let's start with the funny part.
On December 3rd of 2017, Waymo's Vahid Kazemi, a software engineer at Google's Research & Machine Intelligence Department, according to his LinkedIn profile, tweeted the following:
This guy sounds pretty damn dangerous. Someone needs to take away his driver's license. Immediately.
There's a lot going on here—one could write a book about how wrong this is. Oh, wait. Someone already has. Several books, in fact. Among them is The Glass Cage, by Nicholas Carr. Then there's Fly By Wire, by William Langewiesche. Then there's Understanding Air France 447, by Bill Palmer. These three books cover the history of automation and autonomy, from the industrial revolution through modern aviation. If Kazemi had read any of them, he wouldn't have given us the most entertaining example to date of the ignorance at the core of the community actually developing self-driving cars.
How a man with a Ph.D. in Computer Science and a Master's in Control & Robotics, working at Waymo—the company with the most advanced self-driving car program in the world—could be so ignorant beggars belief.
Based on my frequent visits to Silicon Valley and "Mobility" conferences around the world, he's not alone.
Let's break it down.
Kazemi, accustomed to what he describes as the "autonomous" features of his own car, rents one without them. As a result, he confesses that he "almost crashed ten times and ran over five people in two days." He then concludes that "humans aren't designed to drive cars!"
On its face this might be reasonable, except for who he is, what he should know, the language he uses, and the chasm-jumping logical leap justifying his career path.
Kazemi is obviously aware of Waymo's strategic decision to focus exclusively on Level 4 autonomous cars. Their decision was based on research indicating that semi-autonomous systems short of Level 4 led to atrophying skills. As cars march up the SAE/DOT Level definitions, drivers pay less attention—and place more faith in technology—than they should. Waymo's test drivers began falling asleep when using semi-autonomous systems, and were unresponsive to transition warnings alerting them to resume control. Studies have suggested that unprepared passengers might need as long as thirty seconds to do so, and Waymo concluded that no transition warning system would be sufficient to make a semi-autonomous car safe.
Waymo therefore decided to jump from Level 2/ADAS, which is where we are today, straight to R&D for Level 4.
Let's define some terms, for it appears that even people like Kazemi are confused. ADAS stands for Advanced Driver Assistance Systems, a loose collection of safety technologies like radar-based adaptive cruise control (ACC), lane keeping assist (LKAS) and automatic emergency braking (AEB). Even if a car has such systems, their behavior varies from car to car, which can make it unwise to rely on theoretically similar systems one is unfamiliar with. Long-term reliance on any form of automation—even mild semi-autonomous systems—leads to atrophying skills. The more automation, and the more time spent using it, the worse the atrophy.
I think the SAE definitions are vague, but I'd argue that L2 starts when ADAS integration starts getting good, and that Tesla Autopilot was the first good L2 system—at least at the peak of its first generation, which ended with Tesla's schism with Mobileye.
Legally and technically, anything under Level 4 is semi-autonomous, and absolutely requires a human in the loop, because L2/ADAS can disengage anytime without warning. What about Level 3? All that adds is a transition warning system, which Waymo decided years ago couldn't be implemented safely.
Level 3 is so vague that hardly anyone wants to claim their system falls under the SAE definition. Even Cadillac's excellent Super Cruise doesn't claim Level 3.
No matter how many ADAS features you pack into a car, separately or together, and no matter how well they work, I must repeat: anything under Level 4 is semi-autonomous.
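For the definitionally inclined, the hierarchy above can be sketched in a few lines of Python. The level names are paraphrased from SAE J3016, and the enum and helper function here are my own illustration, not anyone's official API:

```python
# A minimal sketch of the SAE automation levels discussed above.
# Level names are paraphrased; see SAE J3016 for the exact definitions.
from enum import IntEnum

class SAELevel(IntEnum):
    L0_NO_AUTOMATION = 0   # human does everything
    L1_DRIVER_ASSIST = 1   # a single assist feature (e.g. ACC *or* LKAS)
    L2_PARTIAL = 2         # combined ADAS (ACC + LKAS); human must supervise
    L3_CONDITIONAL = 3     # system drives, human must answer transition warnings
    L4_HIGH = 4            # system drives within a defined domain; no human needed
    L5_FULL = 5            # system drives everywhere; no human needed

def requires_human_in_loop(level: SAELevel) -> bool:
    """The rule this article keeps repeating: anything under
    Level 4 is semi-autonomous and needs a human ready to drive."""
    return level < SAELevel.L4_HIGH

assert requires_human_in_loop(SAELevel.L2_PARTIAL)      # Autopilot-class systems
assert requires_human_in_loop(SAELevel.L3_CONDITIONAL)  # transition-warning systems
assert not requires_human_in_loop(SAELevel.L4_HIGH)     # Waymo's target
```

The point the code makes is the same one the prose does: there is no level between 2 and 4 where the human can safely check out.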
It is absolutely impossible to get a job at a real company working on this tech without knowing these definitions.
Or is it?
So, what were the "autonomous" features Kazemi was so accustomed to that, deprived of them in a rental car, he nearly had fifteen accidents over 48 hours? Does he regularly drive one of the two cars whose L2 is so good that they suggest full autonomy to the uneducated driver?
Not even close.
Here's what Kazemi bought in January 2016, according to his Facebook:
Yup. That's an Infiniti Q-series. Great car. How about that ADAS suite? It sounds like his is fully equipped, which means it's got everything listed here: ACC, LKAS and AEB.
How good is Infiniti's ADAS? Pretty good, but nowhere near as good as Tesla's Gen 1 or Cadillac's Super Cruise. There's a reason we never hear about Infiniti Autopilot accidents, a la Tesla. No one using the Infiniti thinks it's self-driving. Its ADAS is sooooo ADAS, Infiniti barely markets it at all, and they certainly don't bother playing Tesla's nomenclature game.
Kazemi must know this. Kazemi must understand this.
And yet he refers to these features as "autonomous."
Here's a guy working on L4 who doesn't seem to know or understand the nomenclature (let alone functionalities) of anything contiguous to what he's working on, and then he gets into a car with literally nothing, and nearly kills five people.
His conclusion? Humans aren't designed to drive cars. The reality? He shouldn't be driving cars, even with all the ADAS in his Infiniti. Drivers today remain legally responsible for anything that happens, no matter how many ADAS features they become accustomed to. If his skills are so atrophied that he—with all his education and purported knowledge of the sector—cannot muster the intellectual wherewithal to maintain safe control of a vehicle lacking ADAS, then he's a menace.
Waymo's Level 4 solution is perfect for him.
By conflating semi-autonomous features with actual autonomy, glossing over his own lack of skills and concluding that no one else could do better, Kazemi highlights the blinders-on mentality of too many in self-driving development.
Virtually everyone in the self-driving car sector—whether legacy carmakers or startups—is focused on what's called series autonomy, which substitutes for human control rather than augmenting it. An augmentation approach would follow the lessons of commercial aviation, where training and automation together have proven incredibly successful in reducing fatalities.
Short of Level 4, parallel/augmented technology is the only way to make driving safer while eliminating incidents like Kazemi's. I wrote a whole article about augmentation earlier this year, and Toyota Research Institute boss Gil Pratt should be applauded for being the lone voice in the industry pushing for parallel solutions. TRI's Guardian is the answer to Kazemi's folly, and the antidote to the Kool Aid drinkers who think series is the only way to go.
Kazemi and crew would have us believe that, 100+ years since the advent of the car, nothing can or should be done to make driving safe under human control, that we can't learn how to master machines, that nothing is gained from doing so, and that therefore no one should be driving at all.
I say he's wrong.
I want an uncrashable Porsche 911, if only they would design one exactly like this.
As for Kazemi, he pulled down his tweet. I can't wait for Waymo to give him the Wall-E pod he so needs, wants and deserves.
Alex Roy is Editor-at-Large for The Drive, Host of The Autonocast, co-host of /DRIVE on NBC Sports, author of The Driver and Founder of Noho Sound. He has set numerous endurance driving records in Europe & the USA in the internal combustion, EV, 3-wheeler & Semi-Autonomous classes, including the infamous Cannonball Run record. You can follow him on Facebook, Twitter and Instagram.