Sticks and stones may break my bones, but words will never hurt me.
Guess what? Words can hurt us, when life-or-death decisions are made on false assumptions. I’m talking, of course, about the language of Self-Driving Cars, that seemingly inevitable panacea into which Silicon Valley and the car industry are pouring billions to save us from ourselves. If technology is only as good as our understanding of it, then the automotive industry has a long way to go.
The problem isn’t limited to Tesla’s branding of the word “autopilot”, but it certainly starts there, especially now that the California DMV has threatened to stop Tesla from using Autopilot as a brand name. Add the recurring debate over whether Tesla Autopilot was defective in the Joshua Brown crash, and obvious questions emerge:
What is an autopilot? Is Tesla Autopilot actually an autopilot? What does “autopilot” imply? Where does Tesla’s system fall short of real or perceived autopilots? Is the problem the “auto” prefix? The “pilot” suffix? For those who object to Tesla’s choice of branding, I bring you Audi’s Piloted Driving, the yin of implication to Autopilot’s yang. What does “Piloted Driving” imply? What about Volvo’s Pilot Assist?
In the spirit of unpacking this mess, I signed up for flying lessons and began a deep dive into the history of autopilots and automation. I also decided to reach out to a wide variety of pilots — from private owners of single-engine turboprops to professionals flying Boeing 757s and Airbus A380s — and take them out in a variety of cars, starting with Teslas.
This is Part 1 of what I learned.
If we don’t know what we’re talking about, we can’t know what to expect
The burgeoning autonomous car sector is suffused with vague and overlapping terminology, from general terms (Self-Driving Car, Autonomous Driving, semi-Autonomous Driving) to branded suites (Autopilot, Drive Pilot, Pilot Assist, Piloted Drive) to functionality (Autosteer, Steering Pilot, Steering Assistant).
That Tesla Autopilot “controversy”? A clickbait-driven diversion masking the two real issues: 1) confusion between the literal and generally accepted meanings of the words and phrases the entire industry has co-opted from the aviation and maritime sectors, and 2) the gap between what vehicular automation/autonomy can do, what it should do, and what we expect it to do.
Any company using the phrase “Self-Driving” in a press release, or using “Auto” as a prefix, or “Pilot” as a prefix or a suffix, bears a moral responsibility for its technology to at least live up to the minimal functionality of the term’s literal definition. If the term is invented, then to its perceived one.
And yet…
The autopilot perception problem starts with grammar
You may have noticed in the prior section that I twice refer to “Tesla Autopilot”, always as a single phrase, with Autopilot capitalized. This is precisely as Tesla intended, and is a brilliant, shining example of how to inform and confuse simultaneously, intentionally or not.
Until recently, even I — who have penned thousands of positive words about Tesla’s system — was guilty of unintentionally contributing to confusion over its capabilities. I often used the possessive “Tesla’s Autopilot”, which suggested 1) that there is a functional standard among autopilots, and/or 2) that someone — presumably in aviation, since most people associate autopilots with aviation — had previously branded it, saw it genericized like Band-Aid and Kleenex, then had it co-opted in the car sector by a guy named Musk.
Not quite.
There is no single standard for autopilot functionality within the aviation or maritime sectors, but there are tons of trademarks with “autopilot” in them, many of which have nothing to do with transportation at all. There are probably more manifestations of autopilots than there are definitions, which brings us to…
What is the generic definition of an autopilot?
The Merriam-Webster simple definition states:
“a device that steers a ship, aircraft, or spacecraft in place of a person”
So, a system that can perform 100% of the steering in place of a person. Not throttle, or any other systems. Takeoff, landing and docking require additional inputs, so a human operator is implied.
The full definition states:
“a device for automatically steering ships, aircraft, and spacecraft, or automatic pilot”
Note the addition of the word automatic. The traditional definitions going back over one hundred years are very consistent in the use of the word automatic.
NOT autonomous, but automatic.
Now let’s move on to the more timely Wiki definition, conspicuously updated to reflect modern iterations:
“An autopilot is a system used to control the trajectory of a vehicle without constant ‘hands-on’ control by a human operator being required. Autopilots do not replace a human operator, but assist them in controlling the vehicle, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems.”
“Autopilots have evolved significantly over time, from early autopilots that merely held an attitude to modern autopilots capable of performing automated landings under the supervision of a pilot.”
Taken alone, the first paragraph would appear to tell the whole story — autopilots do not replace the human operator — but the second paragraph puts everything in context, and raises an obvious question about aviation that the automotive industry doesn’t seem to be asking.
If modern autopilots are capable of performing automated landings — and presumably takeoffs — why are pilots necessary at all?
The answer goes back to language.
Automation ≠ Autonomy
As per Merriam-Webster, the “auto” prefix denotes automation, not autonomy; therefore an autopilot is an automatic pilot, not an autonomous one.
Don’t believe me? An “automobile” doesn’t drive itself, but an “autonomobile” would, or will.
FYI, the definition of automatic is:
“…having controls that allow something to work or happen without being directly controlled by a person.”
The definition of autonomous is:
“…existing or acting separately from other things or people, having the power or right to govern itself.”
Automation ≠ autonomy, and it never has. Go back and reread the autopilot wiki. The word “autonomous” doesn’t appear once. Nor does the phrase “Self-Flying plane.”
The Folly of SAE’s Autonomy Levels
That fancy SAE Level system the DOT recently chose to adopt to define autonomous driving? It makes no sense. It conflates and/or confuses automation and autonomy by placing them on a single continuum. The definitions of the levels are so vague, and so many graphical interpretations of them exist, that it’s a miracle manufacturers haven’t thrown up their hands and—
Oh, wait. They have. Not one automotive manufacturer will place a car currently on the market on that chart. Not even industry leaders Tesla, Mercedes and Volvo. And they never will. They can’t, for liability and marketing reasons. They would prefer to exploit their co-opted phrases or hide behind invented ones, and wisely so, because they suspect something they don’t want competitors or consumers to know.
Yet.
Not only does automation ≠ autonomy, but automation isn’t necessarily the path to autonomy.
Automation is binary. Add more automation, and you have a decision tree. Hit its limits, and it stops, resets or fails. Autonomy is what you want when you hit those limits. Automation is a neat, ordered room. Autonomy is like the sky.
How to get to autonomy? Aviation isn’t telling.
Boeing and Airbus could sell fully automatic planes. The first fully automated flight took place in 1947, but nearly seventy years later not a single airline flies jets without a crew aboard. Why? Because automation can reach its limits, or even fail, and when out-of-the-box thinking becomes necessary the best autonomous technology remains the human mind.
For now, and probably for a long, long time.
What people think aviation autopilots do
Everything. Literally everything, from A to B. Ask anyone but a pilot and they’ll tell you pilots have it easy. All they have to do is press a few buttons and they’re done, from takeoff to landing, including navigation. As long as nothing breaks, the pilots can read, or sleep.
What aviation autopilots actually do
That depends on the plane, and the system.
A Cessna 172 — the VW Beetle of planes — can accommodate a wide variety of autopilots, from a few hundred to a few thousand dollars. By commercial standards, these remain fairly primitive. Early units controlled roll; modern units control roll and pitch. For the uninitiated, that’s basically left/right and up/down. Systems vary, but throttle is on you until you get further up the food chain in both autopilots and planes. So is most everything else.
The bottom line? The autopilots in most small planes are fairly simple, capable of holding the wings level and maintaining heading (i.e. steering) from B to C but not D, with human guidance, reducing the pilot’s workload.
Trust me, there’s still work. Coming out of an old 911 with an aftermarket GPS and cruise control, I felt right at home.
The big Boeings and Airbuses flown by commercial carriers? Not only can their autopilots control roll, pitch and yaw, but throttle as well. The latest aircraft integrate GPS navigation and myriad stability systems, greatly reducing crew workload and enabling B to D flight.
A to B, as in gate to takeoff? Still under human control.
Autopilots don’t make flying safer, at least not directly. They make it safer indirectly, by reducing pilot workload in a huge way. They are still only as good as the data entered into them, and crashes have occurred when pilots entered incomplete or incorrect instructions.
Garbage in, garbage out.
See a pattern here?
Human pilots remain essential to the safe operation of even the latest generation aircraft, as evidenced by the tragedy of Air France 447 and Sully’s Miracle on the Hudson. Both occurred in Airbuses — the most automated civilian aircraft in the sky — and yet saw two very different outcomes. The 447 story might have ended differently had the crew better understood their job, but Sully’s plane would almost certainly have crashed had it not been for his experience.
What Tesla Autopilot actually does
Tesla Autopilot is, by virtually any definition, a form of autopilot, but its current functionality — like the functionalities of systems from Mercedes-Benz and Volvo — cannot and should not be judged via direct comparison to aviation systems on any level other than basic workload reduction.
Like even the latest aviation autopilots, Tesla Autopilot and competing systems require human monitoring, and no manufacturer has said otherwise, with one exception and one misunderstanding. Early ads for Mercedes-Benz’s 2017 E-Class — since pulled — used the phrase “Self-Driving Car,” and Tesla’s Chinese-market website — now clarified — used language that could be interpreted to mean “Self-Driving.”
Like all automation, Tesla Autopilot is only effective within its limits. Workload increases when users don’t understand these limits and must compensate for unexpected events. Workload is greatly reduced when the system is used within known parameters, such as speed limits. Tesla Autopilot is more effective at or near the speed limit — in any conditions — than at its upper limit of 90 mph. So are humans.
You don’t need to be Stephen Hawking to know that education — for human drivers, or for humans about the automation on which they rely — is everything. Such education is mandatory and standardized in aviation. Driver’s Ed in the United States? Not so much. Car manufacturers have never been in the Driver’s Ed business, and yet the more advanced the safety features they introduce, the more education is necessary for consumers who mistake automation for autonomy, at their peril.
The leading automotive systems — Tesla Autopilot being the most advanced — are no more than a set of interlocking automation functionalities like radar cruise control, automatic emergency braking and automated steering.
Tesla’s next update — alleged to include highway interchange navigation — will theoretically put it on par with mid-level aviation autopilots that can follow multi-waypoint routes.
Autonomy? Not even close. But Tesla Autopilot’s vastly superior steering functionality does a great impersonation of autonomy, if by autonomy we mean accurate lane keeping in fairly clear conditions, on a well-marked road, or with a car to follow via radar cruise control.
The effective integration of multiple automation functionalities is very impressive, but it does not autonomy make.
The road to autonomy, or the jump?
If autonomy is a long way off, achieving greater levels of automation is not. The legacy manufacturers would have us believe — as per Nicholas Carr’s The Glass Cage — that partial automation leads to atrophying skills. I completely agree. I’ve seen it in my own driving, every time I get out of a Tesla and get into pretty much any other modern car.
Then I get back into a Tesla after a month or two, and it’s better, even if I’m fractionally worse.
Why? Tesla’s system is being updated as rapidly as the company can gather data about accidents. Every Tesla is networked via what Tesla calls Fleet Learning, and — by also gathering video for accident analysis — the company has a unique opportunity to rapidly close the gap between increasing automation and the skills potentially declining because of it.
Until rivals release a similar real-time data-gathering platform that can be updated, they are wise to avoid deploying automation they cannot improve in less than the traditional 3-5 year product cycle.
While Tesla forges ahead with increasing automation, the legacy manufacturers orbit a lower level dictated by their legal departments. This isn’t necessarily a mistake, given their belief that they can leapfrog Tesla all the way to actual autonomy and deliver a Self-Driving Car.
It all depends, of course, on what is happening behind closed doors in Japan, Germany & Detroit. Are they building a mountain of automation and hoping to find autonomy on top? My sources tell me that won’t work, at least not well, and probably not as quickly as the ocean of real-world data being collected by Tesla, Google and companies yet to show their wares.
We’ll cover all that in another article, because now it’s time to address the most important lesson of my first flight: the Joshua Brown crash had nothing to do with Autopilot falling short of the aviation systems after which it was named. Aviation autopilots don’t avoid collisions.
TCAS does.
TCAS, as in Traffic Collision Avoidance System, a safety technology that wasn’t mandated in the US until 1986, decades after commercial aviation became ubiquitous, and then only for certain classes of aircraft.
TCAS is one of many aviation safety systems that have no direct analog in automotive, and yet beg to be mirrored by more than just the automatic emergency braking and automated steering systems still in their infancy. The nascent V2V (Vehicle-to-Vehicle) standard is an important step, but nowhere near the ubiquity necessary to address consumer expectations, nor those of regulators unfamiliar with aviation.
The misunderstanding, and the opportunity
People think they want autopilots in cars, but what they really want are things autopilots don’t do, things some planes can do, and things airlines and pilots won’t let them do.
This is the big one, the big consumer misunderstanding, a gap in functionality masked by language and fantasies about technology consumers perceive as beyond their understanding, but not their wallets. What people want has little to do with autopilots, and yet everything to do with the future of automation (and possibly autonomy) on the ground. It’s been hiding in plain sight for decades.
It is the story of Boeing vs. Airbus, of fly-by-wire systems and Flight Envelope Protections, of a war for business, over technology, with no clear winner even to this day, 35 years after it started.
There is much to learn from aviation. As NASA’s Stephen Casner told Scientific American, “News flash: Cars in 2017 equal airplanes in 1983.”
If you want a roadmap for the future of Self-Driving Cars, tune in next week for Part 2: The Automation We Deserve, Not The Automation We Want.
[CORRECTION: This article was updated to reflect Mercedes-Benz’s and Tesla’s clarifications (in the US and Chinese markets, respectively) regarding their systems’ capabilities.]
Alex Roy is an Editor-at-Large for The Drive, author of The Driver, and set the 2007 Transcontinental “Cannonball Run” Record in 31 hours & 4 minutes. You may follow him on Facebook, Twitter and Instagram.