The Way We Talk About Autonomy Is a Lie, and That’s Dangerous

No matter what you’ve been told—and told, and told again—our self-driving future is a long and uncharted way off. Pretending otherwise might have dire consequences. Here’s an unvarnished look at the current reality.

By Jonathon Ramsey

You can be forgiven for thinking that autonomous cars, the all-seeing, self-driving, accident-free future of human transportation, have already arrived on a road near you. After all, the media have been writing headlines to that effect for years now: "Let the Robot Drive: The Autonomous Car of the Future is Here" (Wired, January 2012); "Like it or Not, the Autonomous Car Is Here—Almost" (Autoweek, May 2013); "The Driverless Car Comes to Washington" (The Washington Post video, August 2014); "Driverless Cars Are Already Here" (TechCrunch, June 2015); "Driverless Cars Are Here—Now What?" (The Hill, September 2016); "Self Driving Cars Are Here, Are You Ready?" (Tech.Co, December 2016). 

And so on.

But this is, at best, only partially true. A number of automakers have engineered vehicles that can pilot themselves with an ability unfathomable even a decade ago. Yet after months of interviews with the people shaping the self-driving car industry, it's clear that our autonomous future—the one where you take a nap as your vehicle whisks you to your destination in comfort and safety—is not in any real sense here now, nor around the corner, but likely decades away. All claims to the contrary are either based on misunderstanding or are intentionally misleading.

That's not to say that future isn't coming, eventually. Aside from nearly every notable automotive manufacturer, major tech players like Apple, Google parent company Alphabet, and Uber are currently staking out their respective turf in the ascendant autonomous ecosystem. But to understand where we're going, it's helpful to take a clear-eyed look at where we are right now, ignoring the hype.

Don't Believe the Clickbait

To illustrate the issues of hyperbole, let's backtrack to another headline: "10 Million Self-Driving Cars Will be on the Road by 2020", from Business Insider (June 2016). A quick scan reveals their definition of "self-driving car" to be incredibly broad: "We define the self-driving car as any car with features that allow it to accelerate, brake, and steer a car's course with limited or no driver interaction." Adaptive cruise control, which covers the acceleration and braking aspects of that definition, has been around since 1998; add lane-keep assist, which debuted back in 2003 and can control the vehicle's steering to keep it from leaving its lane of travel, and you have Business Insider's characterization of an "autonomous vehicle"—i.e., cars like the Mercedes-Benz S-Class and Tesla Model S and a half-dozen other models already on the road. This is hardly the customer fantasy of "Netflix and chill" in the backseat, and given the average creation cycle for a new vehicle is anywhere from three to five years, it seems unlikely we're about to experience a wave of fully-autonomous vehicles by the time the next presidential election rolls around.

“When told that their car will perform a task they aren’t interested in, such as driving, humans will do other things—like watch a Harry Potter DVD.”

An eager marketplace tends to trample or dismiss such discrepancies in favor of the shiny, best-case-scenario version. We've seen it before: we're promised a Jetsons-like ideal of a robot butler—the reality is a Roomba. So, what is the current reality of the self-driving car?


The Limitations of Current Technology

A vehicle's ability to "see" and translate its surroundings comes from onboard sensors. The three major sensor systems in use, alone or in combination, by vehicles currently on the road are: 1.) cameras; 2.) radar; and 3.) Light Detection and Ranging (LIDAR), currently a very expensive system that measures range via pulsed laser, and can be found atop Google’s latest self-driving minivan and Uber’s modified self-driving Volvo XC90s, among others. Google's 64-channel LIDAR system is good to 120 meters, creates a 360-degree image, has a 26.9-degree field-of-view, and can take more than two million readings per second.
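To put those specs in rough perspective, here is a minimal back-of-the-envelope sketch, in Python, of the point-cloud density they imply. The 10 Hz rotation rate is an assumption typical of spinning LIDAR units, not a figure reported for Google's system, so treat the output as illustrative only.

```python
# Rough sketch of what the LIDAR specs quoted above imply for point-cloud density.
# The 10 Hz spin rate is an assumption, not a reported figure.

CHANNELS = 64                    # vertical laser channels
POINTS_PER_SECOND = 2_000_000    # "more than two million readings per second"
VERTICAL_FOV_DEG = 26.9          # vertical field of view
SPIN_RATE_HZ = 10                # assumed rotations per second

points_per_rotation = POINTS_PER_SECOND / SPIN_RATE_HZ
points_per_channel_per_rotation = points_per_rotation / CHANNELS
horizontal_resolution_deg = 360 / points_per_channel_per_rotation
vertical_resolution_deg = VERTICAL_FOV_DEG / (CHANNELS - 1)

print(f"~{points_per_rotation:,.0f} points per 360-degree sweep")
print(f"~{horizontal_resolution_deg:.2f} degrees between readings horizontally")
print(f"~{vertical_resolution_deg:.2f} degrees between laser channels vertically")
```

At those rough numbers, a single 360-degree sweep yields on the order of 200,000 range readings for the car's computers to digest.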

"The camera is very good at providing a huge amount of information," says John Dolan, a principal systems scientist at Carnegie Mellon University’s Robotics Institute. "But interpreting that information accurately is difficult because of the lighting issue," he says, referring to the loss of image quality that occurs in situations like direct sunlight, poor or extreme contrast, or fog.

Laser-based systems like LIDAR aren't disrupted by lighting issues, and are "very good at giving you shape information without too much difficulty in terms of the processing," Dolan says. "But it gets confused easily by bad weather." Radar is not confused by weather, "but it doesn't give as much shape as a LIDAR [system]—it gives, basically, just range or distance, and the rate at which the distance is changing, or the velocity of the vehicle."

There is currently no standard for the type of equipment used in semi-autonomous systems. Tesla's Autopilot notably forgoes LIDAR, with CEO Elon Musk stating he's "not a big fan" of the technology due in part to its complexity and expense (though another of Musk's companies, SpaceX, uses LIDAR on its Dragon spacecraft). In May of 2016, a Tesla owner and avid fan of the brand named Joshua Brown was killed when a semitrailer truck pulled out in front of his Model S sedan on a Florida highway. The Autopilot system, engaged at the time, did not recognize the white trailer and did not apply the brakes; neither did Brown. A six-month investigation into the incident by the National Highway Traffic Safety Administration (NHTSA), which took into account changes that Tesla made to Autopilot after the crash, including more aggressive alerts when drivers remove their hands from the wheel and system disengagement when those warnings are ignored, cleared the company of a "safety-related defect trend,” though The New York Times noted that "some experts speculate that a LIDAR-driven car might have avoided this fatal crash."

(It should be noted that the same NHTSA report declared that "data show that the Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation" and that Autopilot does not engender "unreasonable risks due to owner abuse that is reasonably foreseeable (i.e., ordinary abuse)," while Musk, during a press conference last year, said that "Autopilot accidents are far more likely for expert users" than novices. More on all that in a bit.)

Unforeseen incidents aside, even predictable weather conditions remain a massive hurdle. Heavy rain alone can shut down an autonomous system, and forget hands-free driving on a lonely snow-covered road. According to Liam Paull, a research scientist at MIT’s Toyota-funded Computer Science and Artificial Intelligence Lab (CSAIL), snow "pretty much hoses both camera and laser, because now everything's just white, you can't see the road lines." And, according to Paull, laser doesn't bounce well off the snow, creating "all sorts of garbage" in the feedback.

As a result of such issues, most major manufacturers regulate when and where autonomous features are available. Eric Coelingh, technical director of active safety functions at Volvo, says the company will start by sending engineers to personally verify the roads on which full autonomy will be enabled on its upcoming production cars.

"We have to test that it works on a particular road in particular weather conditions, and when we know that it's working [during] nice weather in Gothenburg, it doesn't mean that it works with nice weather in New York, or with bad weather in Los Angeles," Coelingh says. "We don't have the data on that."

But even when Volvo does have the relevant data, that doesn't mean the autonomous capabilities function without restriction. 

"When the circumstances are out of this [verified] scope because the road conditions are completely different, or the weather conditions are completely different, then we cannot make this autonomy feature available to our customers," Coelingh says.

“Autonomous mode can create in drivers a false sense of security and a susceptibility to distraction, making an emergency disengagement more dangerous because the point at which the driver needs to retake control of the vehicle is also a point at which he is poorly equipped to do so.”

So, despite the widespread use of the term "self-driving car," what's actually being described is a vehicle with limited semi-autonomous capabilities only available under certain conditions. This is the reality now, and will be for the foreseeable future—and for the foreseeable future, these early-stage autonomous experiments will share the road with traditional vehicles piloted solely by humans. This could be a dangerous combination.

Mallory Short / The Drive

The Space Between

The NHTSA first issued a “Preliminary Statement of Policy Concerning Automated Vehicles” in 2013. It laid out five tiers of autonomous operation, from no autonomy at Level 0 to an occupied or unoccupied car that could handle every operation on its own at Level 4. Last year, the agency adopted a new classification system outlined by the Society of Automotive Engineers, explained in the 30-page document "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" that defines six levels of autonomy. Level 0 is intermittent warning systems like blind-spot detection; Level 1 encompasses features that can monitor the environment and alter steering or acceleration and braking, like parking assist, which only controls steering, or adaptive cruise control (ACC) that only adjusts speed; Level 2 includes systems that simultaneously steer and change speed, like ACC with lane-centering, and traffic jam assist that maintains space in traffic and navigates shallow bends. Under all of these level definitions, the driver is still charged with monitoring the environment.

At Level 3, an "Automated Driving System" can take over all driving functions and total monitoring of the environment—the caveat being that the driver is expected to be ready to take over if the vehicle strays off-course, or finds itself in a situation it can’t handle. According to the SAE paper, at Level 3 "an ADS is capable of continuing to perform the dynamic driving task for at least several seconds after providing the [driver] with a request to intervene." So, while the vehicle should be able to fully monitor its environment, the driver must also be ready to take over in an emergency. 
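For readers who think in code, here is a minimal Python sketch of the SAE taxonomy described above. The level names are paraphrases of the SAE document rather than its exact wording, and the helper function simply encodes the monitoring responsibility laid out in the previous two paragraphs.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased labels for the six SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0           # warnings only, e.g. blind-spot detection
    DRIVER_ASSISTANCE = 1       # steering OR speed control, never both at once
    PARTIAL_AUTOMATION = 2      # steering AND speed, driver watches the road
    CONDITIONAL_AUTOMATION = 3  # system monitors, driver must answer a takeover request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere, anytime

def driver_must_monitor(level: SAELevel) -> bool:
    """At Levels 0 through 2, the human is still responsible for watching the environment."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_monitor(SAELevel.CONDITIONAL_AUTOMATION))  # False
```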

That point at which a self-driving system requires a human to retake the controls to avoid an incident is termed a "disengagement." The California Department of Motor Vehicles requires any company testing autonomous vehicles in the state to log every disengagement that occurs, and the agency's most recent report shows commendable year-over-year improvement between 2015 and 2016. Waymo, the self-driving car project from Alphabet, led the rankings with just 0.2 disengagements per thousand miles driven autonomously in 2016—a four-fold improvement over the previous year even as total miles logged increased by 50 percent.
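The arithmetic behind that metric is simple; the sketch below shows how a figure like "0.2 disengagements per thousand miles" is calculated, using hypothetical numbers rather than figures from any actual DMV filing.

```python
def disengagements_per_1000_miles(disengagements: int, autonomous_miles: float) -> float:
    """Normalize disengagement counts per 1,000 miles driven in autonomous mode."""
    return disengagements * 1000 / autonomous_miles

# Hypothetical fleet: 120 disengagements logged over 600,000 autonomous miles.
print(disengagements_per_1000_miles(120, 600_000))  # 0.2
```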

But although the cars are improving, none of the experts with whom we spoke believes Level 3 autonomy is safe—and the issue has largely to do with people. A self-driving car operating in autonomous mode can create in drivers a false sense of security and a susceptibility to distraction, making an emergency disengagement more dangerous because the point at which the driver needs to retake control of the vehicle is also a point at which he is poorly equipped to do so. The specification says, in effect, "Your car will drive for you, but you need to watch it while it does, just in case.” Studies have shown humans aren’t good at continuous monitoring without personal involvement, and even Ford engineers tasked with monitoring the brand's self-driving cars from behind the wheel doze off at such a high rate that the company recently announced it would skip Level 3 autonomy altogether. It's an unavoidable fact that when told that their car will perform a task they aren’t interested in, such as driving, humans will do other things—like watch a Harry Potter DVD.

Level 4 is the first phase at which a vehicle will never need driver input, no matter the situation. If a Level 4 car gets in trouble and the driver doesn’t take control, the car is programmed to get itself out of danger—by pulling itself off the road, for instance. (This is one example of why Volvo sends human engineers to verify roads approved for a future, Level 4-autonomous Volvo: a road without a safe extraction point may not be fit for hands-free driving.) Currently, there are no Level 4 autonomous vehicles for sale, though automakers and headlines continue to insinuate or outright declare they'll be here in three years' time, by 2020:

"But ... will they be too safe?", www.theguardian.com

But what if customers want Level 4 and an automaker only has Level 2? That's when the advertising creatives step in to try to make the latter sound as much like the former as possible. Twice last year, regulators and consumer watchdogs went after automakers over potentially confusing marketing of autonomous functionality.

In March of 2016, Mercedes-Benz released a 30-second commercial called “The Future”, touting its 2017 E-Class sedan. Over scenes of the futuristic Mercedes-Benz F 015, a purportedly fully-autonomous but nonetheless concept-only vehicle, the narrator ("Mad Men" star Jon Hamm) asks, “Is the world truly ready for a vehicle that can drive itself? An autonomous, thinking automobile that protects those inside and outside?” At the moment the narrator answers his own question with, "Ready or not, the future is here," the decidedly non-fully-autonomous Mercedes-Benz E-Class rolls into frame. The commercial didn’t explicitly state equivalent capability between the F 015 "that can drive itself" and the "self-braking, self-correcting, self-parking" E-Class, but watchdog groups insisted that equivalence was too strongly suggested.


On July 27, consumer advocates sent a letter to the Federal Trade Commission asking the FTC to "scrutinize" the commercial. On July 28, Mercedes dispatched troops to defend the 30-second spot, with a spokeswoman saying the ad contained a disclaimer about the E-Class's true abilities. That disclaimer, in fine print and appearing for six seconds, reads: "Vehicle cannot drive itself, but has autonomous driving features. System will remind the driver frequently to keep hands on the steering wheel." Much easier to catch was the company's print advertisement timed to run in magazines around the same time as "The Future" ad appeared on television and the internet—a glossy, full-page shot of the E-Class with the text "Introducing a self-driving car from a very self-driven company.”

The Mercedes-Benz print advertisement suggested the E-Class was a "self-driving car.", Mercedes-Benz

On July 29, Mercedes pulled “The Future” from its campaign.

"Given the claim that consumers could confuse the autonomous driving capability of the F015 concept car with the driver assistance systems of our new E-Class in our ad 'The Future,' Mercedes-Benz USA has decided to take this ad out of the E-Class campaign rotation,” the company said. A version of “The Future” uploaded to the Mercedes-Benz USA YouTube channel in October features a less-suggestive voiceover.

And then there was Tesla, whose situation was, as usual with the Silicon Valley company, more complicated. Tesla controversially named its driving assistance system "Autopilot," after the systems that appear in airplanes, and Tesla's web site notes the technology is classified "as a Level 2 automated system by the National Highway Transportation Safety Administration (NHTSA)," while the same paragraph calls it "an advanced driver assistance system" that is "designed as a hands-on experience." As Tesla CEO Elon Musk has repeatedly explained, a plane's autopilot flies under the constant supervision of at least one pilot ready to intervene immediately, and at no time do cockpit personnel watch movies or go to sleep on the flight deck. Knowing all of that, the name "Autopilot" makes sense (although it should also be noted that studies by NASA and others show that pilots also have a hard time paying attention to their autopilots).

The problem is that the name piggybacks on a very specific definition of "autopilot," one that makes sense at 38,000 feet but is less obvious to non-pilots (i.e. nearly all of the global car-buying public). If someone at sea level said about a machine, "I left it on autopilot," and then went on to explain that she spent two hours monitoring said machine to make sure it was operating correctly, a reasonable response would be, "that's a bad autopilot." That sort of confusion is, in part, why in October 2016 German regulators sent a letter to every German Tesla owner stating that Autopilot “requires the driver's unrestricted attention at all times.” That same month, the California Department of Motor Vehicles proposed a regulation to forbid Tesla from using the terms “autopilot,” “self-driving,” and “automated” in advertising materials. (The proposal never left the draft stage.)

There is likewise an issue with the term "in beta," which Tesla has repeatedly used to describe Autopilot. While the term is defined by Merriam-Webster (and common usage) as "a stage of development in which a product is nearly complete but not yet ready for release," Tesla has always insisted that Autopilot is a fully-vetted system ready for primetime, and that the term, as Musk explained at a press conference last year, is used "to reduce people's complacency in using [Autopilot]" and "not because it's 'beta' in any normal sense of the word." Which raises the question: when it comes to such a high-stakes feature as Autopilot, wouldn't using any definition other than the "normal sense of the word" lead to confusion as to the system's actual operating state? A larger question follows, taking into account both Musk's assertion that Autopilot accidents are much more likely for expert users than novice ones and the preponderance of anecdotal evidence showing Autopilot users operating the system incorrectly or irresponsibly, counter to Tesla's own definition of "intended use," as compared to systems from other manufacturers: is it possible that the term "beta" confuses some Tesla owners into thinking that Autopilot is still in a test phase, and that an enthusiastic and risk-taking subset of those owners (which would include people like Josh Brown) are deliberately pushing the boundaries of the system in the name of analysis?

We posed that question to the company. A Tesla spokesperson issued the following response:

Autopilot's 'beta' label is intended to reduce complacency, and Tesla has taken many steps to prevent a driver from misusing Autopilot, including blocking a driver from using the system after repeated warnings are ignored. 


While that answer clearly skirts the question, the truth is that regulating advertising language or policing automaker jargon can't fully control the message. There is another, arguably larger problem.

The Media Hype Machine Is Putting Lives in Danger

Consider the potential for mixed messages in this Bloomberg interview with Betty Liu: 


In the video, Musk explains that the driver is ultimately responsible for what happens behind the wheel, even when Autopilot’s engaged. In the same interview—but in a video clip posted separately—Musk rides with Liu in a Model X, activates Autopilot, puts his hands in the air, and says, "It’s on full autopilot right now, I’m not touching anything, no hands, no feet, nothing.” Musk then continues the interview while the Model X autonomously drives, navigates bends on the Tesla campus, and stops behind another vehicle.


Musk was simply demonstrating his new technology at the request of a media outlet. But consider how that coverage is presented, and consumed: at the time of this writing, the video about driver obligations has 166,798 views; the video of a self-driving car autonomously ferrying several passengers, posted on the same day, has 1,027,268.

Hands-free Autopilot videos posted by Tesla owners and members of the media have become a YouTube staple. Like this one. And this one from CNET that begins with the line, “You don’t need to put your hands on the wheel to drive a Tesla.” And this one from Inside Edition about a grandmother “Behind the Wheel of Self-Driving Tesla.” And this one, with the line, “It was clear we were in good hands with the P90D, but with all the new free time the Autopilot afforded us, what were we going to do with it?” And this one from Josh Brown a month before he died, showing his Tesla avoiding a collision while merging onto a highway. In the video's description, Brown wrote: “I actually wasn't watching that direction and Tessy (the name of my car) was on duty with autopilot engaged.” What was in "that direction" Brown wasn’t watching? The highway onto which he was merging.

Bryant Walker Smith, an assistant professor in the University of South Carolina School of Law and an affiliate scholar at the Center for Internet and Society at Stanford Law School, says "the levels of automation were described as a promise by the automaker to the user: 'We are promising that our system will do this under these conditions.’ So, part of this will be clearly communicating those capabilities and those limitations to the user, not just once, not just in an owner’s manual, but in real time, as the systems are operating.”

This is where Tesla's "disruption" cuts both ways. It’s inconceivable that the CEO of a traditional major manufacturer would tweet that one of its marquee convenience systems, installed on a production car on sale to the public, is still in beta, as Musk has done, or that an OEM exec would repeatedly say the law is just going to have to catch up to the company’s innovations. To be sure, the risk-averse legacy car manufacturers still get ahead of regulations and operate beyond them at times. But, for numerous reasons, they don’t crow about it, because it tends to be bad for business.

For now, Tesla’s upgraded its current Autopilot with new safety restrictions. For the next version of Autopilot, Tesla has added cameras, updated the sensors, and improved the radar processing (while still holding back on LIDAR). But the rise of autonomous systems from a growing number of manufacturers creates an uneasy situation on the road, according to MIT's Liam Paull.

“There’s plenty of people out there who are happy to take the risk for the thrill of having an autonomous vehicle,” Paull says. “In general, I'm OK with people accepting risks and taking risks, but there's plenty of other people on the road who have not agreed to these kinds of things. 

"It's difficult, because on the one hand you need a good regulatory environment to push the science forward, but we all have a responsibility to do things properly.”


When Silicon Valley Met the Auto Industry

Manufacturers, tech companies, and consumer groups broadly agree that the federal government needs to set the tone for coordination. In an attempt to answer that need, the National Highway Traffic Safety Administration last September released its 116-page Federal Automated Vehicles Policy. The guidelines for “highly automated vehicles” (HAVs)—the kind with driving controls that humans can still operate when necessary—assigned responsibility for regulating the hardware and software to the federal government; created a 15-point safety assessment for design, development, and testing of self-driving vehicles; and drafted a Model State Policy that “outlines State roles in regulating HAVs, and lays out model procedures and requirements for State Laws governing HAVs.”

Stakeholders were pleased with this long-awaited first step. Nevertheless, among the more than 1,100 public comments submitted on the policy, those same stakeholders listed numerous complaints about issues the NHTSA both did and did not address.

Establishing a national framework pleased the states, mostly; the National Conference of State Legislatures regretted that the NHTSA hadn’t asked the NCSL for input before drafting the Model State Policy, and noted possible inconsistencies and ambiguities in the division of federal and state responsibilities. Apple wanted exemptions for “limited and controlled testing on public roads,” plus clearer rules on the gathering and use of driver data. Lyft didn’t believe the NHTSA went far enough to prevent mismatched laws from state to state that would hinder streamlined development nationwide. The Property Casualty Insurers Association of America wanted a clearer understanding of liability and state oversight in the event of autonomous crashes.

“If you enumerate all the different things that can go wrong, or [are] difficult in urban driving, autonomous cars today will not be able to handle 95 percent of them.”

John M. Dolan

Outside the scope of the policy paper, others have suggested how the NHTSA might handle regulating an industry suddenly filled with new and niche players working beyond the traditional vetting and recall procedures originally designed for major automotive manufacturers. Among his numerous publications, Walker Smith wrote a 100-page paper titled "Lawyers and Engineers Should Speak the Same Robot Language" that addressed this very issue. When asked about potential pitfalls in this early development phase, Smith said that "one of the questions we’ll really struggle with is how to deal with a less top-down industry," because "major automakers, suppliers, Silicon Valley companies that do this research will have strong reputational [and] financial interests to act deliberately."

"There will be some smaller startups that may or may not have those same interests," Smith says. "And that could be concerning for the federal government and for state governments with respect to road safety.”

Carnegie Mellon scientist Dolan concurred. "I think that the current and maybe uneasy marriage between a Silicon Valley mentality and a Detroit (i.e., more cautious, traditional automaker) mentality has had some fruitful results," he notes, adding that "the historical traditionalism of the automakers has been broken through to some extent by the freewheeling attitude of Silicon Valley, and I think as a result we've made more progress more quickly than we otherwise would have."

"However, I think that the reliability and testing practices of the automakers need to be harnessed or somehow introduced into the process to a greater extent, so that we don't have any unfortunate problems."

A series of high-profile crashes, either fatal or injurious, that mar the public's faith in the technology would no doubt constitute an "unfortunate problem." And because autonomy is new, exciting, and puts lives at stake, we judge the technology on a higher standard—one that turns every mundane fender bender into a high-profile event.

Alternatively, those otherwise unremarkable incidents may seem momentous because there's an inherent understanding that autonomy is a sort of haphazardly regulated Wild West. That's why a number of engineers, safety advocates, and even automakers don’t want mere federal guidelines; they want the government to step in, slow down the roll-out of self-driving vehicles, and draft proper regulations first. Rosemary Shahan, founder of Consumers for Auto Reliability and Safety, believes that if the automakers are going to move the needle this quickly, a policy paper isn’t enough.

"What we really need are federal standards," she says. "They're trying to jump the gun and come out with these cars without federal standards in place, just basic standards that say, for instance, that [autonomous cars] can't be hacked."

Proper regulations would give the government a bright-line rule for issuing recalls to protect public safety. "All they've done so far is draft guidelines," Shahan said, "and that's not adequate to protect consumers. You have to actually have a standard that [automakers] need to meet. And occasionally they fail to meet those standards and then [the authorities] issue a safety recall. Without a standard, it’s unclear what [it would] take to get a safety recall issued, and what will be considered an unreasonable risk to safety.”

The NHTSA doesn’t have the money, staffing, engineering capacity, or tech know-how to codify a set of all-encompassing laws in advance. So, as Walker Smith told Wired last May, our autonomous future could be "driven by what actually gets to market." The rest of us will simply be along for the ride.

When Will Robots Win the World Cup?

What, exactly, will that ride look like? In other words: what do those currently guiding the industry envision as a likely roll-out for autonomous vehicles? The people we spoke to mostly agree that, in terms of the vehicle, the hardware is ready; it’s the software that needs finishing. Self-driving cars don't need to learn how to navigate roads as much as they have to learn how to navigate humans. Despite massive processing ability, an autonomous car still doesn’t know how to handle what MIT’s Paull called “corner cases,” or “the part of the space that you don't get a lot of data about because [contact] happens so infrequently, so it's very hard for a robot to learn what to do in those situations.”

Corner cases can include the wide variety of situations that require a sort of best-case judgment call, such as navigating a residential street with legal two-way traffic but a single usable lane. Human drivers use waving and finger pointing to communicate with one another and establish order; computers don’t speak that language yet. Nissan, BMW, Ford, Volvo, and Toyota have pledged to sell a Level 4 autonomous car by 2021, and the standard five-year automobile development cycle means they’re working on those cars right now. But considering the challenge of teaching the hardware our human ways, computers are unlikely to have a wave-and-point capability of their own by 2021.

Ford has already suggested that its coming Level 4 car (scheduled for 2021, naturally) will operate “in a geo-fenced area that is very heavily 3D mapped.”

According to Volvo engineer Coelingh, “the best self-driving cars very soon will be better than the worst drivers we have around—the inattentive drivers or the drunk drivers. But before [these cars] can beat the really attentive drivers, that will take time.”

“Even the bullish Elon Musk has said Autopilot needs to log a billion test miles before it gets out of beta; Tesla is reportedly somewhere north of 200 million right now.”

That means, at the very least, a lengthy and phased roll-out of autonomous capabilities—or, as Paull puts it, "having a car that can [drive itself] in progressively more and more scenarios." This, "rather than a switch where there's a car that can do it all," is the likely scenario. "And the way that starts is with highway driving,” he says.

Of the type of Level 4 autonomy that relieves a driver of the need to monitor the environment, Walker Smith says the vehicle needs "two of the three [things]: slow speeds, simple environments, or supervised operations."

"I could see a company like Tesla having a system that technically does not expect that the user will pay attention on freeways," Smith says. "I could see in the next year or two a company deploying a low-speed shuttle under relatively simple conditions where the people are just passengers.”

Dolan, of Carnegie Mellon, says current systems are "largely able" right now to deal with routine highway driving and simple city roads. About Level 4 autonomy, even on the highway, he was more circumspect when taking corner-case scenarios into account. 

"If you stick to a given route at low-traffic times of the day, it may well be that an autonomous car can run 95 percent of the time without human intervention," he says. "But if you enumerate all the different things that can go wrong, or [are] difficult in urban driving, autonomous cars today will not be able to handle 95 percent of them. 

"How long is it going to take for robots [to be able] to do that? It's similar to asking how long is it going to be until we have a robot team that can win the World Cup."

Volvo demonstrates one of the approaches we can expect in the interim. This year, the Swedish carmaker will lease 100 development cars to select customers in Gothenburg for its "Drive Me" pilot program. One goal of Drive Me is to "research how ordinary people want to use this technology" in preparation for the autonomous car Volvo expects to sell at dealerships in 2020.

That 2020 production car will supposedly take the carmaker from its current Level 2 assistance features straight to Level 4 autonomy. Volvo's Coelingh told us, "That car will have three different modes: you can drive it manually, and you will have all the latest and greatest safety systems that will help you to avoid collisions; you can drive in assisted modes like ACC and Pilot Assist, and these two modes will be available, in practice, anywhere at any point in time; the fully autonomous mode, the Level 4 mode, will also be in that car, but you will only have it available on certain roads—roads that fulfill a number of preconditions. 

"We have designed the technology for your daily commute, so we will select roads that a lot of people use to get into a big city or out of the big city—the bigger freeways, or roads where there's a lot of congestion. And then the car will tell you whether full autonomy is available. But I do not expect that by 2020 it will be available at any point in time, on any road. That will take much more time.”

Volvo has already said that when it makes Level 4 autonomy available, it will accept liability for accidents that occur during the proper use of the autonomous system.

The Drive Me initiative will work with the city of Gothenburg to explore open and covered parking solutions for large numbers of autonomous vehicles. "What's even more interesting [than the technology itself] is that [it] is an enabler for making the traffic system much safer than it is today," Coelingh says. "There's a lot of focus on technology, but there's actually very good reasons to look at the bigger picture and try to understand how this will impact society. I think it’s more interesting that it can really change the transportation system as we know it.”

Toyota will take a different approach, practically backing its way into Level 4 autonomy with the help of an army of scientists working at two well-funded research centers at MIT and Stanford. "The core idea here is that, instead of building an autonomous car, you build a very advanced safety system that's basically an autonomous car that sits in the background and makes sure a human doesn't do anything wrong," Paull says. "That’s got a couple of advantages. One, you never have to go through Level 3. The other is that, if your system decides it's not sure what's going on, it can always do nothing. The hope is that you should do no harm; you should have a system that, worst case, is as good as a human driver, and the best case is a car that's incapable of crashing.”

The two research centers are one year into Toyota’s five-year funding commitment. The Japanese automaker says it expects to have an offering with autonomous capability on sale by—you guessed it—2020.


Miles and Decades to Go

This article touches on some of the hurdles facing full autonomy, but certainly nowhere near everything that governments, manufacturers, and the public will need to sort out in the meantime. For instance, upgrading the national infrastructure so self-driving cars will have the lane markers they need. Establishing liability protocol—if a "driver" isn't driving her car, is the manufacturer at fault if that vehicle crashes, or maybe the software supplier?—is itself a massive undertaking, and will likely tie up a lot of time in various courts across the country. There’s the seemingly trivial but potentially crippling matter of human bullying; in other words, how to make an autonomous car safe enough to "do no harm” yet aggressive enough that it doesn’t continually give way to humans who figure out that a self-driving car, unlike a person, is always programmed to stop for pedestrians? (Plus, some of those pedestrians could be among the thousands of human drivers predicted to be out of work thanks to self-driving vehicles.)

What about the possibility of hacking a self-driving car? Despite a lot of hand-wringing, the full scope of possibilities still seems unclear.

It will take years, money, miles, legislation, and judges to figure it all out—and that's just for the problems we can foresee. Even the bullish Elon Musk has said Autopilot needs to log a billion test miles before it gets out of beta; according to some reports, Tesla is only just north of 200 million active Autopilot miles right now.

So, what about those autonomous Jetsons dreams? We’ll get there, eventually, but it will be slow and it will come piecemeal. And that’s assuming the technology doesn’t suffer any major missteps. As to the question of a realistic timeline for you to be able to get in your car and ride to your favorite restaurant without touching the wheel or putting down your book, Paull says we might get "a car that can do 95 percent of the driving tasks without any oversight from a human within ten years. But that last five percent is going to be very, very tough." Volvo’s Coelingh says, "If you're talking about anywhere, in any weather conditions, I think it’s longer than 20 years.” Gil Pratt, CEO of the Toyota Research Institute that oversees the two university research centers, said at this year’s Consumer Electronics Show we’ll need "decades to have a significant portion of US cars operate at Level 4 autonomy or higher."

So not even the boffins know how it’s all going to play out, or when. Walker Smith says that "the things that we expect are going to take far longer, and it’s the things we don't expect that are going to be right around the corner." Regarding that first wave of autonomous cars being promised by the manufacturers in three to four years, Smith notes that "you might not be able to take this car from your house 1,000 miles to a mountain range, but you might be able to take it on your freeway, and you might be able to take a shuttle around downtown LA. Or you could just have flying cars and drones that we all find eminently more interesting than self-driving cars. Who knows?”

Who knows, indeed? Anyone who claims otherwise has a robot butler to sell you.
