Latest Tesla Autopilot Wreck Shows How Future Car Accidents Will Be Investigated Like Plane Crashes

New autonomous systems mean highway crashes will need to be investigated more like plane crashes.

By Eric Adams

Another Tesla crash, another round of finger-pointing. This time, a Tesla owner in Los Angeles drove into the back of a parked fire truck on a freeway this past Monday while the car’s Autopilot semi-autonomous drive system was, according to the driver, engaged. Fortunately there were no injuries, but the National Transportation Safety Board indicated that it would send two investigators to look into the incident.

That last detail alone—that the NTSB would even show up for an otherwise straightforward rear-end collision on a highway—is actually the most significant part of this story, and it’s a telling indicator of what we’re up against as we march through the semi-autonomous phase of driving on our way to the Holy Grail of fully autonomous transport. This is new turf, and the NTSB needs as much data as it can get about what happens when drivers engage new technologies on public roads and things go awry.

This is the second time the NTSB has investigated a Tesla accident. The first one, in 2016, resulted in a fatality, and the investigation found that the driver was over-reliant on the Autopilot system—using it in circumstances it wasn’t designed for—and also that the Tesla system allowed him to be that over-reliant, by not preventing its use on certain roadways and by not adequately monitoring the driver’s engagement. Monday’s crash likely won’t result in a full investigation, but it is an opportunity to collect more data on the safety of semi-autonomous systems.

As with the 2016 crash, the question for the NTSB—again, keeping in mind that it’s not a full investigation—won’t be who was generally at fault, even though that might appear pretty obvious. After all, Tesla’s Autopilot is a Level 2 autonomy system, which means it’s an advanced driver-assistance system that can take control of the car under certain relatively constrained situations, with the critical caveat that the driver is still in charge and responsible. So if that Tesla in California roared into the back of the fire truck at an estimated speed of 65 miles per hour...why didn’t the driver stop it?

No, the NTSB will instead be interested in the nuances of how and why. The Board doesn’t write rules or even technically assign blame for accidents. Rather, it dives into crashes to understand them at the deepest possible level, grasp all the contributing factors, and, in the case of full investigations, make recommendations as to what changes might be necessary to prevent similar accidents in the future. In road accidents, it looks at driving conditions, the technical details of how the vehicle operates, and human factors such as the driver’s mental state, general situational awareness, and familiarity with the equipment being operated—in addition to what might appear to be patently obvious causes. It also assesses the vehicle’s intended capabilities and limits, and how well its systems actually stay within those defined parameters.

It’s in these operational nuances that automotive accident investigations will likely start looking a lot more like airplane accident investigations, now that we’re going deeper down the rabbit hole of autonomy. Aviation already operates with a high degree of automation and semi-automation, and the NTSB has a long-proven ability to suss out even the most intricate and microscopic contributing factors in accidents involving those systems; those same skills will now be brought to bear on the safety of autonomous car systems as well.

“Aviation suggests the way forward,” says autonomy researcher Bryant Walker Smith, an assistant professor in the School of Law and the School of Engineering at the University of South Carolina. “Both regulators and manufacturers will investigate, and crashes will be examined more systematically. Investigations will increasingly turn to digital data stored locally or remotely, whether from the vehicles involved, other vehicles, personal devices, or surrounding infrastructure. Sometimes these data will provide certainty—allowing investigators to ‘replay’ a crash—and sometimes they will actually introduce new uncertainty.”

Hence the arrival in Los Angeles of two federal investigators sniffing around a seemingly mundane, non-fatal highway rear-ender: This is that process starting in earnest. In short, Smith says, entire vehicle fleets will learn from single incidents. This mirrors the current modus operandi of the aviation industry, which unpacks safety incidents in excruciating detail. Because aviation is farther along in its march toward autonomy, that knowledge base includes much about the human operator’s role in accidents—what pilots do and don’t know, what they see and don’t see, and, critically, what role semi-autonomous technology plays in their overall engagement in the process of flying.

For instance, Smith cites numerous plane accidents that could provide insight into similar potential pitfalls for autonomous driving. There was a Northwest Airlines flight in 2009 in which understimulated pilots accidentally flew 150 miles past their destination airport in Minneapolis; or Air France Flight 447, in which intense confusion about what the airplane was doing led to a fatal plunge into the Atlantic Ocean, also in 2009; or the 2013 Asiana Airlines crash at San Francisco International Airport, in which apparent skills degradation led the crew to misjudge their approach, thinking that the airplane was automatically maintaining airspeed when it wasn’t.

In each case, a huge variety of factors contributed to the accident—something we could see more and more as manufacturers ask drivers to be only partially engaged in their driving. It’s already well known that Level 3 autonomy—the next stage, with fully hands-off driving, but drivers who still need to step in if the vehicle can’t handle a situation—is potentially murky business, so much so that some manufacturers now say they will jump straight to Level 4, which features full robotic control. Indeed, if drivers increasingly equate “hands-off” with “brains-off,” as they might in Level 3 autonomy, the possibilities for mayhem stack up.

“That's the story of innovation: replacing one set of problems with a new set of (hopefully smaller) problems,” Smith says.

We continue to see it in aviation, and it will become more pronounced in autonomous driving. Along the way, we may find ourselves surprised by what we learn. Just as the design of Tesla’s Autopilot system was deemed a contributing factor in the 2016 fatal accident—even though the driver was overwhelmingly responsible—the NTSB might similarly find that Monday’s accident had a variety of its own contributing factors. Maybe, for instance, the driver simply misinterpreted how the system worked. Maybe that’s his fault; then again, maybe it’s Tesla’s, for not communicating the system’s capabilities and limits clearly enough.

Or maybe, as more incidents stack up, the powers that be will start to feel that there’s something about the current practical limits of semi-autonomous capabilities—or our ability to use them—that suggests we’re maybe getting a teensy bit ahead of ourselves, and that those who aim to skip Level 3 have the right idea. If distraction can turn to disaster among some of the most highly trained pilots in the world, it can certainly happen to the rest of us, too.
