Self-Driving Safety Steps Into the Unknown

Former NTSB Chairman Christopher Hart explains why autonomous vehicle safety issues go well beyond the precedents set by aviation.

By Christopher Hart

Editor's note: The author is the founder of Hart Solutions LLC and was previously Acting Chairman and Chairman of the National Transportation Safety Board (2014-2017).

On August 30, 2017, I published an article in TheDrive about lessons that the autonomous vehicle (AV) industry could learn from decades of automation experience in aviation. In that article, I noted that aviation has been continuously improving its automation, and that the AV industry could benefit by learning from aviation’s automation mistakes rather than repeating them.

With nearly 40,000 fatalities a year on our highways in the U.S. alone, and more than a million a year worldwide, motor vehicle fatalities are a major public health issue. Motor vehicle safety experts estimate that more than 90% of crashes involve human error, and the theory is that if the human driver were replaced with automation, this tragic loss of lives would largely be eliminated. While this theory is overly simplistic, automation can potentially save tens of thousands of lives a year in the U.S. alone.

The public is already skeptical about automation in cars. With so many lives being lost every year, it would be very unfortunate if the implementation of AV automation were delayed because avoidable crashes, ones that attention to decades of aviation automation lessons learned could have prevented, made the public even more skeptical.

For a variety of reasons, automation on our streets and highways will be far more complex and challenging than automation in aviation, and this follow-up article addresses several automation issues that the AV industry will face that have not been encountered in aviation. Some of these issues relate primarily to automation that assists drivers, but most relate to automation generally, with or without drivers.

Automation That Assists Drivers

Because streets and highways are much more complex and variable than airways, automation challenges will be much more difficult on the ground than in the air. AV makers will encounter not only several issues that aviation has already encountered, as discussed in my previous article, but they will also have to address several issues that have not been encountered in aviation. Two of those previously unencountered issues relate largely to automation that is designed to assist drivers, rather than replace them, namely, (a) AV automation will increasingly include artificial intelligence (AI) that will “learn from experience;” and (b) drivers will generally receive little or no initial or recurrent training about their automation.

Artificial Intelligence. Most aviation automation does not include AI. Instead, it is designed to follow specific algorithms the same way every time, i.e., whenever “X” is encountered, the appropriate response is “Y.” In that sense, most aviation automation is not designed to, and does not, learn from experience. Once the design of the software has been finalized, it and the automation it controls remain the same until the software designers change it. If the software is changed more than minimally, pilots are retrained on the revised software to ensure that they know and understand the revised behavior of the automation.

Much of the automation in AVs, on the other hand, will include AI that is designed to learn from experience. When automation is assisting the driver, AI learning will create the challenge of keeping drivers apprised of the capabilities and limitations of their automation – knowing what it can and cannot do – as it learns. This will increase the likelihood that drivers will not fully understand the behavior of their automation. 
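To make the contrast concrete, the sketch below (illustrative only, in hypothetical Python; it is not any manufacturer’s actual code, and every name in it is an assumption) shows a fixed-rule controller of the aviation type next to a learning controller whose response to the same situation can change over time:

```python
# Illustrative sketch only: contrasts a fixed-rule controller, whose response
# to a given input never changes, with a learning controller, whose response
# can drift as it accumulates experience. All names are hypothetical.

class FixedRuleController:
    """Aviation-style automation: the same 'X' always produces the same 'Y'."""
    RULES = {"obstacle_ahead": "brake", "lane_drift": "steer_back"}

    def respond(self, event: str) -> str:
        return self.RULES.get(event, "alert_pilot")


class LearningController:
    """AI-style automation: the response to 'X' can change over time."""
    def __init__(self):
        self.experience = {}  # event -> best response observed so far

    def respond(self, event: str) -> str:
        # The chosen response depends on what the system has learned so far,
        # so the same situation may be handled differently next month.
        stats = self.experience.get(event)
        return stats["best_response"] if stats else "default_maneuver"

    def learn(self, event: str, response: str, outcome_score: float):
        best = self.experience.get(event, {"best_response": response,
                                           "score": outcome_score})
        if outcome_score >= best["score"]:
            best = {"best_response": response, "score": outcome_score}
        self.experience[event] = best
```

The point of the sketch is that a driver who learned the behavior of the second controller last month may be surprised by its behavior this month.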

There are at least three potential undesirable outcomes when the driver does not remain abreast of the automation. The first is that the automation may surprise the driver by doing something that the driver did not anticipate, which can lead to undesirable consequences. Another is that, out of frustration, the driver will turn the automation off, if possible, losing the protection that the automation was designed to provide. Finally, the driver could be distracted by the automation (What is that chime for, and what do I need to do about it? Why did the brakes or steering activate, and if I don’t like it, how do I override it?), and the distraction itself could generate undesirable consequences.

Drivers Are Unlikely to be Trained. Airline pilots are extensively trained and periodically re-trained, largely in simulators that are so realistic that pilots can be licensed to fly the airplane based solely upon simulator training. Moreover, when their systems are modified, as they are from time to time, pilots are generally re-trained to the modifications.

On the other hand, the designers of automation for cars must assume the worst case, i.e., that most drivers will probably not receive any training regarding the automation; and that most drivers will probably not look at their Owner’s Manual or any other source of information regarding what the automation is designed for and what it can and cannot do.

Lack of driver training about the automation may lead to the same undesirable outcomes as when the driver does not remain abreast of how the automation has learned – driver surprise, turning off the automation, and distraction.

General Automation Issues, With or Without Drivers

Because automation on the ground will be much more complex and challenging than automation in the air, AV makers will encounter several automation issues that were not encountered in aviation. The following issues relate to all AV automation, both with and without drivers.

Street Testing is Essential. Modern aviation simulators are capable of very thorough testing of automation, sufficient to obviate the need for real-world testing, because (a) the flying environment is far more predictable and far less variable than the streets, and (b) simulators replicate airplanes and the flying environment so realistically that pilots can get certified in an airplane based solely upon “flying” the simulator, without having to fly the real airplane. Lab testing of AV automation, e.g., in simulators, does not obviate the need for real-world testing because (a) AV simulators are not sufficiently realistic; (b) the streets are so variable and unpredictable that they are difficult to replicate adequately in simulators; and (c) developing a testing syllabus that is appropriately responsive to the variability of experience and capability of drivers is much more challenging for AVs than for airplanes.

In addition to simulator testing being insufficient, testing on test tracks will also be insufficient because the very large variety of street challenges cannot, as a practical matter, be satisfactorily replicated on a test track.

As a result, autonomous vehicles will not be safe and reliable on the streets until they are tested and proven on the streets. The problem is that street testing of vehicles that are designed to be driverless will lead to crashes sooner or later for two reasons. First, street testing involves a combination of two concepts that do not work well together; and second, human monitors, however vigilant, cannot possibly look at the road 100 percent of the time.

Regarding the two intersecting concepts, the first is that responsible AV manufacturers will not test their vehicles on the streets until they have been proven in the lab and on the test track to be very reliable. The second, which does not work well with the first, is that longstanding automation history, in aviation and elsewhere, has demonstrated that humans are not good monitors of reliable systems. Experience has shown that humans are likely to become complacent and inattentive over time when operating very reliable systems, because infallibility over time often generates expectations of continued infallibility.

Regarding 100 percent vigilance, it is not possible for a monitor, however vigilant, never to look away from the road. Looking away from the road may not be for an undesirable reason, such as texting, but could be for a desirable “good driving” reason such as looking back and in the rear-view mirror before changing lanes, reading a street sign, or checking to see why an automation warning signal occurred.
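A back-of-the-envelope estimate, using assumed illustrative numbers rather than any real fleet’s data, shows why the coincidence of a hazard and a glance away is statistically expected once testing reaches fleet scale:

```python
# Back-of-the-envelope estimate with assumed, illustrative numbers of how
# often a road hazard appears during one of the monitor's inevitable glances
# away from the road.

look_away_fraction = 0.05            # assume eyes off the road 5% of the time,
                                     # even for "good driving" reasons
hazard_rate_per_hour = 0.01          # assume one sudden hazard per 100 test hours
fleet_test_hours_per_year = 100_000  # assume a modest test fleet's annual hours

# Expected hazards that occur precisely while the monitor is looking away:
coincidences = fleet_test_hours_per_year * hazard_rate_per_hour * look_away_fraction
print(f"Expected hazard/look-away coincidences per year: {coincidences:.0f}")
# -> 50 per year under these assumptions. Even with numbers ten times more
#    favorable, the coincidence is a matter of "when," not "if."
```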

Given the large quantity of street testing, it was very foreseeable that two circumstances – having a person or thing in front of a vehicle at the very instant that a human monitor happened not to be looking at the road – would combine sooner or later to produce a bad outcome. A momentary failure to look at the road could result from expectations of reliability, from the impossibility of looking at the road 100 percent of the time, or both. This problem has already manifested itself in the crash that occurred in March 2018, in Tempe, AZ, when an Uber test vehicle that was being street tested with a human monitor/driver killed a woman who was walking her bicycle across the street. That crash resulted in the first fatality from a street test of a vehicle designed to be driverless.

[Disclosure: When the Tempe crash occurred, Uber voluntarily terminated all of its autonomous vehicle street testing. The author was engaged as a consultant to help Uber resume street testing.]

Whatever the reason, it was just a matter of time before a monitor would be looking somewhere other than at the road when a person or thing happened to be in front of the vehicle. Hence, the Tempe fatality reflects not just an Uber problem, but a problem that is inherent to the development of driverless cars; and there can be no assurance that further street testing crashes, possibly including fatal crashes, will not occur.

The in-cab camera in the Uber revealed that the monitor/driver was looking down, rather than at the road, when the crash occurred, and the preliminary NTSB report states that there was no slowing before the crash. The NTSB investigation may shed light upon whether the crash would have occurred, albeit possibly at a lower speed, even if the monitor/driver had been looking up at the time, and even if the car’s autonomous emergency braking system had not been disabled to allow testing of the system under development.

Because humans are not good monitors of reliable systems, and because 100 percent road vigilance is not possible, the Tempe crash revealed the need for (a) better training of driver/monitors, to enhance their vigilance given the difficulty humans have monitoring reliable systems, and (b) systems that warn monitors more robustly and promptly of the need to look forward.
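As a sketch of what such a warning system might look like (a hypothetical design with illustrative thresholds, not Uber’s or any manufacturer’s actual system), consider a simple watchdog fed by an in-cab gaze sensor that escalates its alerts the longer the monitor looks away:

```python
from typing import Optional, Tuple

# Hypothetical driver-attention watchdog: warnings escalate the longer the
# monitor's gaze stays off the road. All thresholds are illustrative.

CHIME_AFTER_S = 2.0   # soft chime after 2 seconds eyes-off-road
ALARM_AFTER_S = 4.0   # loud alarm after 4 seconds
SLOW_AFTER_S = 6.0    # begin slowing the vehicle after 6 seconds

def watchdog_step(eyes_on_road: bool,
                  off_road_since: Optional[float],
                  now: float) -> Tuple[str, Optional[float]]:
    """Return (action, updated eyes-off-road start time) for one sensor frame."""
    if eyes_on_road:
        return "none", None          # gaze is forward; reset the timer
    if off_road_since is None:
        off_road_since = now         # gaze just left the road
    elapsed = now - off_road_since
    if elapsed >= SLOW_AFTER_S:
        return "slow_vehicle", off_road_since
    if elapsed >= ALARM_AFTER_S:
        return "loud_alarm", off_road_since
    if elapsed >= CHIME_AFTER_S:
        return "soft_chime", off_road_since
    return "none", off_road_since
```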

The fact that this conundrum is inherent to the industry and may generate additional street testing crashes creates a public education challenge. Every AV crash will generate significant media attention, especially if a fatality is involved, notwithstanding the lack of media attention to the deaths of more than 100 people every day on our streets and highways. Given that the public is already skeptical of the concept of autonomous vehicles, it is important to educate the public that (a) there is a very real possibility of fatalities during the development process, but (b) the safety improvements from automation, if done properly, are not just marginal but can be significant. Automation will not eliminate all of the tens of thousands of deaths that occur every year, but automation done right can reduce the number significantly.

Need for “Graceful Exits.” My previous article about aviation lessons learned notes the need for graceful exits for systems that assist drivers if (a) the driver is not adequately attentive, (b) the automation fails, or (c) the automation encounters unanticipated circumstances or is uncertain. When the vehicle is driverless, the last two of those three possibilities remain. Graceful exits are very important when there are drivers, but they are absolutely essential when there are none, because there is no opportunity for a driver to “take over” when, for whatever reason, the automation is not performing as designed.

Aviation automation has not yet developed graceful exits for automation failure or encountering unanticipated circumstances, and airliners will have pilots until those graceful exits are developed. An example is the landing of an airliner on the Hudson River in 2009 shortly after takeoff: Could automation have accomplished the successful outcome that the pilots accomplished, namely, landing in the river without any fatalities after losing both engines due to ingesting birds? 

After the crash in Sioux City, IA, in 1989 that resulted from an uncontained engine failure in which the shrapnel from the engine failure penetrated and disabled all three of the airplane’s hydraulic systems, the pilots were – amazingly – able to reach an airport, but their landing was not successful. After that crash, automation was developed that could have landed the airplane successfully, even in its highly compromised condition. That demonstrated that, with the benefit of hindsight, automation could be designed that could have salvaged that particular situation. 

Similarly, automation could probably be developed that would have landed the airplane successfully in the Hudson River, given the damage that occurred. As with Sioux City, however, that design would occur with the benefit of hindsight. Without the hindsight, however, the question is whether the automation could have landed the airplane successfully in a different situation, e.g., if the damage to one or both engines had been uncontained. Uncontained engine failure could have substantially changed the airplane’s behavior in several ways, including increased drag, reduced lift, asymmetric effects that could cause rolling or yawing, and loss of or reduced control due to compromised hydraulic systems and/or control surfaces.

With current technology, the development of automation with “graceful exits” that could satisfactorily handle failure of the automation or encountering unanticipated circumstances, without the benefit of knowledge of the nature of the problem, is highly unlikely, both in aviation and in autonomous vehicles. 
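In AV development, a graceful exit is often framed as reaching a minimal-risk condition, such as pulling over and stopping. The following sketch of a fallback state machine is purely illustrative; the states, names, and thresholds are assumptions, and real systems involve far more states and checks:

```python
from enum import Enum, auto

# Hypothetical "graceful exit" fallback for a driverless vehicle: when the
# automation fails or reports low confidence, transition toward a minimal-risk
# condition (pull over and stop). Illustrative only.

class Mode(Enum):
    NORMAL = auto()
    DEGRADED = auto()      # reduced speed, hazards on, seeking a stopping spot
    MINIMAL_RISK = auto()  # stopped in the safest reachable location

CONFIDENCE_FLOOR = 0.8     # assumed threshold below which the exit begins

def next_mode(mode: Mode, system_healthy: bool, confidence: float,
              safe_stop_available: bool) -> Mode:
    if mode is Mode.NORMAL:
        if not system_healthy or confidence < CONFIDENCE_FLOOR:
            return Mode.DEGRADED
        return Mode.NORMAL
    if mode is Mode.DEGRADED:
        # Recover only if health and confidence are fully restored.
        if system_healthy and confidence >= CONFIDENCE_FLOOR:
            return Mode.NORMAL
        if safe_stop_available:
            return Mode.MINIMAL_RISK
        return Mode.DEGRADED
    return Mode.MINIMAL_RISK  # once stopped, stay stopped until serviced
```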

Mixing driverless vehicles with humans. Even when automation enables the complete removal of the driver from the vehicle, it will still encounter humans. Not only will there be pedestrians and bicyclists, but for years to come there will probably also be other cars with drivers. Consequently, the automation will need to address human factors, i.e., the unpredictability and variability of the other humans, even when the driver has been removed from the car.

Moreover, removing drivers from all vehicles will not eliminate the possibility of human error on our streets and highways. In addition to the pedestrians and bicyclists noted above, there is still potential for human error in the design, manufacture, and maintenance of the vehicles, as well as of the infrastructure. Note, for example, the 2008 crash of an automated – i.e., no human operator – airport people mover in Miami. The people mover crashed into a terminus wall, and although there was no operator error – because there was no operator – the crash occurred as a result of maintenance human error.

Software updates. Periodic updates of the autonomous software will probably be typical, and cyber protection updates will be necessary to keep the protections abreast of ever-advancing methods of attack. This creates at least two challenges.

The first challenge is ensuring that adding new software, including cyber protection software, to the previous software does not create unintended consequences. As the software becomes more complicated, the possibility of unintended consequences increases. Software testing protocols are helpful, but they are not always robust enough to ensure that unintended consequences are eliminated, either in the new software itself or in the total package that includes it.
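One common mitigation, sketched below with entirely hypothetical names, is a regression gate: replay a library of recorded scenarios through both the old and the new software and flag every behavioral difference for human review before release:

```python
# Hypothetical regression-gate sketch: before shipping an update, replay
# recorded scenarios and flag any decision that changed relative to the
# approved baseline. Names and structure are illustrative.

def replay(software, scenario):
    """Run one recorded scenario through a software build; return its decision."""
    return software.decide(scenario)

def regression_gate(old_software, new_software, scenario_library):
    """Return the scenarios where the update changed behavior, for human review."""
    changed = []
    for scenario in scenario_library:
        before = replay(old_software, scenario)
        after = replay(new_software, scenario)
        if before != after:
            changed.append((scenario, before, after))
    # An empty list is necessary but not sufficient: no recorded library
    # can cover everything the street will present.
    return changed
```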

Until automation completely eliminates drivers, the second challenge will be keeping drivers abreast of the changes. Assuming the updates will change the behavior of the automation, query how user-friendly the updates will be and how easily drivers will adjust.

In aviation, before placing new software into service, software designers generally test it with the end users, using pilots in simulators, to help identify and address the human factors issues. This option is not as helpful to the AV industry because, as noted above, their simulators do not simulate real conditions as completely as aviation simulators, largely due to the variability and unpredictability on the streets; and because the population of drivers is much more variable in terms of experience and capability than the population of highly-trained and re-trained airline pilots.

Cyber security concerns. Autonomous vehicles will be much more vulnerable to cyber attack than airliners for several reasons. The industry must protect not only the initial software from attack, but also any updated software, the total package of initial plus updated software, the process by which the updates are broadcast, and the process by which information is transmitted from AVs, e.g., to the manufacturer, and eventually probably also to the infrastructure and other vehicles. In addition, all of these protections must be continually updated as methods of attack continually advance.
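One standard protection for the update broadcast channel is to sign each update and have the vehicle verify the signature before installing it. The sketch below uses Ed25519 signatures from the third-party Python `cryptography` package; the overall flow is illustrative, and production schemes also need key management, rollback protection, and secure boot:

```python
# Sketch of signed over-the-air updates using Ed25519 signatures via the
# third-party `cryptography` package. Illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# Manufacturer side: sign the update image with a closely held private key.
private_key = Ed25519PrivateKey.generate()
update_image = b"...new autonomy software build..."
signature = private_key.sign(update_image)

# Vehicle side: the matching public key is installed at the factory; the
# vehicle refuses any update whose signature does not verify.
public_key: Ed25519PublicKey = private_key.public_key()
try:
    public_key.verify(signature, update_image)
    print("Signature valid: update may be installed.")
except InvalidSignature:
    print("Signature invalid: update rejected.")
```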

Competition re safety. As reflected by the fact that airline ads do not claim that they are the safest, airlines do not compete on safety. The lack of competition enables the airlines to share information freely about what went right and what went wrong regarding safety, as well as what was done to help things go right and to address what went wrong. 

Automakers, on the other hand, compete vigorously regarding safety, and their ads frequently tout their safety superiority. In this respect, the airline industry safety success process is probably not transferable to the auto industry because, unlike in aviation, some types of safety competition are beneficial to the auto industry. The interest of consumers in buying cars with the most safety stars has been very beneficial to accelerating the penetration of safety advances into the fleet. 

Because the auto industry competes on safety, sharing of competitive information not only potentially undermines the competitive advantage regarding safety, but potentially also generates antitrust issues. The challenge, therefore, is how to exploit the benefits of competing on safety while still enabling the sharing of safety information that could help enhance the safety of the industry. 

One possible solution might be to establish uniform minimum safety standards without competition, but then compete, with little or no sharing of developmental details, regarding the most effective and efficient way to meet those standards. 

Ideally, the AV industry will carry forward the auto industry’s longstanding competition on safety by competing to be the safest instead of competing to be first.

Ethical issues. Unlike in aviation, the effort to automate cars will generate ethical issues. For example, if a car encounters another vehicle coming in the opposite direction in the same lane, but there are pedestrians on the sidewalk, query whether the automation will go onto the sidewalk, saving the occupants of the car while striking the pedestrians, or crash into the oncoming traffic in order to save the pedestrians. Issues of this type will probably be rare, but when they arise the question is how the appropriate outcome will be determined.

Lack of Federal Leadership. Airline industry flight operations in the U.S. are regulated almost exclusively by the federal government. Query whether the airline industry would enjoy such an exemplary safety record if it did not have uniform minimum safety standards.

Federal leadership regarding AVs is very important for several reasons. First, in an industry that needs to be able to plan years ahead, uniform minimum safety standards are important to help minimize uncertainty about what those standards might be. Second, developing public confidence regarding the safety of street testing will probably be much more difficult if there is a patchwork quilt of testing standards across the country. Third, given that the worldwide car industry is exploring AVs, the participant best positioned to ensure harmony across international borders is the federal government, not the states and not the manufacturers. Fourth, the federal government is also the best participant in the process to address the ethical issues discussed above, both as a general matter and to help ensure international harmony in particular.

In conclusion, automation offers significant potential benefits, including saving more than a million lives every year around the world, but there are many challenges. If the AV industry takes advantage of lessons learned from decades of aviation automation experience, it can avoid making a skeptical public even more skeptical and less willing to accept automation. That will just be the beginning, however, because automation on the ground will be much more challenging than automation in the air, and the AV industry will therefore face many automation challenges that have not been encountered in aviation.
