What Can Self-Driving Cars Learn From Aviation?

The former Chairman of the NTSB chimes in on the perils and pitfalls of safety in automation.

By Christopher Hart

Recently, I interviewed Christopher Hart, member and former Chairman of the National Transportation Safety Board, for the Autonocast podcast (UPDATE: listen to the episode here), and his insights into the consequences of automation are a beacon of clarity in a growing sector filled with misinformation. In this article, Hart looks at the oft-ignored relationship between automation in aviation and in self-driving cars, and he makes many points—especially about data sharing—that the self-driving industry continues to ignore. —Alex Roy

Because more than 90% of the crashes on our nation’s streets and highways are attributed to driver error, the potential for automation in motor vehicles to save lives and prevent injury and damage is amazing. However, introducing automation into—and removing the driver from—such a complex human-centric environment will be a massive challenge. Commercial aviation began incorporating automation decades ago, in a much less complex and more structured environment than our streets and highways, and that industry, which is still learning about what works well and what does not, has yet to remove the pilot. What are the lessons the auto industry can learn from the process of introducing automation into aviation?

Automation is a double-edged sword

The theory of automation is that it can eliminate human error by eliminating the human from the loop. Experience has demonstrated, however, that the reality is somewhat different.

The reality is that automation can significantly increase productivity, efficiency, reliability, throughput, and safety, but there is a downside. In the words of Prof. James Reason, University of Manchester (UK):

"In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid."

This downside can occur not only if the automation does not work as intended, but even if it does work as intended.

Automation not working as intended

There are at least three reasons why the automation might not perform as intended: 

1. Inadequate consideration of "human factors issues" in the design of the automation;
2. The automation encounters situations that were not anticipated by the designer; and
3. Failure of the automation.

In all three situations, the complexity of the automation, which has generally been increasing over the years, adds to the likelihood that the operator of the system will not completely understand it. At the same time, the reliability of the automation has also generally been increasing over the years—that's a good thing, but it increases the likelihood that if the operator encounters a problem with the automation, he or she may never have seen that problem before, even in training. Consequently, due to the increased complexity and lack of prior experience, the operator will have to analyze the problem and decide how to respond in the moment—in real time—sometimes in life-threatening situations. Experience has shown that the operator may respond inappropriately under those circumstances.

1. Inadequate consideration of "human factors issues"

There are several aviation accidents in which the causes included the failure of the automation designers to consider human factors issues adequately.  One of the more recent examples is a landing accident at San Francisco International Airport in 2013. That accident occurred on a clear day, with negligible surface winds, on a runway that was more than adequate (it was more than 11,000 feet long). 


However, the electronic glide slope—the radio signal that can be used to guide the airplane to the end of the runway at a descent angle of about 3 degrees—was down for maintenance. With the electronic guidance, the autopilot can keep the airplane on the 3 degree descent path, or the radio signal can be displayed on the instrument panel to enable the pilot to maintain the appropriate descent path manually. Without it, the pilot has no electronic guidance to maintain the proper descent angle. In this case, there were guidance lights near the end of the runway to give the pilot a visual indication of the appropriate descent angle. 
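To give a rough sense of scale (a back-of-the-envelope illustration; the 140-knot approach groundspeed is an assumed, typical figure, not one taken from the investigation), a 3-degree path flown at 140 knots (about 14,200 feet per minute over the ground) requires a descent rate of roughly

\[ \dot{h} \approx v \tan 3^\circ \approx 14{,}200\ \text{ft/min} \times 0.052 \approx 740\ \text{ft/min}, \]

so a pilot flying the approach by hand must hold both the target airspeed and a steady descent of roughly 700 to 750 feet per minute without electronic guidance.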

Unfortunately, that pilot usually used the autopilot for approaches and rarely did them manually, so his manual approach and landing skills had degraded. During an approach, pilots usually also use an automatic throttle to keep the airplane at the selected speed. This pilot was new to the airplane and did not adequately understand how its automatic throttle operated, so when he thought it was maintaining the selected speed, it was not. 

Because of a combination of the lack of electronic glide slope guidance, the pilot’s erroneous belief that the automatic throttle was maintaining the selected speed, his general unfamiliarity with the airplane, his failure to notice from the visual indicators that he was not at the appropriate descent angle, his failure to respond to these problems in time, and various other contributing factors, the airplane became too low and too slow and crashed into the seawall, short of the runway.

Due to the complexity of the system, pilots could be wrong about what it was doing without being aware of their mistake. Problems of inadequate “mode awareness,” as the phenomenon is called, are not uncommon in complex automated systems, and they often reflect inadequate consideration of human factors in the design of the system.

2. Automation encountering unforeseen circumstances

There are also several aviation accidents in which the automation encountered circumstances that were not anticipated by the designers of the automation. An example is a crash into the Atlantic Ocean during a flight from Rio de Janeiro to Paris in 2009.

After the airliner reached its cruising altitude of 37,000 feet at night over the Atlantic and began approaching distant thunderstorms, the captain left the cockpit for a scheduled rest break, leaving control with two less experienced pilots. The airplane had pitot tubes that project from the fuselage to provide information about how fast it was going. Airspeed information is so important that there were three pitot tubes—for redundancy—and the pitot tubes were heated to ensure that they were not disabled by ice. At the ambient temperature of minus 50-60 degrees, and with abundant super-cooled water from the nearby thunderstorms, the pitot tube heaters were overwhelmed and became clogged with ice, so the airplane lost its airspeed information.

Debris from Air France flight 447. (Getty Images)

The loss of airspeed information disabled several systems, including the automatic pilot that was flying the airplane, the automatic throttle that was maintaining the selected speed, and the automatic protections that prevent the airplane from entering into an aerodynamic stall, in which the wings no longer produce lift. As a result, the pilots suddenly, without any prior warning, had to fly the airplane manually and without protections that were typically in place. The pilots responded inappropriately to the loss of these systems, and the result was a crash that was fatal to all 228 people on board.

As with most commercial aviation accidents, several factors played a role. First, the redundancy of having three pitot tubes was not effective, because all of them were taken out by the same cause. In addition, the pilots had not experienced this type of failure before, even in training—where the problem can be recreated in very realistic simulators—and they were unable to determine what went wrong. Finally, use of the automatic pilot is mandatory at cruise altitudes, so the pilots had not flown manually at that altitude before, even in the training simulator. This is important because airplanes behave very differently at cruise than they do at low altitudes, such as during takeoff and landing. Other operational and design issues compounded the problem and led to a tragic outcome.

When automation encounters unanticipated circumstances, a successful recovery usually depends entirely upon the pilots. In this situation, the pilots had not been trained to respond to a loss of airspeed information at cruising altitude, or to fly the airplane manually at cruise, and the results were horrific.

Even if the automation operates as designed by reducing human error in the cockpit, there are additional opportunities for human error elsewhere in the system. Not only are there other airplanes in the system flown by human pilots, but there are air traffic controllers, and they provide opportunities for error that may result in unforeseen circumstances. Humans also design, manufacture, and maintain the aircraft and the air traffic control equipment. All of those steps present opportunities to introduce human error into the process. 

Moreover, human error in the other parts of the process may be more systemic in its effect—possibly involving several vehicles—and more difficult to identify and correct. An example of this is the collision of an automated (driverless) people mover into a stopped people mover at Miami International Airport in 2008. Although the possibility of operator error had been eliminated because there was no operator on the people mover, the collision resulted from a simple maintenance error—which is all too human.

3. Failure of the Automation

Safety experts learned long ago to stop saying, “This ship can’t sink.” And as unwilling as most automation designers are to admit it, their automation will fail sooner or later. Because a failure will occur sooner or later, the design challenge is to ensure that the system fails in a way that is safe; if it cannot be guaranteed to fail safely, the challenge is to ensure that the operator is made aware of the failure in time to take over and salvage the situation.

The failure of automation without the operator’s knowledge resulted in an accident in 2009 in Washington, D.C., on the city’s Metrorail system that tragically killed the train operator and 8 passengers. In that accident, a train became electronically invisible for a brief period of time. When that happened, the symbol of the train disappeared from the display board in the dispatch center, and an alarm sounded. The same alarm, however, sounded several hundred times a day, so it was largely ignored.

Unfortunately, when the train became electronically invisible, there was no alarm in the train behind it. So the automation in the train behind it began accelerating to the maximum local speed, and the human operator was unaware there was anything on the tracks ahead. By the time the operator saw the stopped train and applied the emergency brake, it was too late.

Examples 1 and 2—of automation that did not adequately consider human factors and automation that encountered unanticipated circumstances—both came from aviation. The example regarding failure of the automation, however, did not come from aviation, and aviation examples of automation failing and resulting in accidents are thankfully rare. This demonstrates a complex reality regarding automation: namely, that it is much more challenging to design automation that adequately considers human factors, and to design automation that can respond appropriately to unforeseen circumstances, than it is to reduce the likelihood of failure of the automation and to increase the likelihood that the operator will be aware of the failure in a timely manner. 

Automation working as intended

The examples noted above all relate to accidents that occurred when automation did not perform as intended. Bad things can also happen when automation does work as intended: it can result in loss of skills, because operators are no longer performing the operation manually, and it can result in operator complacency, because the system is so good it needs little attention.

Does automation, when it works as intended, also undermine professionalism? In mass transit subway systems in many US cities, automation takes the train out of the station; observes speed limits on curves and elsewhere; avoids collisions with other trains; stops the train in the next station; and opens the doors. The primary function of the operator is to close the doors. 

Professionalism—doing the right thing even when nobody is watching—plays a major role in the safety of most commercial transportation systems because there will always be situations that are not covered by the regulations, and there can never be oversight of every operation. Automation designers need to keep in mind that a major ingredient of professionalism is the pride of accomplishing a challenging task, so they should ask whether automation that leaves too little challenge for the operator might undermine the operator’s professionalism.

Automation on our streets and highways

Driverless cars are coming, and the potential for improvement is phenomenal. Although the theory of removing driver error by removing the driver is somewhat simplistic—the aviation examples above show that pretty clearly—the fact that more than 90% of the crashes on our roads are due to driver error provides hope that removal of the driver can significantly reduce the loss of nearly 40,000 lives every year on our streets. Removing the driver would eliminate several human frailties, such as fatigue, distraction, and impairment, all of which are on the NTSB’s Most Wanted List of transportation safety improvements.

Driverless cars will also have major impacts on our infrastructure needs, such as increasing the amount of traffic our roads can safely carry by reducing the safe longitudinal separation between vehicles. Increasing automation may also significantly reduce the need for parking spaces and facilities. Automation will probably enable other significant improvements, infrastructure and otherwise, many of which we cannot even envision today.
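As a crude illustration of the capacity point (the headways below are assumed, illustrative figures, not projections), the throughput of a single lane is roughly the number of seconds in an hour divided by the time headway between vehicles, measured front bumper to front bumper; halving the headway from a human-typical 2 seconds to 1 second roughly doubles capacity:

\[ \text{flow} \approx \frac{3600\ \text{s/hr}}{\text{headway}}: \quad \frac{3600}{2\ \text{s}} = 1{,}800\ \text{vehicles/hr} \quad \text{versus} \quad \frac{3600}{1\ \text{s}} = 3{,}600\ \text{vehicles/hr}. \]

In practice, vehicle length and braking performance limit how far headways can safely shrink, but the direction of the effect is clear.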

The similarities between self-driving cars and aviation

Introducing automation into a complex human-centric system will be very challenging, probably much more so than appears at first glance. Automation in aviation began with a simple goal: assume that automation is good, and automate whenever it is technologically feasible. Unfortunately, the “technologically feasible” approach did not consider the human factors issues associated with the automation. 

Over time, experience demonstrated that automation resulting from technological feasibility did not necessarily produce the best result for the human-machine system. From that experience came the concept of “human-centric” automation, which considers not only the technological issues but also the human factors issues. 

The auto industry currently appears to be using the “automate whenever it is technologically feasible” approach, with little or no systematic concern about the associated human factors issues. As more automation comes online, problems will arise from the interactions between the various types of automation, in much the same way that problems can arise from the interaction of different medications.

A third similarity relates to the fact that automation is generally able to operate best in a completely automated environment, with little or no human involvement, such as with an airport people mover. Aviation experience has demonstrated that the challenges are much more daunting when automation is operating in an environment that also has extensive human involvement. 

Situations involving automation with substantial human operator involvement have demonstrated two extremes. On one hand, the human is the most unreliable part of the system. On the other hand, if the system encounters unanticipated circumstances, a highly trained, proficient human operator can save the day by being the most adaptive part of the system.

An example of the human operator saving the day is Captain Sullenberger’s landing in the Hudson River when his airplane suddenly became a glider because both of its engines were damaged by ingesting birds. In stark contrast, a textbook example of the complexities of the human-automation interface, in which the human was the most vulnerable part of the system, is the flight from Rio de Janeiro to Paris in 2009, discussed above.

There is a good likelihood that humans will still be driving for decades to come, for several reasons: some people simply enjoy driving, for example, and many will not trust the automation. The challenges of being mostly, but not completely, automated will probably be much more daunting on our highways than they have been in aviation.

Key differences between self-driving cars and aviation

The aviation industry has developed a very powerful tool for enhancing designers’ understanding of human factors issues—very realistic simulators. There are, however, several problems with transferring the benefits of simulators to cars.

First, the simulated situations for public streets are much more variable and less structured than the situations simulated in aviation. Second, the range of knowledge, skill, and experience of the average driver is far broader than for professional pilots. Third, pilots usually receive classroom and hands-on training in the automation before they use it—even in the simulator. Such training is unlikely for the average driver. Fourth, the pace of change of the automation is much more rapid with cars than for aviation because some automakers send periodic software revisions to their car owners. These differences alone—there are probably more—greatly decrease the extent to which designers can use simulators to help identify and address human factors issues in cars before their designs are in actual use.

A major process difference between aviation and cars is the willingness of aviation industry participants to collaborate and learn from each other’s safety experiences. In aviation, anybody’s accident is everybody’s accident, so the airlines put extensive effort not only into preventing their own accidents, but also into preventing any airline from having an accident. Note, for example, that airline ads do not include any claims about being the safest. Because the airlines do not compete on safety, they freely share information about safety problems encountered and safety lessons learned, which enables everyone to learn about each other’s safety issues and solutions. This extensive industry-wide collaboration contributes significantly to commercial aviation’s ability to carry billions of passengers a year while going for years at a time without any passenger fatalities.

The auto industry, on the other hand, competes vigorously on safety. It has begun to recognize the importance of certain types of collaboration, as shown by its voluntary agreement regarding the implementation of autonomous emergency braking. It is not likely, however, that this collaboration will include the sharing of safety lessons learned, as the airlines do. Unlike in commercial aviation, competition on safety in the auto industry has the benefit of enabling market forces to help generate public interest in safety, which in turn helps accelerate the penetration of safety improvements into the fleet. The 80% reduction in the fatality rate accomplished by the US airline industry since the mid-1990s is a powerful example of how much can be accomplished relatively quickly through voluntary collaboration, but that success may not be transferable to the auto industry.

Another difference relates to the availability of information from on-board event recorders.  NTSB investigations are significantly enhanced when event recorders reveal what happened. Airliners have had “black boxes” for decades, to record both the aircraft parameters and the sounds in the cockpit. Other transportation modes are increasingly introducing event recorders as well as audio and video recorders.

Assuming that difficulties will be encountered as automation is being introduced, the more the industry knows from the event recorders about what went right and what went wrong, the more the industry will be able to fashion remedies that effectively address the problems. However, event recorders in modes of transportation other than aviation have introduced significant issues regarding both privacy and the appropriate uses of the data, and these issues are yet to be addressed satisfactorily in the auto industry.

Last, but not least, cybersecurity issues will be much more challenging on our highways than they are in aviation because, among other reasons, the variability of the software on our roads will be much greater, and the software in cars will change much more rapidly, as auto manufacturers “push out” software upgrades far more often than aviation does.

Conclusion

Automation presents significant opportunities to reduce the carnage on our highways, but the challenges of introducing automation onto our streets and highways will be daunting, and much can be learned from commercial aviation’s automation experience to avoid repeating the mistakes made there.

Christopher A. Hart has been a member of the National Transportation Safety Board since 2009. He served as Vice Chairman from 2009 to 2014, Acting Chairman from 2014 to 2015, and Chairman from 2015 to 2017.

The views and opinions expressed in this article are those of Hart, and not those of the NTSB.
