Can Sully Transform the World of Self-Driving Cars?

The Drive interviews Captain Chesley “Sully” Sullenberger about autonomous cars, driver distraction and a certain Hudson River water landing.

By Alex Roy

Lost in the putrid cloud of self-driving car clickbait, the Department of Transportation’s Advisory Committee on Automation in Transportation held its first meeting on January 16th, 2017. One look at its members is all it takes to know whose lobbying dollars hold sway in Washington. The largest constituency? A bloc including Apple, Amazon, Lyft, Uber, Waymo and Zoox, all of whom profit from you losing your steering wheel as soon as possible. They may cite safety, but there is only one objective voice on the panel, a man with true life-and-death experience at the intersection of human skill and automation:

Captain Chesley “Sully” Sullenberger.

In a world where political hacks and “experts” are increasingly replacing those with real-world experience, Sully’s inclusion on the panel is a revelation. Best known for the Miracle on the Hudson, Sully has devoted his entire career to safety. Look past the mythology, and his is the story of the opportunity, danger and cost inherent in sacrificing skilled humans on the altar of automation. Sully has written and spoken extensively on the criticality of training and compensation for airline pilots, and his insights have clear applications to the future of the trucking industry.

In a recent interview, Sully made three simple points: 1) we need real standards for self-driving cars, 2) the industry needs to reboot its approach to semi-autonomous cars, and 3) driver education “is a national disgrace.”

Sully also ends his interview with a singularly authoritative message about human driving. TL;DR? If you love driving, read this to the end.

One more critical point. For those unfamiliar with the term, “flight envelope protections” automatically prevent pilots from exceeding a plane’s operational limits, akin to a car limiting how far you can turn the steering wheel or push the gas pedal. Such systems are standard on all Airbus aircraft, but not on Boeings. Debate has raged for decades over whether Airbus’s higher level of automation is actually safer than Boeing’s more human-centric approach. Although Sully’s miracle landing was in an Airbus, he’s experienced in both. Neither is capable of going gate-to-gate without human operators. Their differences highlight the lack of consensus not only in aviation, but on the ground, where Tesla differs from traditional automakers over the safe implementation of semi-autonomous features.
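
To make the envelope-protection idea concrete, here’s a minimal sketch of a command clamp. The limit values are placeholders chosen for illustration, not Airbus’s actual parameters, and real flight control laws are vastly more involved.

```python
# Minimal sketch of "envelope protection": operator commands pass through a
# clamp that keeps them inside fixed operational limits. All limit values
# here are illustrative placeholders, not real Airbus parameters.

from dataclasses import dataclass

@dataclass
class Envelope:
    max_pitch_deg: float = 30.0   # placeholder nose-up pitch limit
    min_pitch_deg: float = -15.0  # placeholder nose-down pitch limit
    max_bank_deg: float = 67.0    # placeholder bank-angle limit

def protect(pitch_cmd: float, bank_cmd: float, env: Envelope):
    """Return the commands the control system will actually honor."""
    pitch = max(env.min_pitch_deg, min(pitch_cmd, env.max_pitch_deg))
    bank = max(-env.max_bank_deg, min(bank_cmd, env.max_bank_deg))
    return pitch, bank

# The pilot can pull as hard as they like; the system caps the result,
# much like a car refusing to let you over-rotate the wheel.
print(protect(pitch_cmd=45.0, bank_cmd=-80.0, env=Envelope()))  # -> (30.0, -67.0)
```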

Here’s our interview, edited for clarity:

The Drive: How is it that you came to join the DOT’s panel on automation?

Sully: I’ve spent my entire professional life becoming an expert at, and thinking deeply about, how one uses technology. We need to assign the best possible roles to the human component and the technological component, taking into account the weaknesses and strengths of each. Making the designs we implement and use complementary is one of the most critically important decisions we must make. Even in autonomous vehicles, humans are very much involved in the design, the implementation and the maintenance of these devices, and we must assign the proper roles to each part of the technology.

Two things jump out at me in any attempt to make important recommendations going forward. The first is that we have to account for what we cannot know yet: we have to allow for the unknown unknowns. The second critical issue is that we have to decide whether there will be the possibility of human intervention in case the technology isn’t doing what we want, or what’s best for that situation.

We have to acknowledge, based upon our experience in other modes of transportation—and I’m talking specifically about commercial aviation—that if you require the technology to be used almost all the time, you make it much less likely that humans will be able to effectively and quickly intervene.

In that case the technology has to be so good, so resilient, so adaptable and so reliable that human intervention is never necessary. That’s a very, very high bar. The more you take humans out of engagement with the process, the less likely you make them able to quickly and effectively intervene. If that’s the case, you must make it so good that they never have to.

Do you think we will see 100% automation of flying, such that pilots can be removed from commercial aircraft?

I don’t know. I think in the distant future that might be a possibility, but not in the near term, and not in my lifetime. I think it’s likely that even if it were thought to be technologically possible, it wouldn’t be acceptable to those participating. I don’t think it’s the way to go.

We may have gone too far in the use of technology that removes human operators from immediate engagement with the process. This degrades their manual skills and leaves them with less confidence to be able to quickly and effectively intervene.

Some [pilots] have continued to use technology past the point when they should have abandoned it, switched to degraded modes of technology, or even taken over completely manually. They wait too long to intervene, and often it’s too late. That’s been true in several seminal accidents in recent years.

In the aviation studies I’ve seen, the concern is not only that manual flying skills are degraded—which decreases confidence and the timeliness of intervention—but, more importantly, that the lack of constant mental engagement with the operating process means analytical skills are also degraded.

By not being constantly involved and engaged in the process, we’re less able to quickly analyze what’s going wrong and determine what needs to be done. That also delays the response. It delays the ability to quickly and effectively intervene when it’s necessary, either because the technology fails, or because it’s not doing what we want or what’s necessary in that phase of the flight.

I think we really need to rethink our cockpits, and areas like nuclear power control rooms. In any critical function where safety is paramount, we need to make sure we’re assigning the proper roles to the human and technological components. If we don’t, there are unintended consequences that can actually degrade the safety of the system as a whole.

The obvious example in aviation is Air France 447, where the crew’s situational awareness was so poor. 

When we assign technology as the doer and the human component as the monitor, we’re doing it backwards. Humans are inherently poor monitors. It’s very difficult for a TSA screener to do it, or for the pilot on a 16-hour flight to monitor the technology the whole time, catch that one chance in hundreds or perhaps thousands where it’s not doing what it should, and have the skill and competence to quickly intervene.

The concept we’re talking about is knowing what roles one has in the cockpit.

It would be much better — at least at a conceptual level — for humans to have more direct engagement with the operation, and technology to provide guardrails to prevent us from making egregious errors, and to monitor our performance. That would be, in terms of our inherent abilities and limitations, a much better way to go. 
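
Sully’s preferred role assignment, human as doer and machine as monitor-plus-guardrail, can be sketched in a few lines. The sensor inputs and thresholds below are hypothetical, chosen only to show the shape of the loop.

```python
# Sketch of the role assignment Sully describes: the human drives; the
# machine monitors and intervenes only at the guardrails. The sensor
# readings and thresholds below are hypothetical.

def guardrail_step(steering, throttle, brake, lane_offset_m, time_to_collision_s):
    """Pass the human's commands through, overriding only near clear danger."""
    alerts = []
    # Monitor: warn when the car drifts out of its lane.
    if abs(lane_offset_m) > 0.8:
        alerts.append("lane departure warning")
    # Guardrail: automatic emergency braking when a collision is imminent.
    if time_to_collision_s < 1.5:
        throttle, brake = 0.0, 1.0
        alerts.append("automatic emergency braking")
    return steering, throttle, brake, alerts

# A distracted driver drifting toward a slowing car ahead:
print(guardrail_step(0.1, 0.4, 0.0, lane_offset_m=1.0, time_to_collision_s=1.2))
# -> (0.1, 0.0, 1.0, ['lane departure warning', 'automatic emergency braking'])
```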

Regarding your Hudson water landing, it would appear that because you were in an Airbus, with flight envelope protections, you were able to focus on decision-making rather than managing the aircraft itself. It seems as if a less experienced pilot in that seat would not have been able to make the decisions you made. Or—if it had been a Boeing—the pilots might have had to spend more time managing the aircraft rather than deciding where to put down. Does that make sense?

Sully: It does, but I wouldn’t go quite that far. I think had we been in a Boeing, as long as the airplane was similarly configured with wing-mounted engines, etc., we would have had a similar outcome.

I don’t think whether it was an Airbus or a Boeing really made much difference at all, because we never approached the limits of the flight envelope protections. A little-known part of the experience is that while we never exceeded the limits beyond which the Airbus flight control protections would have protected us from ourselves, at the very end of the flight—right as we were landing and trading some of our forward velocity for a reduced rate of descent at touchdown, in other words, raising the nose to achieve the maximum aerodynamic performance of the wing right before landing—the Airbus flight control software inhibited me from getting the last bit of lift out of the wing, even though I was not yet at the wing’s maximum performance.

As I was calling for more performance from the wing, continually pulling back on the sidestick to raise the nose even more and make it a softer touchdown, the phugoid mode of the Airbus flight control software prohibited me from achieving that last little bit of nose-up control, and we hit a little harder than I think we should have. There was a little more damage to the airplane, and after we landed, water came in faster than it would have if we had not struck so hard.

We did the best we could, considering we were using gravity to provide the forward motion of the airplane. We didn’t have engine thrust to make it a gentler touchdown. One flight attendant was injured when a piece of metal came up through the floor from the cargo compartment and gouged her leg. All those things happened in the last seconds of the flight where the flight control computers prevented me from achieving the maximum performance of the wing.

The protections that were there, we never really needed. The one thing that nobody in the airlines knew about—not even airline pilots; only a few Airbus engineers—is that the phugoid mode would actually prevent me from getting that last little bit of lift and making a slightly softer touchdown. So it was a mixed blessing to have that protection in place, but I think if we had been in a Boeing it would have been a similar outcome.
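
The trade-off Sully describes, a protection law that keeps the commanded angle of attack short of the wing’s true maximum, can be sketched like this. The angles and margin are invented for illustration; they are not the A320’s actual figures.

```python
# Sketch of a protection law that caps commanded angle of attack below the
# wing's true aerodynamic maximum. All numbers are invented for illustration.

ALPHA_MAX_DEG = 17.0       # hypothetical stall angle of attack
PROTECTION_MARGIN = 2.0    # hypothetical safety margin kept by the software

def alpha_command(stick_aft: float) -> float:
    """Map full-aft stick (1.0) to the software's ceiling, not the wing's."""
    ceiling = ALPHA_MAX_DEG - PROTECTION_MARGIN
    return min(max(stick_aft, 0.0), 1.0) * ceiling

# Full aft stick in the flare tops out at 15 degrees, leaving the last two
# degrees of lift-producing alpha unreachable: the "mixed blessing."
print(alpha_command(1.0))  # -> 15.0
```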

Have you driven a Tesla using its Autopilot technology?

I have never driven a Tesla, but I’m sure we’re going to learn a lot about what the technology is, and how good it is. The other point I would make about this entire endeavor—especially in terms of terrestrial autonomous transportation—is that while we can look to a certain domain such as aviation for guidance, we have to realize the environment in which commercial aviation takes place is very different from your average everyday driving experience.

In aviation, we have professional pilots (for the most part) and well-designed equipment in a really sterile environment in terms of our processes, protocols and procedures. Think about the compliance we achieve with professional pilots in commercial aviation. Even though there is certainly ambiguity in commercial aviation in dealing with real-world endeavors, situations and environmental conditions, it’s not nearly as messy as driving on the street, where drivers are much closer to each other.

We’re talking about separation of feet, not miles. We’re talking about narrow roads where there may be construction going on, or the striping is worn off, or where there are obstacles or animals or pedestrians nearby. It’s also not yet possible for technology to have vehicles automatically communicate with each other the way aircraft do, with Traffic Collision Avoidance Systems that actually talk to each other electronically. Autonomous vehicles must be very, very good at seeing in rain, fog, snow and darkness, and then be able to account for what other vehicles—autonomous or non-autonomous—may or may not do. It’s a much more difficult problem than what we’re seeing in aviation.
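
The electronic coordination Sully credits to TCAS can be illustrated with a toy handshake: both aircraft deterministically agree on complementary maneuvers instead of guessing at each other’s intentions. The tie-break rule below is invented; real TCAS coordination is far more involved.

```python
# Toy illustration of TCAS-style coordination: two aircraft deterministically
# agree on complementary avoidance maneuvers. Real TCAS logic is far more
# involved; the tie-break rule here is invented.

def coordinate(own_id: str, other_id: str):
    """Both sides run this and compute the same, opposite assignments."""
    if own_id < other_id:        # deterministic tie-break on identifier
        return "climb", "descend"
    return "descend", "climb"

own_move, other_move = coordinate("N106US", "N123AB")
print(own_move, other_move)  # -> climb descend
```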

Do you think the manufacturers who are constraining autonomy until they can go all the way are correct? Or do you think Tesla’s incremental approach is the right one?

It’s difficult to know, because the tests that have been done have been conducted so differently, and the designs used have been so different among the various manufacturers. It’s difficult to draw broad conclusions at this point. What I will say is that, based upon decades of experience in commercial aviation, as long as the human operator is immediately engaged in the process, I think it’s very helpful to have systems like lane departure warning and automatic emergency braking, technologies that can help the operator avoid collisions caused by distraction, or help the driver react more quickly.

These are very useful because human performance among non-professional drivers in automobiles is, quite frankly, abysmal. Distractions from personal devices are increasing, and certain substances are being legalized in certain states. The former NTSB Chairman—now CEO of the National Safety Council—was on CBS yesterday talking about these issues, and about how, for the first time in decades, we’re seeing an increase in traffic deaths and collisions because of the factors I just mentioned.

I think those kinds of assistance technologies—again, the guardrail approach I’ve talked about: setting limits beyond which your vehicle cannot go even if the operator is impaired or distracted—are very helpful. That’s quite a different situation from going all the way to a semi-autonomous vehicle where the operator is so disengaged that they’re unlikely to be able to quickly and effectively intervene. I think the danger is going to be...

...transitions?

Transitions.

There seems to be confusion between automation and augmentation in the automotive sector. People let their attention lapse, and their situational awareness suffers. Is what’s missing in automotive something like Airbus’s flight envelope protections? I’m amazed we don’t see driving envelope protections, a holistic system unifying current and semi-autonomous safety technologies. Do you think that’s something we’re going to see more of in the coming years?

I think that would make sense based upon the experience we have had in commercial aviation. I think that would help, up to the point where people think that no matter what they do—no matter how distracted they become, no matter how impaired they are—that technology will save them. I think it’s unlikely that even Airbus-style protections can save people from themselves if they really feel like they’re invulnerable and they aren’t actively engaged as skilled, alert drivers.
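
If “driving envelope protections” ever arrive, one plausible shape is a supervisor that filters every driver command through a chain of independent protections. A minimal sketch, with invented protections and limits:

```python
# Sketch of "driving envelope protections": several independent safety
# functions unified behind one supervisor that filters every command the
# driver issues. All protections and limits below are invented.

from typing import Callable, NamedTuple

class Command(NamedTuple):
    steering_deg: float
    throttle: float   # 0.0 to 1.0
    brake: float      # 0.0 to 1.0

def limit_steering(cmd: Command, speed_mps: float) -> Command:
    # Tighter steering limits at higher speed (placeholder curve).
    max_deg = max(5.0, 90.0 - speed_mps * 2.0)
    clamped = max(-max_deg, min(cmd.steering_deg, max_deg))
    return cmd._replace(steering_deg=clamped)

def limit_speed(cmd: Command, speed_mps: float, limit_mps: float = 38.0) -> Command:
    # Cut throttle once the (placeholder) envelope speed is exceeded.
    return cmd._replace(throttle=0.0) if speed_mps > limit_mps else cmd

PROTECTIONS: list[Callable[[Command, float], Command]] = [limit_steering, limit_speed]

def envelope(cmd: Command, speed_mps: float) -> Command:
    """Run the driver's command through every protection in turn."""
    for protect in PROTECTIONS:
        cmd = protect(cmd, speed_mps)
    return cmd

print(envelope(Command(steering_deg=45.0, throttle=0.8, brake=0.0), speed_mps=30.0))
# -> Command(steering_deg=30.0, throttle=0.8, brake=0.0)
```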

So in this country we obviously have the lowest driver education standards in the Western world.

Which is something I’ve written about before; I think it’s a national disgrace.

I hope you don’t mind if I quote you on that.

I hope that you do. What we’re doing currently in driver training is a joke and a national disgrace, and it’s costing lives. Real lives, on a daily basis.

An investment in a national driver training program with higher standards would be a lot cheaper than a twenty-year, multi-trillion-dollar investment in automation, right?

I think the investment in automation will occur whether we choose as a nation to have it occur or not. That’s the direction that industry is going, but I agree with you that driver education could be more cost-effective than the investment in technology. I would go one step further than that. The case that I would make—and this is something I’ve believed for a long time—is that the most effective thing that we can all do right now is to stop using our phones when we’re in a vehicle. That’s a personal choice that each of us could make now, today. That would save thousands of lives: more lives than would be saved by any of these technologies, or by any of these training initiatives.

It’s really inexcusable. Because of their unwillingness to put their immediate gratification aside, people abandon their sense of civic responsibility to those around them. They are choosing to do things that could easily wait, unnecessarily putting others at risk of great harm.

If we are really serious about safety, that is the first thing that we would do. We would make it completely socially unacceptable. We would have traffic stops for that alone, on every street, every day, until people got the message that it’s going to be too expensive for them to continue to be so selfish, and that they’re putting others at risk for their immediate gratification. Everything they could do on a phone can wait, just like we waited to find out what was going on 10 or 15 years ago, before these devices existed.

There's something cultural or even psychological about the American notion of freedom that compels people to drive the way they do, often irresponsibly. Do you think that once Level 4 automation is commercially available that it will have to be mandated to get people to use it?

Sully: I don’t know, but what I do know is this: we as citizens should feel, and act on, a sense of civic duty. While we have the freedom to be stupid, that freedom extends only up to the point where it hurts someone else. We have a civic duty to be educated, well-informed and scientifically literate, so that we can understand important concepts and vote intelligently. There really are things that we owe each other in this winner-take-all world. If we didn’t do these things, occasionally putting our own needs aside and delaying our gratification, everyday activities that we take for granted in our culture and society wouldn’t be possible.

Giving these little gifts of civic behavior to each other is what makes civilization possible. If we didn’t do these things, you couldn’t drive down the average street or highway; it would be suicidal to do so. If everybody were running red lights constantly, if everyone were impaired, if everyone were doing something on their phone constantly, the body count would be even worse than it is today. I think we have to rethink our role in society and our relationship to others. We really aren’t islands unto ourselves. That’s my underlying message, no matter what technology we’re using, no matter what mode of transportation we’re using, no matter what part of our society we’re talking about. We have to be intelligent citizens, capable of independent critical thought and scientifically literate, who make informed choices and think about how those choices affect the people around us.

Thanks so much, Captain.

The bottom line? Sully supports a zero BS approach, whether it's automation or human driving. Until Level 4 self-driving cars arrive—and no one knows when they will—a skilled human operator with the best technology isn’t just the safest option, it’s also the fun one.

If you love driving, educate yourself as to your car's limits, whether hardware or software. Go to Skip Barber and safely enjoy your car while you can. The future is unknown. Hopefully, we'll have a choice.

Sully’s position sounds like what I think we’re going to end up with for a long, long time: augmented driving, which is not the same as semi-autonomy. What’s the difference? Stay tuned, because I’ve got a lot more to say about that in future columns…

Alex Roy is Editor-at-Large for The Drive, author of The Driver, and set the 2007 Transcontinental “Cannonball Run” Record in 31 hours & 4 minutes. You may follow him on Facebook, Twitter and Instagram.
