Air Disasters, Indiana Jones and the Future of Car Accidents

Trust me, you want to keep your steering wheel.

By Alex Roy

The Age of the Autonomous Car is coming, and if you want to survive it, you need to re-watch Raiders of the Lost Ark and learn the Lesson of the Swordsman. Once you do that, you’re halfway to understanding the problem with autonomous cars. For every problem technology solves, a new one is created. We solved the problem of the sword with the gun. We are now poised to solve the “problem” of human drivers with Autonomous Cars. And that’s a problem.

If Indy’s gun had misfired, he’d have been toast.

What will happen when Autonomous Cars fail?

Perception, Meet Reality

My thinking on this was inspired by a recent article on The Drive entitled, “Autopilot Is Making U.S. Airline Pilots Worse.” You don’t need to be Nostradamus to know that’s true. Autopilot has existed for almost a hundred years, and there’s no shortage of articles about the evils of technology. Every time a plane crashes, it’s the autopilot’s fault. Even if it isn’t the autopilot, it’s the autopilot. If it’s pilot error, why didn’t the autopilot prevent it? Not enough autopilot. Because autopilot.

Of course, an unaccountably large number of air disasters are averted because of autopilot. Commercial aviation—even if you include Nepal’s infamous Yeti Airlines—is incredibly safe. Far safer than driving or riding in a car. Plane crashes don’t even rank among the top twenty causes of death, and yet—like shark attacks and spontaneous combustion—they capture the imagination far more than heart disease, the number one killer.

Why? Because a plane crash is like a car bomb at the intersection of faith and reason.

It makes perfect sense to put our faith in airline pilots. We need to fly, and so we place our faith in these strangers to effortlessly complete a task that is both magical and terrifying. We assume they have all the necessary training to deliver hundreds of souls safely to their destination. We reason that because they do it every day—often many times a day, for many years—we have nothing to fear. We look at airline pilots and we see Tom Skerritt, the inhumanly capable instructor with the perfect crew cut and mustache in Top Gun.

Our lives depend on commercial airline pilots, therefore we have to trust them, or every flight would devolve into a metal tube full of screaming and weeping, no one would ever fly twice, and Amtrak would get all the funding it needs.

It therefore makes perfect sense that when a plane crash occurs, our Pavlovian response is to assume technical failure. It can’t be the pilot. Tom Skerritt-looking guys don’t make mistakes. If one of them does, how can we trust any of them? It must be the plane. It was the autopilot. Had to be.

There’s only one big problem with the Tom Skerritt-looking guy: Almost every time there’s a plane crash, it’s Tom Skerritt’s fault.

The Lesson of the Swordsman

Everyone thinks they know the Lesson of the Swordsman: Better to have it and not need it than to need it and not have it. It’s one of the clichés used to justify gun ownership, right? I’m sure that’s what Indy was thinking when he met the Swordsman.

Good thing I wore this holster today.

But it’s the wrong takeaway. There’s an old saying in The Delta Force about helicopters: Three is two, two is one, one is none. If your life depends on it, bring backup. Actually, bring two.

What’s the real Lesson of the Swordsman? Bring backup. A second gun. The whip. Perhaps some martial arts skills. Might Indy have survived if his revolver had misfired? Whose fault would it have been if Indy had been killed? Not the Swordsman’s. Indy’s. Because he didn’t have a backup plan.

This is the hinge on which all automation turns. For automation of anything to make sense, there must be backup. Two backups. There must be redundancy, and the final redundancy needs to be human.
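To make that idea concrete—and this is purely a toy sketch, with every name and threshold invented for the example rather than taken from any real autopilot—you can think of the fallback chain as a stack of controllers, each willing to act only when it trusts the data, with the human hard-wired in as the last level:

    from typing import Optional

    def autopilot(sensor_confidence: float) -> Optional[str]:
        # Primary automation: declines control when sensor data is unreliable.
        return "autopilot holds course" if sensor_confidence > 0.9 else None

    def degraded_mode(sensor_confidence: float) -> Optional[str]:
        # Simpler backup control law: tolerates worse data, but not garbage.
        return "degraded mode holds wings level" if sensor_confidence > 0.5 else None

    def human_pilot(sensor_confidence: float) -> Optional[str]:
        # Final redundancy: the human always returns a decision, good or bad.
        return "human flies by hand"

    def control_step(sensor_confidence: float) -> str:
        # March through the levels of logic until one of them accepts control.
        for level in (autopilot, degraded_mode, human_pilot):
            decision = level(sensor_confidence)
            if decision is not None:
                return decision
        raise RuntimeError("unreachable: the human level always decides")

    print(control_step(0.95))  # -> autopilot holds course
    print(control_step(0.70))  # -> degraded mode holds wings level
    print(control_step(0.10))  # -> human flies by hand, i.e. disengagement

The point of the sketch is the last line: when every automated level declines, control lands on whoever is sitting in the seat, ready or not.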

There’s No Substitute For Training

In 2009, Air France Flight 447 disappeared over the Atlantic at cruising altitude on a milk run from Brazil to Paris. It crashed, killing all 228 aboard. The accident analysis makes for fascinating and disturbing reading. I hope any manufacturer looking to offer autonomous cars has boned up on this one, because it tells us exactly what is going to happen when self-driving cars finally arrive.

There’s a reason planes still have pilots. No matter how good the automation—even with current levels of technology—something might still go wrong. Something went really wrong with Air France 447, but it wasn’t the automation. It was the pilots’ misunderstanding of what happened when the autopilot shut down. In commercial airplanes or autonomous cars, this is called disengagement.

Our faith in airline safety is in no way related to how much “flying” pilots actually do. In a typical commercial flight, the autopilot is engaged around 90% of the time. We are placing our faith in what happens when autopilot isn’t engaged. If the decision to disengage is made by the pilots, you are almost certainly safe.

If the disengagement occurs because the autopilot is unable to safely control the plane, and if the pilots don’t have situational awareness at the precise moment of disengagement, you might have a big, big problem.

This is exactly what happened on Air France 447. A cascading series of errors led to the disaster. The pilots ignored weather reports and chose to fly directly into a storm on autopilot. Ice had apparently accumulated on the pitot tubes, the sensors used to determine airspeed. Worsening conditions and conflicting sensor data pushed the autopilot to the limit of its ability to control the plane.

Upon disengagement, the crew was instantly confronted with the second-worst-case scenario. The worst would have been unrecoverable—both wings falling off, say—but there was nothing wrong with the plane other than a blocked sensor. The second-worst-case scenario is when you know something is wrong, can’t diagnose it, and every decision you make worsens the problem. The crew—confronted with the same inconsistent information that had confused the autopilot—fell into this trap.

I’m sure their decisions in the moment made sense. I’m sure they were reasonable. But all their experience was rendered irrelevant because they hadn’t been paying attention before disengagement, by which time it was almost certainly too late.

Perhaps, if they had chosen to fly around the storm rather than through it—a decision made by the captain in order to save time and fuel—228 people might have lived. Perhaps, if they had noticed the conflicting sensor data before disengagement. Alas, such a crash was almost certainly inevitable, if not on this particular flight then on another, given the same system, weather conditions and a crew with similar training and psychology.

What were the lessons of the accident investigation? The autopilot worked. Redundancy? The system had tons of redundancy. State-of-the-art. It worked perfectly. The system marched through all the levels of logic built into it, until it disengaged. Final redundancy lay with the crew, who weren’t up to the task.

If the autopilot was the gun, the crew were supposed to be the whip. The heroes with the hand-to-hand skills. The ones the passengers thought they could trust. But they weren’t.

Which brings us to the problem with Self-Driving Cars.

The War On Driving

There are two schools of thought regarding Self-Driving Cars. Google wants to remove steering wheels altogether. The technology won’t be deployed until it is smart enough to take complete control, all the time. This makes perfect sense in a world where people can’t or don’t want to drive, for whom taking the wheel can’t possibly lead to a safer outcome, and for whom sacrificing control over their fates is acceptable.

Google’s position—in conjunction with government investment in Autonomous Driving, registration fees, speeding tickets and court fees—represents the beginning of The War On Driving.

Manufacturers will offer whatever the market demands. Right now, they are in the Steering Wheel camp, primarily because it’s what they and their customers know. If demand for steering wheels starts to falter, our control—like the manual transmission—will go away with it.

Until then, we will have choices. In the coming War On Driving, I am firmly pro-choice.

I don’t know when fully autonomous cars will be commercially available. No one does, but semi-autonomous cars are already here, and they will improve over decades until they reach ubiquity. They will become commonplace—if not mandatory—in first-world cities and along major highway corridors in my lifetime. I’m 44.

Read up on Air France 447. Trust me. You want to keep your steering wheel. Not because you might want to drive your car for fun—an idea that eludes those who would take away your steering wheel—but because you are the final backup if automation fails.

That just leaves one more thing. The only thing that really matters.

Guns, Steering Wheels & Whips Are Nothing Without Training

Consider the Lesson of the Swordsman, the pitot tube, the pilots, and how much faith you place in things and people. When you get into a car with any level of automation, you are the pilot, the driver, and—as Bush 43 would say—the ultimate decider. The more automation, the more you need to know, not less. Not only because your autopilot might disengage, but because someone else’s might, and they may not be ready to take over.

The amount of trust we put in other drivers today is insane. Literally. Insane.

You think Driver’s Ed is bad now? Driver’s Ed in this country doesn’t exist. Not by the standard set in Germany, Denmark, Holland, Norway, and even Latvia. Bad Driver’s Ed is the reason autonomous cars are coming. Set aside the business opportunities they offer. Proponents of autonomy begin their arguments with a moral justification: It will save lives. It’s a good argument. I totally buy it. 33,000 Americans were killed in car accidents last year. If we chose to invest in Driver’s Ed instead of automation, we’d save a lot of money.

So, for all the lives this technology will save, there remains one big problem. The life you really want technology to save is your own. I won’t dispute that Self-Driving Cars will save lives overall. I’m merely opposed to the blanket promises made by the same type of people who told us antibiotics would solve absolutely everything. Antibiotics are a good thing. A miracle of science. But people die every year because they were certain there was a pill for everything.

Don’t be one of those people.

Someday, one or more of the sensors in a Self-Driving Car will fail. Or get iced over. Or be hacked. Something will go wrong. Maybe in bad weather. Maybe in traffic. Redundancy will kick in, but it will reach its limit, just as it did on Air France 447. When disengagement occurs, you will wish you had a steering wheel and sufficient training to use it. If you do have one, you may get your hands on it just in time, and, if you have enough training, you might save not only your own life but the lives of your husband, fiancée, lover, children, or the four nuns you offered a ride to.

I know what I’ll be doing.

I’ll still be driving my 2000 E39 BMW M5. Myself. With a trunk full of gear for any eventuality. Carrying two full-size spares, two flashlights, extra batteries, water, snacks, a battery charger, jumper cables, an empty jerrycan, a fuel cell, coolant, four quarts of oil, flares, the U.S. Army Special Forces Survival Guide, and my First Blood 25th Anniversary John Rambo Combat Knife.

I might not be able to take on The Swordsman, but I think I’ll be OK. If I ever start losing my eyesight, I’ll just get a Volvo.
