The Autonomous Vehicle Micro-Navigation Dilemma: How Do You Make a Self-Driving Car Bend the Rules?

Departing from the ordinary, in a safe way and under the right circumstances, is a big part of driving. How do we teach robot cars to do this?
www.thedrive.com


Paint yourself a mental picture of the following: You pull up at, say, a music festival out in the sticks. The place is rotten with food trucks, randomly placed porta-johns, and thousands of haphazardly parked cars. But you manage to find a tiny, awkward space between a tree and someone’s Volvo that you can fit into, at least with some caution and a bit of creative perpendicular parking.

Except you have no steering wheel. You’re in a self-driving car, see, in the future. Before you can bolt out with your buddies to see the long-awaited Panic! at the Disco reunion, you have to tell your car that you want it to park on a steeply inclined mound, in the grass, over bulging tree roots that would take a few throttle surges to bounce a tire over if you were doing it yourself. Will your robo-car even be capable of attempting such an unorthodox maneuver, or will it sit there stubbornly refusing to budge?

It’s a question that has simmered in my brain for a while, along with a vexing worry over all the other little day-to-day moves that are simple for us creative humans to figure out, but would be tough to describe to this fancy digital egghead of a self-driving car. How would you make it move over two feet so your mother-in-law can fit next to your car in your driveway? How do you tell it to pull into your garage at a cockeyed angle, because there are a bunch of Ikea boxes in there that you’ll need to leave room to move later? Will you say these things out loud to the car? Gesture frantically, draw something on the screen, wriggle a joystick? Or maybe jump out and guide it in like the ground crew at an airport?

It’s a question that hints not just at the mechanics of micro-navigation, but at how we’re going to integrate autonomous cars into millions of lives that are all a bit frayed around the edges. Cruising down highways and parking in neatly delineated spots at the mall is (comparatively) easy; getting around downtowns only slightly less so. Coaxing your car halfway onto the (empty) sidewalk to avoid a sudden, steaming sinkhole, on the other hand, is another matter entirely—one that goes way beyond mere obstacle-avoidance and observing rules of the road. It will require buttery-smooth interactions with computers and, ultimately, the ability to override their hard-wired reluctance to do the stupid, harmless stuff we occasionally get up to. The world’s a hot mess. Robotic cars aren’t going to suddenly bring order to civilization as much as they will accommodate disorder.

At the Geneva International Motor Show this week, I posed the question to a few engineers. Most agreed that artificial intelligence would be involved, with the computers trained to learn how to interpret and execute human instructions, as could clever user interfaces along the lines of any combination of the guesses above.

The most insightful answer, however, was refreshingly blunt. “We don’t know yet,” said Peter Mertens, Audi’s board member in charge of technical development. “It’s a very big challenge, similar to dealing with, say, a crosswalk where people will keep going as long as you let them while the car sits there forever.”

At some point, he noted, you need to talk to the car and tell it to nudge its way into an appropriate-enough break to make pedestrians stop, so you can make it through. While autonomous car advocates are quick to point out how far along we are in the vehicles’ development, they haven’t gotten that far down into the weeds yet. “We’re quite far away from these nuances of functionalities,” Mertens said. “We’re still in the basics, believe it or not.”

Even the strategies for ironing out those protocols will take refining. Mertens cited recent efforts to train artificial-intelligence systems, which will form the core of the solutions to these challenges, that backfired by reinforcing incorrect behaviors simply because those behaviors happened to produce positive results; the errors then snowball until much of what the system has learned is wrong. Tricky stuff, this. But it’s also to be expected. “Those are the things about human behaviors,” Mertens said. “It takes humans about seven years to become good drivers. It’s probably not going to take cars seven years, but still…”

On the other hand, it’ll be worth the wait. Just imagine the first time you successfully use your car as a rolling scaffold while you stand on top of it stringing holiday lights from your house.