How Do We Get Humans to Trust Self-Driving Cars? By Making Cars Act Like People
It’s all in the details—and it’s as vital for those inside the machines as outside.
Though a newly released study by AAA found that Americans are slowly becoming a tiny bit less afraid of riding in newfangled autonomous cars—which, of course, aren’t actually here yet—we still have miles to go before we’ll feel safe and happy falling asleep behind the wheel of these still-largely-fictional future-beasts. At the moment, 63 percent of Americans still aren’t down with the idea of letting a machine do the dirty work on the roads, the study found.
This likely has a great deal to do with the fact that consumers a) don’t have a lick of experience with autonomous cars and probably won’t for years to come, and b) have no earthly idea how these machines will do what they do when they do get here. Of course there’s going to be trepidation on a massive scale. Time will solve much of this problem—which, again, isn’t actually a problem yet—but the study nevertheless represents an opportunity to reflect on what it will take to get there, for humans to have faith in the mysterious robo-cars of tomorrow.
According to experts who are shaping this tomorrow, the issue is largely one of trust—trust that the machines know what they’re doing mechanically, and trust that they won’t go rogue and charge blindly into a raging river. It’s on the industry to ensure that the systems themselves are as foolproof as possible, and it will be equally important to convey that competence to passengers in an appropriate and convincing manner.
In order for that to happen, the cars will have to put on very human faces, said Jack Weast, head of Intel’s autonomy development efforts. He cites a study his group conducted about human-machine interactions with autonomous cars.
“We tried to understand what kind of interaction would engender trust,” he said, adding that their working theory was that it would be the same properties that would engender trust between humans. “Trust is built when you have certain characteristics present in the interaction, including bi-directional, clear, and open dialogue. If the car acknowledges you, you feel validated. So we built the cars to drive around our campus in Arizona with more human interactions and found they absolutely made a difference.”
He explains that when new passengers entered the car, the machines spoke to them like a human would, to explain what they were doing and why—“I’m slowing down for these pedestrians” or “I’m turning left to avoid that traffic.” Two interesting patterns emerged: “After they heard it a few times, the passengers would say ‘Okay, I trust you, you don’t have to keep doing this,’” Weast said, “and then they unexpectedly wanted to have a conversation, asking the computer to look up something on the web for them, for instance. Really quickly in a small study these qualities made a huge difference in people’s level of trust and belief.”
In short, while the cars have a long way to go before people know enough about how they function to fully trust them, a little humanizing of the systems can go an awfully long way itself.
But because the AAA study also indicated that U.S. drivers weren’t thrilled about the prospect of even driving on the same roads as autonomous cars, manufacturers will also have to work seriously on external communications. (That is, having the cars communicate with other drivers and pedestrians.) A team of researchers in London, known as Humanising Autonomy, has been studying this very issue.
“While a lot of attention has been put on the inner features and passenger experience of autonomous vehicle concepts, not enough consideration has gone into how people outside the vehicle feel when interacting with these autonomous vehicles, and how these vehicles can show that they have acknowledged their presence and intent,” noted Raunaq Bose, CTO and co-founder of Humanising Autonomy, in an October article.
Their research project spawned a manifesto about how autonomous systems should operate in a world still filled with more humans than robots. The manifesto argues, among other things, that infrastructure should shift from vehicle-oriented to pedestrian-focused—and that cars should be engineered to reflect this—in order to ensure humans don’t feel even further pushed aside than they already do. Autonomous vehicles should also be programmed to acknowledge the presence of pedestrians and communicate their intent in a direct and clear manner. “Acknowledgement through eye contact has been identified as an important form of assurance between pedestrians and vehicles,” the group wrote. “Self-driving vehicles must replicate this interaction.”
So can something as simple as a virtual wink and a nod help temper human fears in the future? Seems likely...and hey, maybe a plausible-enough start to a robot uprising, too.