Should Your Self-Driving Car Kill Others to Save You?

New survey suggests people tend not to practice what they preach when it comes to autonomous vehicle morality.

by Jonathon Ramsey

One of the biggest problems confronting the engineers of a future filled with self-driving vehicles: Figuring out who those cars should have to kill. Not in some sort of rise-of-the-machines scenario, mind you—who the cars should be tasked with protecting or sacrificing in the event of an accident. According to a recent survey, most people want an autonomous car that will save as many people as possible...so long as it doesn't endanger the person answering the question in the process, of course.

The study, entitled "The Social Dilemma of Autonomous Vehicles" and published this week in the journal Science, is based on results from six surveys that asked the public how autonomous cars should respond when forced to choose between harming their passengers and harming pedestrians.

“Most people want to live in a world where cars will minimize casualties,” MIT professor and study co-author Iyad Rahwan said. “But everybody wants their own car to protect them at all costs.”

The survey found most respondents wanted a car that would make utilitarian decisions in emergencies—that is, they wanted the car to save the most lives, even if that meant sacrificing a sole occupant to save ten pedestrians. Given the choice, however, far fewer respondents wanted to actually ride in that utilitarian vehicle. They were happy for others to take the Star Trek II option, but for themselves, they wanted a car that would prioritize their own lives above all else.

Sounds predictably selfish. Away from ethics studies, though, many human drivers would make a snap decision to sacrifice their own safety to avoid hitting a child in the road or plowing into a crowd. Trying to resolve the muddiness of human decision-making and reasoning, therefore, will be a big step on the road to the truly autonomous car. “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” as Harvard psychologist Joshua D. Greene told The New York Times.

Making things even more confusing, most of the survey respondents said they did not want government agencies deciding how their autonomous car should behave in a morally difficult situation. Ethicists have asked whether automakers should install a selectable level of moral choice into self-driving cars, leaving the driver to choose how the vehicle responds in an incident. But that would make the matter of determining fault in an accident even more complex.

Neither the answers nor the “ethical algorithms” that will follow will come quickly. Researchers, scientists, and universities have been chewing on the issue for years, with darkly humorous takes on what will be life-and-death situations; one MIT Technology Review article on the subject was even headlined “Why Self-Driving Cars Must Be Programmed to Kill.” Robot cars may one day save many lives, but before then, humans need to figure out which lives they should take.
