Intel Wants to Use Math to Prove Self-Driving Cars Are Safe
Will a mathematical formula instill more confidence in autonomous vehicles?
In theory, self-driving cars are very safe. They eliminate the human element that is the cause of most car crashes, after all. But definitively proving that is a bit more challenging.
Intel believes it has a way to validate the safety of autonomous cars, and it involves math. Amnon Shashua, CEO of Intel-owned Mobileye, is proposing a model called Responsibility-Sensitive Safety, which he claims will ensure that self-driving cars behave carefully. The proposed system could also help shield makers of autonomous cars from blame in the event of crashes involving human-driven cars.
Responsibility-Sensitive Safety is more about ensuring that self-driving cars don't cause crashes (and being able to prove that) than about eliminating crashes altogether. Intel acknowledges that self-driving cars can't always account for the actions of other vehicles, but it wants to write software that makes it impossible for a self-driving car to act dangerously. Intel's formula is a bit like Isaac Asimov's three laws of robotics, which direct a robot to avoid injuring humans, to obey humans, and to protect itself. In this case, though, the rules are for self-driving cars.
This concept relies on the ability to calculate known factors, like reaction times. For example, Intel proposes programming autonomous cars to drive past parked cars at a speed slow enough to stop in time if a pedestrian walks out into the road. Intel claims it's possible to calculate that speed because the maximum speed a human being can move in that situation can be measured and modeled.
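The parked-car scenario can be sketched with basic stopping-distance physics. This is not Mobileye's actual RSS formula, just an illustration of the idea: the safe speed is the largest speed at which the reaction distance plus the braking distance still fits within the distance at which a pedestrian could step out. All of the numbers below (sight distance, reaction time, braking deceleration) are assumed values for illustration only.

```python
import math

def max_safe_speed(sight_distance_m, reaction_time_s, braking_decel_mps2):
    """Largest speed v (m/s) such that reaction distance plus braking
    distance fits within the sight distance: v*t + v^2/(2a) <= d.
    Solving that quadratic for v gives v = -a*t + sqrt((a*t)^2 + 2*a*d)."""
    at = braking_decel_mps2 * reaction_time_s
    return -at + math.sqrt(at * at + 2 * braking_decel_mps2 * sight_distance_m)

# Assumed example values: a pedestrian could emerge 20 m ahead,
# the car takes 0.5 s to react, and it brakes at 6 m/s^2.
v = max_safe_speed(20.0, 0.5, 6.0)
print(f"max safe speed ≈ {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

With these assumptions the car should pass the parked vehicles at roughly 12.8 m/s (about 46 km/h); at that speed, reacting and then braking uses exactly the available 20 meters. The real RSS model covers many more situations (following distances, lane changes, right of way), but each reduces to the same kind of worst-case calculation.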
Implementing this system will allow makers of self-driving cars to prove that their vehicles aren't at fault in crashes. Liability is one of the major concerns for companies developing self-driving cars, and this could go a long way to putting executives at ease. Because no matter how safe they are, self-driving cars will eventually crash.
In fact, the safe driving habits of current self-driving cars are proving to be a liability. Even the safest driving can't prevent all crashes, and in some cases the mix of cautious self-driving cars and more aggressive human drivers has caused crashes. Each crash is a potential public relations disaster that could feed public doubts about autonomous driving. So a mathematical proof that self-driving cars aren't to blame could be very appealing indeed.