Fear and Autonomy in a Self-driving Nissan Leaf

How it feels to surrender mind, body and soul to an EV hatchback.

By Brett Berk

The technologies that allow a car to drive itself either already exist or are well on their way to existing. The big question, according to Nissan, is how to get consumers to adapt to this incipient reality and learn to cede control to a supposedly intelligent machine.

“Trust is most important,” says Nissan’s director of autonomous Human Machine Interface (HMI), Takashi Sunda, a few days before the Tokyo Motor Show. “How to help the customer to delegate to the system.”

Nissan’s means of boosting humans’ confidence in their computers is built on transparency: a window into what the machine is “thinking.” This is largely accomplished via a new instrument panel and head-up display that communicate the robot car’s intent before it makes a move.

Harnessing an octet of cameras, a quintet of radars, a quartet of laser scanners, and however many sensors exist in the onboard GPS unit, this new, production-ready dashboard displays actual and virtual images of the car’s surroundings, and the car itself, overlaid with highlights that indicate its ability to note, recognize, avoid and countervail objects and obstacles. Sit back and relax, it aims to tell the person sitting behind the superfluous steering wheel via quavering red and green lines and boxes. I’ve got this.

To further emphasize the hands-off nature of the operator’s role, controls to input requested changes in speed (up or down) or direction (left or right) aren’t on the steering wheel or steering column stalks, as they might ordinarily be. Rather, they’re embedded in a shiny new controller, round and smooth and silver, like a shrunken Hostess Ding Dong, that sits between the front seats.

The computing power is mounted in the rear hatch—or, rather, it is the hatch; racks of wires, processors and heat-sinks occupy the entirety of the cargo area. “We will plan to shrink it down with more powerful processors, two or three times as fast,” says Tetsuya Iijima, head of Nissan’s autonomous vehicle program and our guest operator for a 20-minute course around Tokyo. “The additional weight of the system must be below that of one person,” he adds. Specifications on how big a person were not made available, despite repeated requests.

Also not exactly available were the motivations behind Nissan’s decision to change the name of its ambitious, industry-leading autonomous vehicle program—announced in 2013—from Nissan Autonomous Drive to Nissan Intelligent Drive. Iijima is circumspect. “‘Intelligent’ covers a more wide area. The driver can decide how to use the vehicle himself,” he says obliquely, as his hands hover within centimeters of the wheel. “‘Autonomous’ is more narrow of a term, and may make for some confusion.”

Neither is Nissan willing to comment directly on an upstart like Tesla, which has trounced the conglomerate’s stated goal of being the first carmaker to bring an autonomous vehicle to market. “We are sticking to our original statement. We will release such a car by 2020,” Iijima says. As for assuming full liability, which Volvo, Mercedes-Benz and Google have committed to for their coming efforts? “That discussion is ongoing,” Iijima says.

On our stop-and-go route around the city—over bridges, beneath underpasses, through intersections—the car does a remarkably good job of confronting crossings, congestion and lights, and of accelerating up to the legal speed limit. (Like current Japanese cruise control systems, it will not operate at speeds above the posted maximum.) It excels at remaining centered in its lane, better than many systems we’ve experienced. And it does quite well at detecting and avoiding obstacles. It even aborted an instructed lane change when it sensed the approach of another vehicle. It is not a reckless robot, and that, too, helps earn our trust.
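
For the curious, the two behaviors described above—capping speed at the posted maximum and aborting a lane change when another vehicle closes in—boil down to simple guards. Here is a minimal sketch in Python; every name and threshold is our own assumption for illustration, not anything Nissan disclosed:

```python
# Toy illustration of two guards described above. The function names and
# the 30 m clearance threshold are assumptions, not Nissan's values.

MIN_SAFE_GAP_M = 30.0  # assumed clearance required in the target lane


def target_speed_kph(driver_request: float, posted_limit: float) -> float:
    """Like current Japanese cruise control, never exceed the posted maximum."""
    return min(driver_request, posted_limit)


def should_abort_lane_change(gap_to_approaching_vehicle_m: float) -> bool:
    """Abort a commanded lane change if an approaching vehicle is too close."""
    return gap_to_approaching_vehicle_m < MIN_SAFE_GAP_M
```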

But, like other autonomous driver-aid systems, it is troubled by situations we view as straightforward. Take a highway merge. With a raised metal barrier between us and the vehicles entering the roadway, it is unable to detect the very large box truck accelerating for entry alongside us. It gives no alert asking the human in the driver’s seat to take over. Our operator has to grab the wheel and floor the Leaf’s accelerator to safely clear the rig.

We understand that this is a prototype system, and we forgive it. As with a pet or toddler, the process of developing trust has to go two ways. But this near-miss leads us to ask how the system would respond to other unpredictable yet common situations.

We’re told that in the case of a car stopped ahead with flashing hazard lights, the autonomous Leaf would come to a stop. In the case of an accident blocking the lane, the current system would perform a safe panic stop, though future iterations may avoid the accident through a lane change. In inclement weather, when sensors are degraded by snow or rain buildup, the system would make a “takeover request” to the driver. If the driver didn’t respond, it would engage “Safe Mode” and pull over to the shoulder until the driver took control.
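
That fallback chain reads like a small state machine: degraded sensors trigger a takeover request, a response hands the car back to the driver, and silence sends it to the shoulder. A minimal sketch in Python of how such logic might be wired up—the mode names, the class, and the timeout value are all our own assumptions, since Nissan described the behavior, not the implementation:

```python
from enum import Enum, auto


class Mode(Enum):
    AUTONOMOUS = auto()          # system is driving
    TAKEOVER_REQUESTED = auto()  # driver asked to resume control
    SAFE_MODE = auto()           # pull over to the shoulder and wait
    MANUAL = auto()              # driver is driving

TAKEOVER_TIMEOUT_S = 10.0  # assumed grace period; Nissan did not state one


class FallbackLogic:
    """Sketch of the fallback chain described above; illustrative only."""

    def __init__(self) -> None:
        self.mode = Mode.AUTONOMOUS
        self.request_time = 0.0

    def update(self, sensors_degraded: bool, driver_has_control: bool,
               now: float) -> Mode:
        if self.mode is Mode.AUTONOMOUS and sensors_degraded:
            # e.g. snow or rain buildup: ask the driver to take over
            self.mode = Mode.TAKEOVER_REQUESTED
            self.request_time = now
        elif self.mode is Mode.TAKEOVER_REQUESTED:
            if driver_has_control:
                self.mode = Mode.MANUAL
            elif now - self.request_time > TAKEOVER_TIMEOUT_S:
                # no response: pull over and wait for the driver
                self.mode = Mode.SAFE_MODE
        elif self.mode is Mode.SAFE_MODE and driver_has_control:
            self.mode = Mode.MANUAL
        return self.mode
```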

What about in the event of an animal in the road? “Animals can be detected,” Iijima says. “But we haven’t set a protocol yet for which animals can be hit, and which cannot.” This leads us to wonder about the complex physical, situational—and indeed, ethical—minutiae of such decisions. Any car that would avoid a rat only to run over a baby possum would immediately lose this New Yorker’s trust. Maybe transparency isn’t enough. Maybe we need to play a few rounds of Truth or Dare, or complete a ropes course, before we can bless the system with our unqualified, unalloyed confidence.
