On Thursday morning, Alphabet’s autonomous vehicle subsidiary, Waymo—both of which can be colloquially lumped together as part of Google, if you haven’t gotten that memo yet—announced its AVs had achieved nine million miles of fully automated driving on public roads in 25 cities, as well as at a purpose-built simulated city at Castle Air Force Base south of San Francisco. Most of the recent surge—it passed eight million miles only this past July—happened in and around Phoenix, Arizona. There, its Early Rider program has been taking hundreds of members of the public to destinations downtown and across Mesa, Gilbert, Chandler, and Tempe.
The news of the nine-million-mile threshold comes just as another bit of news has been circulating: Crabby Phoenicians complaining that they “hate” the new vehicles because they struggle at certain intersections. For proof, we have a video showing one of the distinctive, tech-coated white Chrysler Pacifica minivans proceeding warily around a turn somewhere in town, trying to decide if two women standing at the corner talking were actually going to cross the street. (They were in position to do so, and had a green light.)
I watched this video with great interest, and my reaction was, “Wait, really? That’s it?” If hesitant robots trying to learn how to navigate intersections that most humans botch every seven seconds is the worst we can say about Waymo’s testing, then we’re actually in a pretty good place as a society. I respectfully suggest those crabby Phoenicians go watch a few dashcam video compilations showing real mayhem that human drivers are capable of causing and then chill the eff out.
By coincidence, this also happened to be the week I took my first spin in an autonomous Waymo, trundling around the company’s neighborhood in Mountain View, California. (I’m using the terminology somewhat loosely here—it’s technically semi-autonomous, but still hands- and eyes-free.) I’ve been in several versions of autonomous vehicles, ranging from military systems in the U.S. to buses in the Netherlands to Hyundai’s roundabout-conquering Nexo in South Korea. So it was a long time coming to finally experience the Waymo car in action.
First, the good: The drive was a non-event. No fuss, no drama. The vehicle readily managed a sampling of suburban California driving challenges, from crosswalks to bicyclists to bright sunshine to other vehicles behaving erratically in front of it. (The car waited patiently for them to get their acts together and move on.) The Waymo personnel managing the drive in the front seats watched the visualized stream of data in real time on a laptop, as the vehicle used lidar to paint virtual pictures of the long-range environment (up to 300 yards away), radar to track shorter-range targets, and high-resolution cameras to identify specific things like pedestrians, the color of traffic lights, and more. The on-board processor then deployed its AI programming—honed through those nine million miles of experience—to analyze the environment, spot trouble (or potential trouble), divine the intent of pedestrians and vehicles, and maneuver along while obeying the rules of the road. The systems are engineered to learn from those experiences and distribute that knowledge to the other vehicles in Waymo’s fleet, so the whole system learns together.
The whole time this was happening, a user-friendly, cleaned-up version of the virtual view streamed to the LCD displays in the headrests for second-row occupants. The idea is that such systems will telegraph to passengers the gist of what the vehicle is seeing and doing, to inspire confidence. It did that—though the engineering version on the laptop was much more engaging, with its more granular detail. I can see, however, why the cleaned-up version is good enough. The novelty wears off quickly either way, and you either trust it or you don’t within a few minutes—at least until something happens to shake you out of that trust.
Passenger interaction with the vehicle—assuming there won’t be human nannies present forever—happens through the app you use to summon the ride and tell it your destination, as well as four buttons on the ceiling of the Pacifica. Those include a "start" button you’d press to get rolling once everyone’s in and buckled, a "help" button to immediately ring a human employee in the command center if you start freaking out, a "pull over" button for more decisive freak-outs (it makes the vehicle pull over and stop at the first safe opportunity), and one to lock/unlock the van.
For the Early Rider program in Phoenix, Waymo has recruited a cross-section of volunteers who would use the service on a more-or-less daily basis. They sought out as many demographics as possible—all ages, incomes, personalities, etc.—and asked the individuals to sign up their families as well, in the hopes of pulling in members of the population who aren’t necessarily early-adopter types. The better, of course, to see how the normies feel about robotic taxis. After all, they represent the late-adopters who will be critical in every aspect of autonomous driving in the future. These systems have to be engineered for inclusivity from the outset, so the technophobes, skeptics, and crabby resistors will be coaxed in and won over.
This will also include, by the way, those outside the vehicles, which brings me to the “needs improvement” category. At the moment, there’s no apparent mechanism for communicating with other drivers and pedestrians beyond the turn signals. That will need to change quickly, as a few simple communication strategies could easily defuse tensions such as those being reported in Phoenix. It could be as simple as a digital display that tells drivers behind it—literally, with big, bright text—that it’s “waiting” or “evaluating” or “moving now” or something that will clue people in enough that they’ll cut it some slack if it appears uncertain about something. In short, it needs to be humanized on the outside as much as on the inside. [Just as long as the solution isn't Jaguar's terrifying car eyes. —Ed.]
The fact that the vehicles don't do a better job of billboarding their experimental-mystery-box nature right now does give me pause—the same sort of pause I experience when an engineer in the autonomy industry reacts quizzically when I ask, say, how you tell the car to park diagonally in your driveway. “Why would you want to do that?” they’ve asked. The answer, of course, is because that’s life. We do illogical things every day, from the micro to the macro—and we need our cars to accommodate us, not the other way around. If I need to park crooked in my driveway because my daughter made a lovely chalk drawing there, my autonomous car should be able to do my bidding.
Same idea goes if you’re in a hurry, and need to go 10 or 15 miles per hour above the speed limit. Right now, Waymo cars are perfect law-abiding citizens. That’s wonderful, but it will be fascinating to see what extra-legal capabilities might be baked into these things in the future. That’s likely to happen in one form or another, because the engineers designing them overwhelmingly grasp the broader realities—the “edge cases”—of driving, and they'll certainly accommodate them in time. The nine million miles driven so far—including at the Castle AFB facility, where Waymo throws all kinds of weird challenges at the vehicles—and the untold extra millions of miles being run virtually in Waymo’s simulators are pushing us toward that reality.
Other quibbles from my short drive were of the more minor sort, including occasional tiny panic-braking events when the car detected movement it was unsure about, or less-than-smooth steering arcs similarly caused by false alarms from the sensors. All of that will be smoothed out, as well—probably much of it before the fleet even hits the magic 10-million-mile mark sometime in the next month or so. By then, even the most irritable Phoenicians won’t have much to complain about at all. Let’s hope, anyway.