Racist Self-Driving Cars Concerns Are (Well-Intentioned) Clickbait

Bias in artificial intelligence is a real problem, but self-driving cars running over people of color isn’t.

By Courtney Ehrlichman

Recently, a series of disturbing headlines about a study out of Georgia Tech has suggested a shocking possibility: self-driving vehicles, specifically the machine learning they use for perception, pose a risk to people of color. The idea of racist self-driving cars is such an intoxicating cocktail of hot-button issues that it seems like an episode of Black Mirror come to life. But, as so often seems to be the case with such impossibly provocative stories, even a cursory investigation suggests that these headlines were designed more to generate traffic than to cast light on a meaningful issue.

The study in question, “Predictive Inequity in Object Detection,” was “pre-published” on arXiv, a clearinghouse hosted by Cornell University. It is essentially a working paper in the long queue awaiting peer review. How long is that queue? In the past several days alone, 174 papers were accepted into arXiv’s computer science > computer vision category, roughly 31 per day, vetted by a single volunteer moderator.

For readers who have not spent time in academia, this is a far cry from the rigorous academic peer-review process that stamps approval on research that advances knowledge in specialized fields of expertise. Though it may look like an academic journal, arXiv simply gives authors a place to host their unreviewed working papers online. Clearly this important distinction was lost on the journalists who ran with this particular study.

PhD candidate Benjamin Wilson, the lead author of the study, sets out to interrogate “fairness in machine learning.” This is a worthy goal, to be sure, as there are peer-reviewed instances of real bias in AI, particularly involving facial recognition. But the study loses relevance as soon as you realize that the authors are attempting to transpose meaningful biases in facial recognition systems onto self-driving systems, which have no interest in skin color and, in fact, have safeguards against any bias based on the visible light spectrum.

“Self-driving sensors do not involve facial recognition of any kind,” explains Martial Hebert, Director of the Robotics Institute at Carnegie Mellon University. “They don’t care at all—for a simple reason—the face only occupies a few points or pixels of the pedestrian.”
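Hebert’s point holds up to back-of-the-envelope pinhole-camera math. The numbers below are illustrative assumptions (camera resolution, field of view, face size, and distance are not figures from the study), but any plausible values tell the same story:

```python
import math

# Hypothetical forward-facing camera: 1920 px wide, 60-degree horizontal field of view.
image_width_px = 1920
hfov_deg = 60.0

# Pinhole model: focal length in pixels, derived from the field of view.
focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg / 2))

# A human face is roughly 0.25 m tall; consider a pedestrian 20 m ahead.
face_height_m = 0.25
distance_m = 20.0

# Projected size on the sensor: (size / distance) * focal length.
face_px = (face_height_m / distance_m) * focal_px
print(round(face_px, 1))  # roughly 21 pixels tall at 20 m
```

At 40 meters the face shrinks to roughly ten pixels, which is why perception systems treat pedestrians as whole-body objects rather than faces.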

Self-driving cars will never come near the level of fidelity in pedestrian detection required to distinguish skin color, because the already complex perception task has to share computing resources with localization, routing, and planning, all running in near real time to maintain safety. Essentially, the self-driving vehicle can’t blow its entire compute budget on super-fine-grained detection; it needs to reserve resources for everything else.

The authors might have been tipped off to this by how surprisingly difficult they found it to even label a typical training data set by skin color. “We found that this initial experiment had a large amount of disagreement, both amongst [Amazon’s Mechanical Turk workers used to label pedestrians based on skin color] and compared to our labeling,” they write. “We suspected this was due to the very small size of many of the cropped images.”

Autonomous cars are able to work with relatively small camera images not only because they don’t need to pull out details like skin color, but also because they generally don’t rely on cameras alone. Instead, they fuse camera data with data from sensors like lidar and radar, which use lasers and radio waves respectively and therefore cannot detect colors of any kind. These sensors build point clouds from which only two things can be inferred: how far a point is from the beam source, and how reflective that point is. Fusing these points with camera pixels over time allows the car to infer an object’s trajectory by relating it to the objects around it… and that’s it.
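That data flow can be sketched in a few lines. This is a toy illustration, not any vendor’s production stack, and the type and field names are invented for clarity. The point is structural: a lidar return carries only range and reflectivity (there is no color channel to encode skin tone), and a fused pedestrian track yields nothing but position over time:

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    # A lidar return carries only geometry and reflectivity.
    # There is no color field, so skin tone cannot even be represented.
    distance_m: float    # range from the beam source
    reflectivity: float  # fraction of beam energy returned, 0..1

@dataclass
class FusedDetection:
    # A pedestrian detection after camera/lidar/radar fusion:
    # a position at a timestamp; nothing about appearance survives.
    x_m: float
    y_m: float
    t_s: float

def infer_velocity(a: FusedDetection, b: FusedDetection) -> tuple:
    """Estimate a pedestrian's velocity from two successive fused detections."""
    dt = b.t_s - a.t_s
    return ((b.x_m - a.x_m) / dt, (b.y_m - a.y_m) / dt)

# Two sightings of the same pedestrian, half a second apart:
p0 = FusedDetection(x_m=10.0, y_m=2.0, t_s=0.0)
p1 = FusedDetection(x_m=9.5, y_m=2.0, t_s=0.5)
vx, vy = infer_velocity(p0, p1)  # (-1.0, 0.0): closing on the car at 1 m/s
```

Everything downstream (braking, path planning) consumes only these positions and velocities, which is why appearance details are, as the engineers quoted here put it, noise.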

Aaron Morris, CEO of Allvision (disclosure: a client of mine), was clearly aggravated when I brought up this study and the headlines it inspired. “Look, in the context of self-driving cars, skin color doesn’t matter, because nobody is walking around naked,” he said. “It doesn’t make any sense in the context of self-driving cars. The vehicles just want to detect a pedestrian so they won’t hit it; any additional information about the pedestrian beyond its inferred trajectory is just noise.”

Self-driving car developers care a great deal about the pedestrian detection task, as it is fundamental to the vehicle’s main objective: to continuously and rapidly localize itself and plan its path on the road. It’s also important to note the extent to which the autonomous driving sector has been shaped by the tragic death of pedestrian Elaine Herzberg, who was hit by an Uber autonomous test vehicle a year ago. If, in some dystopian alternate reality, self-driving developers were not already aware of the absolute importance of pedestrian detection, they would be now.

If self-driving cars had any need to interpret skin color or other physical details about pedestrians, perhaps this research might be relevant to their development. At best, it could be read as yet another reason to avoid building a 100% camera-based system, since radar and lidar are literally color-blind safeguards against this kind of bias. But most AV developers use all three sensors anyway, because of the far more immediate risk: running over a pedestrian due to a physical occlusion caused by some fiendishly complex edge case that has nothing to do with racial bias.

I reached out to the lead author of the study via the email listed in the footnotes; it bounced back three times. The Forbes article covering the study has disappeared. But the damage has been done. We cannot unsee the headlines. The seeds of distrust have been planted in the public’s collective mind. Amid the sea of policy decisions the self-driving industry faces, the already enormous task of public education and acceptance has just become a far heavier lift, thanks to this misguided study and the malfeasance of inept journalists.

I’d much rather discuss the potential for self-driving cars to impact the horrendous practice of racial-profiling by the police. When that study gets published, click away.

COURTNEY EHRLICHMAN is the founder of The Ehrlichman Group. Courtney’s expertise in designing equitable, future-proof solutions stems from her expansive background in emerging mobility, connected/automated vehicles, economic development, land-use planning, and human-centered design. She formerly served as Deputy Director of the Traffic21 Institute and two national USDOT research centers at Carnegie Mellon University, and co-founded RoadBotics, a CMU Robotics Institute spinoff that uses AI for road assessments.