Joy Buolamwini, 28
MIT Media Lab and Algorithmic Justice League
When AI misclassified her face, she started a movement for accountability.
As a college student, Joy Buolamwini discovered that some facial-analysis systems couldn’t detect her dark-skinned face until she donned a white mask. “I was literally not seen by technology,” she says.
That sparked the research for her MIT graduate thesis. When she found that existing data sets for facial-analysis systems contained predominantly pale-skinned and male faces, Buolamwini created a gender-balanced set of more than a thousand faces of politicians from Africa and Europe. When she used it to test AI systems from IBM, Microsoft, and Face++, she found that their accuracy varied greatly with gender and skin color. When determining gender, the error rates of these systems were less than 1 percent for lighter-skinned male faces, but for darker-skinned female faces they were as high as 35 percent.
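The core of that evaluation is simple to illustrate: instead of reporting a single overall accuracy number, compute the error rate separately for each demographic subgroup. Here is a minimal Python sketch of the idea; the records and group labels are synthetic stand-ins for illustration, not Buolamwini's actual benchmark or code.

```python
# Illustrative sketch of a disaggregated audit: compute the error
# rate of a gender classifier separately for each subgroup.
# The records below are synthetic (true label, predicted label, subgroup).
from collections import defaultdict

results = [
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("female", "male",   "darker-skinned female"),   # misclassified
    ("female", "female", "darker-skinned female"),
    ("male",   "male",   "darker-skinned male"),
]

def error_rates_by_group(records):
    """Return {subgroup: fraction of faces whose label was misclassified}."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for true_label, predicted, group in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in sorted(error_rates_by_group(results).items()):
    print(f"{group}: {rate:.0%} error rate")
```

A single aggregate accuracy score would hide exactly the disparity this breakdown exposes, which is why the study reported results per intersectional subgroup.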
In some cases, as when Facebook mislabels someone in a photo, such mistakes are merely an annoyance. But with a growing number of fields coming to rely on AI—law enforcement is using it for predictive policing, and judges are using it to determine whether prisoners are likely to reoffend—the opportunities for injustice are frightening. “We have to continue to check our systems, because they can fail in unexpected ways,” Buolamwini says.
A Rhodes scholar and Fulbright fellow, she founded the Algorithmic Justice League to confront bias in algorithms. Beyond merely bringing these biases to light, she hopes to develop practices that prevent them from arising in the first place, such as making sure facial-recognition systems undergo accuracy tests.
—Erika Beras