Business Report

Artificial Intolerance

Artificial intelligence is being integrated into our lives, but there’s a lot we don’t understand about how these systems work. How do we feel about that?

For five months in 2011, a robot wheeled around an office building at Carnegie Mellon University delivering bananas, cookies, and other afternoon snacks to workers. With wide-set eyes and a pink mouth, Snackbot had a friendly look, but it was prone to mistakes. Long delays in conversations with workers were common. Sometimes the system running Snackbot froze.

Still, the workers became comfortable with Snackbot. It apologized when it made a mistake, something its human customers found ingratiating.

It was part of an experiment designed to determine whether people would respond positively to a robot that personalized its interactions with them. Snackbot was trained to recognize patterns in the snacks half the people liked and would comment on them. It never learned the preferences of the other half.

Over time, the people with whom Snackbot became more personal returned the favor. They were more likely to greet Snackbot by name, compliment it, and share unsolicited news. At the end of the trial, one office worker brought Snackbot a good-bye present—an AA battery, even though she knew the robot couldn’t use it—and said she would miss it.

“She said this had started to feel real,” says CMU researcher Min Kyung Lee, who led the study.

In the years since, machines have learned to be even more personal. Artificial-intelligence technologies help computerized personal assistants like Apple’s Siri answer increasingly complex questions and use humor to deflect unsuitable topics. AI-powered recommendation engines on Netflix and Amazon continually get better at suggesting movies and books.

Less progress has been made in understanding how people feel about this kind of software. A recent study of 40 Facebook users found that more than half didn’t know an algorithm curates their news feeds; when told, they expressed surprise and, in some cases, anger.

Geolocation and other technologies feed data to machine-learning systems to create a level of personalization we have come to expect. “If I am looking for the nearest Starbucks, who cares if Siri knows where I am standing?” says Ali Lange, a policy analyst at the Center for Democracy and Technology.

But these AI systems also make decisions for reasons we may never understand. That’s why researchers, consumer rights lawyers, and policy makers have begun to voice concern that unintentional or intentional bias in machine-learning systems could give rise to patterns of algorithmic discrimination with causes that may be difficult to identify. This isn’t theoretical: studies have already found evidence of bias in online advertising, recruiting, and pricing, all driven by presumably neutral algorithms.

In one study, Harvard professor Latanya Sweeney looked at the Google AdSense ads that came up during searches of names associated with white babies (Geoffrey, Jill, Emma) and names associated with black babies (DeShawn, Darnell, Jermaine). She found that ads containing the word “arrest” were shown next to more than 80 percent of “black” name searches but fewer than 30 percent of “white” name searches. Sweeney worries that the ways Google’s advertising technology perpetuates racial bias could undermine a black person’s chances in a competition, whether it’s for an award, a date, or a job.

The founders of new companies such as BlueVine, ZestFinance, and Affirm, which are using AI-driven analysis to approve loans and offer credit, say they are particularly attuned to the regulatory dangers of discriminatory lending.

But still, data practices vary. The algorithms that power Affirm, founded by PayPal cofounder Max Levchin, use social-media feeds to help establish a customer’s identity but not to determine an applicant’s ability to repay a loan. Douglas Merrill, founder of ZestFinance and former CIO of Google, won’t use social-media data at all. “It feels creepy to me,” he says.

Affirm and ZestFinance are both founded on the idea that by looking at tens of thousands of data points, machine-learning programs can expand the number of people deemed creditworthy. In other words, algorithms fed by so much diverse data will be less prone to discrimination than traditional human-driven lending based on a much more limited number of factors. Among the insights discovered by ZestFinance’s algorithms: income is not as good a predictor of creditworthiness as the combination of income, spending, and the cost of living in any given city. The algorithms also take note of people who fill out a loan application in all capital letters, whom the models have found to be worse credit prospects than those who fill it out in all lowercase letters.
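To give a rough sense of how such combined signals might feed a credit model, here is a minimal sketch in Python. The synthetic data, the feature names (income, spending, cost of living, an all-caps flag), and the logistic-regression setup are illustrative assumptions, not ZestFinance’s actual system.

```python
# Illustrative sketch only: a toy credit model that combines raw signals
# into an engineered feature (disposable income relative to local costs)
# plus a behavioral flag for all-caps applications. Data and features are
# synthetic assumptions, not any lender's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000

income = rng.normal(50_000, 15_000, n)         # annual income
spending = rng.normal(35_000, 10_000, n)       # annual spending
cost_of_living = rng.normal(30_000, 8_000, n)  # city-level cost of living
all_caps = rng.integers(0, 2, n)               # 1 if application typed in all caps

# Engineered feature: spending slack relative to local costs,
# rather than raw income alone.
slack = (income - spending) / cost_of_living

# Synthetic repayment outcomes, generated only so the sketch runs end to end.
logit = 2.0 * slack - 0.5 * all_caps - 0.5
repaid = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, slack, all_caps])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, repaid)

print("coefficients [income, slack, all_caps]:",
      model[-1].coef_.round(3))
```

In a toy setup like this, the coefficient on raw income tends to be small once the slack feature is present, which is one way a model could “discover” that income alone is a weaker predictor than income, spending, and cost of living combined.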

Could data with unrecognized biases, when fed into such systems, inadvertently turn them into discriminators? Fairness is one of the most important goals, says Merrill. To guard against discrimination, his company has built machine-learning tools to test its own results. But consumers, unable to unpack the complexities of these secret and multifaceted programs, will find it hard to know whether they have been treated fairly.
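As a rough illustration of what such internal testing might involve (a hypothetical sketch, not ZestFinance’s actual tooling), one basic check compares approval rates across groups and flags large gaps, along the lines of the “four-fifths” rule used in disparate-impact analysis.

```python
# Hypothetical disparate-impact check: compare approval rates across groups
# and flag ratios below the common four-fifths threshold. The decision log
# below is made up for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: approval rates differ by more than the four-fifths rule allows")
```

A check like this only surfaces disparities in outcomes; it cannot say why the model produced them, which is exactly the opacity that leaves consumers unable to judge whether they were treated fairly.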