AI Can Beat Us at Poker—Now Let’s See If It Can Work with Us
Researchers say we need to develop software capable of striking mutually beneficial relationships with humans.
Progress in artificial intelligence has long been measured by machines’ mastery of board games like chess, backgammon, and Go. Researchers are now taking on poker and computer games such as StarCraft.
Iyad Rahwan, a professor at MIT, respects those milestones but says the focus on beating humans in direct competition has led us to neglect other ways of measuring and advancing AI. He argues that as smart machines look set to become pervasive, more effort should be devoted to creating software that learns to coöperate with humans.
“This is the next important problem, because AIs don’t always have to replace us, they have to live with us,” says Rahwan. “Most human interaction is not zero sum—this was somehow a blind spot for ambitious AI projects.”
Rahwan has been trying to call attention to that blind spot with collaborators in the U.S., U.K., France, Australia, and the United Arab Emirates. In a recent study, they repurposed simple games that behavioral scientists use to study how humans coöperate (or don’t), testing whether algorithms could learn to work with people.
Those games included the Prisoner’s Dilemma, a staple of game theory research in which players cast as criminals must decide whether to betray one another. Although simple, the game can be used to analyze strategies in messy areas such as climate policy and advertising.
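To make the setup concrete, here is a minimal sketch of a single Prisoner’s Dilemma round in Python. The payoff values are the canonical ones from the game theory literature, not figures from the study:

```python
# A minimal sketch of one Prisoner's Dilemma round. The payoff values
# are the canonical ones from game theory texts; the study's own
# stakes may have differed.

# Payoffs as (my_points, their_points), indexed by (my_move, their_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual trust: both do well
    ("cooperate", "defect"):    (0, 5),  # the betrayed player gets nothing
    ("defect",    "cooperate"): (5, 0),  # betrayal pays, once
    ("defect",    "defect"):    (1, 1),  # mutual betrayal: both do poorly
}

def play_round(move_a, move_b):
    """Return the points earned by players A and B for one round."""
    return PAYOFFS[(move_a, move_b)]

# Defecting is the "safe" individual choice, yet both players earn more
# if they trust each other -- that tension is the dilemma.
print(play_round("cooperate", "cooperate"))  # (3, 3)
print(play_round("cooperate", "defect"))     # (0, 5)
```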
Initial results were disappointing, with coöperation between human and artificial players less common than between humans. That changed when the researchers gave both humans and their algorithms the opportunity to communicate in advance of a game using a menu of 19 phrases, including “Do as I say, or I’ll punish you,” “I’m changing my strategy,” and “Give me another chance.”
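The article doesn’t say how the phrases were tied to play, but a hypothetical sketch, assuming each turn pairs a planned action with at most one phrase from the fixed menu, might look like this (only the three quoted phrases appear; the pairing logic is invented for illustration):

```python
# A hypothetical sketch of a cheap-talk menu wired into an agent's turn.
# The three phrases are the ones quoted above; the real study used a
# fixed menu of 19, and its pairing of messages to strategies was more
# involved than this.

MESSAGE_MENU = [
    "Do as I say, or I'll punish you.",
    "I'm changing my strategy.",
    "Give me another chance.",
    # ...the actual menu contained 16 further phrases (omitted here)
]

def take_turn(planned_move, my_last_move, partner_defected_last):
    """Pick a pre-round announcement to accompany the planned action."""
    if my_last_move == "defect" and planned_move == "cooperate":
        message = "Give me another chance."           # ask forgiveness
    elif partner_defected_last:
        message = "Do as I say, or I'll punish you."  # threaten retaliation
    elif planned_move != my_last_move:
        message = "I'm changing my strategy."         # announce the switch
    else:
        message = None                                # stay silent this round
    return planned_move, message

# Example: after betraying a cooperative partner, signal contrition.
print(take_turn("cooperate", "defect", False))
# -> ('cooperate', 'Give me another chance.')
```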
“The instant the machine starts to talk there’s a completely different response from people,” says Jacob Crandall, an associate professor at Brigham Young University who was also involved in the work. “They had a hard time distinguishing humans from machines.” It takes two to coöperate, and the simple messages were enough for the machines to win humans over to working together.
Across the three different games tested, people ended up being roughly as likely to coöperate with a machine player as with another person. Overall, machine-machine pairs scored highest on average because they coöperated more reliably (and unlike human players, they never lied).
The algorithm behind those results computes a set of promising strategies for the game at hand in advance, then learns which one to use from the actions of its co-player. It isn’t likely to become the foundation of future human-robot relations, but it is intended to show how experiments can test coöperation, and to inspire further research into the idea, says Rahwan.
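That description matches a general pattern familiar from online learning: precompute a handful of candidate strategies (“experts”), then learn which one to follow from the co-player’s behavior. The sketch below shows that pattern with illustrative strategies and a simple weight update; it is not the researchers’ actual algorithm:

```python
# A simplified sketch of the pattern described above: precompute a small
# set of candidate strategies ("experts"), then learn which one to
# follow from how the co-player behaves. The strategies and the
# multiplicative-weights update are illustrative only.
import random

def tit_for_tat(history):
    # Mirror the partner's previous move; cooperate on the first round.
    return history[-1][1] if history else "cooperate"

def always_cooperate(history):
    return "cooperate"

def always_defect(history):
    return "defect"

EXPERTS = [tit_for_tat, always_cooperate, always_defect]

class ExpertSelector:
    def __init__(self, learning_rate=0.1):
        self.weights = [1.0] * len(EXPERTS)  # one weight per strategy
        self.lr = learning_rate
        self.current = 0

    def choose_move(self, history):
        # Each round, follow one expert, sampled in proportion to weight.
        self.current = random.choices(
            range(len(EXPERTS)), weights=self.weights)[0]
        return EXPERTS[self.current](history)

    def update(self, payoff, max_payoff=5):
        # Boost the followed expert in proportion to the points it earned,
        # so strategies that work against this co-player gain influence.
        self.weights[self.current] *= 1 + self.lr * payoff / max_payoff
```

Over repeated rounds, the selector drifts toward whichever precomputed strategy actually pays off against its particular partner, whether that partner is a human or another machine.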
Oren Etzioni, director of the Allen Institute for Artificial Intelligence, in Seattle, hopes that happens. “The future that we need to have is a future where we coöperate with machines in the workplace, so it makes sense to study the form of that coöperation,” he says.
Transitioning from simple behavioral-science games to more complex scenarios will require significant work, though, says Etzioni. Coöperation in complex situations would require software with a good mastery of language to communicate with other players, a challenge that software taking on Go or StarCraft doesn’t face. Still, researchers don’t have to give up on board games. Etzioni suggests that Risk or Diplomacy, in which players must strike alliances and bargains, could be good test beds for machines’ coöperation skills.