An unusually inquisitive artificial-intelligence system developed by a team of researchers at NYU has learned to play a game similar to Battleship with striking skill.
In the simple game the researchers created, players seek to find their opponent’s ships hidden on a small grid of squares by asking a series of questions that can be answered with a single number or word. Their program figures out how to ask these questions as efficiently as possible.
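The setup can be sketched in a few lines of Python. The grid size, ship colors, and placement rule below are illustrative simplifications, not details from the paper; the point is that a "question" is just a small program that maps a hidden board to a one-word answer.

```python
import random

GRID = 3  # a small grid of squares (illustrative size)
SHIP_LENGTHS = {"blue": 2, "red": 3}  # hypothetical fleet

def random_board():
    """Place each ship horizontally on its own row (a simplified layout)."""
    board = {}
    rows = random.sample(range(GRID), len(SHIP_LENGTHS))
    for (color, length), row in zip(SHIP_LENGTHS.items(), rows):
        col = random.randrange(GRID - length + 1)
        board[color] = [(row, col + i) for i in range(length)]
    return board

# A question is a miniature program over the hidden board.
def ship_length(board, color):
    """'How long is the <color> ship?' -- answerable with a single number."""
    return len(board[color])

board = random_board()
print(ship_length(board, "blue"))  # -> 2 in this toy fleet
```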
Taking inspiration from cognitive psychology, and using a fundamentally different approach from most of today’s AIs, the system shows how machines may learn how to ask useful questions about the world. The program treats questions as miniature programs, allowing it to learn from just a few examples and to construct its own questions on the basis of what it has learned.
The game was developed by Brenden Lake, an assistant professor at NYU; Todd Gureckis, an associate professor; and Anselm Rothe, a graduate student. “There’s a tremendous gap between the human and the machine ability to ask questions when seeking information about the world,” Lake says. The researchers describe the work in a paper posted online.
The researchers had humans play their game and recorded the questions they asked. They then translated the questions into conceptual components. For example, the questions “How long is the blue ship?” and “Does the blue ship have four tiles?” concern the length of a target. The question “Do the blue and red ships touch?” concerns position. The researchers then encoded these questions using a simple programming language and built a probabilistic model to determine which questions should yield the most useful information. This methodology allowed the AI system to efficiently construct novel questions that helped it win the game.
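The core scoring idea — prefer the question whose answer is expected to shrink uncertainty about the board the most — can be sketched as a small expected-information-gain calculation. The hypothesis space and candidate questions below are toy stand-ins for the paper's probabilistic model, with a uniform prior for simplicity.

```python
import math
from collections import Counter

# Toy hypothesis space: each hypothesis is a possible hidden configuration,
# here just (blue_length, red_length), all equally likely a priori.
hypotheses = [(b, r) for b in (2, 3, 4) for r in (2, 3, 4)]

def entropy(weights):
    """Shannon entropy (in bits) of an unnormalized weight list."""
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w)

def expected_info_gain(question):
    """Prior entropy minus the expected entropy after hearing the answer."""
    prior = entropy([1] * len(hypotheses))
    answer_counts = Counter(question(h) for h in hypotheses)
    expected_posterior = sum(
        (count / len(hypotheses)) * entropy([1] * count)
        for count in answer_counts.values()
    )
    return prior - expected_posterior

# Two candidate questions, expressed as programs over hypotheses.
blue_length = lambda h: h[0]        # "How long is the blue ship?"
blue_is_four = lambda h: h[0] == 4  # "Does the blue ship have four tiles?"

print(expected_info_gain(blue_length))   # higher: splits 9 hypotheses into 3 equal groups
print(expected_info_gain(blue_is_four))  # lower: yes/no splits them 3 vs. 6
```

Under this metric the open-ended length question beats the yes/no version, which matches the intuition that a question with more informative possible answers narrows the search faster.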
Most machine-learning approaches involve feeding a computer huge quantities of example data and letting it generalize from there. The NYU team's method requires more hand-coding, but it is far more efficient and effective at discovering smart questions to pose. The system also builds questions in a more methodical way, and it can even produce questions that no human thought to ask.
The researchers are exploring how their technology might make chatbots and other dialogue systems more effective and less painful to use. With a little preprogramming, such a system might be able to help customers solve their problems more quickly by posing the right questions.
“Having dialogue systems that generate novel questions so that they can get more informative answers on the fly is going to make human-computer interaction more effortless and make these systems more useful and fun to use,” says Lake.
Remarkably, the game-playing program was able to construct "the ultimate question" for the Battleship-style game: asking an opponent to work through a series of mathematical steps, adding the length of one ship to 10 times the length of the next, and so on. Such a question would be difficult for a person to follow or answer correctly, but in theory the result could be used to back-calculate the entire board. "It was pretty interesting," says Lake.
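The trick behind such a compound question is positional encoding: weighting each ship's length by a different power of ten packs every length into a single number, which can then be unpacked digit by digit. A minimal sketch, assuming ships no longer than 9 tiles (the function names are illustrative):

```python
def ultimate_answer(lengths):
    """Encode all ship lengths as one number: l0 + 10*l1 + 100*l2 + ..."""
    return sum(length * 10 ** i for i, length in enumerate(lengths))

def decode(answer, num_ships):
    """Recover every ship's length from the single compound answer."""
    return [(answer // 10 ** i) % 10 for i in range(num_ships)]

print(ultimate_answer([2, 3, 4]))  # -> 432
print(decode(432, 3))              # -> [2, 3, 4]
```

One answer thus carries as much information as three separate length questions, which is exactly why the model rates it so highly even though no human would think to ask it.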
Sam Gershman, an assistant professor at Harvard University who develops approaches to AI inspired by cognitive neuroscience, says the NYU research provides insights into how humans think up good questions. “First, you need some form of compositionality in order to capture the bewildering variety of questions,” Gershman says. “Second, you need a set of criteria that weigh the relative strengths and weaknesses of a question.”
Gershman adds that humans seem to follow a strategy similar to the program's more successful approach, carefully weighing the complexity of their questions in order to use cognitive resources sparingly.
Ultimately, machines won’t become truly intelligent unless they begin to get curious about the world around them. That begins with asking probing questions.