Intelligent Machines

Can This Man Make AI More Human?

One cognitive scientist thinks the leading approach to machine learning can be improved by ideas gleaned from studying children.

Like any proud father, Gary Marcus is only too happy to talk about the latest achievements of his two-year-old son. More unusually, he believes that the way his toddler learns and reasons may hold the key to making machines much more intelligent.

Sitting in the boardroom of a bustling Manhattan startup incubator, Marcus, a 45-year-old professor of psychology at New York University and the founder of a new company called Geometric Intelligence, describes an example of his boy’s ingenuity. From the backseat of the car, his son had seen a sign showing the number 11, and because he knew that other double-digit numbers had names like “thirty-three” and “seventy-seven,” he asked his father if the number on the sign was “onety-one.”

“He had inferred that there is a rule about how you put your numbers together,” Marcus explains with a smile. “Now, he had overgeneralized it, and he made a mistake, but it was a very sophisticated mistake.”
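
The inference itself is simple enough to write down. Here is a minimal sketch of the rule his son seems to have induced (illustrative only, not anything from Marcus or his company): name the tens digit, append “ty,” then name the ones digit. It works for the regular cases and fails, in exactly the toddler’s way, on the exceptions English speakers memorize.

```python
# A purely illustrative sketch of the toddler's inferred rule for naming
# two-digit numbers: tens digit + "ty" + ones digit.
DIGITS = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
          6: "six", 7: "seven", 8: "eight", 9: "nine"}

def toddler_number_name(n: int) -> str:
    """Overgeneralized rule for numbers like 77 or 11 (both digits nonzero)."""
    tens, ones = divmod(n, 10)
    return f"{DIGITS[tens]}ty-{DIGITS[ones]}"

print(toddler_number_name(77))  # "seventy-seven" -- the rule happens to work
print(toddler_number_name(11))  # "onety-one"     -- a sophisticated mistake
```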

Marcus has a very different perspective from many of the computer scientists and mathematicians now at the forefront of artificial intelligence. He has spent decades studying the way the human mind works and how children learn new skills such as language and musicality. This has led him to believe that if researchers want to create truly sophisticated artificial intelligence—something that readily learns about the world—they must take cues from the way toddlers pick up new concepts and generalize. And that’s one of the big inspirations for his new company, which he’s running while on a year’s leave from NYU. With its radical approach to machine learning, Geometric Intelligence aims to create algorithms that let machines learn in new and better ways.

Nowadays almost everyone else trying to commercialize AI, from Google to Baidu, is focused on algorithms that roughly model the way neurons and synapses in the brain change as they are exposed to new information and experiences. This approach, known as deep learning, has produced some astonishing results in recent years, especially as more data and more powerful computer hardware have allowed the underlying calculations to grow in scale. Deep-learning methods have matched—or even surpassed—human accuracy in recognizing faces in images or identifying spoken words in audio recordings. Google, Facebook, and other big companies are applying the approach to just about any task in which it is useful to spot a pattern in huge amounts of data, such as refining search results or teaching computers how to hold a conversation (see “Teaching Machines to Understand Us”).

But is deep learning based on a model of the brain that is too simple? Geometric Intelligence—indeed, Marcus himself—is betting that computer scientists are missing a huge opportunity by ignoring many subtleties in the way the human mind works. In his writing, public appearances, and comments to the press, Marcus can be a harsh critic of the enthusiasm for deep learning. But despite his occasionally abrasive approach, he does offer a valuable counterpoint. Among other things, he points out that these systems need to be fed many thousands of examples in order to learn something. Researchers trying to develop machines capable of conversing naturally with people are doing it by feeding their systems countless transcripts of previous conversations. This might well produce something capable of simple conversation, but cognitive science suggests it is not how the human mind acquires language.

In contrast, a two-year-old’s ability to learn by extrapolating and generalizing—albeit imperfectly—is far more sophisticated. Clearly the brain is capable of more than just recognizing patterns in large amounts of data: it has a way to acquire deeper abstractions from relatively little data. Giving machines even a basic ability to learn such abstractions quickly would be an important achievement. A self-driving car might not need to travel millions of miles in order to learn how to cope with new road conditions. Or a robot could identify and fetch a bottle of pills it has been shown only once or twice. In other words, these machines would think and act a bit more the way we do.

With slightly unkempt hair and a couple of days of stubble, Marcus seems well suited to his new role as an entrepreneur. In his company’s space, a handful of programmers work away at expensive computer workstations running powerful graphics processors. At one point, when Marcus wants to illustrate a point about how the brain works, he reaches for what he thinks is a whiteboard marker. It turns out to be a misplaced dart from a Nerf gun.

Marcus talks rapidly when excited, and he has a quick sense of humor and a mischievous grin. He refuses to explain exactly what products and applications his company is working on, for fear that a big company like Google might gain an advantage by learning the crucial insights behind them. But he says it has developed algorithms that can learn from relatively small amounts of data and can even extrapolate and generalize, in a crude way, from the information they are fed. Marcus says that his team has tested these algorithms using tasks at which deep-learning approaches excel, and they have proved significantly better in several cases. “We know something about what the properties of the brain should be,” he explains. “And we’re trying, in some sense, to reverse-engineer from those properties.”

Boy wonder

Marcus, who was born in Baltimore, became fascinated by the mind in high school after reading The Mind’s I, a collection of essays on consciousness edited by the cognitive scientist Douglas Hofstadter and the philosopher Daniel Dennett, as well as Hofstadter’s metaphorical book on minds and machines, Gödel, Escher, Bach. Around the same time, he wrote a computer program designed to translate Latin into English. The difficulty of the task made him realize that re-creating intelligence in machines would surely require a much greater understanding of the phenomena at work inside the human mind.

Marcus’s Latin-to-English program wasn’t particularly practical, but it helped convince Hampshire College to let him embark on an undergraduate degree a couple of years early. Students at the small liberal-arts school in Amherst, Massachusetts, are encouraged to design their own degree programs. Marcus devoted himself to studying the puzzle of human cognition.

The mid-1980s were an interesting time for the field of AI. It was becoming split between those who sought to produce intelligent machines by copying the basic biology of the brain and those who aimed to mimic higher cognitive functions using conventional computers and software. Early work in AI was based on the latter approach, using programming languages built to handle logic and symbolic representation. Birds are the classic example. The fact that birds can fly could be encoded as one piece of knowledge. Then, if a computer were told that a starling was a bird, it would deduce that starlings must be able to fly. Several big projects were launched with the aim of encoding human knowledge in vast databases, in hopes that some sort of complex intelligence might eventually emerge.
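
As a rough sketch of that symbolic style (my illustration, not any particular system from the era), knowledge lives in explicit facts and rules, and new knowledge is derived by mechanically applying the rules:

```python
# Illustrative only: a tiny forward-chaining inference engine in the
# symbolic style. Facts are (category, individual) pairs; rules say that
# members of a category have a property.
facts = {("bird", "starling")}        # "a starling is a bird"
rules = [("bird", "can_fly")]         # "birds can fly"

def deduce(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for category, prop in rules:
            for cat, individual in list(derived):
                if cat == category and (prop, individual) not in derived:
                    derived.add((prop, individual))
                    changed = True
    return derived

print(("can_fly", "starling") in deduce(facts, rules))  # True
```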

But while some progress was made, the approach proved increasingly complex and unwieldy. Rules often have exceptions; not all birds can fly. And while penguins are entirely earthbound, a bird in a cage and one with a broken wing cannot fly for very different reasons. It proved impossibly complicated to encode all the exceptions to such rules. People seem to learn such exceptions quickly, but the computers balked. (Of course, exceptions, including “eleven” rather than “onety-one,” can be confusing for humans too.)
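
Extending the sketch above makes the problem visible: the clean one-line rule decays into a hand-maintained list of special cases, and the list never really ends. (The conditions below are illustrative.)

```python
# Illustrative only: each exception to "birds can fly" must be anticipated
# and encoded by hand, and the exceptions fail for different reasons.
def can_fly(bird: dict) -> bool:
    if bird.get("species") in {"penguin", "ostrich", "kiwi"}:
        return False                  # flightless species
    if bird.get("caged"):
        return False                  # physically prevented from flying
    if bird.get("broken_wing"):
        return False                  # injured
    # ...chicks too young to fly, molting birds, oiled seabirds, ...
    return True                       # the default rule

print(can_fly({"species": "starling"}))                 # True
print(can_fly({"species": "starling", "caged": True}))  # False
```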


Around the time Marcus was preparing to enter Hampshire College, a group of psychologists came up with an approach that threatened to turn artificial intelligence upside down. Back in the 1940s, Donald Hebb had presented a theory of how neurons in the brain might learn to recognize an input. He proposed that the repeated firing of neurons strengthens their connections to each other, thereby increasing the likelihood that they will all fire again when fed the same input. Some researchers built computers with a similar design. But the abilities of these so-called neural networks were limited until 1986, when a group of researchers discovered ways to increase their learning power. These researchers also showed how neural networks could be used to do various things, from recognizing patterns in visual data to learning the past tense of English verbs. Train these networks on enough examples, and they form the connections needed to perform such tasks.
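
Hebb’s rule itself is strikingly compact. In the minimal sketch below (my illustration; the learning rate and pattern are arbitrary), every co-activation of an input and an output adds a little strength to the connection between them, so a repeatedly seen pattern comes to drive the output:

```python
import numpy as np

# Illustrative Hebbian update: "neurons that fire together wire together."
learning_rate = 0.1
weights = np.zeros(4)                 # connections from 4 inputs to 1 output

x = np.array([1.0, 0.0, 1.0, 0.0])   # an input pattern seen repeatedly
y = 1.0                               # the output neuron fires along with it

for _ in range(10):                   # repeated co-activation...
    weights += learning_rate * x * y  # ...strengthens the active connections

print(weights)        # [1. 0. 1. 0.]
print(weights @ x)    # 2.0 -- the familiar pattern now excites the output
```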

Calling their approach “connectionism,” the researchers argued that sufficiently large neural networks could re-create intelligence. Although their ideas didn’t take over immediately, they eventually led to today’s era of deep learning.

Just as connectionism was taking off, Marcus was deciding where to do his graduate studies, and he attended a lecture by the renowned cognitive scientist Steven Pinker, then a professor at MIT. Pinker was talking about the way children learn and use verbs, and he was arguing, contrary to a pure connectionist perspective, that they do not seem to acquire the past tense of verbs purely by memorizing examples and generalizing to similar ones. Pinker showed evidence that children quickly detect rules of language and then generalize. He and others believe, essentially, that evolution has shaped the neural networks found in the human brain to provide the tools necessary for more sophisticated intelligence.

Marcus joined Pinker’s lab at MIT at 19, and Pinker remembers him as a precocious student. “I assigned to him a project analyzing a simple yes-no hypothesis on a small data set of the recorded speech from three children,” he said in an e-mail. “A few days later he had performed an exhaustive analysis on the speech of 25 children which tested a half-dozen hypotheses and became the basis for a major research monograph.”

As a graduate student, Marcus gathered further evidence to support Pinker’s ideas about learning and added insights of his own. He pioneered the computerized analysis of large quantities of cognitive research data, studying thousands of recordings of children’s speech to find instances where they made errors like “breaked” and “goed” instead of “broke” and “went.” This seemed to confirm that children grasp the rules of grammar and then apply them to new words, while learning the exceptions to these rules by rote.
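
A toy version of that kind of search (a sketch of mine, far cruder than the analysis behind the monograph) simply flags past-tense forms where the regular “-ed” rule has been applied to an irregular verb:

```python
# Toy illustration: flag overregularized past tenses like "breaked"/"goed".
# Real child-language transcripts and English morphology are much messier.
IRREGULAR_PAST = {"break": "broke", "go": "went", "eat": "ate", "fall": "fell"}
OVERREGULARIZED = {stem + "ed": stem for stem in IRREGULAR_PAST}

def find_overregularizations(utterance: str):
    """Return (child's form, stem, adult form) for each rule-overapplied verb."""
    hits = []
    for word in utterance.lower().split():
        word = word.strip(".,!?")
        if word in OVERREGULARIZED:
            stem = OVERREGULARIZED[word]
            hits.append((word, stem, IRREGULAR_PAST[stem]))
    return hits

print(find_overregularizations("I breaked the tower and then we goed home."))
# [('breaked', 'break', 'broke'), ('goed', 'go', 'went')]
```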

On the basis of this research, Marcus began questioning the connectionist belief that intelligence would essentially emerge from larger neural networks, and he started focusing on the limitations and quirks of deep learning. A deep-learning system could be trained to recognize particular species of birds in images or video clips, and to tell the difference between ones that can fly and ones that can’t. But it would need to see millions of sample images in order to do this, and it wouldn’t know anything about why a bird isn’t able to fly.

Marcus’s work with children, in fact, led him to an important conclusion. In a 2001 book called The Algebraic Mind, he argued that the developing human mind learns both from examples and by generating rules from what it has learned. In other words, the brain uses something like a deep-learning system for certain tasks, but it also stores and manipulates rules about how the world works so that it can draw useful conclusions from just a few experiences.

This doesn’t exactly mean that Geometric Intelligence is trying to mimic the way things happen in the brain. “In an ideal world, we would know how kids do it,” Marcus says. “We would know what brain circuits are involved, the computations they are doing. But the neuroscience remains a mystery.” Rather, he hints that the company is using a grab bag of techniques, including ones “compatible” with deep learning, to try to re-create human learning.

Common sense

The work at Geometric Intelligence is significant because blending new ideas from cognitive science and neuroscience will be important to the future of artificial intelligence. Still, after meeting with Marcus, I felt a bit like a toddler trying to make sense of some unfamiliar digits. How will all this come together? I needed one of Marcus’s collaborators to show me another piece in the puzzle of what the company is developing.

Zoubin Ghahramani, a professor of information engineering at the University of Cambridge in the U.K., is a cofounder of Geometric Intelligence. Ghahramani grew up in the Soviet Union and Iran before moving to Spain and the United States, and although he is precisely the same age as Marcus, he arrived at MIT a year later. But because the pair shared a birthday, they ended up throwing parties and socializing together.

Ghahramani is focused on using probability to make machines smarter. The mathematics behind that is complicated, but the reason is simple: probability provides a way to cope with uncertainty or incomplete information. Flightless birds may, once again, help illustrate this. A probability-based system can assign a high likelihood to the concept that a bird is capable of flight. Then, when it learns that an ostrich is a bird, it will assume that the ostrich can most probably fly. But other information, such as the fact that an adult ostrich usually weighs more than 200 pounds, could change this assumption, reducing the probability that an ostrich can fly to near zero. This flexible approach can imbue machines with something resembling a crude form of common sense, a quality that is fundamentally important to human intelligence.
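
A back-of-the-envelope version of that reasoning is just Bayes’ rule. The numbers below are invented for illustration, but they show how one strong piece of evidence (an adult weight above 200 pounds) collapses the initial assumption:

```python
# Illustrative Bayes'-rule update; all probabilities are made-up values.
p_fly = 0.9                     # prior: most birds fly

p_heavy_if_fly = 0.001          # flying birds are almost never this heavy
p_heavy_if_not = 0.3            # flightless birds often are

# P(flies | heavy) = P(heavy | flies) * P(flies) / P(heavy)
numerator = p_heavy_if_fly * p_fly
evidence = numerator + p_heavy_if_not * (1 - p_fly)
posterior = numerator / evidence

print(f"P(flies) for an ostrich: {p_fly} -> {posterior:.3f}")
# P(flies) for an ostrich: 0.9 -> 0.029
```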

Speaking via Skype from his office in Cambridge, England, Ghahramani suggests one particular application that he and Marcus have their eye on: training robots to handle complex environments. In robotics research, “having experiences is expensive,” he says. “If you want to get a robot to learn to walk, or an autonomous vehicle to learn to drive, you can’t present it with a data set of a million examples of it falling over and breaking or having accidents—that just doesn’t work.”

Given that probabilistic algorithms and other technology in the works at Geometric Intelligence would be compatible with deep learning, it is possible that a company like Google or Facebook will eventually acquire the firm and add it to its overall AI portfolio. And despite Marcus’s criticism of connectionism and deep-learning fever, I have a hunch that he would be quite satisfied with such an outcome.

Even if that does happen, it will be significant if Marcus can show that the most miraculous learning system we know—the human mind—is key to the future of artificial intelligence. Marcus gives me another example of his son’s cleverness. “My wife asked him, ‘Which of your animal friends will come to school today?’” Marcus says. “And he says, ‘Big Bunny, because Bear and Platypus are eating.’ Then my wife goes back into his room and, sure enough, those toys are on a chair ‘eating.’”

Marcus marvels that his two-year-old can reason about rules concerning human behavior—realizing that you’re either going to school or doing something else—and construct a completely new sentence based on his growing understanding of the way language works. After a pause, and a smile, he adds: “Well, you show me the AI system that can do that.”