Intelligent Machines

A.I. Reboots

“Artificial intelligence” used to mean robots that think like people; now it means software for rejecting junk e-mail. Low expectations could yield better applications, sooner.

It was the spring of 2000. The scene was a demonstration of an advanced artificial-intelligence project for the U.S. Department of Defense; the participants were a programmer, a screen displaying an elaborate windowed interface and an automated “intelligence”-a software application animating the display. The subject, as the programmer typed on his keyboard, was anthrax.

Instantly the machine responded: “Do you mean Anthrax (the heavy-metal band), anthrax (the bacterium) or anthrax (the disease)?”

“The bacterium,” was the typed answer, followed by the instruction, “Comment on its toxicity to people.”

“I assume you mean people (Homo sapiens),” the system responded, reasoning, as it informed its programmer, that asking about People magazine “would not make sense.”

Through dozens of similar commonsensical exchanges, the system gradually absorbed all that had been published in the standard bioweapons literature about a bacterium then known chiefly as the cause of a livestock ailment. When the programmer’s input was ambiguous, the system requested clarification. Prompted to understand that the bacterium anthrax somehow fit into the higher ontology of biological threats, it issued queries aimed at filling out its knowledge within that broader framework, assembling long lists of biological agents, gauging their toxicity and strategies of use and counteruse. In the process, as its proud creators watched, the system came tantalizingly close to that crossover state in which it knew what it did not know and sought, without being prompted, to fill those gaps on its own.

The point of this exercise was not to teach or learn more about anthrax; the day when the dread bacterium would start showing up in the mail was still 18 months in the future. Instead, it was to demonstrate the capabilities of one of the most promising and ambitious A.I. projects ever conceived, a high-performance knowledge base known as Cyc (pronounced “psych”). Funded jointly by private corporations, individual investors and the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, Cyc represents the culmination of an 18-year effort to instill common sense into a computer program. Over that time its creator, the computer scientist Douglas B. Lenat, and his cadres of programmers have infused Cyc with 1.37 million assertions-including names, abstract concepts, descriptions and root words. They’ve also given Cyc a common-sense inference engine that allows it, for example, to distinguish among roughly 30 definitions of the word “in” (being in politics is different from being in a bus).
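
What that kind of disambiguation looks like is easier to see in miniature. The toy sketch below is not Cyc’s actual CycL representation or inference machinery; its terms and type constraints are invented, and it only illustrates how a knowledge base can use what it already knows about two things to pick the right sense of “in.”

```python
# A toy illustration, not Cyc's actual CycL representation or inference engine:
# each sense of "in" carries a type constraint, and the system picks the sense
# whose constraint matches what it already knows about the two arguments.

KNOWN_TYPES = {                     # hypothetical mini knowledge base
    "Senator Smith": "Person",
    "politics": "ProfessionalField",
    "the 5:40 bus": "Vehicle",
}

SENSES_OF_IN = {                    # hypothetical senses and their type constraints
    "occupationOf":          ("Person", "ProfessionalField"),
    "physicallyContainedIn": ("Person", "Vehicle"),
}

def disambiguate_in(subject, obj):
    """Return the sense of 'in' consistent with what the KB knows about both terms."""
    subj_type, obj_type = KNOWN_TYPES[subject], KNOWN_TYPES[obj]
    for sense, (want_subj, want_obj) in SENSES_OF_IN.items():
        if (subj_type, obj_type) == (want_subj, want_obj):
            return sense
    return "unknown-sense"

print(disambiguate_in("Senator Smith", "politics"))      # occupationOf
print(disambiguate_in("Senator Smith", "the 5:40 bus"))  # physicallyContainedIn
```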

Cyc and its rival knowledge bases are among several projects that have recently restored a sense of intellectual accomplishment to A.I.-a field that once inspired dreams of sentient computers like 2001: A Space Odyssey’s HAL 9000 and laid claim to the secret of human intelligence, only to be forced to back off from its ambitions after years of experimental frustrations. Indeed, there is a palpable sense among A.I.’s faithful-themselves survivors of a long, cold research winter-that their science is on the verge of new breakthroughs. “I believe that in the next two years things will be dramatically changing,” says Lenat.

It may be too early to declare that a science with such a long history of fads and fashions is experiencing a new springtime, but a greater number of useful applications are being developed now than at any time in A.I.’s more than 50-year history. These include not only technologies to sort and retrieve the vast quantity of information embodied in libraries and databases, so that the unruly jungle of human knowledge can be tamed, but improvements in system interfaces that allow humans and computers to communicate faster and more directly with each other-through, for instance, natural language, gesture, or facial expression. And not only are artificial-intelligence-driven devices venturing into places that might be unsafe for humans-one fleet of experimental robots with advanced A.I.-powered sensors assisted the search for victims in the World Trade Center wreckage last September-they’re showing up in the most mundane of all environments, the office. Commercial software soon to reach the market boasts “smart” features that employ A.I.-based Bayesian probability models to prioritize e-mails, phone messages and appointments according to a user’s known habits and (presumed) desires.

These and other projects are the talk of artificial-intelligence labs around the United States. What one does not hear much about anymore, however, is the traditional goal of understanding and replicating human intelligence.

“Absolutely none of my work is based on a desire to understand how human cognition works,” says Lenat. “I don’t understand, and I don’t care to understand. It doesn’t matter to me how people think; the important thing is what we know, not how do we know it.”

One might call this trend the “new” A.I., or perhaps the “new new new” A.I., for in the last half-century the field has redefined itself too many times to count. The focus of artificial intelligence today is no longer on psychology but on goals shared by the rest of computer science: the development of systems to augment human abilities. “I always thought the field would be healthier if it could get rid of this thing about consciousness,” says Philip E. Agre, an artificial-intelligence researcher at the University of California, Los Angeles. “It’s what gets its proponents to overpromise.” It is the scaling back of its promises, oddly enough, that has finally enabled A.I. to start scoring significant successes.

Brilliance Proves Brittle

To a great extent, artificial-intelligence researchers had no choice but to exchange their dreams of understanding intelligence for a more utilitarian focus on real-world applications. “People became frustrated because so little progress was being made on the scientific questions,” says David L. Waltz, an artificial-intelligence researcher who is president of the NEC Research Institute in Princeton, NJ. “Also, people started expecting to see something useful come out of A.I.” And “useful” no longer meant “conscious.”

For example, the Turing test-a traditional trapping of A.I. based on British mathematician Alan Turing’s argument that to be judged truly intelligent a machine must fool a neutral observer into believing it is human-came to be seen by many researchers as “a red herring,” says Lenat. There’s no reason a smart machine must mimic a human being by sounding like a person, he argues, any more than an airplane needs to mimic a bird by flapping its wings.

Of course, the idea that artificial intelligence may be on the verge of fulfilling its potential is something of a chestnut: A.I.’s 50-year history is nothing if not a chronicle of lavish promises and dashed expectations. In 1957, when Herbert Simon of Carnegie Tech (now Carnegie Mellon University) and colleague Allen Newell unveiled Logic Theorist-a program that automatically derived logical theorems, such as those in Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, from given axioms-Simon asserted extravagantly that “there are now in the world machines that think, that learn and that create.” Within 10 years, he continued, a computer would beat a grandmaster at chess, prove an “important new mathematical theorem” and write music of “considerable aesthetic value.”

“This,” as the robotics pioneer Hans Moravec would write in 1988, “was an understandable miscalculation.” By the mid-1960s, students of such artificial-intelligence patriarchs as John McCarthy of Stanford University and Marvin Minsky of MIT were producing programs that played chess and checkers and managed rudimentary math; but they always fell well short of grandmaster caliber. Expectations for the field continued to diminish, so much so that the period from the mid-1970s to the mid-1980s became known as the “A.I. winter.” The best expert systems, which tried to replicate the decision-making of human experts in narrow fields, could outperform humans at certain tasks, like the solving of simple algebraic problems, or the diagnosis of diseases like meningitis (where the number of possible causes is small). But the moment they moved outside their regions of expertise they tended to go seriously, even dangerously, wrong. A medical program adept at diagnosing human infectious diseases, for example, might conclude that a tree losing its leaves had leprosy.

Even in solving the classic problems there were disappointments. The IBM system Deep Blue finally won Simon’s 40-year-old wager by defeating chess grandmaster Garry Kasparov in 1997, but not in the way Simon had envisioned. “The earliest chess programs sought to duplicate the strategies of grandmasters through pattern recognition, but it turned out that the successful programs relied more on brute force,” says David G. Stork, chief scientist at Ricoh Innovations, a unit of the Japanese electronics firm, and the editor of HAL’s Legacy, a 1996 collection of essays assessing where the field stood in relation to that paradigmatic, if fictional, intelligent machine. Although Deep Blue did rely for much of its power on improved algorithms that replicated grandmaster-style pattern recognition, Stork argues that the system “was evaluating 200 million board positions per second, and that’s a very un-humanlike method.”
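
The contrast Stork draws comes down to exhaustive game-tree search. The sketch below is a bare-bones minimax routine run over an invented toy game, nothing like Deep Blue’s parallel, hardware-assisted search, but it shows why such programs end up scoring enormous numbers of positions rather than recognizing a handful of patterns.

```python
# A minimal sketch of brute-force game-tree search (plain minimax) on a toy game;
# Deep Blue's actual search was far more elaborate, but the principle of scoring
# every reachable position to a fixed depth is the same.

def minimax(position, depth, maximizing, moves_fn, evaluate_fn):
    """Score a position by exhaustively searching `depth` plies ahead."""
    moves = moves_fn(position)
    if depth == 0 or not moves:
        return evaluate_fn(position)            # static evaluation of the board
    scores = [minimax(m, depth - 1, not maximizing, moves_fn, evaluate_fn)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy game tree: positions are labels, moves are edges, leaves have fixed values.
TREE = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_VALUES = {"a1": 3, "a2": -1, "b1": 5, "b2": 2}

best = minimax("start", 2, True,
               moves_fn=lambda p: TREE.get(p, []),
               evaluate_fn=lambda p: LEAF_VALUES.get(p, 0))
print(best)   # 2: the maximizer picks "b", expecting the minimizer to reply "b2"
```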

Many A.I. researchers today argue that any effort to replace humans with computers is doomed. For one thing, it is a much harder task than many pioneers anticipated, and for another there is scarcely any market for systems that make humans obsolete. “For revenue-generating applications today, replacing the human is not the goal,” says Patrick H. Winston, an MIT computer scientist and cofounder of Ascent Technology, a private company based in Cambridge, MA, that develops artificial-intelligence applications. “We don’t try to replace human intelligence, but complement it.”

Commonsense Solutions

“What we want to do is work toward things like cures for human diseases, immortality, the end of war,” Doug Lenat is saying. “These problems are too huge for us to tackle today. The only way is to get smarter as a species-through evolution or genetic engineering, or through A.I.”

We’re in a conference room at Cycorp, in a nondescript brick building nestled within an Austin, TX, industrial park. Here, teams of programmers, philosophers and other learned intellectuals are painstakingly inputting concepts and assertions into Cyc in a Socratic process similar to that of the anthrax dialogue above. Surprisingly, despite the conversational nature of the interaction, the staff seems to avoid the layman’s tendency to anthropomorphize the system.

“We don’t personalize Cyc,” says Charles Klein, a philosophy PhD from the University of Virginia who is one of Cycorp’s “ontologists.” “We’re pleased to see it computing commonsense outputs from abstract inputs, but we feel admiration toward it rather than warmth.”

That’s a mindset they clearly absorb from Lenat, a burly man of 51 whose reputation derives from several programming breakthroughs in the field of heuristics, which concerns rules of thumb for problem-solving-procedures “for gathering evidence, making hypotheses and judging the interestingness” of a result, as Lenat explained later. In 1976 he earned his Stanford doctorate with Automated Mathematician, or AM, a program designed to “discover” new mathematical theorems by building on an initial store of 78 basic concepts from set theory and 243 of Lenat’s heuristic rules. AM ranged throughout the far reaches of mathematics before coming to a sudden halt, as though afflicted with intellectual paralysis. As it happened, AM had been equipped largely with heuristics from finite-set theory; as its discoveries edged into number theory, for which it had no heuristics, it eventually ran out of discoveries “interesting” enough to pursue, as ranked by its internal scoring system.
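
AM’s agenda of candidate concepts can be caricatured in a few lines. The sketch below is not Lenat’s Lisp implementation; its heuristics, scores and threshold are invented, but it shows how exploration stalls once nothing on the agenda scores as “interesting” enough to pursue.

```python
# A schematic sketch, not AM's actual code: candidates are scored by whatever
# heuristics the program has, and discovery stops when nothing scores highly.

def interestingness(concept, heuristics):
    """Average the heuristics' scores; domains they don't cover simply score low."""
    scores = [h(concept) for h in heuristics]
    return sum(scores) / len(scores)

def run_agenda(candidates, heuristics, threshold=0.5):
    """Pursue candidates in order of interest until the rest fall below threshold."""
    agenda = sorted(candidates, key=lambda c: interestingness(c, heuristics), reverse=True)
    discoveries = []
    for concept in agenda:
        if interestingness(concept, heuristics) < threshold:
            break                                # nothing interesting left to pursue
        discoveries.append(concept["name"])
    return discoveries

# Hypothetical heuristics that only know about set theory, mirroring how AM
# stalled once its discoveries drifted into number theory.
set_theory = lambda c: 0.9 if c["domain"] == "set theory" else 0.1
symmetry   = lambda c: 0.8 if c.get("symmetric") else 0.2

candidates = [
    {"name": "union is commutative", "domain": "set theory", "symmetric": True},
    {"name": "primes thin out",      "domain": "number theory"},
]
print(run_agenda(candidates, [set_theory, symmetry]))   # ['union is commutative']
```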

AM was followed by Eurisko (the present tense of the Greek eureka, and root of the word heuristic), which improved on Automated Mathematician by adding the ability to discover not only new concepts but new heuristics. At the 1981 Traveller Trillion Credit Squadron tournament, a sort of intellectuals’ war game, Eurisko defeated all comers by outmaneuvering its rivals’ lumbering battleships with a fleet of agile little spacecraft no one else had envisioned. Within two years the organizers were threatening to cancel the tournament if Lenat entered again. Taking the cue and content with his rank of intergalactic admiral, he began searching for a new challenge.

The task he chose was nothing less than to end A.I.’s long winter by overcoming the limitations of expert systems. The reason a trained geologist is easier for a computer system to replicate than a six-year-old child is not a secret: it’s because the computer lacks the child’s common sense-that collection of intuitive facts about the world that are hard to reduce to logical principles. In other words, it was one thing to infuse a computer with data about global oil production or meningitis, but quite another to teach it all the millions of concepts that humans absorb through daily life-for example, that red is not pink or that rain will moisten a person’s skin but not his heart. “It was essentially like assembling an encyclopedia, so most people spent their time talking about it, rather than doing it,” Lenat says.

And so Cyc was born. Lenat abandoned a tenure-track position at Stanford to launch Cyc under the aegis of the Microelectronics and Computer Technology Corporation, an Austin-based research consortium. Now, 18 years later, Cyc ranks as by far the most tenacious artificial-intelligence project in history and one far enough advanced, finally, to have generated several marketable applications. Among these is CycSecure, a program to be released this year that combines a huge database on computer network vulnerabilities with assumptions about hacker activities to identify security flaws in a customer’s network before they can be exploited by outsiders. Lenat expects Cyc’s common-sense knowledge base eventually to underpin a wide range of search engines and data-mining tools, providing the sort of filter that humans employ instinctively to discard useless or contradictory information. If you lived in New York and you queried Cyc about health clubs, for example, it would use what it knows about you to find information about clubs near your home or office, screening out those in Boston or Bangor.
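
The health-club example amounts to filtering search results against facts the system holds about the user. The sketch below uses invented facts and a made-up helper function, not Cyc’s actual interface, to illustrate the idea.

```python
# A hypothetical sketch of knowledge-based filtering; the facts, results and
# helper names are invented for illustration and are not Cyc's actual API.

USER_FACTS = {"livesIn": "New York", "worksIn": "New York"}

SEARCH_RESULTS = [
    {"name": "Midtown Fitness",       "city": "New York"},
    {"name": "Back Bay Gym",          "city": "Boston"},
    {"name": "Penobscot Health Club", "city": "Bangor"},
]

def relevant_to_user(result, user_facts):
    """Keep only results located where the user is known to live or work."""
    return result["city"] in (user_facts["livesIn"], user_facts["worksIn"])

print([r["name"] for r in SEARCH_RESULTS if relevant_to_user(r, USER_FACTS)])
# ['Midtown Fitness']
```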

Numerous other promising applications of the new A.I.-such as advanced robotics and the “Semantic Web,” a sophisticated way of tagging information on Web pages so that it can be understood by computers as well as human users (see “A Smarter Web,” TR November 2001)-share Lenat’s real-world focus and add to the field’s fresh momentum. Searching the World Trade Center wreckage, for example, provided a telling test for the work of the Center for Robot-Assisted Search and Rescue in Littleton, CO, a nonprofit organization founded by West Point graduate John Blitch, who believes that small, agile robots can greatly aid search-and-rescue missions where conditions remain too perilous for exclusively human operations. Having assembled for a DARPA project a herd of about a dozen robots-with lights, video cameras and tanklike treads mounted on bodies less than 30 centimeters wide-he brought them to New York just after September 11. Over the next week Blitch deployed the robots on five forays into the wreckage, during which their ability to combine data arriving from multiple sensors helped find the bodies of five buried victims.

Sending Clippy Back to School

The deployment of A.I. in applications like robotic search and rescue is at an early stage, but that’s not true on other fronts. One of the busiest artificial-intelligence labs today is at Microsoft Research, where almost everything is aimed at conjuring up real-world applications.

Here several teams under the direction of Eric Horvitz, senior researcher and manager of the Adaptive Systems and Interaction group, are working to improve the embedded functions of Microsoft products. Several of the group’s A.I.-related advances have found their way into Windows XP, the latest iteration of Microsoft’s flagship operating system, including a natural-language search assistant called Web Companion and “smart tags,” a feature that automatically turns certain words and phrases into clickable Web links and entices readers to explore related sites.

To demonstrate where things are heading, Horvitz fires up the latest in “smart” office platforms. It’s a system that analyzes a user’s e-mails, phone calls (wireless and land line), Web pages, news clips, stock quotes-all the free-floating informational bric-a-brac of a busy personal and professional lifestyle-and assigns every piece a priority based on the user’s preferences and observed behavior. As Horvitz describes the system, it can perform a linguistic analysis of a message text, judge the sender-recipient relationship by examining an organizational chart and recall the urgency of the recipient’s responses to previous messages from the same sender. To this it might add information gathered by watching the user by video camera or scrutinizing his or her calendar. At the system’s heart is a Bayesian statistical model-capable of evaluating hundreds of user-related factors linked by probabilities, causes and effects in a vast web of contingent outcomes-that infers the likelihood that a given decision on the software’s part will lead to the user’s desired outcome. The ultimate goal is to judge when the user can safely be interrupted, with what kind of message, and via which device.
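
In outline, that reasoning can be sketched as a small Bayesian calculation. The factors, probabilities and function names below are invented for illustration and this is not Microsoft’s model; it only shows how evidence about a message might be weighed against the cost of interrupting the user.

```python
# A minimal sketch, with invented factors and numbers, of Bayesian message
# triage: update the chance a message is urgent, then compare it with the
# disruption an interruption would cause in the user's current situation.

PRIOR_URGENT = 0.1                       # baseline chance a message is urgent

# Likelihood ratios P(factor | urgent) / P(factor | not urgent), hypothetical values.
LIKELIHOOD_RATIOS = {
    "sender_is_manager": 6.0,
    "subject_mentions_deadline": 4.0,
    "sent_after_hours": 0.5,
}

COST_OF_INTERRUPTION = {"in_meeting": 0.9, "at_desk": 0.2}   # relative disruption

def prob_urgent(observed_factors):
    """Naive-Bayes update: multiply prior odds by each factor's likelihood ratio."""
    odds = PRIOR_URGENT / (1 - PRIOR_URGENT)
    for factor in observed_factors:
        odds *= LIKELIHOOD_RATIOS.get(factor, 1.0)
    return odds / (1 + odds)

def should_interrupt(observed_factors, user_state):
    """Interrupt only when the chance of urgency outweighs the disruption caused."""
    return prob_urgent(observed_factors) > COST_OF_INTERRUPTION[user_state]

msg = ["sender_is_manager", "subject_mentions_deadline"]
print(round(prob_urgent(msg), 2))             # ~0.73
print(should_interrupt(msg, "in_meeting"))    # False: too disruptive right now
print(should_interrupt(msg, "at_desk"))       # True
```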

Horvitz expects that such capabilities-to be built into coming generations of Microsoft’s Office software-will help workers become more efficient by freeing them from low-priority distractions such as spam e-mail and by scheduling meetings automatically, without the usual rounds of phone tag. That will be a big step forward from Clippy, the animated paper clip that first appeared as an office assistant in Microsoft Office 97, marking Microsoft’s first commercial deployment of Bayesian models. Horvitz says his group learned from Clippy and other intelligent assistants-which were derided as annoyingly intrusive-that A.I.-powered assistants need to be much more sensitive to the user’s context, environment and goals. The less they know about a user, Horvitz notes, the fewer assumptions they should make about his or her needs, to avoid distracting the person with unnecessary or even misguided suggestions.

Another ambitious project in Horvitz’s group aims to achieve better speech recognition. His team is building DeepListener, a program that assembles clues from auditory and visual sensors to clarify the ambiguities in human speech that trip up conventional programs. For instance, by noticing whether a user’s eyes are focused on the computer screen or elsewhere, DeepListener can decide whether a spoken phrase is directed at the system or simply part of a human conversation (see “Natural Language Processing,” TR January/February 2001).
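
A crude version of that gaze test fits in a few lines. The inputs and thresholds below are invented and this is not DeepListener’s actual design; it simply shows how the same utterance can be accepted or ignored depending on where the user is looking and what else is going on in the room.

```python
# A toy sketch, not DeepListener's actual code: the same recognized phrase
# counts for less when the user is looking away or someone else is talking.

def addressed_to_system(gaze_on_screen: bool, recognizer_confidence: float,
                        other_voice_active: bool) -> bool:
    """Fused decision with invented thresholds, for illustration only."""
    threshold = 0.5 if gaze_on_screen else 0.85   # demand more evidence when looking away
    if other_voice_active:
        threshold += 0.1                          # probably part of a human conversation
    return recognizer_confidence >= threshold

print(addressed_to_system(True, 0.6, False))    # True: user is facing the screen
print(addressed_to_system(False, 0.6, False))   # False: same words, looking away
```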

In a controlled environment, the system turns in an impressive performance; when ambient noise makes recognition harder, however, DeepListener tends to freeze up or make wild guesses. But Horvitz’s group is developing algorithms that will enable the software to behave more as hard-of-hearing humans do-for example, by asking users to repeat or clarify phrases, providing a set of possible meanings and asking for a choice, or sampling homonyms in search of a close fit. Still, this work veers toward the very limits of A.I. “Twenty-five years from now,” Horvitz acknowledges, “speech recognition will still be a problem.”
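
Those fallback behaviors can be sketched as a simple confidence-based dialogue policy. The thresholds and phrasings below are assumptions of mine, not Horvitz’s algorithms; the point is only that a recognizer’s ranked guesses can drive graceful clarification instead of a wild guess.

```python
# A sketch of confidence-based fallback behavior; thresholds and wording are
# invented for illustration, not the actual DeepListener strategy.

def respond(hypotheses):
    """hypotheses: (transcript, confidence) pairs from a recognizer, best first."""
    best_text, best_conf = hypotheses[0]
    if best_conf >= 0.8:
        return f"OK: {best_text}"                        # confident enough to act
    if best_conf >= 0.4:
        options = ", ".join(text for text, _ in hypotheses[:3])
        return f"Did you mean one of these: {options}?"  # offer the top candidates
    return "Sorry, could you repeat that?"               # too garbled to guess

print(respond([("open the calendar", 0.91)]))
print(respond([("open the calendar", 0.55), ("open the camera", 0.30)]))
print(respond([("oak and the salamander", 0.20)]))
```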

Still Searching for the Mind

Whether it takes 25 years or 50 to achieve perfect speech recognition, such practical goals are still far closer than understanding, let alone replicating, human consciousness. And embracing them has allowed significant progress-with far more to come. In corporate research departments and university programs alike, artificial-intelligence researchers are finding new ways to automate labor-saving devices, analyze information about our physical world or make sense of the vast reserves of information entombed in libraries and databases. These range from the interactive Web- and computer-based training courses devised by Cognitive Arts, a company founded by Roger Schank, the former head of the artificial-intelligence program at Yale University, to a system developed by the Centre for Marine and Petroleum Technology, a European petroleum company consortium, to analyze the results of oil well capacity tests.

But considering how dramatic a departure so much of this work is from traditional A.I., it is unsurprising that some researchers have their reservations about the field’s newly pragmatic bent.

Today’s artificial-intelligence practitioners “seem to be much more interested in building applications that can be sold, instead of machines that can understand stories,” complains Marvin Minsky of the MIT Media Lab, who as much as anyone alive can lay claim to the title of A.I.’s “grand old man.” Minsky wonders whether the absence of a “big win” for artificial intelligence-something comparable to what mapping the human genome has meant for biology or the Manhattan Project for physics-has proved too discouraging. It saddens him: “The field attracts not as many good people as before.”

Some vow to keep up the fight. “I’ve never failed to recognize in my own work a search for the secret of human consciousness,” says Douglas R. Hofstadter, whose 1979 book Gödel, Escher, Bach: An Eternal Golden Braid was constructed as an extended “fugue on minds and machines” and who evinces little patience with those who would reduce A.I. to a subfield of advanced engineering. “The field is founded on the idea that if intelligence is created on the computer, it will automatically be the same kind of consciousness that humans have,” says Hofstadter, currently director of the Center for Research on Concepts and Cognition at Indiana University. “For me it has always been a search for thinking.” Hofstadter warns researchers against losing sight of this quest.

The elusiveness of the goal, Hofstadter and others stress, does not mean that it is unattainable. “We should get used to the fact that some of these problems are very big,” says Stork, the editor of HAL’s Legacy. “We won’t have HAL in my lifetime, or my children’s.”

But even as artificial-intelligence researchers work toward the big win, many have turned their attention to more practical pursuits, giving us software that can sort our mail, find that one-in-a-billion Web page or help rescue workers pull us from the wreckage of an accident or terrorist attack. And their successes may be what will keep A.I. alive until it’s truly time to rekindle the quest for an understanding of consciousness.

Important Players in A.I.

Artificial intelligence is becoming so deeply embedded in computer applications of all kinds that many prominent companies have created A.I. teams. Here are six organizations doing important work in the field.

Name | Location | Focus
Cycorp | Austin, TX | “Common sense” processing and ontologies for large databases
IBM (Watson Research Center) | Yorktown Heights, NY | Data mining and intelligent-agent development
iRobot | Somerville, MA | Robots chiefly for industrial and military applications
Microsoft (Microsoft Research) | Redmond, WA | Intelligent assistants and user interfaces for the Windows operating system and Office software suite
NEC (NEC Research Institute) | Princeton, NJ | Visual recognition, natural-language processing, learning algorithms
SRI International | Menlo Park, CA | Machine vision, virtual reality, natural-language processing