In 1983, when he was still a student at MIT, Danny Hillis cofounded Thinking Machines, one of the most famous failures in the history of computing. A hive of wayward and brilliant researchers, Thinking Machines tried to build the world’s first artificial intelligence. But if the company did not succeed in “building a machine that will be proud of us” (its corporate motto), its Connection Machine demonstrated the practicality of massively parallel processing, the foundation of modern supercomputing. Today, Danny Hillis is cochair of Applied Minds, a design and invention company, and he is building the Clock of the Long Now, a mechanical timepiece meant to last 10,000 years.
TR: Why is creating an artificial intelligence so difficult?
Hillis: We look to our own minds and watch our patterns of conscious thought, reasoning, planning, and making analogies, and we think, “That’s thinking.” Actually, it’s just the tip of a very deep iceberg. When the early AI researchers began, they assumed that the hard problems were things like playing chess and passing calculus exams. That stuff turned out to be easy. But the types of thinking that seemed effortless, like recognizing a face or noticing what is important in a story, turned out to be very, very hard.
TR: Why did Thinking Machines fail to create a thinking machine?
Hillis: Well, the glib answer is that we just didn’t have enough time. But enough time would have been decades, maybe lifetimes. It is a hard problem, probably many hard problems, and we don’t really know how to solve them. We still have no real scientific answer to “What is a mind?”
TR: The Connection Machine was an effective platform for supercomputing. Why didn’t Thinking Machines prosper as a supercomputing company?
Hillis: Supercomputing turned out to be a technology, not a business. My friend Nathan Myhrvold, who was running Microsoft Research at the time, once told me, “It is at least as hard to make software for a supercomputer as for a PC, but you only have a few thousand customers, and we have billions. Not only that, but each of those customers actually expects you to give them exactly what they need.”
TR: What were the successful commercial applications of the research at Thinking Machines?
Hillis: The commercial applications were mostly chip design, data mining, text search, cryptology, computational chemistry, computer graphics, financial optimization, seismic processing, and fluid flow modeling. Scientific applications like astronomy, climate modeling, or quantum chromodynamics were exciting when they helped get a result on the cover of Nature, but we never made money on them.
TR: What happened to the patents from Thinking Machines? More than anyone else, you are responsible for massively parallel processing. You get credit, but no payment. Who profits, and why?
Hillis: Well, first of all, I should be clear that I am just one of many people who contributed to developing massively parallel computing. As for the patents, one of the consequences of Thinking Machines’ failure was that I lost any rights to the technologies. In retrospect, that turned out to be a blessing, because it saved me from spending the next decade of my life in court.
TR: How is your philosophy of artificial intelligence different from Marvin Minsky’s famous “society of mind”?
Hillis: Marvin is my mentor, so any philosophy of AI that I have starts with his. I was living in his basement while he was writing The Society of Mind, and every day he would write a new page or two and let me read it. Then we would talk about it, and I would get to hear all the thought he had put behind it. I still can’t imagine what it would be like to read that book, cover to cover, without a long conversation on each page. But that is the point of the book: as Marvin would put it, “The brain is a kludge.” There are a lot of different things going on, and they interact in complicated ways. Marvin is surely wrong about most of the details, but I think the big picture of lots of different, loosely coupled semiautonomous processes is basically right.
TR: You were ahead of your time in applying computation to immunology, genetics, and neurobiology. Today, computation is ubiquitous in biology. What will this mean?
Hillis: I am excited that computational biology is coming into its own. It feels like the field of computing did in 1970. Everything seems possible, and the only constraint is our imagination. There are still so many basic, simple questions that are unanswered: “How are memories encoded?” “How does the immune system have a sense of ‘self’?”
I am especially interested in what will come of computational models of evolution, although I have to admit that the field seems a bit stuck right now. Most current models of evolution reduce it to a very weak kind of search algorithm, but I have always felt that there is something more to it than that. It is not that the biologists are wrong about the mechanisms, but rather that the models are much simpler than the biology. It may be that the interaction of evolution and development is the key, or behavior and environment, or something like that.
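For readers who have not worked with such models, here is a minimal sketch of what “a very weak kind of search algorithm” looks like in practice: a textbook genetic algorithm. Everything in it, the one-max fitness function, the population size, the mutation rate, is an illustrative assumption rather than anything Hillis describes; the point is how little of the biology survives the reduction. Selection, crossover, and mutation simply climb toward a fixed fitness function, and the development, behavior, and environment he mentions appear nowhere in the loop.

```python
import random

# Toy "evolution as search": a minimal genetic algorithm that hill-climbs
# toward a fixed, hand-chosen fitness function. All parameters below are
# illustrative assumptions, not taken from the interview.

GENOME_LEN = 20       # bits per individual
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01  # per-bit flip probability

def fitness(genome):
    # Illustrative stand-in: count of 1 bits ("one-max").
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def select(population):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Flip each bit with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```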