Intelligent Machines
Melding Mind and Machine
The computer and the human brain work differently. Instead of trying to force one to emulate the other, designers would do better to ensure complementarity.
A common prediction among technologists, and a common fear among the general population, is that computers and robots will come to mimic and even surpass people. But this is unlikely to happen in the foreseeable future.
The reason is that computers and people work according to very different principles. One obeys strict logic and yields precise, repeatable results. The other follows a complex, history-dependent mode of operation and yields approximate, variable results. One has been carefully designed according to well-determined goals. The other has been cobbled together over eons of evolutionary trial and error.
It’s good that computers don’t work like the brain. I like my electronic calculator because it is accurate. If it were like my brain, I wouldn’t always get the right answer. Together we are a more powerful team than either of us is alone: I think about the problems and the method of attack; it does the dull, dreary details of arithmetic or, in more advanced machines, of algebraic manipulations and integration.
The computer’s strength results from a large number of simple, high-speed devices following binary logic and working reliably and consistently. Errors in the operation of any of the underlying components are not tolerated and are avoided either by careful design to minimize failure rates or through error-correcting coding in critical areas.
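To make the idea of error-correcting coding concrete, here is a minimal sketch of a Hamming(7,4) code, one classic example of such a scheme (the article names no particular code; this choice is mine). Four data bits are padded with three parity bits so that any single flipped bit can be located and repaired:

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits so that any
# single-bit error can be found and corrected. Illustrative sketch only.

def hamming_encode(d):                      # d: list of 4 bits
    p1 = d[0] ^ d[1] ^ d[3]                 # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                 # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                 # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_correct(c):                     # c: list of 7 bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]          # recompute the three parities;
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]          # together they spell out the
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]          # position of any flipped bit
    pos = s1 + 2 * s2 + 4 * s3              # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1                     # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]         # recovered data bits

code = hamming_encode([1, 0, 1, 1])
code[4] ^= 1                                # simulate a single-bit error
assert hamming_correct(code) == [1, 0, 1, 1]
```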
The power of biological computation results from a large number of slow, complex devices (neurons) working in parallel through intricate electrical-chemical interactions. Errors are frequent (whole cells may die), but reliability is maintained through massive redundancy.
Unnatural Acts
While the biological computation process itself is inherently error-tolerant, our actions, unlike the output of computers, are fraught with imprecision and inaccuracy. In response, humans have developed systems that can handle ambiguity. In fact, in many cases they were forced to do so in order to survive as a species.
Human language, for instance, provided people with a powerful means of communicating and cooperating that enabled them to adapt to complex, changing environmental forces. As language evolved and conformed to human needs, it developed into a robust, redundant, and relatively noise-insensitive means of social communication. It is flexible, ambiguous, and heavily dependent on shared understanding, a shared knowledge base, and shared cultural experiences. As a result, errors in speech are seldom important. Utterances can be interrupted, restarted, even contradicted, without causing confusion.
Human language is so natural to learn that people master their native tongue without any formal instruction; only severe brain impairment leaves a person incapable of learning it. Human language is also easy to use: errors in speech are corrected so effortlessly that often neither party is aware of the error or the correction.
By contrast, much of the population finds programming languages difficult to learn and use. Even the most skilled programmers make errors, and error correction occupies a significant amount of a programming team’s time and effort.
Human error often goes unnoticed, but it becomes a problem in a technology-centered society where people are asked to perform “unnatural” tasks that would never be required in the natural world: performing detailed arithmetic calculations, remembering the details of some lengthy sequence or statement, or repeating actions with precision. All are a result of the artificial nature of manufactured and invented artifacts, and the resulting errors are the consequence of an inelegant fit between human cognition and arbitrary demands.
Best of Both Worlds
Alas, today’s attempts to minimize errors are misguided. One strategy has been to make humans more like computers. According to today’s machine-centered point of view, humans earn all the negative characteristics (vague, disorganized, distractible, emotional, illogical), while computers earn all the positive ones (precise, orderly, undistractible, unemotional, logical). A complementary, human-centered approach would instead assign humans the positive traits (creative, compliant, attentive to change, resourceful) and computers the negative ones (dumb, rigid, insensitive to change, unimaginative).
For example, let’s compare the children’s game tic-tac-toe with the Game of 15. Tic-tac-toe is represented perceptually: the goal is to form a straight line of three of your pieces before your opponent does. The Game of 15 is numeric, or symbolic: two opponents alternately select from the digits 1 through 9 (each digit may be taken only once) until one player has accumulated three digits whose sum is 15.
The Game of 15 is logically equivalent to tic-tac-toe, but people find the Game of 15 difficult and tic-tac-toe simple. The difference lies in the form of representation. People play tic-tac-toe by glancing at the board and observing which pieces form a straight line and which do not. Humans excel at such pattern recognition, especially context-dependent recognition. Thus they are very good at interpreting information, finding explanations of phenomena rapidly and efficiently, and integrating meaning and context into the task. They can even go beyond the information available, relying heavily on a large body of prior experience.
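The equivalence is easy to verify mechanically. The sketch below (my illustration, not code from the article) uses the classic trick of arranging the digits 1 through 9 in a 3×3 magic square, in which every row, column, and diagonal sums to 15; a quick check confirms that the eight tic-tac-toe winning lines are exactly the eight triples of distinct digits summing to 15:

```python
from itertools import combinations

# A 3x3 magic square: every row, column, and diagonal sums to 15.
MAGIC_SQUARE = [[2, 7, 6],
                [9, 5, 1],
                [4, 3, 8]]

# The eight tic-tac-toe winning lines, expressed as the digits
# occupying those cells in the magic square.
rows  = [set(row) for row in MAGIC_SQUARE]
cols  = [set(col) for col in zip(*MAGIC_SQUARE)]
diags = [{MAGIC_SQUARE[i][i] for i in range(3)},
         {MAGIC_SQUARE[i][2 - i] for i in range(3)}]
lines = rows + cols + diags

# Every triple of distinct digits 1-9 that sums to 15...
triples = {frozenset(t) for t in combinations(range(1, 10), 3)
           if sum(t) == 15}

# ...is exactly a tic-tac-toe winning line, and vice versa.
assert triples == {frozenset(line) for line in lines}
print(len(triples), "triples == 8 winning lines")
```

So choosing digits that sum to 15 is literally placing pieces on a tic-tac-toe board; only the representation differs.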
But people perform poorly when processing numeric information such as that required by the Game of 15, since human symbolic processes are slow, serial, and limited both in power and by the size of working memory. Thus people are poor at maintaining high accuracy, at integrating large quantities of symbolic information, and at detecting patterns in symbolically displayed information.
Computers, in contrast, lack any kind of perceptual system, so it is difficult to program a computer to play tic-tac-toe by doing a perceptual analysis of lines and other geometric patterns among the pieces. Of course, tic-tac-toe doesn’t present much of a challenge for computers. But this is because they are usually programmed so that the representation used for tic-tac-toe is very similar to that used for the Game of 15, relying on a brute-force search through the tree of possible moves.
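Here is a minimal version of that brute-force approach (a sketch of my own, not code from the article). The board is just a list of nine cells, and winning is checked symbolically against a fixed table of index triples; no perception of lines or shapes is involved:

```python
# Winning lines as index triples: pure symbol lookup, not perception.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player):
    """Value of the position to `player`: +1 win, 0 draw, -1 loss."""
    opponent = 'O' if player == 'X' else 'X'
    if winner(board) == opponent:           # the previous move won
        return -1
    if all(board):                          # board full: draw
        return 0
    # Try every empty cell and take the best outcome for `player`.
    return max(-negamax(board[:i] + [player] + board[i + 1:], opponent)
               for i, cell in enumerate(board) if not cell)

# From the empty board, perfect play is a draw (this exhaustively
# explores a few hundred thousand positions, so it takes a moment):
print(negamax([''] * 9, 'X'))               # -> 0
```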
A Meeting of Minds
It should be possible to ensure cooperative interaction between people and computers that relies on the strengths of each. Are there examples of systems that take the fullest advantage of the complementarity of people and machines? Yes. Consider white-water canoeing. The activity is really a merger of the skills of the person and the capabilities of the craft. The canoe-person system negotiates the rapids: to the person, the canoe is part of the body, an integral whole with the activity.
At times I feel the same merger with my word processor. I think the words and they appear on the screen in front of me. Ideally, the word processor complements my thinking, simplifying the output process without getting in the way. Later, it helps me reflect on the writing, make “what-if” changes, and even get the spelling and punctuation right. “What-you-see-is-what-you-get” operations mean that I can manipulate the text so that its physical format looks right, just as the text reads right. Here, the computer handles what it does best, while letting me concentrate on what I do best.
Unfortunately, this state is not long maintained, for modern word processors are really still technology-centered in design, continually interrupting me with so-called “dialog” boxes, presenting more menu choices than I care to think about, and in general getting in the way. Still, there are moments when the technology fades and it is just me, my creativity, and the resulting writing.
One way to exploit these differences is to use the power of computers to translate data sets into perceptual representations that people are more comfortable with. The development of modern computers and their associated fast, real-time, interactive display systems makes it possible to translate otherwise symbolic information into a format that fits human cognition. Usually this means presenting information perceptually rather than symbolically or numerically. But it also means eliminating or minimizing the need for people to provide precise numerical information, so they are free to do higher-level evaluation, to state intentions, to make midcourse corrections, and to reformulate the problem.
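As a toy illustration of the idea (my example, not the article’s), consider rendering a list of numbers as a text “sparkline”: a trend that is tedious to read digit by digit becomes a shape the eye takes in at a glance:

```python
# Turn symbolic data into a perceptual form: a one-line bar chart
# built from Unicode block characters. Illustrative sketch only.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                   # avoid division by zero
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))]
                   for v in values)

readings = [3, 4, 4, 5, 7, 12, 18, 17, 11, 6, 4, 3]
print(sparkline(readings))   # the mid-sequence spike is instantly visible
```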
Complementarity between people and computers also exists in some systems for scientific visualization. Scientists use the computer as the medium for displaying data from a variety of viewpoints and the human perceptual system as the medium for noticing significant patterns.
We should develop more systems that rely on the complementary properties of people and machines. The principles are well known, if seldom followed. We need a different breed of designer, one less immersed in technology and more focused on the people the technology is intended to serve.