Letters

Letters from our readers.

Second Life
I’ve been following the virtual world called Second Life for some time, so it was a pleasure to read Wade Roush’s thoughtful and intelligent cover story (“Second Earth,” July/August 2007). The piece benefited greatly from the fact that your writer entered into the life of the community he was trying to understand.

I’m sure you’ll receive some splenetic, sarcastic criticism of the piece from someone disgusted by the very idea of a Second Life. Unlike Roush, though, your critic will almost certainly have spent no time acquiring one.
Michael Parsons
Editor, CNET.co.uk
London, England

Artificial Intelligence
In his essay arguing against the possibility of producing conscious machines (“Artificial Intelligence Is Lost in the Woods,” July/August 2007), is Yale computer science professor David Gelernter arguing against artificial intelligence or artificial humanity? Intelligence does not require all the human interactions with the world or emotions that he lists, unless there is a particular need to provide those for the intended application.

Consciousness is hard to define. Maybe someone should devise a replacement for the Turing test, Alan Turing’s suggestion that if a computer can answer questions the same way a human would, then it can be considered intelligent. A Helen Keller test, perhaps: it may be, after all, that there is or will be a conscious computer for which we have not provided the means of input or output it would need to signal its consciousness to us. Or maybe it’s speaking “Chinese” to an “English” world, or broadcasting radio to a television world.

I think we’d better find a more general concept of consciousness than Gelernter’s so that, at a minimum, we’ll recognize that aliens have landed if they ever do.
Stanley D. Young
Fort Collins, CO

I side with the anticognitivists (and thus David Gelernter). AI software running on von Neumann machines will never be conscious, and without consciousness there can be no experience, human or otherwise. Believing that somehow consciousness will arise like a deus ex machina on your Pentium is an article of religious faith.

Still, while AI software cannot replicate consciousness, networks of artificial neurons hold considerably more promise. Consider the machines being built by Kwabena Boahen’s group at Stanford, or those built earlier by Carver Mead’s student Misha Mahowald at Caltech.

There are also hybrids in which real neural circuits are emulated in very large-scale integration (VLSI): Paul Rhodes’s group at Evolved Machines in Palo Alto is working on that, as is Theodore Berger’s group at the University of Southern California.

Digital computers are so second millennium. As my MIT classmate Ray Kurzweil might say, “Plug that silicon retina into your optic nerve, and you won’t know the difference.”
Robert Blum
Menlo Park, CA

Good Design
Your design-focused May/June 2007 issue was very interesting and thought-provoking, but I think it missed an opportunity to focus attention on the most pervasive problems of electronic-product design.

Several experts and writers equated operational simplicity with minimal functions, and several cited the iPod as an example of gaining simplicity by avoiding feature creep. But the history of the iPod is feature creep itself. It started out as a music player. Now it plays music, podcasts, video, and games; it can act as a stopwatch or alarm clock, show you the time in other world cities, maintain your contacts and calendar, show photos, allow you to read text files, and serve as a backup hard drive. Why does it remain simple to use? Because all the functions work the same way. The user needs to learn only one rule about the interface and can apply it to every function on the device.
Victor Riley
Point Roberts, WA

Changing Human Nature
I read with interest the essay by philosopher Roger Scruton (“The Trouble with Knowledge,” May/June 2007), since I enjoy seeing things in new ways and respect philosophers for their penetrating insight and clear logic. But I found neither in Scruton’s piece.

Scruton fears that future technology will enable men and machines to interact in increasingly intimate ways and eventually merge to the degree that human nature itself is altered. He is terrified of this possibility.

But what, exactly, is so great about human nature that he is so scared of its changing? One need only read a newspaper to see not only that human nature is deeply flawed, but also that it is human nature not to need a reason to believe something that makes you feel good, and to believe whatever superstitions you were taught as a child. Scruton certainly seems to. When he starts to mention God and refers to the Fall of Adam, I suspect that nobody is going to get much of a clear and rational discussion from him.
Don Dilworth
East Boothbay, ME

How to contact us:
E-mail letters@technologyreview.com
Write Technology Review, One Main Street,
7th Floor, Cambridge MA 02142
Fax 617-475-8043

Please include your address, telephone number, and e-mail address. Letters may be edited for both clarity and length.