A View from Angelica Lim

Robots Aren’t as Smart as You Think

As robots get good at mimicking human behavior, people can be deceived into thinking they have human intelligence. So let’s put them to the test.

A few years ago I met a robot in a Japanese-style café in Osaka. She wore a traditional kimono and greeted me from where she sat in the corner of the dim room. She took my order and called it out to the barista at the bar: “One tea!”

But I knew she wasn’t doing any of it on her own; the robot understood nothing. Somewhere upstairs, a human had to be controlling this hyper-realistic android. Researchers call this the “Wizard of Oz” technique: operating a robot from a distance, perhaps fooling an unsuspecting passerby into thinking that the mechanical creature itself is alive. The tele-operated Geminoids from Hiroshi Ishiguro’s laboratory, like the one I met at the café, are superbly crafted silicon marionettes.

Hiroshi Ishiguro wants his robots to look as human as possible—but does that just end up confusing people?

Today’s AIs, much like the robot I encountered in Osaka, are “weak”: they have no real understanding. Instead, they run on giant rulebooks and on pattern-matching over massive quantities of data drawn from the Internet. They can act intelligent, but they can’t grasp the true meaning of what they say or do.

People tend to think robots are smarter than they really are. In a recent study, researchers at universities in Italy and Australia showed that people attribute mental experience and agency to robots simply on the basis of their appearance. This sort of projection may be behind the unfortunate wording of popular news articles suggesting, for instance, that robots want to “take over the world” or that we may be in for a “robot uprising.” Such framing is misleading and confusing, and when people are confused, they get scared. And fear has a way of hindering progress.

It would help if we had a sort of “robot Turing test” to measure how smart the robots really are. You might create such a test using the original Turing test as a guide. First published in 1950 by Alan Turing, the test was conceived as a way to measure the progress of artificial intelligence with the technology of the time: computer terminals and keyboards. A person communicates with an unknown being via text on the screen and must guess whether the typed responses are being written by a human or software. The more often the AI is mistaken for a human, the better it is.
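To make the scoring concrete, here is a minimal sketch in Python (the function name and the sample verdicts are invented for illustration): a bot’s score is simply the fraction of judges it fools into guessing “human.”

```python
# Minimal sketch of Turing-test scoring. Every verdict below was
# recorded while the judge was actually conversing with software.

def deception_rate(verdicts):
    """verdicts: list of judge guesses, each "human" or "machine"."""
    fooled = sum(1 for v in verdicts if v == "human")
    return fooled / len(verdicts)

# Example: 3 of 10 judges guessed "human", so the bot fooled 30 percent.
verdicts = ["human", "machine", "machine", "human", "machine",
            "machine", "human", "machine", "machine", "machine"]
print(deception_rate(verdicts))  # 0.3
```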

Today’s software chatbots would do well on that kind of test. Dating sites use these artificially intelligent bots to fool people into thinking a real person is flirting with them. The chatbots are so good that there are websites listing strategies to fool them into revealing their true nature. (Hint: try sarcasm.)

Does that mean that robots are close to passing the Turing test, too? Could we simply pop a software chatbot into a robot and be done with it? The answer is no, for many reasons. Human-like gaze, blinking, gestures, vocal tone, and other expressive behaviors must be varied and natural, and they must be timed together perfectly. It would be strange, for example, if the robot never broke eye contact with you, or always said “I’m feeling great!” in exactly the same way.
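To see why fixed, repetitive behavior gives a robot away, consider a minimal sketch in Python (the timing ranges and phrases are invented for illustration): blink intervals are jittered rather than constant, and the robot never repeats the exact same line twice in a row.

```python
import random

GREETINGS = ["I'm feeling great!", "Doing well, thanks!", "Pretty good today."]

def next_blink_delay():
    # Humans blink irregularly, roughly every 2 to 10 seconds;
    # a fixed interval reads as mechanical.
    return random.uniform(2.0, 10.0)

def pick_greeting(last):
    # Avoid saying the exact same line twice in a row.
    return random.choice([g for g in GREETINGS if g != last])

last = None
for _ in range(3):
    line = pick_greeting(last)
    print(f"(blink in {next_blink_delay():.1f} s) {line}")
    last = line
```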

The flip side is that we don’t want robots so realistic that we’re confused about how much they actually know. We don’t want robots like the one I saw in Osaka, whose capabilities are unclear to a casual passerby. And we don’t want robots that deceive people into thinking they’re human.

The U.K.’s Principles of Robotics, drawn up in 2010, spell this out. The document specifies that robots should not be designed to exploit vulnerable users, and that users should always be able to “lift the curtain” (another Wizard of Oz reference) and see a robot’s inner workings. For example, there might be a public database that lets anyone interacting with a robot look up details of its actual functionality.
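No such database exists yet, but a minimal sketch of the idea, written in Python with invented names and entries, might look like this:

```python
# Hypothetical "lift the curtain" registry: anyone can look up what a
# given robot actually does. Every field and entry here is invented.

ROBOT_REGISTRY = {
    "cafe-android-042": {
        "teleoperated": True,  # a human may be in control at any time
        "onboard_abilities": ["speech synthesis", "scripted gestures"],
        "understands_language": False,
    },
}

def lift_the_curtain(robot_id):
    entry = ROBOT_REGISTRY.get(robot_id)
    if entry is None:
        return "No public record for this robot."
    mode = "tele-operated by a human" if entry["teleoperated"] else "autonomous"
    abilities = ", ".join(entry["onboard_abilities"])
    return f"{robot_id}: {mode}; onboard abilities: {abilities}"

print(lift_the_curtain("cafe-android-042"))
```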

So how would a robot Turing test work in practice? We could look to the Loebner Prize, which runs the classic test on chatbots. The Loebner competition has challenges that run for 5 minutes, 25 minutes, and so on, and the same durations could apply to robots. A label like “Turing 25,” for example, would certify that a robot can interact for up to 25 minutes without being revealed as non-human. Any robot that is purely tele-operated, controlled remotely by a human at all times, would have to be labeled as such.
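As a sketch of how such labeling could work, here is a small Python example assuming Loebner-style test durations of 5 and 25 minutes (the tiers, names, and rules are all hypothetical):

```python
TEST_TIERS = [5, 25]  # test durations, in minutes

def turing_label(minutes_survived, purely_teleoperated):
    # A purely tele-operated robot gets no Turing rating at all;
    # otherwise it earns the longest tier it survived undetected.
    if purely_teleoperated:
        return "Tele-operated"
    passed = [t for t in TEST_TIERS if minutes_survived >= t]
    return f"Turing {max(passed)}" if passed else "Unrated"

print(turing_label(30, purely_teleoperated=False))  # Turing 25
print(turing_label(12, purely_teleoperated=False))  # Turing 5
print(turing_label(60, purely_teleoperated=True))   # Tele-operated
```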

Robots like the one I saw in Osaka can help free us from menial and repetitive tasks, much as the dishwasher and washing machine revolutionized the role of women in society. But if people’s confusion about the technology leads to irrational fears, we risk missing out on a revolution like those brought about by the computer and the Internet, with benefits we can’t even imagine yet.

Angelica Lim is an assistant professor of professional practice in computing science at Simon Fraser University in Canada. She formerly built artificial-intelligence software for SoftBank Robotics.