This Robot Knows When It’s Confused and Asks for Help
Misunderstandings will be an unavoidable part of communication between robots and humans. One robot is learning to cope with them.
Robots aren’t generally meant to get confused, but modeling confusion might help make them more useful workmates.
As part of an effort to explore ways for humans and robots to work together more naturally and effectively, a team of researchers at Brown University has developed a robot that gauges its own confusion and asks for help when it needs it.
The work matters because confusion arises so easily in everyday interactions, so making interactions with a robot feel natural means finding ways to cope with it. The robot takes a command, measures how confident it is in its interpretation, and, when it isn’t sure what’s being asked of it, requests help.
Previous work by the Brown University team allowed a robot to read both speech and hand gesture cues to infer what’s being asked of it.
The researchers have shown that this combination is more effective than voice commands alone. If a human asks for a wrench, however, and two wrenches sit near each other, the robot will now judge whether the situation is too ambiguous and, if so, ask for more information, pointing to one and saying, “This one?”
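To make the idea concrete, here is a minimal sketch of how such threshold-based clarification might work. This is not the Brown team’s actual system: the fuse() weighting, the candidate scores, and the 0.75 threshold are all illustrative assumptions.

```python
# Sketch of threshold-based clarification (illustrative, not the real model).

ASK_THRESHOLD = 0.75  # below this confidence, the robot asks for help


def fuse(speech_score: float, gesture_score: float) -> float:
    """Combine speech and gesture evidence into one confidence score.
    An equal weighting is assumed here purely for illustration."""
    return 0.5 * speech_score + 0.5 * gesture_score


def interpret(candidates: dict[str, tuple[float, float]]) -> str:
    """Pick the best-scoring object, or ask a clarifying question."""
    scored = {obj: fuse(s, g) for obj, (s, g) in candidates.items()}
    best = max(scored, key=scored.get)
    if scored[best] < ASK_THRESHOLD:
        # Too uncertain: point at the top candidate and ask,
        # as in the two-wrenches example above.
        return f"This one? (pointing at {best})"
    return f"Picking up {best}"


# Two wrenches side by side: speech alone can't disambiguate,
# so the fused confidence stays low and the robot asks.
print(interpret({"wrench_left": (0.5, 0.55), "wrench_right": (0.5, 0.45)}))
# -> This one? (pointing at wrench_left)
```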
This is the latest step toward mimicking the way two people hold a conversation, says Stefanie Tellex, an assistant professor at Brown University and the lead researcher on the project.
“That interactive, collaborative process is what allows humans to be so effective when they’re talking to each other and making plans,” Tellex says.
Indeed, Tellex says clarifying misunderstandings may be especially important for human-robot interactions. “We realized that robots were kind of limited because they can’t see as well as a person; they can’t hear as well as a person; they can’t understand as well as a person,” she says. “[But] despite encountering many more errors in understanding, they were losing out on this opportunity to try to make things better using this feedback process.”
The researchers tested the robot with volunteers who were asked to get it to perform simple tasks, such as picking up a wrench, but who were given no specific instructions on how to operate it.
This worked so well that the volunteers often assumed the robot was more capable than it really was, perhaps believing that it was tracking their gaze or had more sophisticated language skills.
Jim Boerkoel, an assistant professor at Harvey Mudd College in Claremont, California, who specializes in human-robot interaction, says misunderstanding can often lead to frustration.
“Not only is asking for help critical for the short-term efficiency of human-robot tasks, as we’ve seen in this application, but it can also have long-term benefits by engendering trust and transparency in the robotic system,” Boerkoel says. “For instance, asking for help communicates to the human the robot’s intent and an understanding of its own limitations.”