Humans and Technology / AR

Seeing the mind of a robot in augmented reality

Roboticist Stefanie Tellex seeks new ways for robots and humans to work together.

Jun 4, 2018
Justin Saglio

When the movers came to the Brown University robotics lab of Stefanie Tellex last week, her students watched with interest: look how deftly they teamed up to pick up a couch using body language, eye contact, and just a few commands, like “1-2-3 … lift.”

Can robots and humans work together just as smoothly? That’s the goal of research in the Tellex lab, which is trying to give both robots and humans the tools to understand each other a little better and work together more fluidly in real environments.

Some robots, like the Roomba vacuum cleaner, really need only one command—clean or stop. “That is the right interface for a Roomba, but we are seeing robots move beyond a single function. We’d like to be able to tell them anything that is within the robot’s physical capabilities,” says Tellex. “I am working on a system where you talk to the robot like a person. You say ‘Put the crate there’ and the robot figures it out.”

That’s a hard problem, not least because there are a lot of ways to describe what you want done. (I only have to think of what happens when my wife and I—no expert movers—try to reposition our own couch.)

In work presented last year, Tellex’s team used a voice interface to see if a person and a grasping robot could work together to pick from a group of similar objects on a table—including bowls, markers, and spoons. A command like “Can I have that bowl?” could leave the robot in doubt. So they programmed the robot to ask some clarifying questions, like “This one?”
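In spirit, that back-and-forth is a confidence check: interpret the request, and if no single object stands out, ask before acting. The minimal sketch below illustrates the idea in Python; the names, the confidence threshold, and the console prompt are illustrative assumptions, not the lab’s actual system.

```python
# A minimal sketch of the clarifying-question idea, not the Tellex lab's
# actual code; the names (Candidate, choose_object, ask_user) and the
# confidence threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    label: str    # e.g. "bowl", "marker", "spoon"
    score: float  # how well this object matches the spoken request (0 to 1)

CONFIDENCE_THRESHOLD = 0.8  # assumed: below this, the robot asks before grasping

def ask_user(question: str) -> bool:
    """Stand-in for the robot's speech interface; here, just a console prompt."""
    return input(question + " [y/n] ").strip().lower().startswith("y")

def choose_object(candidates: List[Candidate]) -> Optional[Candidate]:
    """Grasp the best match outright, or ask a yes/no question when ambiguous."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    if ranked[0].score >= CONFIDENCE_THRESHOLD:
        return ranked[0]  # confident enough to act without asking
    # Ambiguous request ("Can I have that bowl?" with two bowls on the table):
    # go through the candidates one at a time and ask "This one?"
    for candidate in ranked:
        if ask_user(f"This one? ({candidate.label})"):
            return candidate
    return None  # the person rejected every candidate

if __name__ == "__main__":
    table = [Candidate("bowl", 0.55), Candidate("bowl", 0.50), Candidate("spoon", 0.1)]
    picked = choose_object(table)
    print(f"grasping the {picked.label}" if picked else "nothing chosen")
```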

The Brown group invited 16 volunteers into the lab and found that with such a mini-dialogue, the robots got the job done about 25 percent faster and with better accuracy. People also thought the robot was a lot smarter than it actually is. “It was so good people thought the system could understand phrases like ‘to the left of,’ even when it didn’t. But it would ask a question, so it seemed like it understood,” says Tellex.

The next step of the project, led by PhD student David Whitney, is to combine verbal commands with the augmented-reality headset HoloLens.

During MIT Technology Review’s EmTech Next conference, I tried the setup, which shows the user ghostly purple-colored versions of the robot depicting what actions it plans to take—so you can fix or fine-tune them if needed.
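The loop behind that experience is simple in spirit: the robot proposes an action, the headset renders it as a ghost, and the person approves, nudges, or cancels it before anything moves. The sketch below is a rough, text-only stand-in for that interaction; the class and the console prompts are assumptions, not the HoloLens interface or the lab’s software.

```python
# A minimal sketch of the preview-then-confirm loop, not the HoloLens API or
# the lab's actual software; PlannedAction and the console prompts stand in
# for the headset's ghost overlay and gesture input.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlannedAction:
    description: str                         # e.g. "put the crate on the left table"
    target_pose: Tuple[float, float, float]  # assumed goal position (x, y, z), meters

def preview_loop(action: PlannedAction) -> Optional[PlannedAction]:
    """Show the plan as a 'ghost', then let the person approve, nudge, or cancel it."""
    while True:
        print(f"[AR ghost] robot would: {action.description} at {action.target_pose}")
        choice = input("approve / nudge / cancel? ").strip().lower()
        if choice == "approve":
            return action        # hand the confirmed plan back to the robot
        if choice == "cancel":
            return None          # the robot does nothing
        # "nudge": fine-tune the target before anything moves (here, shift x by 10 cm)
        x, y, z = action.target_pose
        action = PlannedAction(action.description, (x + 0.1, y, z))

if __name__ == "__main__":
    plan = preview_loop(PlannedAction("put the crate there", (0.6, 0.2, 0.0)))
    print("executing plan" if plan else "plan cancelled")
```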

Here, the goal is to get inside the robot’s mind, says Whitney. “Good movers use a lot of body language, but robots don’t look like us,” he says. “So this is a way to visualize information about the robot—what is it thinking?”