Intelligent Machines
This Ikea kitchen might teach industrial robots to be less dumb and more helpful
Nvidia’s new robotics lab will see if robots can learn to fetch the ketchup, load the dishwasher, and—eventually—make a cake.
For all the recent progress in artificial intelligence, industrial robots remain amazingly dumb and dangerous. Sure, they can perform arduous tasks precisely and repetitively, but they cannot respond to variations in their environment or tackle something new. That severely limits just how useful robots can be in the workplace.
Nvidia wants to use machine learning to help solve this problem. The world’s leading producer of the specialist computer chips that are crucial to artificial intelligence is opening a new robotics lab in Seattle to make the robots that work alongside humans—co-bots—smarter and more capable. And it’s all based on a standard kitchen from Ikea.
One system inside the kitchen-lab—a single robot arm sitting atop a wheeled platform—already spends its days fetching jars and bottles and boxes and putting them into drawers. Dieter Fox, the creator of the lab and a professor at the University of Washington, says such tasks equip robots with the skills required to do more challenging factory work, or to help in hospitals. “I see this as the ideal domain to represent any challenge,” Fox says.
The kitchen tasks will get progressively harder, from finding and moving familiar objects to working with unfamiliar ones. Eventually, if all goes well, a robot will be able to work alongside a human doing something as complex as preparing a meal.
Another advantage of using a standard Ikea kitchen is that other robotics labs will be able to replicate the Nvidia team’s work and compare it with their own. The new lab will comprise roughly 50 robotics researchers, including part-time faculty and interns.
The past few years have offered encouraging signs that advances in machine learning might boost the capabilities of industrial robots significantly. Reinforcement learning, in particular, has emerged as an exciting way for robots to tackle difficult challenges. It involves controlling a robot with a deep neural network and rewarding the behavior that brings the system closer to a given goal.
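The reward-driven loop described above can be illustrated with a toy sketch. This is not Nvidia's or OpenAI's code: real systems like Dactyl use deep neural networks trained in physics simulators, whereas this minimal example uses a plain lookup table (tabular Q-learning) on a hypothetical one-dimensional corridor, with an agent rewarded only for reaching a goal state. The principle is the same: nudge the value of each action toward the reward it eventually leads to.

```python
import random

def train_q_learning(n_states=6, goal=5, episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D corridor.

    The agent starts at state 0 and receives a reward of 1.0 only
    when it reaches the goal state; all other steps earn nothing.
    """
    rng = random.Random(seed)
    # Q-table: estimated value of each (state, action) pair.
    # Actions: 0 = step left, 1 = step right.
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Reward-driven update: pull Q(s, a) toward the observed
            # reward plus the discounted value of the next state.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# Greedy policy after training: 1 means "step right" toward the goal.
policy = [0 if qa[0] > qa[1] else 1 for qa in q[:-1]]
print(policy)
```

Deep reinforcement learning replaces the lookup table with a neural network so the approach can scale to high-dimensional inputs like camera images, but the core idea of rewarding behavior that brings the system closer to a goal is unchanged.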
For instance, a robot called Dactyl, from OpenAI, a San Francisco nonprofit, learned to manipulate a child’s block by practicing for the equivalent of 100 years in a computer simulation. However, commercial failures show how hard it is to make smarter robots more practical.
The problem with most projects is that they work only in relatively narrow situations. Change the environment slightly, and the system has to relearn everything from scratch. “Most of the demos in this field are not nearly as general as they are made to appear,” says Emo Todorov, a roboticist also at the University of Washington, who developed a popular simulation environment for robots called MuJoCo.
Fox says the key to more generalizable robot learning may be equipping robots with a foundational understanding of the physical world, including elements such as gravity. This idea, which his team is experimenting with, might be somewhat akin to the intuitive understanding of physics that babies exhibit.
Few companies have ridden the artificial-intelligence revolution quite like Nvidia. So it’s worth taking note when the company bets on robotics. Nvidia stands to benefit from an AI revolution in industrial robotics partly because its hardware is crucial for running modern machine-learning software. Its technology is also valuable for performing virtual simulations that are increasingly key to training robots.
Fox and his colleagues at the lab in Seattle are especially excited about an emerging trend toward virtual environments that are both physically realistic and visually indistinguishable from the real world. “Simulation is going to play a very, very important role in the future of robots,” Fox says.