IBM and MIT Bet That Materials and Quantum Advances Will Supercharge AI

A new center at MIT could advance artificial intelligence and help IBM reestablish itself as a leader in the field.

A new $240 million center at MIT may help advance the field of artificial intelligence by developing novel devices and materials to power the latest machine-learning algorithms. It could, perhaps, also help IBM reclaim its reputation for doing cutting-edge AI.

The project, announced by IBM and MIT today, will research new approaches in deep learning, a technique in AI that has led to big advances in areas such as machine vision and voice recognition. But it will also explore completely new computing devices, materials, and physical phenomena, including efforts to harness quantum computers—exotic but potentially very powerful new machines—to make AI even more capable.

“A lot of innovation is happening using standard silicon and architectures, but what about the devices and the material science?” says Dario Gil, vice president of AI at IBM Research. “It’s an area no one is touching, and it has the potential for orders-of-magnitude improvements.”

The center will also look at ways that AI can be more effectively deployed in industries like health care and security. And it will study the economic impact of artificial intelligence and automation, a hugely significant issue for society.

The move is significant for MIT. The university was at the forefront of AI research during the 1950s, but the field’s center of gravity has moved westward more recently, with big tech companies like Google, Facebook, Microsoft, and Amazon leading the charge.

The investment also signals a shift for IBM. The company pushed AI forward by developing Deep Blue, the machine that beat the world chess champion, Garry Kasparov, in 1997 (see “How the Chess Was Won”). The Watson supercomputer that won the game show Jeopardy! in 2011 used cutting-edge machine-learning and natural-language processing techniques. In recent years, however, other companies have stolen the limelight in AI research, and IBM has sometimes been accused of overhyping the AI services available under the Watson brand.

Focusing on hardware, in particular, may be a good way to reboot. Though there has been dramatic progress in AI in recent years, most of it has come from a handful of key algorithms combined with ever more powerful computer hardware and large quantities of training data. Even as new approaches emerge, novel materials and computing architectures offer huge potential to enhance these algorithms.

Most cutting-edge machine learning is done today on conventional computer chips, whether originally designed for graphics processing or custom-made to handle the necessary computations as efficiently as possible. Rethinking chip architectures and the kinds of components used could boost performance significantly. IBM already has a strong research focus on materials science and novel computing devices. “As excited as we all are about AI, the field has multiple decades ahead of it,” Gil says.

Rafael Reif, president of MIT (left), and John Kelly, senior vice president of cognitive solutions and research at IBM.

One of those opportunities could come from quantum computing. A research curiosity for decades, it is now progressing toward practical machines capable of tackling real problems, particularly in areas like chemical research. How it might reshape machine learning and AI remains a fascinating open question.

Gil says it’s too early to predict how things will pan out, but he believes experimentation could provide some surprises. “That will only occur if you have a darn quantum computer, and that’s what we have,” he says.

Besides hardware advances, the new MIT center will research new kinds of machine-learning algorithms. In particular, it will investigate algorithms that let computers learn from raw, unlabeled data, as well as algorithms that can transfer what they learn in one domain to another.

Anantha Chandrakasan, dean of MIT’s School of Engineering, says research efforts in hardware and software should ideally feed into one another. “We’re not going to be designing algorithms that are completely independent of the architectures we’re going to be using,” he says. “We’ll see system-level thinking.”

The lab will also examine how AI can be applied in specific domains, such as health care and computer security. Chandrakasan says he is particularly excited to explore the practical applications of AI, and he hopes the endeavor will spawn new spinout companies in coming years.

This area of interest could prove especially important to IBM’s current business. The company has found it more difficult than anticipated to deploy Watson in areas such as health care (see “A Reality Check for IBM’s AI Ambitions”).

The collaboration will also pursue research on the implications of AI for global prosperity. Francesca Rossi, a distinguished research scientist at the IBM T.J. Watson Research Center, says that project will dovetail with the work on AI algorithms. “To advance shared prosperity through AI, you also need to advance the AI algorithms you would use,” she says. 

In its focus on using AI to deliver economic and societal benefits, the effort overlaps in some ways with the Partnership on AI, a consortium that IBM helped found in September 2016 to study how AI is influencing society. But Rossi says the MIT-IBM collaboration will produce research, while the Partnership provides an open platform for discussing these issues. For example, the Partnership on AI might recommend that every AI system be able to explain itself, as a general guideline. But AI experts still don’t understand how algorithms make decisions (see “The Dark Secret at the Heart of AI”). MIT and IBM could devise ways to address this conundrum by working together, says Rossi.