Intel Buys a Startup to Catch Up in Deep Learning
The acquisition should let Nervana Systems speed development of chips radically redesigned for artificial intelligence.
Earlier this year, Nervana Systems CEO Naveen Rao was asked what would happen if Intel began attacking the fast-growing market for chips designed specifically for running “deep learning” software.
“They would be unstoppable,” he said.
Now, Rao will be a key player in Intel’s attempt to catch up in one of the most promising new silicon markets to emerge since the smartphone. Intel revealed Tuesday that it is buying Nervana and its deep-learning hardware and software for an undisclosed amount.
The acquisition marks a departure for Intel and comes at a crucial moment. The company became the world’s largest chip maker with a single-minded strategy to make its x86 microprocessors the standard for running a huge swath of applications, from solitaire to massive payroll systems. Nervana and other startups believe deep learning requires entirely new chip architectures that work more like the human brain, arriving at insights by learning from millions of examples in data rather than by executing step-by-step instructions written by a programmer. (You can see how Rao described this in detail at EmTech Digital in May.)
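That learn-from-examples workload is what deep-learning chips are built to accelerate. As a rough illustration only, not drawn from the article, the sketch below contrasts a rule a programmer writes by hand with a tiny perceptron that learns its own decision rule from labeled examples; the spam-filter framing and all names are invented for the example.

```python
# Hand-written rule: the programmer encodes the logic explicitly.
def spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# Learned rule: a tiny perceptron infers its own weights from labeled examples.
def train_perceptron(examples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(examples[0])  # one weight per feature
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy data: feature vectors [contains "free", contains "money"], label 1 = spam.
examples = [[1, 1], [1, 0], [0, 1], [0, 0]]
labels = [1, 0, 0, 0]

print(spam_rule("Claim your FREE MONEY now"))   # rule the programmer wrote
print(train_perceptron(examples, labels))       # rule the model learned from data
```

Real deep-learning models run the same train-on-examples loop across millions of parameters, which is the kind of computation Nervana's hardware is designed to speed up.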
So far, Intel is a bit player in this market, which could grow from less than $1 billion today to $2.4 billion by 2024, according to the market research firm Tractica.
The current market leader is Nvidia, the dominant maker of graphics processing chips used in game consoles and PCs.
Since researchers discovered the chips’ aptitude for running neural networks, Nvidia has built software to help deep-learning experts use the chips—and it has developed aggressive sales and marketing programs to lock in a big lead. But although graphics chips are much more efficient at running deep-learning software than Intel’s conventional CPUs, chips specifically designed for the task should be even more efficient.
Intel had largely ignored the market until this June, when it announced a version of its Xeon Phi co-processor that is well-suited for certain deep-learning jobs. Creating a new family of chips designed specifically for deep learning from scratch could have taken years; acquiring Nervana gives Intel a shortcut. "While artificial intelligence is often equated with great science fiction, it isn’t relegated to novels and movies. AI is all around us," said Diane Bryant, executive vice president of Intel’s Data Center Group, in a blog post.
Rao says that Nervana will complete development of its first chip, called the Nervana Engine, this year as originally planned, and make it available via the cloud to customers in early 2017. Nervana claims the chip can do as much neural processing as 200 microprocessors or 10 GPUs, in large part because it features a new memory technology that lets it process far more bits at the same time.
Nervana's first chip is not being made by Intel, but later ones will be. Rao says Intel's leading chip fabrication technology and a new high-speed storage technology coming to market this year will help Nervana's ideas reach their full potential. With its vast marketing budget, Intel should also be able to put pressure on Nvidia to reduce prices.
Intel also gets Rao, a long-time chip designer who went back to school in 2011 to get a PhD in computational neuroscience from Brown University, according to his LinkedIn profile. He then joined Qualcomm, where he led a research project to build a “biologically inspired artificial neural network.” This became Zeroth AI, a software platform to help companies use Qualcomm chips to build deep-learning systems.
When Qualcomm declined to turn his research into an actual product, he cofounded Nervana in 2014. Rather than sell the chips, Rao’s strategy has been to use them to power a deep-learning cloud service, saving customers the hassle of developing their own neural networks.
“We timed it well,” Rao said in an interview earlier this year. “We’ve been building something that not many people thought was a good idea a couple of years ago.”