Moore’s Law is stalling in the face of the physical limits of what can be done with silicon. Now a new kind of transistor design promises to keep it alive for a little longer—though the chip industry is already planning ways to cope when it finally kicks the bucket.
The problem currently facing chip design is, sadly, physics itself. In silicon, it is effectively impossible to build a transistor whose gate (the electrode that switches the flow of electrons through the device on and off) is much smaller than about seven nanometers. Below that size, electrons begin to tunnel straight through the channel between a transistor's source and drain, a quantum-mechanical effect known as quantum tunneling, so current leaks through the device and a transistor that should be "off" behaves as if it were "on."
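To get a feel for why that matters, here is a rough back-of-the-envelope sketch using the textbook rectangular-barrier (WKB) estimate of tunneling probability; the one-electronvolt barrier height and free-electron mass are illustrative assumptions, not figures from any real device.

```python
import math

# Rough illustration of why leakage rises so sharply at short gate lengths:
# the textbook WKB estimate for tunneling through a rectangular barrier.
# The 1 eV barrier height and free-electron mass are assumptions for
# illustration only; real devices involve band structure, strain, and
# geometry that this toy model ignores.
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron rest mass, kg
EV = 1.602176634e-19   # one electronvolt, J

def tunneling_probability(barrier_nm, barrier_ev=1.0):
    """Approximate probability that an electron tunnels through the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

for width in (7, 5, 3, 1):
    print(f"{width} nm barrier: tunneling probability ~ {tunneling_probability(width):.1e}")

# The probability climbs by orders of magnitude for every nanometer shaved off,
# which is why leakage becomes unmanageable as gates approach a few nanometers.
```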
That places a hard physical limit on Moore's Law, the idea that the number of transistors that can fit onto a chip doubles every two years or so. But now researchers at the Lawrence Berkeley National Laboratory, led by Ali Javey, a previous TR Innovator Under 35, have built what's claimed to be the world's smallest working transistor.
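To put that doubling in concrete terms, here is a tiny worked example; the starting transistor count and time horizon are arbitrary and chosen purely for illustration.

```python
# Toy illustration of the "doubles every two years" rule of thumb behind Moore's Law.
# The starting count and time horizon are arbitrary, chosen only to show the arithmetic.
start_transistors = 1_000_000_000  # a hypothetical chip with one billion transistors today

for years in range(0, 11, 2):
    projected = start_transistors * 2 ** (years / 2)
    print(f"after {years:2d} years: ~{projected:.2e} transistors")

# A decade of doubling every two years works out to a factor of 2**5 = 32.
```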
Publishing their achievement in Science, the researchers explain that the device uses a carbon nanotube as its gate and molybdenum disulfide as its channel material, yielding a transistor with a gate length of just one nanometer. It's an impressive achievement, and, in theory at least, it means far more of the tiny switches could be squeezed onto a chip than silicon could ever allow. For context, today's state-of-the-art chips are built with 14-nanometer transistors, and 10-nanometer chips are on the way.
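How much more? As a purely geometric illustration: if every linear dimension of a chip scaled in step with gate length, density would grow with the square of the shrink factor. In practice, pitch, contacts, and wiring dominate the layout, so treat the sketch below as an idealized upper bound rather than a prediction for the new device.

```python
# Idealized, geometry-only comparison of a 14 nm gate with a 1 nm gate.
# If every linear dimension shrank in proportion, density would grow with the
# square of the shrink factor; real layouts are limited by pitch, contacts,
# and wiring, so this is an upper bound, not a prediction.
current_gate_nm = 14
new_gate_nm = 1

linear_shrink = current_gate_nm / new_gate_nm
print(f"linear shrink factor: {linear_shrink:.0f}x")
print(f"idealized density gain: {linear_shrink ** 2:.0f}x")
```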
The result is, however, just a proof of concept—a long way from a viable product. Turning these nanotube transistors into a processor would require billions of the switches to be reliably created on a single chip. That may be possible, but it could also be cripplingly expensive.
Indeed, the chip industry has already acknowledged that it's prepared for transistors to stop shrinking. Earlier this year the Semiconductor Industry Association, whose members include Intel, AMD, and GlobalFoundries, published a report concluding that by 2021 it will no longer be economically viable to keep shrinking silicon transistors. Instead, chips look set to change in other ways.
We're already seeing the processor industry fracture, with a shift away from super-fast general-purpose hardware and toward more specialized chips. To that end, Intel recently bought Movidius, a company that makes chips dedicated to computer vision tasks. Nvidia, meanwhile, is selling specialized AI chips to an industry eager to capitalize on machine learning.
More efficient chip designs will also help increase computational speed while consuming less power. Microsoft and Intel are both working on using reconfigurable chips, known as field-programmable gate arrays (FPGAs), to run artificial-intelligence algorithms more efficiently, for instance. And the Japanese telecom and Internet company SoftBank recently acquired the British chip designer ARM for its hugely successful low-power chips, which will provide processing power for the growing crop of Internet-of-things hardware.
Less specialized processors, meanwhile, are likely to change shape rather than simply shrink. Chips will increasingly stack multiple layers of circuitry to increase transistor density, for example. Or, just maybe, they might one day shrink further still by building on the Berkeley Lab breakthrough.
(Read more: Science, “Chip Makers Admit Transistors Are About to Stop Shrinking,” “Moore’s Law Is Dead. Now What?” “$32 Billion Buyout of ARM Is a Giant Bet on the Internet of Things,” “Intel Buys the Company That Gives Machines 'The Power of Sight,'” “The Man Selling Shovels in the Machine-Learning Gold Rush”)