Google CEO Sundar Pichai on achieving quantum supremacy
In an exclusive interview with MIT Technology Review, Pichai explains why quantum computing could be as important for Google as AI.
In a paper published today in Nature and an accompanying company blog post, Google researchers claim to have attained “quantum supremacy” for the first time. Their 53-qubit quantum computer, named Sycamore, took 200 seconds to perform a calculation that, according to Google, would have taken the world’s fastest supercomputer 10,000 years. (A draft of the paper was leaked online last month.)
The calculation has almost no practical use—it spits out a string of random numbers. It was chosen just to show that Sycamore can indeed work the way a quantum computer should. Useful quantum machines are many years away, the technical hurdles are huge, and even then they’ll probably beat classical computers only at certain tasks. (See “Here’s what quantum supremacy does—and doesn’t—mean for computing.”)
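For readers who want a feel for what this kind of task, known as random circuit sampling, looks like in practice, here is a minimal, illustrative sketch using Cirq, Google’s open-source quantum programming framework. It is not the Sycamore experiment: the qubit count, depth, and gate choices below are arbitrary simplifications, and the circuit runs on an ordinary classical simulator.

```python
# Toy random circuit sampling: build a random circuit, then sample bitstrings.
# This is NOT Google's Sycamore circuit -- it is a tiny simplified analogue.
import random
import cirq

n_qubits, depth = 5, 8
qubits = cirq.LineQubit.range(n_qubits)
single_qubit_gates = [cirq.X**0.5, cirq.Y**0.5, cirq.T]

circuit = cirq.Circuit()
for layer in range(depth):
    # A layer of randomly chosen single-qubit gates...
    circuit.append(random.choice(single_qubit_gates)(q) for q in qubits)
    # ...followed by entangling gates between neighboring qubits.
    offset = layer % 2
    circuit.append(cirq.CZ(qubits[i], qubits[i + 1])
                   for i in range(offset, n_qubits - 1, 2))
circuit.append(cirq.measure(*qubits, key='m'))

# Sampling yields bitstrings drawn from a distribution that becomes very hard
# to reproduce classically as the qubit count and circuit depth grow.
samples = cirq.Simulator().run(circuit, repetitions=10)
print(samples.measurements['m'])
```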
But still, it’s an important milestone—one that Sundar Pichai, Google’s CEO, compares to the 12-second first flight by the Wright brothers. I spoke to him to understand why Google has already spent 13 years on a project that could take another decade or more to pay off.
The interview has been condensed and edited for clarity. (Also, it was recorded before IBM published a paper disputing Google’s quantum supremacy claim.)
MIT TR: You got a quantum computer to perform a very narrow, specific task. What will it take to get to a wider demonstration of quantum supremacy?
Sundar Pichai: You would need to build a fault-tolerant quantum computer with more qubits so that you can generalize it better, execute it for longer periods of time, and hence be able to run more complex algorithms. But you know, if in any field you have a breakthrough, you start somewhere. To borrow an analogy—the Wright brothers. The first plane flew for only 12 seconds, so there was no practical application of it. But it showed the possibility that a plane could fly.
A number of companies have quantum computers. IBM, for example, has a bunch of them online that people can use in the cloud. Why can their machines not do what Google’s has done?
The main thing I would comment on is why Google, the team, has been able to do it. It takes a lot of systems engineering—the ability to work on all layers of the stack. This is as complicated as it gets from a systems engineering perspective. You are literally starting with a wafer, and there is a team which is literally etching the gates, making the gates and then [working up] layers of the stack all the way to being able to use AI to simulate and understand the best outcome.
The last sentence of the paper says “We’re only one creative algorithm away from valuable near-term applications.” Any guesses as to what those might be?
The real excitement about quantum is that the universe fundamentally works in a quantum way, so you will be able to understand nature better. It’s early days, but where quantum mechanics shines is the ability to simulate molecules, molecular processes, and I think that is where it will be the strongest. Drug discovery is a great example. Or fertilizers—the Haber process produces 2% of carbon [emissions] in the world [see Note 1]. In nature the same process gets done more efficiently.
Note 1: The Haber process
- The Haber-Bosch process, which makes ammonia for fertilizer by combining nitrogen from the air with hydrogen from natural gas and steam, produces an estimated 1.44% of global carbon dioxide emissions and just over 1% of total greenhouse gas emissions.
So how far away do you think an application like improving the Haber process might be?
I would think a decade away. We are still a few years away from scaling up and building quantum computers that will work well enough. Other potential applications [could include] designing better batteries. Anyway, you’re dealing with chemistry. Trying to understand that better is where I would put my money.
Even people who care about them say quantum computers could be like nuclear fusion: just around the corner for the next 50 years. It seems almost an esoteric research project. Why is the CEO of Google so excited about this?
Google wouldn’t be here today if it weren’t for the evolution we have seen in computing over the years. Moore’s Law has allowed us to scale up our computational capacity to serve billions of users across many products at scale. So at heart, we view ourselves as a deep computer science company. Moore’s Law is, depending on how you think about it, at the end of its cycle. Quantum computing is one of the many components by which we will continue to make progress in computing.
The other reason we’re excited is—take a simple molecule. Caffeine has 2⁴³ states or something like that [actually 10⁴⁸—see Note 2]. We know we can’t even understand the basic structure of molecules today with classical computing. So when I look at climate change, when I look at medicines, this is why I am confident one day quantum computing will drive progress there.
Note 2: Caffeine
- Caffeine, with 24 atoms, can exist in 10⁴⁸ distinct quantum states, i.e., configurations of those atoms. That means that for a classical computer to perfectly represent caffeine, it would require 10⁴⁸ bits—close to the number of atoms in the entire Earth (10⁴⁹ or 10⁵⁰). A 1-gigabyte memory chip has about 10¹⁰ bits.
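The reasoning behind figures like these is exponential growth: describing a general quantum state of n two-level systems takes on the order of 2ⁿ numbers. A back-of-the-envelope sketch, assuming 16 bytes (one double-precision complex amplitude) per basis state rather than the note’s more conservative one bit per state, makes the scale concrete.

```python
# Back-of-the-envelope: memory needed to hold a full quantum state vector on a
# classical machine, assuming 16 bytes (one complex amplitude) per basis state.
# The point is the exponential growth, not the exact figures.
def statevector_bytes(n_qubits: int) -> int:
    return 16 * 2 ** n_qubits

for n in (30, 53, 100, 160):
    print(f"{n:>3} qubits -> about {statevector_bytes(n):.3e} bytes")
# 30 qubits fits in a laptop's RAM (~17 GB); 53 qubits already needs ~144
# petabytes; by around 160 qubits the byte count rivals the ~10^50 atoms
# that make up the Earth.
```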
A profile of you in Fast Company described you as feeling a sense of “premonition” when you saw an AI learning to identify cat pictures all by itself, back in 2012. [“This thing was going to scale up and maybe reveal the way the universe works,” Pichai is quoted as saying. “This will be the most important thing we work on as humanity.”] Does quantum computing feel as important?
Absolutely. Being able to be in the lab and actually physically manipulate the qubit and being able to put it in a superposition state was an equally profound moment for me because, to my earlier point, it’s how nature works. It opens up a whole new range of possibilities which didn’t exist until today.
It could take a very long time to get to quantum systems that can do something serious. How do you manage patience at a company that is used to very fast progress?
You know, I was spending time with Hartmut [Neven], who leads the quantum team along with John Martinis, the chief hardware scientist. And I mentioned that I dropped out of my PhD in materials science, and people around me were working on high-temperature superconductors. This was 26 years ago, and I was sitting in the lab and I’m like, “Wow, this is going to need a lot of patience to go through.” And I felt like I didn’t have quite that kind of patience. I have deep respect for the people in the team who have stayed on this journey for a long time. But pretty much all fundamental breakthroughs work that way, and you need that kind of a long-term vision to build it.
The reason I’m excited about a milestone like this is that, while things take a long time, it’s these milestones that drive progress in the field. When Deep Blue beat Garry Kasparov, it was 1997. Fast-forward to when AlphaGo beat [Lee Sedol in 2016]—you can look at it and say, “Wow, that’s a lot of time.” But each milestone rewards the people who are working on it and attracts a whole new generation to the field. That’s how humanity makes progress.
And to my earlier systems engineering point—we are pushing at many layers of the stack. So we are driving progress which will be used in many, many different ways. For example, us building our own data centers is what allowed us to build something like TPUs [tensor processing units, specialized chips for Google’s deep-learning framework, TensorFlow], which makes our algorithms go faster. So it’s a virtuous cycle. One of the great things about working on moonshots is even your failures along the way are worth something, and even interim milestones have other applications. So yes, you’re right, we have to be patient. But there is a lot of real gratification along the way.
How much are you investing in quantum computing at the moment?
It’s a relatively small team. But it builds on all the investments we’ve made across many years at various layers of Google. It’s built on the company’s years of research and the applied work we have done on top of it.
Can you talk about the difference in approach between Google and IBM? For one thing, IBM has a bunch of quantum machines that it puts in the cloud for people to program, whereas you’re doing it as an in-house research project [see Note 3].
Note 3: IBM on quantum supremacy
- On October 21, IBM researchers published a paper disputing Google's claim to have achieved quantum supremacy. They argued that, by using a modified form of Google's technique, it should be possible to simulate Sycamore's calculation on a classical system in just two and a half days instead of 10,000 years. A Google spokesperson says, "We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have." He also noted that, since the complexity of quantum computers increases exponentially, adding just a few more qubits would put the task definitively out of bounds of a classical machine.
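To get a rough feel for the spokesperson’s point, here is a toy calculation under the crude assumption that the cost of classically simulating the circuit doubles with each added qubit, starting from IBM’s 2.5-day estimate. The real scaling depends on the circuit and the simulation technique, so treat the numbers as illustrative only.

```python
# Toy scaling exercise: how IBM's 2.5-day estimate for 53 qubits grows if the
# classical simulation cost doubles per added qubit (a simplifying assumption).
days_at_53 = 2.5
for extra in (0, 7, 17, 27):
    days = days_at_53 * 2 ** extra
    print(f"{53 + extra} qubits: ~{days:,.1f} days (~{days / 365:,.1f} years)")
# 60 qubits: ~320 days; 70 qubits: roughly nine centuries;
# 80 qubits: close to a million years.
```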
It’s great that IBM is providing it as a cloud facility and attracting other developers. I think we as a team have been focused on making sure we prove to ourselves and to the community that you can cross this important milestone of quantum supremacy.
IBM also says the term “quantum supremacy” is misleading, because it implies that quantum computers will eventually do everything better than classical computers, when in fact they will probably always have to work together on different bits of a problem. They’re accusing you of overhyping this.
My answer on that would be, it is a technical term of art. People in the community understand exactly what the milestone means.
But the contention is, the public may see it as a sign that quantum computers have now vanquished classical computers.
I mean, it’s no different from when we all celebrate AI. There are people who conflate it with general artificial intelligence. Which is why I think it’s important we publish. It’s important that people who are explaining these things help the public understand where we are, how early it is, and how you’re always going to apply classical computing to most of the problems you need in the world. That will still be true in the future.
AI generates business for Google at very many levels. It’s in services like Translate and Search. You provide AI tools to people through your cloud. You provide an AI framework, TensorFlow, that allows people to build their own tools. And you provide specialized chips [the TPUs mentioned above] that people can then use to run their tools on. Do you think of quantum computing as eventually being that pervasive for Google?
I absolutely do. And if you step back, we invested in AI and developed AI before we knew it would work for us across all layers of the stack.
Down the line, on all the practical applications you talked about—we don’t use AI technology just for ourselves; we provide it to customers around the world. We care about democratizing AI access. The same would be true for quantum computing, too.
What do you think quantum computing might mean for AI itself? Could it help us break through the barrier to artificial general intelligence, for instance, if you combine quantum computing and AI?
I think it’ll be a very powerful symbiotic thing. Both fields are in early phases. There is exciting work in AI in terms of building larger models, more generalizable models, and what kind of computing resources you need to get there. I think AI can accelerate quantum computing and quantum computing can accelerate AI. And collectively, I think it’s what we would need to, down the line, solve some of the most intractable problems we face, like climate change.
You mentioned democratizing the technology. Google has run into some ethical controversies around AI—who should have access to these tools and how they should be used. What have you learned from handling those issues, and how is it informing your thinking on quantum technology, which is much earlier in its development?
Publishing and engaging with the academic community at these stages is very important. We work hard to engage. We’ve published our comprehensive AI principles. If you take an area like AI bias, I think we have published over 75 research papers in the last few years. So, codifying our ethics and engaging proactively.
I think there are areas where regulation may make sense. We want to constructively participate and help get the right regulations. And finally, there’s a process of engaging externally and getting feedback. These are all technologies which will impact society. There’s no one company which can figure out what the right thing is. There’s no silver bullet, but this is early enough that, over the next 10 years, we have to engage and work together on all of this.
Isn’t there a bit of a contradiction between, on the one hand, saying you won’t develop AI for certain purposes [as per the AI principles] and, on the other, creating a platform that enables people to use AI for whatever purpose they want?
AI safety is one of our most important ethical principles. You want to build and test systems for safety. That’s inherent in our development. If you’re worried about quantum systems breaking cryptography over time, you want to develop better quantum encryption technologies. When we built search, we had to solve for spam.
The stakes are clearly higher with these technologies, but part of it is the technical approach you take, and part of it, over time, is global governance and ethical agreements. You would need to arrive at global frameworks which result in outcomes we want. We are committed to doing what we can to help develop [the technology], not just responsibly, but to use it to safeguard safety, democracy, etc. And we would do that collectively with the institutions.
Is there any other technology that you’re also really excited about right now?
For me, just as a person, radically better ways to generate clean renewable energy have a lot of potential. But I’m excited just broadly about the combinations of all of this and how we practically apply it. In health care, I think we are on the verge of breakthroughs over the next decade or so which will be profound. But I would also say AI itself—the next generation of AI breakthroughs, new algorithms, better generalizable models, transfer learning, etc., are all equally exciting to me.