Google announces Willow, a major step forward in quantum computing, here’s what you need to know
Willow, one of my all-time favorite movies as a kid, now has an entirely new meaning as Google rocked the world today with the announcement of a new quantum chip. As Sundar Pichai, the CEO of Google, said in his announcement, the new chip solved a problem that would take the leading supercomputer more time than the age of the universe…in under five minutes.
That’s a big deal: thirty years of hard work have paid off, paving the way for the next generation of quantum computers.
So let’s talk about quantum computing and why it’s a big deal. First, quantum computing isn’t a new thing for Google: they started a division dedicated to it, Google Quantum AI, twelve years ago.
And the division has a pretty sweet lab in Santa Barbara, because if you’re doing anything with quantum computing, you’re going to need a pretty serious hardware lab.
We built a campus to realize our mission of leading the development of a large-scale error-corrected quantum computer.
Our headquarters creates a hub for our team to integrate our leading research and quantum hardware with a tightly-designed quantum systems engineering practice. (Source — Google Quantum AI)
Currently, quantum computers are big, and you’re not going to get a computer for your house with anything “quantum inside” anytime soon…but that doesn’t mean you won’t be able to leverage the power of quantum computers, probably sooner than you’d think. Or at least that’s the direction things seem to be going.
So what exactly is a quantum computer?
I thought I’d give Google a little nod here since today is their big day and use Gemini to give you the definition:
A quantum computer is a machine that uses quantum mechanics to solve problems faster than a classical computer. Quantum computers use quantum bits, or qubits, instead of bits, which are the units of information in classical computers.
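To make the “qubits instead of bits” point a little more concrete, here’s a tiny pure-Python sketch of my own (not anything from Google’s stack): a classical bit is 0 or 1, while a qubit carries two complex amplitudes, and measuring it gives 0 or 1 with probabilities set by those amplitudes.

```python
import random

# A classical bit is 0 or 1. A qubit is a pair of complex amplitudes
# (a, b) with |a|^2 + |b|^2 = 1; measuring it yields 0 with
# probability |a|^2 and 1 with probability |b|^2.

def measure(amplitudes, trials=10_000):
    """Return the fraction of measurements that come out as 0."""
    a, _ = amplitudes
    p0 = abs(a) ** 2
    return sum(1 for _ in range(trials) if random.random() < p0) / trials

# An equal superposition: about half the measurements give 0.
superposition = (2 ** -0.5, 2 ** -0.5)
print(measure(superposition))
```

The interesting part isn’t the simulation (a laptop can fake one qubit easily); it’s that simulating n qubits takes 2^n amplitudes, which is why real quantum hardware matters.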
If you look at AI today, companies are using powerful Nvidia GPUs that cost around $30,000 each on the low end. But running LLMs like OpenAI’s ChatGPT takes far more than one of these.
And renting a cluster like this reportedly comes at a cost of about five billion (yes, billion) dollars for a two-year rental.
Not cheap.
And this is where quantum computing gets interesting. For certain classes of problems, quantum computers are so much faster than anything we have today it would make your head spin.
One of the biggest challenges in quantum computing, and maybe the biggest, is error correction. Here’s the skinny on error correction and the advances made with Willow, from Hartmut Neven, the founder of Google Quantum AI.
Errors are one of the greatest challenges in quantum computing, since qubits, the units of computation in quantum computers, have a tendency to rapidly exchange information with their environment, making it difficult to protect the information needed to complete a computation. Typically the more qubits you use, the more errors will occur, and the system becomes classical.
Today in Nature, we published results showing that the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes. We tested ever-larger arrays of physical qubits, scaling up from a grid of 3x3 encoded qubits, to a grid of 5x5, to a grid of 7x7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half. In other words, we achieved an exponential reduction in the error rate. This historic accomplishment is known in the field as “below threshold” — being able to drive errors down while scaling up the number of qubits. You must demonstrate being below threshold to show real progress on error correction, and this has been an outstanding challenge since quantum error correction was introduced by Peter Shor in 1995.
There are other scientific “firsts” involved in this result as well. For example, it’s also one of the first compelling examples of real-time error correction on a superconducting quantum system — crucial for any useful computation, because if you can’t correct errors fast enough, they ruin your computation before it’s done. And it’s a “beyond breakeven” demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits do, an unfakable sign that error correction is improving the system overall.
As the first system below threshold, this is the most convincing prototype for a scalable logical qubit built to date. It’s a strong sign that useful, very large quantum computers can indeed be built. Willow brings us closer to running practical, commercially-relevant algorithms that can’t be replicated on conventional computers. (Source — Google)
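A quick back-of-the-envelope sketch of the scaling Neven describes: each step up in code distance (3x3 → 5x5 → 7x7 grids of physical qubits) cuts the logical error rate roughly in half. The starting rate below is made up for illustration; it is not Google’s measured value.

```python
def logical_error_rate(base_rate, steps, suppression=2.0):
    """Error rate after `steps` code-distance increases, assuming each
    step divides the rate by `suppression` (the 'below threshold' regime)."""
    return base_rate / suppression ** steps

base = 0.004  # hypothetical logical error rate for the 3x3 grid
for steps, size in enumerate((3, 5, 7)):
    print(f"{size}x{size} grid: logical error rate {logical_error_rate(base, steps):.4f}")
```

That halving per step is what makes the reduction exponential: adding qubits makes the encoded qubit better, not worse, which is the opposite of what happens without error correction.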
And the results from Willow that Google shared today are pretty staggering:
Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.
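Just to put that claim in perspective, here’s the arithmetic behind it, using the commonly cited ~13.8-billion-year age of the Universe:

```python
# Compare 10 septillion years (the supercomputer estimate) with the
# five minutes Willow took on the same benchmark.
SUPERCOMPUTER_YEARS = 10 ** 25          # 10 septillion years
MINUTES_PER_YEAR = 365.25 * 24 * 60
WILLOW_MINUTES = 5
UNIVERSE_AGE_YEARS = 1.38e10            # ~13.8 billion years

supercomputer_minutes = SUPERCOMPUTER_YEARS * MINUTES_PER_YEAR
speedup = supercomputer_minutes / WILLOW_MINUTES

print(f"Speedup on this benchmark: ~{speedup:.1e}x")
print(f"Multiples of the Universe's age: ~{SUPERCOMPUTER_YEARS / UNIVERSE_AGE_YEARS:.1e}")
```

That works out to a speedup on the order of 10³⁰ on this one benchmark — which, to be fair, is a benchmark chosen to favor quantum hardware, not a general-purpose workload.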
What Willow could represent is a tipping point, given that it has made a giant leap toward solving what is arguably the hardest problem in quantum computing.
From here though, the next step isn’t commercializing quantum computing for consumers, but instead seeing if there now might be a viable path towards creating more efficient quantum computing labs.
Where things get wild is when quantum meets AI and companies like OpenAI start moving away from traditional clusters full of Nvidia GPUs. I’m no quantum computing expert so I can’t begin to estimate how close or far away this might be.
What I can say is that Willow is a big deal: it tackles one of the biggest problems in quantum computing and moves us closer to a world where we go from AI powered by the computers of today to AI powered by computers so much faster that the sky is no longer the limit.
If you want to read all the juicy details about Willow, here’s Google’s official article announcing the new chip.
Wild times, go Willow!