Inside big tech’s high-stakes race for quantum supremacy

Quantum computers used to be an impossible dream. Now, after a decade of research by some of the world’s biggest tech companies, they’re on the verge of changing everything

On June 4, 2019, Sergio Boixo gathered his colleagues on Google’s quantum research team for an urgent meeting. The group, split across two sites in southern California, had spent the better part of a decade trying to build a working quantum computer – a revolutionary type of device that works according to the laws of quantum mechanics.

For months, Google had been inching closer to a milestone known as quantum supremacy – the point at which a quantum computer can accomplish something beyond even the world’s best classical supercomputers. But there was a problem.

Boixo, a tall Spaniard with a greying beard, had designed an experiment that was meant to be virtually impossible for a classical computer to solve, but easy for Google’s Sycamore quantum chip. The simulations looked good, and by the end of April 2019, Google seemed on the verge of achieving quantum supremacy. But then, on May 31, a parallel team inside Google discovered that the task was actually a million times easier for a classical computer than had been thought. Their quantum chip wasn’t going to be able to beat that. “I was panicking a little bit,” Boixo says. “But everyone was very understanding.”

Seven months later, Boixo – smartly dressed in chinos and a pink sweater – is sitting on a picnic bench outside Google’s Santa Barbara lab, joking with his colleagues about the brief setback. Anthony Megrant, a quantum hardware engineer who fell into the field after a stint in the US army, had returned from paternity leave in early June to find the lab in a fluster. “I was like, really? I’ve been gone a week!” he laughs.

The team went back to the drawing board, and by June 7 they had redesigned the task, which they programmed into the Sycamore quantum processor. The chip, no bigger than a thumbnail, sits at the bottom of a huge cryostat that keeps it chilled to a temperature colder than outer space. There are five of these inside the squat, beige building behind us. We walk past surfboards hanging on the wall and a group of men playing Super Smash Bros in a meeting room named after Nobel prize-winning physicist Richard Feynman, to the fridges – suspended from the ceiling like chandeliers: gold-plated copper discs and intricate wiring narrow to a point inside nested canisters, each painted in one of Google’s corporate colours.

Under the microscope, the Sycamore chip looks like any other – bewildering silver patterns on black. But on June 13 it achieved what had once been thought impossible. A Sycamore chip inside the green cryostat performed Boixo’s task – which would have taken the world-leading Summit supercomputer approximately 10,000 years – in three minutes and 20 seconds. When the news leaked in September 2019, it made global headlines and sparked huge controversy within the growing field. “There are people that literally think that the thing we did or the next steps are not possible,” says Megrant.

On May 6, 1981, Richard Feynman gave a lecture at MIT about the challenge of simulating nature. Feynman was a leading voice in quantum mechanics – the study of the strange things that start to happen in physics when you get down to a really small scale. At the subatomic level, nature stops obeying the laws that we’re familiar with. Electrons and photons sometimes behave like waves, and sometimes like particles. Until they’re measured, they can even appear to be in both states simultaneously, or in two places at once – a phenomenon known as quantum superposition. Nature has uncertainty baked into its core.

Feynman was the first to realise the implications. If you want to accurately simulate physics, chemistry, or anything else both complex and minuscule, you need a simulation that can adhere to the same probability-based laws of quantum mechanics.

That’s a problem for classical computers. They work using bits – tiny switches that can either be in the on position, represented by a "1", or in the off position, represented by a "0". Every website you visit, video game you play and YouTube video you watch is ultimately represented by some combination of these ones and zeroes. But bits are black and white, either/or – they’re not very good at coding for uncertainty, and that means that some seemingly simple problems can become exponentially more difficult for normal computers to handle.

“Say we want to send you from the UK to 14 cities in the US, and work out the optimal path – my laptop can do that in a second,” explains William "Whurley" Hurley, founder of Strangeworks, a company that aims to make quantum computing more accessible. “But if I made it 22 cities, using the same algorithm and the same laptop, it would take 2,000 years.”

This is the iconic travelling salesman problem, the kind of situation where a quantum computer could prove invaluable. A classical device trying to plot the best route has to check every single possible order in which you could visit the cities, so for every stop you add to the journey, the amount of computing power balloons – 11 cities have 20 million routes between them, 12 cities have 240 million routes, 15 cities have more than 650 billion. Modelling complex interactions between molecules, as Feynman wanted to, creates the same problem – with every variable you add, the challenge gets bigger.
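
The arithmetic behind that explosion is easy to check. Here is a minimal sketch in Python – assuming, as the figures above do, that a route and its reverse count as one, so n cities give n!/2 distinct routes – together with the brute-force search a classical machine is reduced to (the distances matrix here is a hypothetical input):

```python
from itertools import permutations
from math import factorial

def route_count(n_cities):
    # Every ordering of n cities, counting a route and its reverse as one.
    return factorial(n_cities) // 2

for n in (11, 12, 15, 22):
    print(f"{n} cities: {route_count(n):,} possible routes")
# 11 cities: ~20 million; 12: ~240 million; 15: ~654 billion; 22: ~562 quintillion

def brute_force_tsp(distances):
    """Exhaustively check every ordering of the cities. Fine for a handful of
    stops; hopeless once the factorial takes over."""
    cities = range(len(distances))
    best_length, best_route = None, None
    for route in permutations(cities):
        length = sum(distances[a][b] for a, b in zip(route, route[1:]))
        if best_length is None or length < best_length:
            best_length, best_route = length, route
    return best_length, best_route
```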

For decades, chipmakers have been dealing with this problem by packing more and more bits into processors, by making the physical switches that control them smaller. We’ve gone from vacuum tubes in room-sized machines to billions of microscopic transistors on silicon. However, the pace of change predicted by Moore’s Law – a doubling of the number of transistors on a microchip every two years – is slowing down. In 2012, Australian researchers created a transistor that consisted of a single atom, switching between two states to signify 1s and 0s. After that, there was nowhere left for computers to go but into the quantum realm.

In 1985, Oxford-based physicist David Deutsch went a step further than Feynman. He realised that a computer built from quantum components could be so much more powerful than just a physics simulator. Instead of bits, which can only be 1 or 0, these components – which would eventually become known as quantum bits, or "qubits" – can be 1, 0, or in a state of superposition where they are both 1 and 0 at the same time. You can think of a qubit as a globe, with 1 at the North Pole, 0 at the South Pole, and superposition at any other point on the planet. Or imagine a coin – if heads is 1 and tails is 0, then superposition is a spinning coin, laden with unrealised potential futures.
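
In the notation physicists use – not spelled out in the article, but it is the maths behind the globe picture – a qubit’s state is a weighted blend of the two poles, and the pair of weights pins down a single point on the sphere:

```latex
% A qubit's state: a weighted combination of 1 and 0
|\psi\rangle = \alpha\,|1\rangle + \beta\,|0\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measuring it returns 1 with probability |alpha|^2 and 0 with probability |beta|^2;
% only when one of the weights is exactly zero does the qubit sit at a pole.
```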

Deutsch figured out that a computer built of qubits instead of bits could use the uncertainty of quantum mechanics to its advantage. Instead of trying out each path of a maze in turn, it could, in effect, go down every single path at the same time. As well as simulating nature more efficiently, it would be able to hold uncertainty in its memory, and tackle things like the travelling salesman problem thousands of times faster than a classical machine.

This is why some believe that quantum computers could go well beyond the confines of classical computers to create powerful new materials, turbocharge the fight against climate change, and completely upend cryptography.

But to do calculations, you need to be able to measure things, and pass on the results of what you find to the next stage of the equation. Measuring something in superposition knocks it out of that state – the photon no longer appears to be in two places at once. Schrödinger’s cat is either dead or alive. You need to be able to move that spinning coin around without disturbing its spin. That’s only possible thanks to another weird feature of quantum mechanics called entanglement.

For reasons that physicists still can’t really explain after almost a century of trying, quantum mechanics allows for two particles to become interlinked – entangled. Even if they’re separated by a great distance, anything that happens to one entangled particle instantly happens to the other one – an observation that has given students a headache for decades, but that means quantum information can, in theory at least, be transferred from one place to another, without the underlying superposition collapsing.
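
The textbook example of two entangled qubits – not named in the article, but useful for making the idea concrete – is a so-called Bell state:

```latex
% Two entangled qubits: neither is definitely 0 or 1 on its own,
% but their fates are locked together.
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
% A measurement returns 00 or 11 with equal probability, never 01 or 10,
% however far apart the two qubits have been separated.
```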

By 1992, there were a handful of enthusiasts keeping an eye on the potential of quantum computing, but it might have remained in the world of theory if not for Giuseppe Castagnoli, head of IT at Elsag Bailey, a manufacturer of industrial control systems that is now part of ABB.

“He persuaded his company that instead of sponsoring some art exhibition, he would sponsor a series of conferences,” recalls Artur Ekert, a professor of quantum physics at the University of Oxford and an early attendee of Castagnoli’s annual workshops at Villa Gualino, a hillside hotel overlooking Turin, from 1993 to 1998. Here, the young academics who are now among the most influential people in quantum computing rubbed shoulders and exchanged ideas.

In 1994, Ekert gave a talk to the International Conference on Atomic Physics in Boulder, Colorado, based on some of the ideas that he’d absorbed at Villa Gualino. For the first time, he broke down quantum computation into its basic building blocks, drawing parallels with classical devices and describing the types of switches and logic gates that would be needed to build a quantum machine.

Ekert’s talk was the starting gun in the quantum race. “This meeting started the whole avalanche,” he says. “All of a sudden the computer scientists were talking about algorithms; atomic physicists saw that they could play a role. Later it started spilling over into other fields, it started accelerating, and it became the industry you see today.”

Before it could become an industry, though, scientists had to figure out how to actually build a qubit. In the 1990s, this was still an entirely theoretical construct. To make quantum computing work, scientists needed to find or create something that was small enough to adhere to the laws of quantum mechanics, but also big enough to be reliably controlled. It’s a quest that has pushed our understanding of physics and materials science to the limit.

For the last ten years, some of the world’s biggest companies – Google, Amazon, Microsoft, IBM – have been racing to be the first to create a working, practically useful quantum computer.

Google set up its Quantum Artificial Intelligence Lab in 2013. Initially, the lab – led by Hartmut Neven, a co-founder of the Google Glass project – partnered with Nasa and early quantum pioneer D-Wave. But in 2014, it changed tack, and signed a partnership with a research team led by John Martinis at the University of California, Santa Barbara, that was making good progress towards developing a type of qubit known as a superconducting qubit.

Superconducting qubits are built around a structure called a Josephson junction – two layers of superconducting metal separated by an ultra-thin insulating barrier, which gives the circuit a useful property called nonlinearity. That nonlinearity means the device can be confined to just two energy states, or a superposition of both, rather than spreading across many levels as you put energy into it. Essentially, it behaves like a switch.

There are different approaches to quantum computing – qubits have been suspended in laser beams, trapped in diamonds, and inferred from the aggregate magnetic alignment of billions of particles in a machine that works like an MRI scanner. Some routes offer a gentle starting slope before accelerating in difficulty, while others – such as superconducting qubits – have a steep initial learning curve, but promise to be easier to scale up to the thousands or millions of qubits we’ll eventually need to solve real-world problems.

But superconducting qubits are currently preferred by most of the major players – including Google and IBM – because they mesh more neatly with the silicon-based architecture inside almost every classical computer on the planet. “This approach – superconducting qubits – has always been looked at as being the closest analogue to the classical integrated circuit that powers our lives,” says Boixo. “Once we get past certain shortcomings that came along with this package, we can scale up just like classical computing. We’re going to get all of those benefits and we just have to overcome the negatives.”

In the lab, Megrant explains how he can use a microwave pulse to flip each qubit’s energy state between 0 and 1, and how – by passing a current through the system – researchers can tune the energy levels of each qubit and the coupling strengths between neighbouring qubits to achieve entanglement. But this works only at incredibly low temperatures, which is just one of the reasons that superconducting qubits are so difficult to get right.

Qubits of all types are incredibly finicky – the slightest interference can knock them out of superposition, so they need to be kept isolated from the environment as much as possible. But they also need to be controlled. “You're simultaneously trying to really well isolate the inner workings of a quantum computer and yet be able to tell it what to do and to get the answer out of it,” says Chetan Nayak, general manager for quantum hardware at Microsoft.

Google’s cryostats are designed to gradually step down the temperature. Each level gets progressively colder; it takes the whole machine almost two days to get the quantum chip down to 10 millikelvin, and nearly a week to warm back up to room temperature.

The Sycamore chip, like its predecessor, Bristlecone, was manufactured at UCSB, where it was sandwiched together like an Oreo to create the fragile Josephson junction. Under the microscope, thin silver lines lead out to the edge of the chip. Eventually, they connect up to a tangle of blue wires that carry and amplify the faint signal from the qubit to one of the racks of machines surrounding each cryostat.

It takes up to two weeks to wire up one of the machines: to increase the number of qubits, Google will need to find a new wiring solution that takes up less space, or find a way of controlling the qubit from inside the cryostat. “A lot of things will just break if you try to cool down to 10mK,” says Megrant. Both Microsoft and Google are now working on building classical chips that can operate at lower temperatures in order to control the qubits without adding interference.

It’s all part of a delicate balancing act. Each quantum computation is a frantic race to perform as many operations as possible in the fraction of a second before a qubit "decoheres" out of superposition. “The lifetime of the quantum information is super short,” explains Jan Goetz of Finnish startup IQM, which is developing technology to try and increase the clock speed of quantum chips and improve their performance in this regard. “The more complex you make the processors, the more the lifetime goes down.”

Over the last decade, we’ve seen an escalating race in the number of qubits being claimed by different companies. In 2016, Google simulated a hydrogen molecule with a nine-qubit quantum computer. In 2017, Intel reached 17 qubits, and IBM built a 50-qubit chip that could maintain its quantum state for 90 microseconds. In 2018, Google unveiled Bristlecone, its 72-qubit processor, and in 2019 IBM launched its first commercial quantum computer – the 20-qubit IBM Q System One.

D-Wave, a Canada-based company, has always been an outlier. Founded in the late 1990s, it has been selling commercial machines since 2011, and claims to have several thousand "annealing qubits" in its devices – but these are based on a different theoretical approach that’s only useful for certain types of problem.

In any case, it’s becoming clear that the number of qubits isn’t nearly as important as what Heike Riel, head of the science and technology department at IBM Research Europe, calls “quantum volume” – a more practical measurement of the power of a quantum device. “The number of qubits is of course important, but it’s not everything,” she says. Quantum volume tells you how much useful computing you can do in the fractions of a second before your qubits fall out of superposition.

Much of Google’s work over the last decade has been on slowly improving both coherence time (how long qubits last) and gate time (the speed of its various logic gates – the building blocks of algorithms).

Google’s 54-qubit Sycamore chip has fewer qubits than its predecessor, but these are arranged in a grid that allows for faster computation. The task Boixo set for the chip involved sampling the output of a random series of quantum logic gates – something that is incredibly difficult for a classical computer to simulate, but relatively straightforward for a quantum chip to run.
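
A toy version of that task can be sketched in a few lines of Cirq, Google’s open-source framework (described later in this piece). The qubit count, gate choices and depth here are picked purely for illustration – the real experiment used 53 working qubits, 20 cycles and a carefully tuned gate set, a scale at which no classical simulator can keep up:

```python
import random
import cirq

# A small random circuit: layers of randomly chosen single-qubit gates,
# interleaved with entangling gates, then a measurement of every qubit.
qubits = cirq.LineQubit.range(4)
circuit = cirq.Circuit()
for _ in range(8):
    circuit.append((random.choice([cirq.X, cirq.Y, cirq.Z]) ** 0.5)(q) for q in qubits)
    circuit.append(cirq.CZ(a, b) for a, b in zip(qubits[::2], qubits[1::2]))
circuit.append(cirq.measure(*qubits, key='bits'))

# Sampling the output bitstrings is easy for a quantum chip (or, at this toy
# size, for a classical simulator); doing so for a 53-qubit version is not.
print(cirq.Simulator().run(circuit, repetitions=20))
```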

Over the first few months of 2019, the team gradually increased the difficulty of the experiment – adding more and more qubits to the operation. At first everything looked good. But in March 2019, they saw an alarming drop in the quantum chip’s performance, right around the level of complexity at which supercomputers start to struggle to simulate qubits. The problem with operating at the fringes of our knowledge of physics is that when you run into a problem, you don’t know whether it’s down to a manufacturing error, noise and interference, or because you’ve hit a fundamental barrier – some undiscovered law of the universe. “Maybe quantum mechanics stops at 30 qubits,” Megrant jokes.

It doesn’t – the problem turned out to be a calibration error – but some researchers believe there might be other impediments to progress. Even with all the technology Google employs to shield its qubits from interference, the error rate is still astonishingly high. Qubits routinely flip into the wrong state, or decohere before they’re supposed to.

It’s possible to correct for those errors, but to do it, you need more qubits – and then more qubits to correct for those qubits. With current error rates, you would need thousands or millions of qubits to run algorithms that might be useful in the real world. That’s why John Preskill, the physicist who coined the term "quantum supremacy", has dubbed this the "noisy intermediate-scale quantum" (NISQ) era, in recognition of the fact that we’re still a long way off practical devices.
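
The trade-off can be illustrated with the simplest error-correcting trick there is – not the elaborate codes the big labs actually plan to use, just a toy repetition scheme with an invented error rate, to show how piling on extra qubits buys back reliability:

```python
import random

def logical_error_rate(physical_error_rate, copies, trials=100_000):
    """Toy repetition code: store one logical bit on several noisy qubits and
    read it back by majority vote. The numbers are invented for illustration;
    real devices and real codes (such as surface codes) behave differently."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < physical_error_rate for _ in range(copies))
        if flips > copies // 2:  # a majority flipped, so the logical bit is lost
            failures += 1
    return failures / trials

p = 0.01  # hypothetical 1 per cent chance that each physical qubit flips
for copies in (1, 3, 5, 7):
    print(f"{copies} physical qubits -> logical error rate ~{logical_error_rate(p, copies):.5f}")
```

Each extra layer of redundancy pushes the logical error rate down – but only if the underlying qubits are already fairly reliable, which is why error rates matter at least as much as raw qubit counts.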

It’s also why Microsoft is convinced that superconducting qubits are a dead end. “We do not see a line of sight there to commercial-scale quantum computers that could solve today’s unsolvable problems,” Nayak says. Instead, at the software giant’s sprawling campus (so big that the quickest way to go from meeting to meeting is by Uber) in the Seattle suburb of Redmond, researchers are testing a cryostat that looks very similar to Google’s, but will – if things go to plan – host a very different type of quantum processor.

If Google’s ascent up the quantum mountain is steep, Microsoft’s is potentially impossible. Instead of superconducting qubits, they’re trying to harness a different type of qubit known as a "topological qubit". The only problem is that it might not actually exist.

“Maybe we're on a marathon instead of a sprint,” says Krysta Svore, general manager for quantum software at Microsoft’s quantum research lab in Redmond. Topological qubits are based on a theoretical particle called a Majorana, which encodes the state of the qubit in several places at once. If they can be created, topological qubits could offer a more robust alternative to superconducting qubits, one that is much harder to knock out of superposition. As a result, you would need far fewer of them – perhaps a tenth as many.

Nayak explains it using a Harry Potter analogy. “The main villain of the story, Voldemort, splits his soul into seven pieces called Horcruxes, and spreads out those Horcruxes so he can’t be killed,” he says. “What we’re doing with our topological qubit is spreading our qubit out over six Majoranas. Those are our Horcruxes. By doing something to just one or another of them locally, you actually can’t kill off Voldemort. Our qubit is still going to be there.”

But scientists still aren’t entirely sure that Majorana particles are real. They’ve been theorised about since the 1930s, but the experimental evidence isn’t watertight. Still, when I speak to Nayak and Svore in January 2020, they’re confident. “We're not hunting in the dark for this and hoping to find this,” says Nayak. “We're being guided by simulations.”

The news of Google’s claims of quantum supremacy leaked out a month ahead of schedule in September 2019, after reporters from the Financial Times found a draft copy of Google’s paper – due to be published in Nature – available to download on an open-access server. It sparked a few days of mild panic in Santa Barbara, the first hours of which were spent frantically trying to get the file taken down, and the remainder wondering if anyone had actually seen it. By the time the paper was officially published in October, the initial hype had been somewhat tempered. “It’s a stepping stone, but we see stepping stones every year,” says Robert Young, director of Lancaster University’s Quantum Technology Centre. “I don’t think it’s a threshold event.”

IBM prepared its own calculations, showing that its classical supercomputer would have been able to do the task in 2.5 days, not 10,000 years – still quantum supremacy, but not all that supreme (although the Google team argues that to do it that quickly, you’d need to hook your supercomputer up to a nuclear power station).

Instead of quantum supremacy, Microsoft and IBM now prefer to talk about "quantum advantage" – the point at which quantum computers allow you to do useful things that you couldn’t do before. “We are really focused on providing value and providing a quantum advantage rather than showing supremacy in problems that are not relevant to the industry,” says Riel.

To reach quantum advantage will require more than just a few chips in fridges in Santa Barbara, New York and Redmond. Quantum computing will need infrastructure around it and, post-supremacy, the race is on to achieve dominance in the algorithms and programming languages that these new devices will use.

In 1994, Peter Shor – another Villa Gualino alumnus – published a set of instructions for using a quantum computer to factor large numbers, known as Shor’s algorithm. The sheer difficulty of breaking a long number down into its prime factors is the bedrock of many modern encryption systems, and a quantum computer running Shor’s algorithm could crack them. Another algorithm – published by, and named after, Indian-American computer scientist Lov Grover in 1996 – offers the tantalising prospect of searching huge unsorted databases thousands of times faster than any classical method. You can see why Google is interested.
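
The rough scale of those two speedups can be written down directly – constants omitted, and with the classical factoring estimate referring to the best method currently known:

```latex
% Searching an unsorted list of N items:
T_{\text{classical}} \sim N, \qquad T_{\text{Grover}} \sim \sqrt{N}
% Factoring an n-bit number (best known classical method vs Shor's algorithm):
T_{\text{classical}} \sim \exp\!\bigl(c\, n^{1/3} (\log n)^{2/3}\bigr), \qquad
T_{\text{Shor}} \sim n^{2} \log n \, \log\log n
```

For a database of a trillion entries, the square root alone is worth a factor of a million.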

But those algorithms were designed to be run on the perfect quantum computer – a device that doesn’t, and will never, exist. For quantum computers to be useful, hardware will need to be improved to reduce error rates, and the algorithms will need to be modified to account for the errors that will inevitably remain. “The vast majority of algorithms that are being considered today are so far ahead of where the performance metrics of the real quantum systems are,” says Young. “Theory is well ahead of experiment.”

Nonetheless, Google, Microsoft, IBM and others – including Berkeley-based Rigetti – are all working on the layers that will sit above quantum computers in the same way that compilers and operating systems shield you from the 1s and 0s powering your laptop. “Right now the programs we write are almost machine code, very close to the hardware,” says Google’s Marissa Giustina. “We don’t have any of the high-level tools where you can abstract away the hardware.”

At Microsoft, Svore, who has a background in computer science, has helped develop Q# (pronounced Q-sharp), one of the first programming languages designed specifically for dealing with the quirks of quantum computers of all types. “We know quantum computers are going to develop,” says Svore. “But that same code is going to endure.”

Google’s Cirq and IBM’s Qiskit are both open-source frameworks that will help researchers develop algorithms in the NISQ era. Companies are also powering ahead with commercial applications: IBM is already working with more than 100 companies, including ExxonMobil, Barclays and Samsung, on practical applications; Microsoft has Azure Quantum, which allows its clients to plug into IonQ’s trapped-ion quantum computer and superconducting qubits being developed by Connecticut-based QCI.

Peter Chapman, CEO of IonQ, which is attempting to build a quantum device based on trapped ions, says these developments will enable people to start writing the "Hello World!" programs for quantum – referring to the simple on-screen message that is traditionally the first thing people learn to produce when they are taught how to code.
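
What such a program might look like in practice – a minimal sketch using the Cirq framework mentioned above, which builds and samples the simplest entangled state (the details may vary between versions):

```python
import cirq

# A quantum "Hello World!": entangle two qubits and read them out.
a, b = cirq.LineQubit.range(2)

circuit = cirq.Circuit(
    cirq.H(a),                         # put qubit a into superposition
    cirq.CNOT(a, b),                   # entangle qubit b with qubit a
    cirq.measure(a, b, key='result'),  # collapse both and record the outcome
)

# Run it 100 times on a classical simulator standing in for real hardware.
samples = cirq.Simulator().run(circuit, repetitions=100)
print(samples.histogram(key='result'))  # roughly half 0 (i.e. 00) and half 3 (i.e. 11)
```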

Quantum algorithms are already having a small impact even in the absence of reliable quantum hardware, because you can simulate them on classical supercomputers, up to a point. These "quantum-inspired optimisation algorithms", as Svore calls them, have been used for traffic management, the development of better batteries, and for reducing the amount of time it takes to analyse an MRI scan.

Eventually, the end user of a quantum computer will probably be unaware that they’re actually using one. Quantum processors of various types – superconducting, trapped ion, simulated – will form part of an arsenal of technologies that are automatically selected. “Our vision is you and me having a problem and we just use the normal software we use, and then the cloud has access to all these kinds of computers and decides which one to run the problem on,” says Riel.

For now, the problems that quantum computing is tackling are small, proof-of-concept problems that could be tackled just as effectively using a classical computer – they’re inspired by what we’re learning from trying to build quantum hardware, but they don’t actually use it yet. Quantum computers aren’t simply a better type of computer; they’ll only be useful for specific types of problem. You’ll never have a quantum chip in your own device – instead, in the unlikely event that you ever personally need their powers, you’ll access them via the cloud.

The first practical applications of real quantum computers are likely to be relatively low impact, such as generating verifiably random numbers. After that will come what Feynman talked about – using a quantum mechanical device to simulate nature. That opens up possibilities for running simulations of chemical and biological processes, and trying things out before you test new drugs or materials experimentally.

In time, Boixo hopes that quantum computers will be able to tackle some of the existential crises facing our planet. “Climate change is an energy problem – energy is a physical, chemical process,” he says. “Maybe if we build the tools that allow the simulations to be done, we can construct a new industrial revolution that will hopefully be a more efficient use of energy.”

But we’re a long way off. “Quantum computing, from an impact perspective, in January 2020, is probably similar to the internet in January of 1993,” says Hurley. “In 1993 there were about 650 websites – nobody saw Uber or Airbnb or any of this stuff.” Quantum advantage could be five years away, or five decades. There’s a danger of overhyping the achievements so far – it’s still possible that there is some fundamental barrier that will prevent the number of qubits being scaled up, or that the noise will simply become insurmountable above a certain level.

Ekert, whose talk in 1994 kickstarted the race to quantum supremacy, thinks we’ll still need some major technological breakthrough akin to the development of the transistor, which transformed conventional computers from the 1960s onwards. We are not yet in the days of rival chipmakers battling to produce the best possible hardware; we’re in the days of vacuum tubes and mechanical switches, and researchers wondering whether the thing they’re trying to do is even possible.

In one sense, Ekert actually hopes that it isn’t. “It would be an even more wonderful scenario if we cannot build a quantum computer for truly fundamental reasons – if actually it’s impossible, because of some new, truly fundamental laws of physics,” he says.

A practical, error-corrected quantum computer could change the world. But the battle to build one could reveal fundamental truths about the universe itself. “This is not a competition between companies,” says Google quantum researcher Yu Chen. “It’s our technology against nature.”

Amit Katwala is WIRED's culture editor. He tweets from @amitkatwala

This article was originally published by WIRED UK