After decades of research, quantum computers are approaching the scale at which they could outperform their "classical" counterparts on some problems. They will be truly practical, however, only when they implement quantum error correction, which combines many physical quantum bits, or qubits, into a logical qubit that preserves its quantum information even when its constituents are disrupted. Although this task once seemed impossible, theorists have developed multiple techniques for doing so, including "surface codes" that could be implemented in an integrated-circuit-like planar geometry. For ordinary binary data, errors can be corrected, for example, using the majority rule: A desired bit, whether 1 or 0, is first triplicated as 111 or 000. Later, even if one of the three bits has been corrupted, the other two "outvote" it and allow recovery of the original data.
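The majority rule described above can be made concrete in a few lines. This is a minimal sketch of the classical three-bit repetition code; the `encode`/`decode` names are illustrative, not from any particular library.

```python
def encode(bit):
    """Triplicate a single bit: 1 -> [1, 1, 1], 0 -> [0, 0, 0]."""
    return [bit] * 3

def decode(bits):
    """Majority vote: a single corrupted bit is outvoted by the other two."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)          # [1, 1, 1]
codeword[0] = 0               # a single bit flip corrupts one copy
assert decode(codeword) == 1  # majority rule recovers the original bit
```

Note that this scheme only protects against one flipped bit per block; if two of the three copies are corrupted, the majority vote recovers the wrong value.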
Amazon's cloud subsidiary AWS has released its first research paper detailing a new architecture for a future quantum computer, which, if realized, could set a new standard for error correction. The blueprint for a fault-tolerant quantum computer, although still purely theoretical, describes a new way of controlling quantum bits (qubits) to ensure that they carry out calculations as accurately as possible. The paper is likely to grab the attention of the many experts working to improve quantum error correction (QEC), a field growing in parallel with quantum computing itself, which seeks to remove one of the key barriers to realising useful, large-scale quantum computers. Quantum systems, which are expected to generate breakthroughs in industries ranging from finance to drug discovery thanks to dramatic speedups on certain computations, are still riddled with imperfections, or errors, that can spoil the results of calculations.
After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. "There is a lot of energy in the community, and the recent progress is immense," said physicist Jens Eisert of the Free University of Berlin.
Quantum computers may one day upend cryptography, help design new materials and drugs, and accelerate many other computational tasks. A quantum computer's memory is a quantum system, capable of being in a superposition of many different bit strings at once. It can take advantage of quantum interference to run uniquely quantum algorithms that can solve some (but not all) computational problems much faster than a regular classical computer. Experimental efforts to build a quantum computer have taken enormous strides forward in the last decade, leading to today's devices with over 50 quantum bits ("qubits"). Governments and large technology companies such as Google, IBM, and Microsoft, as well as a slew of start-ups, have begun pouring money into the field, each hoping to be the first with a useful quantum computer.
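The interference mentioned above can be illustrated with a toy single-qubit simulation. This is a sketch using plain Python lists as 2-amplitude state vectors, not a real quantum SDK; the `hadamard` function name is our own.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate H to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

zero = [1.0, 0.0]        # the basis state |0>
plus = hadamard(zero)    # equal superposition of |0> and |1>
back = hadamard(plus)    # a second H makes the amplitudes interfere

# The |1> amplitude cancels destructively: the state returns to |0>.
assert abs(back[0] - 1.0) < 1e-9 and abs(back[1]) < 1e-9
```

The cancellation in the last step is the essentially quantum ingredient: amplitudes, unlike probabilities, can be negative and so can cancel out, which is what quantum algorithms exploit to boost the amplitude of correct answers.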
The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman, who suggested replacing the concatenated codes from the standard threshold theorem by quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the SMALL-SET-FLIP decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control.
These tools can be seen as a quantum generalization of the BIT-FLIP algorithm applied to the (classical) expander codes of Sipser and Spielman. Quantum computers are expected to offer significant, sometimes exponential, speedups over classical computers, which makes keeping the overhead of fault tolerance low all the more valuable.
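The classical BIT-FLIP idea that SMALL-SET-FLIP generalizes can be sketched greedily: while some parity check is violated, flip any bit whose flip strictly reduces the number of unsatisfied checks. This is a minimal illustration on the tiny [7,4] Hamming code rather than a true expander code (real expander codes are much larger, and their expansion is what guarantees the greedy step always makes progress); the function names are our own.

```python
H = [  # parity-check matrix: rows are checks, columns are bits
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def unsatisfied(word):
    """Indices of the parity checks violated by the current word."""
    return [i for i, row in enumerate(H)
            if sum(r * w for r, w in zip(row, word)) % 2 == 1]

def bit_flip_decode(word, max_rounds=20):
    """Greedy BIT-FLIP: while checks fail, flip a bit whose flip
    strictly shrinks the set of unsatisfied checks."""
    word = list(word)
    for _ in range(max_rounds):
        bad = unsatisfied(word)
        if not bad:
            break  # all checks satisfied: decoding done
        for j in range(len(word)):
            trial = list(word)
            trial[j] ^= 1
            if len(unsatisfied(trial)) < len(bad):
                word = trial
                break
        else:
            break  # stuck; on a good expander code this cannot happen
    return word

# Codeword [1,0,0,0,1,1,0] with bit 4 flipped is repaired in one round.
received = [1, 0, 0, 0, 0, 1, 0]
assert bit_flip_decode(received) == [1, 0, 0, 0, 1, 1, 0]
```

The quantum setting is harder than this sketch suggests: as the abstract above notes, the syndrome measurements themselves are noisy, so the decoder must keep the residual error bounded even when some of the `unsatisfied` information it acts on is wrong.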