Collaborating Authors


Could quantum computers fix political polls?


It would be the harbinger of an entirely new medium of calculation, harnessing the power of subatomic particles to obliterate the barriers of time in solving incalculable problems. You and I are being continually surveilled. We reveal information about ourselves with astonishingly little resistance. Social media has made many of us into veritable slot machines for our own personal data: we're fed a little token of encouragement that someone may yet like us, our arm is gently pulled, and we disgorge something we hope people will find valuable enough to spark small talk. Whatever personal facts, real or trivial, we do end up disclosing -- perhaps unwittingly -- immediately undergo unceasing analysis. The inferences these analyses draw about us as people are aggregated, baselined, composited, deliberated over, and profiled.

IonQ CEO Peter Chapman on quantum computing adoption, innovation and what's next


IonQ has a plan to commercialize quantum computing, and Peter Chapman is the CEO expected to make it happen. Chapman, the son of a NASA astronaut, started working in the MIT AI Lab at 16, invented the first sound card for the IBM PC, wrote software for the FAA, and led a Ray Kurzweil company building tools for the blind. Simply put, Chapman has long been ahead of the technology curve. He joined IonQ in the summer of 2018 because he is betting that quantum computing can achieve Artificial General Intelligence (AGI). IonQ recently made news for its roadmap and for proposing a new performance metric called the Algorithmic Qubit.

Quantum Earth Mover's Distance: A New Approach to Learning Quantum Data Machine Learning

Quantifying how far the output of a learning algorithm is from its target is an essential task in machine learning. However, in quantum settings, the loss landscapes of commonly used distance metrics often produce undesirable outcomes such as poor local minima and exponentially decaying gradients. As a new approach, we consider here the quantum earth mover's (EM) or Wasserstein-1 distance, recently proposed in [De Palma et al., arXiv:2009.04469] as a quantum analog to the classical EM distance. We show that the quantum EM distance possesses unique properties, not found in other commonly used quantum distance metrics, that make quantum learning more stable and efficient. We propose a quantum Wasserstein generative adversarial network (qWGAN) which takes advantage of the quantum EM distance and provides an efficient means of performing learning on quantum data. Our qWGAN requires resources polynomial in the number of qubits, and our numerical experiments demonstrate that it is capable of learning a diverse set of quantum data.
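For intuition, the classical EM distance that this work generalizes has a closed form for one-dimensional discrete distributions: it is the area between their cumulative distribution functions. The sketch below is purely classical -- it does not implement the quantum EM distance or the qWGAN -- and the function name is our own.

```python
import numpy as np

def wasserstein_1(p, q, support):
    """Classical earth mover's (Wasserstein-1) distance between two discrete
    distributions p and q on a sorted 1-D support:
    W1 = integral over x of |CDF_p(x) - CDF_q(x)| dx."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    support = np.asarray(support, float)
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    return float(np.sum(cdf_gap * np.diff(support)))

# Moving all probability mass from x=0 to x=3 costs 3 units of "work".
print(wasserstein_1([1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 2, 3]))  # 3.0
```

Unlike fidelity-style distances, which saturate as soon as two distributions have disjoint support, W1 keeps reporting *how far* the mass must move, which is the property the paper exploits for better-behaved loss landscapes.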

Connecting ansatz expressibility to gradient magnitudes and barren plateaus Machine Learning

Parameterized quantum circuits serve as ans\"{a}tze for solving variational problems and provide a flexible paradigm for programming near-term quantum computers. Ideally, such ans\"{a}tze should be highly expressive so that a close approximation of the desired solution can be accessed. On the other hand, the ansatz must also have sufficiently large gradients to allow for training. Here, we derive a fundamental relationship between these two essential properties: expressibility and trainability. This is done by extending the well-established barren plateau phenomenon, which holds for ans\"{a}tze that form exact 2-designs, to arbitrary ans\"{a}tze. Specifically, we calculate the variance in the cost gradient in terms of the expressibility of the ansatz, as measured by its distance from being a 2-design. Our resulting bounds indicate that highly expressive ans\"{a}tze exhibit flatter cost landscapes and therefore will be harder to train. Furthermore, we provide numerics illustrating the effect of expressibility on gradient scalings, and we discuss the implications for designing strategies to avoid barren plateaus.
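The vanishing-gradient effect the paper bounds can be observed with a toy state-vector simulation. The sketch below is our own illustration, not the paper's setup: it builds a layered ansatz of RY rotations and CZ entanglers, computes one cost gradient with the parameter-shift rule, and estimates the gradient's variance over random parameter draws, which shrinks as qubits are added.

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def cz_diag(n):
    """Diagonal of a layer of CZ gates on neighbouring qubits."""
    d = np.ones(2 ** n)
    for q in range(n - 1):
        for b in range(2 ** n):
            if (b >> q) & 1 and (b >> (q + 1)) & 1:
                d[b] *= -1.0
    return d

def cost(thetas, n, cz):
    """C(theta) = <0| U(theta)^dag Z_0 U(theta) |0> for the RY+CZ ansatz."""
    psi = np.zeros(2 ** n)
    psi[0] = 1.0
    for layer in thetas:                    # thetas has shape (layers, n)
        u = ry(layer[0])
        for t in layer[1:]:
            u = np.kron(u, ry(t))           # qubit 0 is the most significant bit
        psi = cz * (u @ psi)
    signs = np.array([1.0 if (b >> (n - 1)) & 1 == 0 else -1.0
                      for b in range(2 ** n)])  # eigenvalues of Z on qubit 0
    return float(np.sum(signs * psi ** 2))      # psi is real for RY+CZ

def grad_variance(n, layers=20, samples=200, seed=0):
    """Variance of dC/dtheta_00 over random parameter sets (parameter shift)."""
    rng = np.random.default_rng(seed)
    cz = cz_diag(n)
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=(layers, n))
        plus, minus = th.copy(), th.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        grads.append(0.5 * (cost(plus, n, cz) - cost(minus, n, cz)))
    return float(np.var(grads))

vars_by_n = {n: grad_variance(n) for n in (2, 4, 6)}
print(vars_by_n)  # variance drops as n grows: the barren plateau
```

With 20 layers this ansatz is expressive enough that the flattening is already visible at 6 qubits; the paper's contribution is to make that expressibility-flatness trade-off quantitative for arbitrary ansätze.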

Deep Reinforcement Learning with Quantum-inspired Experience Replay Artificial Intelligence

In this paper, a novel training paradigm inspired by quantum computation is proposed for deep reinforcement learning (DRL) with experience replay. In contrast to the traditional experience replay mechanism in DRL, the proposed deep reinforcement learning with quantum-inspired experience replay (DRL-QER) adaptively chooses experiences from the replay buffer according to the complexity and the replayed times of each experience (also called a transition), to achieve a balance between exploration and exploitation. In DRL-QER, transitions are first formulated in quantum representations, and then the preparation operation and the depreciation operation are performed on the transitions. In this process, the preparation operation reflects the relationship between the temporal difference errors (TD-errors) and the importance of the experiences, while the depreciation operation is taken into account to ensure the diversity of the transitions. The experimental results on Atari 2600 games show that DRL-QER outperforms state-of-the-art algorithms such as DRL-PER and DCRL on most of these games with improved training efficiency, and is also applicable to memory-based DRL approaches such as the double network and dueling network.
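The preparation/depreciation idea can be caricatured classically: a transition's sampling priority grows with its TD-error and decays each time it is replayed. The buffer below is an illustrative sketch in that spirit only -- it is not the paper's quantum-amplitude formulation, and the class name, constants, and update rules are ours.

```python
import numpy as np

class QuantumInspiredReplay:
    """Toy priority replay buffer: priority rises with |TD-error|
    ("preparation") and is damped on each replay ("depreciation")."""

    def __init__(self, capacity, decay=0.9, seed=0):
        self.capacity = capacity
        self.decay = decay                  # depreciation factor per replay
        self.rng = np.random.default_rng(seed)
        self.transitions, self.priority = [], []

    def add(self, transition, td_error):
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)         # drop the oldest transition
            self.priority.pop(0)
        self.transitions.append(transition)
        self.priority.append(abs(td_error) + 1e-3)  # "preparation"

    def sample(self, batch_size):
        p = np.array(self.priority)
        p = p / p.sum()                     # priorities -> probabilities
        idx = self.rng.choice(len(self.transitions), size=batch_size, p=p)
        for i in idx:
            self.priority[i] *= self.decay  # "depreciation" keeps diversity
        return [self.transitions[i] for i in idx]

buf = QuantumInspiredReplay(capacity=100)
for t in range(10):
    buf.add(("s%d" % t, "a", 0.0, "s%d" % (t + 1)), td_error=t * 0.1)
batch = buf.sample(4)
print(len(batch))  # 4
```

The decay step is what distinguishes this from plain prioritized replay: frequently replayed transitions lose weight, so the buffer keeps rotating through diverse experiences.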

Single-preparation unsupervised quantum machine learning: concepts and applications Machine Learning

The term "machine learning" refers especially to algorithms that derive mappings, i.e. input/output transforms, by using numerical data that provide information about the considered transforms. These transforms appear in many problems, related to classification/clustering, regression, system identification, system inversion and input signal restoration/separation. We here first analyze the connections between all these problems, in the classical and quantum frameworks. We then focus on their most challenging versions, involving quantum data and/or quantum processing means, and unsupervised, i.e. blind, learning. Moreover, we propose the quite general concept of SIngle-Preparation Quantum Information Processing (SIPQIP). The resulting methods only require a single instance of each state, whereas usual methods have to very accurately create many copies of each fixed state. We apply our SIPQIP concept to various tasks, related to system identification (blind quantum process tomography or BQPT, blind Hamiltonian parameter estimation or BHPE, blind quantum channel identification/estimation, blind phase estimation), system inversion and state estimation (blind quantum source separation or BQSS, blind quantum entangled state restoration or BQSR, blind quantum channel equalization) and classification. Numerical tests moreover show that our framework yields much more accurate estimation than the standard multiple-preparation approach. Our methods are especially useful in a quantum computer, which we propose to more briefly call a "quamputer": BQPT and BHPE simplify the characterization of the gates of quamputers; BQSS and BQSR allow one to design quantum gates that may be used to compensate for the non-idealities that alter states stored in quantum registers, and they open the way to the much more general concept of self-adaptive quantum gates (see longer version of abstract in paper).

Constant Overhead Quantum Fault Tolerance with Quantum Expander Codes

Communications of the ACM

The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman who suggested to replace the concatenated codes from the standard threshold theorem by quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the SMALL-SET-FLIP decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. 
These tools can be seen as a quantum generalization of the BIT-FLIP algorithm applied to the (classical) expander codes of Sipser and Spielman. Quantum computers are expected to offer significant, sometimes exponential, speedups compared to classical computers.
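The classical BIT-FLIP algorithm that the SMALL-SET-FLIP decoder generalizes is simple to state: in each round, flip every bit for which a strict majority of its incident parity checks is unsatisfied. The toy decoder below runs it on a 3-bit repetition code rather than a true Sipser-Spielman expander code, purely to make the mechanics concrete.

```python
import numpy as np

def bit_flip_decode(H, word, max_iters=10):
    """Parallel BIT-FLIP decoding: repeatedly flip every bit for which a
    strict majority of its parity checks is unsatisfied.
    H: parity-check matrix (checks x bits); word: received bits (0/1)."""
    word = np.array(word) % 2
    degree = H.sum(axis=0)                  # checks touching each bit
    for _ in range(max_iters):
        syndrome = (H @ word) % 2
        if not syndrome.any():
            break                           # all checks satisfied: done
        unsat = H.T @ syndrome              # unsatisfied checks per bit
        flips = 2 * unsat > degree          # strict-majority rule
        if not flips.any():
            break                           # stuck: decoding failure
        word = (word + flips) % 2
    return word

# Toy 3-bit repetition code with all pairwise parity checks (small enough
# to trace by hand; a real expander code would have sparse H at scale).
H = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(bit_flip_decode(H, [0, 1, 0]))  # single error on bit 1 -> [0 0 0]
```

On expander codes this local rule provably removes a constant fraction of errors per round, which is what makes the constant-time parallel decoding in the paper possible even with a noisy syndrome.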

Variational Quantum Cloning: Improving Practicality for Quantum Cryptanalysis Artificial Intelligence

Cryptanalysis on standard quantum cryptographic systems generally involves finding optimal adversarial attack strategies on the underlying protocols. The core principle of modelling quantum attacks in many cases reduces to the adversary's ability to clone unknown quantum states, which facilitates the extraction of some meaningful secret information. Explicit optimal attack strategies typically require high computational resources due to large circuit depths or, in many cases, are unknown. In this work, we propose variational quantum cloning (VQC), a quantum machine learning based cryptanalysis algorithm which allows an adversary to obtain optimal (approximate) cloning strategies with short-depth quantum circuits, trained using hybrid classical-quantum techniques. The algorithm contains operationally meaningful cost functions with theoretical guarantees, quantum circuit structure learning and gradient descent based optimisation. Our approach enables the end-to-end discovery of hardware efficient quantum circuits to clone specific families of quantum states, which in turn leads to an improvement in cloning fidelities when implemented on quantum hardware: the Rigetti Aspen chip. Finally, we connect these results to quantum cryptographic primitives, in particular quantum coin flipping. We derive attacks on two protocols as examples, based on quantum cloning and facilitated by VQC. As a result, our algorithm can improve near term attacks on these protocols, using approximate quantum cloning as a resource.

On the experimental feasibility of quantum state reconstruction via machine learning Artificial Intelligence

We determine the resource scaling of machine learning-based quantum state reconstruction methods, in terms of both inference and training, for systems of up to four qubits. Further, we examine system performance in the low-count regime, likely to be encountered in the tomography of high-dimensional systems. Finally, we implement our quantum state reconstruction method on an IBM Q quantum computer and confirm our results.

How I Learn Quantum Computing


I love quantum mechanics; there is something fascinating about how QM explains the world, and about how different that picture is from the reality we can see and live in. "Everything we call real is made of things that cannot be regarded as real." This quotation was a revelation to me when I was a student. Looking at matter as solid objects isn't the right way to think about it, because all of its elements are just waves.