
Collaborating Authors: Song, Zixuan


Experimental quantum adversarial learning with programmable superconducting qubits

arXiv.org Artificial Intelligence

State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China

Quantum computing promises to enhance machine learning and artificial intelligence [1-3]. Various quantum algorithms have been proposed to improve a wide spectrum of machine learning tasks [4-12]. Yet, recent theoretical works show that, similar to traditional classifiers based on deep classical neural networks, quantum classifiers suffer from a vulnerability problem: adding tiny, carefully crafted perturbations to legitimate original data samples leads to incorrect predictions at a notably high confidence level [13-17]. This poses serious problems for future quantum machine learning applications in safety- and security-critical scenarios [18-20]. Here, we report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits. We train quantum classifiers, built upon variational quantum circuits consisting of ten transmon qubits featuring average lifetimes of 150 µs and average fidelities of simultaneous single- and two-qubit gates above 99.94% and 99.4%, respectively, with both real-life images (e.g., medical magnetic resonance imaging scans) and quantum data. We demonstrate that these well-trained classifiers (with testing accuracy up to 99%) can be practically deceived by small adversarial perturbations, whereas an adversarial training process significantly enhances their robustness to such perturbations. Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios and demonstrate an effective defense strategy against adversarial attacks, providing a valuable guide for quantum artificial intelligence applications on both near-term and future quantum devices.
In recent years, artificial intelligence (AI) [21-23] and quantum computing [24-26] have made dramatic progress. Their intersection gives rise to a research frontier called quantum machine learning or, generally, quantum AI [1-3]. A number of quantum algorithms have been proposed to enhance various AI tasks [4-12]. … been proposed to enhance the robustness of quantum classifiers. However, demonstrating quantum adversarial examples for quantum classifiers experimentally and showing the effectiveness of the proposed countermeasures in practice are challenging and have not previously …
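The "tiny carefully-crafted perturbations" mentioned above are, in the classical setting, typically computed from the gradient of the loss with respect to the input. The sketch below is a hedged, purely classical illustration of that idea (a fast-gradient-sign-style perturbation on a toy linear classifier), not the paper's quantum attack; `fgsm_perturb`, `w`, and `x` are invented for this example.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast-gradient-sign-style attack (generic sketch, not the
    paper's method): move each input feature by epsilon in the
    direction that increases the loss, then clip to valid range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy linear classifier: score = w . x. For a hinge-style loss on a
# positively-labeled sample, the loss gradient w.r.t. x is -w, so the
# attack pushes the score downward.
w = np.array([0.8, -0.5, 0.3])            # fixed classifier weights
x = np.array([0.6, 0.4, 0.7])             # legitimate sample
x_adv = fgsm_perturb(x, -w, epsilon=0.1)  # adversarial sample

print(w @ x)      # original score
print(w @ x_adv)  # reduced score after a bounded perturbation
```

Note that each feature moves by at most `epsilon`, yet the classifier's score drops; the quantum version studied in the paper constrains perturbations analogously while attacking a variational circuit's measurement outcome.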


Variable selection with false discovery rate control in deep neural networks

arXiv.org Machine Learning

Deep neural networks (DNNs) are famous for their high prediction accuracy, but they are also known for their black-box nature and poor interpretability. We consider the problem of variable selection in DNNs, that is, selecting the input variables that have significant predictive power on the output. We propose a backward elimination procedure called SurvNet, based on a new measure of variable importance that applies to a wide variety of networks. More importantly, SurvNet is able to estimate and control the false discovery rate of selected variables, whereas no existing method provides such quality control. Further, SurvNet adaptively determines how many variables to eliminate at each step in order to maximize selection efficiency. To study its validity, SurvNet is applied to image data and gene expression data, as well as various simulation datasets.
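The selection-with-FDR-control idea can be sketched generically. The snippet below is a hedged toy, not SurvNet itself: it assumes importance scores for the real input variables and scores for known-null variables (e.g., from permuted or dummy inputs) are already available, and it picks the loosest threshold whose estimated false discovery rate among the surviving variables stays below `q`. The function name and the score values are invented for illustration.

```python
import numpy as np

def select_with_fdr(importance, null_importance, q=0.1):
    """Hypothetical sketch of FDR-controlled variable selection:
    scan candidate thresholds from loosest to strictest and return
    the survivors of the first threshold whose estimated FDR
    (null scores above threshold / survivors) is at most q."""
    for t in sorted(importance):
        kept = np.flatnonzero(importance > t)
        if kept.size == 0:
            break
        est_false = np.count_nonzero(null_importance > t)
        if est_false / kept.size <= q:
            return kept
    return np.array([], dtype=int)

importance = np.array([0.9, 0.05, 0.8, 0.02, 0.7])            # real variables
null_importance = np.array([0.04, 0.02, 0.03, 0.01, 0.035])   # known-null scores

print(select_with_fdr(importance, null_importance))  # → [0 2 4]
```

Here the three strong variables survive while the two whose scores sit inside the null range are eliminated; SurvNet's actual procedure additionally re-trains the network and re-estimates importance after each elimination step.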