Experimental quantum adversarial learning with programmable superconducting qubits
Wenhui Ren, Weikang Li, Shibo Xu, Ke Wang, Wenjie Jiang, Feitong Jin, Xuhao Zhu, Jiachen Chen, Zixuan Song, Pengfei Zhang, Hang Dong, Xu Zhang, Jinfeng Deng, Yu Gao, Chuanyu Zhang, Yaozu Wu, Bing Zhang, Qiujiang Guo, Hekang Li, Zhen Wang, Jacob Biamonte, Chao Song, Dong-Ling Deng, H. Wang
arXiv.org Artificial Intelligence
State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China

Quantum computing promises to enhance machine learning and artificial intelligence [1-3]. Different quantum algorithms have been proposed to improve a wide spectrum of machine learning tasks [4-12]. Yet, recent theoretical works show that, similar to traditional classifiers based on deep classical neural networks, quantum classifiers suffer from a vulnerability problem: adding tiny, carefully crafted perturbations to legitimate original data samples facilitates incorrect predictions at a notably high confidence level [13-17]. This poses serious problems for future quantum machine learning applications in safety- and security-critical scenarios [18-20]. Here, we report the first experimental demonstration of quantum adversarial learning with programmable superconducting qubits. We train quantum classifiers, built upon variational quantum circuits consisting of ten transmon qubits featuring average lifetimes of 150 µs and average fidelities of simultaneous single- and two-qubit gates above 99.94% and 99.4% respectively, with both real-life images (e.g., medical magnetic resonance imaging scans) and quantum data. We demonstrate that these well-trained classifiers (with testing accuracy up to 99%) can be practically deceived by small adversarial perturbations, whereas an adversarial training process significantly enhances their robustness to such perturbations. Our results experimentally reveal a crucial vulnerability of quantum learning systems under adversarial scenarios and demonstrate an effective defense strategy against adversarial attacks, providing a valuable guide for quantum artificial intelligence applications on both near-term and future quantum devices.
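As a loose classical analogy to the adversarial perturbations discussed in the abstract (not the paper's quantum circuits), a fast-gradient-sign (FGSM-style) attack on a toy logistic classifier sketches how a small input shift in the direction of the loss gradient can flip a confident prediction. The classifier weights, sample, and step size `eps` below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast-gradient-sign perturbation of a logistic classifier.

    Shifts the input by eps in the direction that increases the loss:
    x_adv = x + eps * sign(dL/dx), with dL/dx = (p - y) * w for
    cross-entropy loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy classifier and a legitimate sample it classifies confidently.
w = np.array([2.0, -3.0, 1.0])
b = 0.0
x = np.array([0.5, -0.5, 0.5])  # score w @ x = 3.0 -> confident class 1
y = 1

clean_pred = int(sigmoid(w @ x + b) > 0.5)    # 1 (correct)
x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)  # 0 (fooled)
```

Each component moves by at most `eps`, yet the per-component shifts align against the weight vector and accumulate into a sign flip of the decision score, which is the mechanism behind "tiny perturbation, confident misclassification".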
In recent years, artificial intelligence (AI) [21-23] and quantum computing [24-26] have made dramatic progress. Their intersection gives rise to a research frontier called quantum machine learning or, more generally, quantum AI [1-3]. A number of quantum algorithms have been proposed to enhance various AI tasks [4-12], and countermeasures have been proposed to enhance the robustness of quantum classifiers. However, demonstrating quantum adversarial examples for quantum classifiers experimentally and showing the effectiveness of the proposed countermeasures in practice are challenging and have not previously been achieved.
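The adversarial-training countermeasure mentioned above can likewise be sketched on a classical toy model rather than the paper's variational circuits: at each step the current model is attacked, and the gradient update is taken on the perturbed (worst-case) inputs instead of the clean ones. The dataset, learning rate, and perturbation budget `eps` here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-D dataset: class 0 clustered near (-2,-2), class 1 near (2,2).
X = np.array([[-2., -2.], [-2., -1.], [-1., -2.],
              [2., 2.], [2., 1.], [1., 2.]])
Y = np.array([0., 0., 0., 1., 1., 1.])

w = np.zeros(2)
b = 0.0
eps, lr = 0.3, 0.1

for _ in range(200):
    # Attack the current model (FGSM-style), then take the gradient
    # step on the perturbed inputs rather than the clean ones.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - Y)[:, None] * w[None, :])
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * ((p_adv - Y) @ X_adv) / len(Y)
    b -= lr * np.mean(p_adv - Y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == Y.astype(bool))
```

Training against the worst-case inputs pushes the decision boundary to keep a margin larger than `eps` around every sample, which is the sense in which adversarial training "enhances robustness" in the abstract.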
Apr-4-2022
- Country:
- Asia > China
- Zhejiang Province > Hangzhou (0.24)
- North America (1.00)
- Genre:
- Research Report > New Finding (0.66)
- Industry:
- Health & Medicine
- Diagnostic Medicine > Imaging (1.00)
- Therapeutic Area (1.00)
- Information Technology > Security & Privacy (0.88)
- Technology: