Is AI Robust Enough for Scientific Research?

Jun-Jie Zhang, Jiahao Song, Xiu-Cheng Wang, Fu-Peng Li, Zehan Liu, Jian-Nan Chen, Haoning Dang, Shiyao Wang, Yiyan Zhang, Jianhui Xu, Chunxiang Shi, Fei Wang, Long-Gang Pang, Nan Cheng, Weiwei Zhang, Duo Zhang, Deyu Meng

Artificial Intelligence (AI) has become a transformative tool in scientific research, driving breakthroughs across numerous disciplines [5-11]. Despite these achievements, neural networks, which form the backbone of many AI systems, exhibit significant vulnerabilities. Among the most concerning is their susceptibility to adversarial attacks [1, 2, 12, 13]: small, often imperceptible changes to the input data that cause an AI system to make incorrect predictions (Figure 1). This exposes a critical weakness: AI systems can fail under minimal perturbations, a failure mode largely absent from classical computational methods. The impact of adversarial attacks has been studied most extensively in the context of image classification [14-16].
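To make the mechanism concrete, the sketch below shows the fast gradient sign method (FGSM), a standard way to construct such perturbations; it is an illustrative example, not the specific attack studied in this paper, and the model, inputs, and epsilon value are hypothetical placeholders.

```python
# Minimal FGSM sketch (PyTorch): perturb an input by a small step in the
# direction that increases the classifier's loss the most.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return x + epsilon * sign(grad_x loss), clipped to the valid range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # The sign of the input gradient gives the worst-case direction per pixel;
    # epsilon bounds the perturbation so it stays visually imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with some pretrained classifier `model`,
# an image batch `x` in [0, 1], and integer class labels `label`:
#   x_adv = fgsm_attack(model, x, label, epsilon=8 / 255)
#   # model(x_adv) often predicts a different class than model(x),
#   # even though x_adv is nearly indistinguishable from x.
```

Even this single-step attack, under the stated assumptions, is frequently enough to flip a classifier's prediction, which is precisely the fragility that distinguishes neural networks from classical numerical methods.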