Collaborating Authors: Wang, Xiu-Cheng

Is AI Robust Enough for Scientific Research?

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has become a transformative tool in scientific research, driving breakthroughs across numerous disciplines [5-11]. Despite these achievements, neural networks, which form the backbone of many AI systems, exhibit significant vulnerabilities. Among the most concerning is their susceptibility to adversarial attacks [1, 2, 12, 13]: small, often imperceptible changes to the input data that cause an AI system to make incorrect predictions (Figure 1). This exposes a critical weakness: AI systems can fail under minimal perturbations, a failure mode essentially absent from classical methods. The impact of adversarial attacks has been studied extensively in the context of image classification [14-16].
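To make the idea of an adversarial attack concrete, the sketch below implements one widely used method in this family, the fast gradient sign method (FGSM). This is an illustrative example, not the paper's own experiment: the model, the input tensor, and the epsilon value are placeholders chosen for demonstration.

```python
# A minimal FGSM sketch: perturb an input by an epsilon-bounded step in the
# direction of the loss gradient's sign. Model, data, and epsilon are
# illustrative placeholders, not taken from the paper.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float) -> torch.Tensor:
    """Return x perturbed so as to increase the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Each input component moves by at most epsilon, so the perturbation is
    # small, yet it can be enough to flip the model's prediction.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy demonstration with a randomly initialized classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)        # a stand-in "image"
y = model(x).argmax(dim=1)          # the model's own clean prediction
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
print("clean:", y.item(), "adversarial:", model(x_adv).argmax(dim=1).item())
print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```

The last print shows the defining property of such attacks: the maximum change to any input component is bounded by epsilon, so the perturbed input looks essentially identical to the original even when the prediction changes.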