
Collaborating Authors: Pang, Long-Gang


Is AI Robust Enough for Scientific Research?

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has become a transformative tool in scientific research, driving breakthroughs across numerous disciplines [5-11]. Despite these achievements, neural networks, which form the backbone of many AI systems, exhibit significant vulnerabilities. One of the most concerning issues is their susceptibility to adversarial attacks [1, 2, 12, 13]. These attacks make small, often imperceptible changes to the input data that cause AI systems to produce incorrect predictions (Figure 1). This highlights a critical weakness: AI systems can fail under minimal perturbations, a failure mode largely absent from classical methods. The impact of adversarial attacks has been studied most extensively in the context of image classification [14-16].
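The mechanism behind such attacks can be illustrated with a minimal sketch of the fast gradient sign method (FGSM) on a linear logistic classifier. Everything here (weights, input, epsilon) is synthetic and chosen for illustration, not taken from the papers cited above; for a linear model the gradient of the score with respect to the input is simply the weight vector, which makes the attack one line of code.

```python
import numpy as np

def predict(w, b, x):
    # Class-1 probability of a logistic (linear) classifier.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

rng = np.random.default_rng(0)
w = rng.normal(size=100)      # fixed "trained" weights (synthetic assumption)
b = 0.0
x = 0.01 * np.sign(w)         # input weakly aligned with w -> predicted class 1
p_clean = predict(w, b, x)

# FGSM step: move each input component by eps against the sign of the
# input gradient of the class-1 score; for a linear model that gradient is w.
eps = 0.02
x_adv = x - eps * np.sign(w)  # per-component change bounded by eps

p_adv = predict(w, b, x_adv)
print(p_clean > 0.5, p_adv < 0.5)  # the prediction flips under a tiny perturbation
```

Although each component of the input moves by at most `eps`, the changes all align with the gradient, so their effect on the score accumulates; this is why high-dimensional models are so sensitive to perturbations that look negligible per component.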


An equation-of-state-meter of QCD transition from deep learning

arXiv.org Machine Learning

Supervised learning with a deep convolutional neural network is used to identify the QCD equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions from the simulated final-state particle spectra $\rho(p_T,\Phi)$. High-level correlations of $\rho(p_T,\Phi)$ learned by the neural network act as an effective "EoS-meter" in detecting the nature of the QCD transition. The EoS-meter is model independent and insensitive to other simulation inputs, especially the initial conditions. It thus provides a powerful, direct connection between heavy-ion collision observables and the bulk properties of QCD.
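The pipeline above can be sketched as a toy forward pass: a 2D spectrum $\rho(p_T,\Phi)$ passes through a convolution, a ReLU, global average pooling, and a softmax over two EoS classes (e.g. crossover vs. first-order). All shapes, weights, and the input grid below are synthetic assumptions for illustration, not the trained network or simulation data of the paper.

```python
import numpy as np

def conv2d(img, kernel):
    # Valid-mode 2D cross-correlation of a single-channel image with one kernel.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def softmax(z):
    # Numerically stable softmax over the class scores.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
rho = rng.random((15, 48))            # mock spectrum on a (p_T, Phi) grid
kernels = rng.normal(size=(8, 3, 3))  # 8 random 3x3 filters (untrained)
# Conv -> ReLU -> global average pooling gives one feature per filter.
features = np.array([conv2d(rho, k).clip(min=0).mean() for k in kernels])
W = rng.normal(size=(2, 8))           # dense layer mapping features to 2 EoS classes
probs = softmax(W @ features)
print(probs.shape, probs.sum())
```

Training such a network on labeled spectra from hydrodynamic simulations with different EoS inputs is what turns the convolutional features into the "EoS-meter" correlations described in the abstract; the sketch only shows the shape of the computation.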