 Ipp, Andreas


Strategic White Paper on AI Infrastructure for Particle, Nuclear, and Astroparticle Physics: Insights from JENA and EuCAIF

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is transforming scientific research, with deep learning methods playing a central role in data analysis, simulations, and signal detection across particle, nuclear, and astroparticle physics. Within the JENA communities (ECFA, NuPECC, and APPEC) and as part of the EuCAIF initiative, AI integration is advancing steadily. However, broader adoption remains constrained by challenges such as limited computational resources, a lack of expertise, and difficulties in transitioning from research and development (R&D) to production. This white paper provides a strategic roadmap, informed by a community survey, to address these barriers. It outlines critical infrastructure requirements, prioritizes training initiatives, and proposes funding strategies to scale AI capabilities across fundamental physics over the next five years.


Large Physics Models: Towards a collaborative approach with Large Language Models and Foundation Models

arXiv.org Artificial Intelligence

This paper explores ideas and provides a potential roadmap for the development and evaluation of physics-specific large-scale AI models, which we call Large Physics Models (LPMs). These models build on foundation models such as Large Language Models (LLMs), which are trained on broad data, and are tailored to address the demands of physics research. LPMs can function independently or as part of an integrated framework. This framework can incorporate specialized tools, including symbolic reasoning modules for mathematical manipulations, frameworks to analyse specific experimental and simulated data, and mechanisms for synthesizing theories and scientific literature. We begin by examining whether the physics community should actively develop and refine dedicated models, rather than relying solely on commercial LLMs. We then outline how LPMs can be realised through interdisciplinary collaboration among experts in physics, computer science, and philosophy of science. To integrate these models effectively, we identify three key pillars: Development, Evaluation, and Philosophical Reflection. Development focuses on constructing models capable of processing physics texts, mathematical formulations, and diverse physical data. Evaluation assesses accuracy and reliability through testing and benchmarking. Finally, Philosophical Reflection encompasses the analysis of the broader implications of LLMs in physics, including their potential to generate new scientific understanding and the novel collaboration dynamics that might arise in research. Inspired by the organizational structure of experimental collaborations in particle physics, we propose a similarly interdisciplinary and collaborative approach to building and refining Large Physics Models. This roadmap provides specific objectives, defines pathways to achieve them, and identifies challenges that must be addressed to realise physics-specific large-scale AI models.


Physics-Driven Learning for Inverse Problems in Quantum Chromodynamics

arXiv.org Artificial Intelligence

The integration of deep learning techniques and physics-driven designs is reshaping the way we address inverse problems, in which accurate physical properties are extracted from complex data sets. This is particularly relevant for quantum chromodynamics (QCD), the theory of strong interactions, with its inherent limitations in observational data and demanding computational approaches. This perspective highlights the advances and potential of physics-driven learning methods, focusing on the prediction of physical quantities in QCD, and drawing connections to machine learning (ML). It is shown that the fusion of ML and physics can lead to more efficient and reliable problem-solving strategies. Key ideas of ML, the methodology of embedding physics priors, and generative models as inverse modelling of physical probability distributions are introduced. Specific applications cover first-principles lattice calculations, and the QCD physics of hadrons, neutron stars, and heavy-ion collisions. These examples provide a structured and concise overview of how incorporating prior knowledge such as symmetry, continuity, and equations into deep learning designs can address diverse inverse problems across different physical sciences.
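To make the idea of embedding a symmetry prior concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): a discrete Z2 symmetry f(x) = f(-x) is enforced exactly by averaging an arbitrary network over the group orbit, rather than hoping that training discovers the symmetry from data.

```python
# Hypothetical sketch: hard-wiring a Z2 symmetry prior f(x) = f(-x) into
# a model by symmetrization. Any function wrapped this way is exactly
# even by construction, independent of its weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(x):
    """A tiny untrained MLP, 1 -> 16 -> 1, with tanh activation."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2)[0]

def net_symmetrized(x):
    """Averaging over the group orbit {x, -x} enforces f(x) = f(-x)."""
    return 0.5 * (net(x) + net(-x))

x = 0.73
print(net(x) - net(-x))                          # generically nonzero
print(net_symmetrized(x) - net_symmetrized(-x))  # exactly zero
```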


Machine learning a fixed point action for SU(3) gauge theory with a gauge equivariant convolutional neural network

arXiv.org Artificial Intelligence

Fixed point lattice actions are designed to have continuum classical properties unaffected by discretization effects and reduced lattice artifacts at the quantum level. They provide a possible way to extract continuum physics with coarser lattices, thereby making it possible to circumvent problems with critical slowing down and topological freezing toward the continuum limit. A crucial ingredient for practical applications is to find an accurate and compact parametrization of a fixed point action, since many of its properties are only implicitly defined. Here we use machine learning methods to revisit the question of how to parametrize fixed point actions. In particular, we obtain a fixed point action for four-dimensional SU(3) gauge theory using convolutional neural networks with exact gauge invariance. The large operator space allows us to find superior parametrizations compared to previous studies, a necessary first step for future Monte Carlo simulations.
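For readers unfamiliar with the building blocks involved, the sketch below is an illustrative NumPy example (using SU(2) links on a two-dimensional lattice for brevity, rather than the paper's four-dimensional SU(3) theory). It constructs the elementary plaquette, the smallest Wilson loop, and evaluates the standard Wilson gauge action from it; parametrizations of lattice actions, including fixed point actions, are built from traces of such loop operators.

```python
# Illustrative sketch, not the paper's code: plaquettes and the Wilson
# action for SU(2) links on a 2D periodic lattice.
import numpy as np

rng = np.random.default_rng(1)

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]

def random_su2():
    """Random SU(2) matrix via the unit-quaternion parametrization."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[k + 1] * PAULI[k] for k in range(3))

L = 4  # lattice extent; U[t, x, mu] is the link at site (t, x), direction mu
U = np.array([[[random_su2() for mu in range(2)]
               for x in range(L)] for t in range(L)])

def plaquette(U, t, x):
    """P(t,x) = U_0(t,x) U_1(t+1,x) U_0(t,x+1)^dag U_1(t,x)^dag."""
    tp, xp = (t + 1) % L, (x + 1) % L  # periodic boundary conditions
    return U[t, x, 0] @ U[tp, x, 1] @ U[t, xp, 0].conj().T @ U[t, x, 1].conj().T

# Wilson gauge action: sum over plaquettes of (1 - Re tr P / N), N = 2.
S = sum(1 - plaquette(U, t, x).trace().real / 2
        for t in range(L) for x in range(L))
print(S)
```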


Fixed point actions from convolutional neural networks

arXiv.org Artificial Intelligence

Lattice gauge-equivariant convolutional neural networks (L-CNNs) can be used to form arbitrarily shaped Wilson loops and can approximate any gauge-covariant or gauge-invariant function on the lattice. Here we use L-CNNs to describe fixed point (FP) actions which are based on renormalization group transformations. FP actions are classically perfect, i.e., they have no lattice artifacts on classical gauge-field configurations satisfying the equations of motion, and therefore possess scale invariant instanton solutions. FP actions are tree-level Symanzik-improved to all orders in the lattice spacing and can produce physical predictions with very small lattice artifacts even on coarse lattices. We find that L-CNNs are much more accurate at parametrizing the FP action compared to older approaches. They may therefore provide a way to circumvent critical slowing down and topological freezing towards the continuum limit.


Applications of Lattice Gauge Equivariant Neural Networks

arXiv.org Artificial Intelligence

The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations, which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on toy models to test these ideas in practice.
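The key structural point, that a neural ODE can deform link configurations without leaving the gauge group, can be sketched generically. In the hypothetical example below the drift Z is a random placeholder; in Wilson flow it would be the gradient of the gauge action, and in the L-CNN setting the output of a gauge-equivariant network. Because Z is projected onto the Lie algebra, each Euler step multiplies the link by a group element, so unitarity and unit determinant are preserved exactly.

```python
# Hypothetical sketch of a group-preserving Euler step for a flow on
# SU(3) link variables: dU/dt = Z(U) U with Z in the Lie algebra su(3).
import numpy as np
from scipy.linalg import expm

def project_su_algebra(M):
    """Project a matrix onto su(N): anti-Hermitian and traceless."""
    A = 0.5 * (M - M.conj().T)
    return A - np.trace(A) / M.shape[0] * np.eye(M.shape[0])

rng = np.random.default_rng(2)

def random_matrix():
    return rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

U = expm(project_su_algebra(random_matrix()))  # a random SU(3) link

eps = 0.01  # flow step size
for step in range(10):
    Z = project_su_algebra(random_matrix())  # placeholder drift term
    U = expm(eps * Z) @ U                    # group-preserving Euler step

# Unitarity and unit determinant hold to machine precision after the flow:
print(np.linalg.norm(U.conj().T @ U - np.eye(3)), np.linalg.det(U))
```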


Equivariance and generalization in neural networks

arXiv.org Machine Learning

The crucial role played by the underlying symmetries of high energy physics and lattice field theories calls for the implementation of such symmetries in the neural network architectures that are applied to the physical system under consideration. In these proceedings, we focus on the consequences of incorporating translational equivariance among the network properties, particularly in terms of performance and generalization. The benefits of equivariant networks are exemplified by studying a complex scalar field theory, on which various regression and classification tasks are examined. For a meaningful comparison, promising equivariant and non-equivariant architectures are identified by means of a systematic search. The results indicate that in most of the tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
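Translational equivariance of a convolution with periodic boundary conditions can be verified numerically in a few lines. The following illustrative NumPy snippet (not the paper's code) checks that shifting the field and then convolving gives exactly the same result as convolving first and then shifting.

```python
# Illustrative check: a periodic convolution commutes with lattice
# translations, i.e. conv(shift(phi)) == shift(conv(phi)).
import numpy as np

rng = np.random.default_rng(3)
phi = rng.normal(size=(8, 8))     # a real field on an 8x8 periodic lattice
kernel = rng.normal(size=(3, 3))  # an arbitrary 3x3 filter

def conv_periodic(f, k):
    """2D cross-correlation with wrap-around (periodic) boundaries."""
    out = np.zeros_like(f)
    r = k.shape[0] // 2
    for dt in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += k[dt + r, dx + r] * np.roll(f, (-dt, -dx), axis=(0, 1))
    return out

shift = (2, 5)
lhs = conv_periodic(np.roll(phi, shift, axis=(0, 1)), kernel)
rhs = np.roll(conv_periodic(phi, kernel), shift, axis=(0, 1))
print(np.max(np.abs(lhs - rhs)))  # 0.0 up to floating point
```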


Generalization capabilities of neural networks in lattice applications

arXiv.org Machine Learning

In recent years, the use of machine learning has become increasingly popular in the context of lattice field theories. An essential element of such theories is their symmetries, whose inclusion in the neural network architecture can yield substantial gains in performance and generalizability. A fundamental symmetry that usually characterizes physical systems on a lattice with periodic boundary conditions is equivariance under spacetime translations. Here we investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones. The system we consider is a complex scalar field with quartic interaction on a two-dimensional lattice in the flux representation, on which the networks carry out various regression and classification tasks. Promising equivariant and non-equivariant architectures are identified with a systematic search. We demonstrate that in most of these tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
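The generalization to different lattice sizes has a simple structural origin: convolution weights are shared across lattice sites, so one set of parameters defines a valid model on any lattice. The hypothetical sketch below (with a global average pooling readout as one possible size-agnostic head, not necessarily the architecture used in the paper) runs the same weights on three lattice sizes.

```python
# Hypothetical sketch: one convolutional model evaluated on several
# lattice sizes, thanks to weight sharing and global average pooling.
import numpy as np

rng = np.random.default_rng(4)
kernel = rng.normal(size=(3, 3))  # one shared set of weights

def conv_periodic(f, k):
    """3x3 cross-correlation with periodic boundaries via np.roll."""
    out = np.zeros_like(f)
    for dt in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dt + 1, dx + 1] * np.roll(f, (-dt, -dx), axis=(0, 1))
    return out

def model(phi):
    """Conv -> ReLU -> global average pooling: a size-agnostic scalar."""
    return np.maximum(conv_periodic(phi, kernel), 0).mean()

for L in (8, 16, 32):  # the same weights accept any lattice size
    print(L, model(rng.normal(size=(L, L))))
```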


Preserving gauge invariance in neural networks

arXiv.org Machine Learning

In these proceedings we present lattice gauge equivariant convolutional neural networks (L-CNNs) which are able to process data from lattice gauge theory simulations while exactly preserving gauge symmetry. We review aspects of the architecture and show how L-CNNs can represent a large class of gauge invariant and equivariant functions on the lattice. We compare the performance of L-CNNs and non-equivariant networks using a non-linear regression problem and demonstrate how gauge invariance is broken for non-equivariant models.
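The contrast drawn here can be made concrete numerically. In the illustrative sketch below (SU(2) links on a 2D periodic lattice; conventions are ours, not the paper's code), the traced plaquette, the kind of gauge-invariant quantity an L-CNN composes, is unchanged by a random gauge transformation, while the raw link entries that a non-equivariant network would consume change by O(1).

```python
# Illustrative sketch: traced Wilson loops are gauge invariant, raw link
# entries are not. SU(2) links on a 2D periodic lattice.
import numpy as np

rng = np.random.default_rng(5)

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], complex)]

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * sum(a[k + 1] * PAULI[k] for k in range(3))

L = 4
U = np.array([[[random_su2() for mu in range(2)]
               for x in range(L)] for t in range(L)])
Omega = np.array([[random_su2() for x in range(L)] for t in range(L)])

def gauge_transform(U, Om):
    """U_mu(x) -> Omega(x) U_mu(x) Omega(x + mu)^dag."""
    V = np.empty_like(U)
    for t in range(L):
        for x in range(L):
            V[t, x, 0] = Om[t, x] @ U[t, x, 0] @ Om[(t + 1) % L, x].conj().T
            V[t, x, 1] = Om[t, x] @ U[t, x, 1] @ Om[t, (x + 1) % L].conj().T
    return V

def traced_plaquette(U, t, x):
    tp, xp = (t + 1) % L, (x + 1) % L
    P = U[t, x, 0] @ U[tp, x, 1] @ U[t, xp, 0].conj().T @ U[t, x, 1].conj().T
    return P.trace().real

V = gauge_transform(U, Omega)
print(abs(traced_plaquette(U, 0, 0) - traced_plaquette(V, 0, 0)))  # ~1e-16
print(np.max(np.abs(U - V)))  # O(1): raw link entries are not invariant
```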


Lattice gauge symmetry in neural networks

arXiv.org Machine Learning

The concept of symmetry or equivariance under symmetry transformations is at the theoretical foundation of modern physics, and it is hard to overstate its importance. Noether's first theorem establishes a clear relationship between the invariance of Lagrangians under continuous global symmetries and the existence of conserved quantities and conserved currents [1]. Global symmetries, as the name implies, are transformations that are applied in the same way at every point in spacetime. In mechanical systems and field theories, energy and momentum conservation laws follow from invariance under spacetime translations, whereas rotational invariance implies the conservation of angular momentum. More generally, global symmetry under the Poincaré group, which includes translations, rotations, and boosts, is the foundation of special relativity.
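The statement of Noether's first theorem invoked here can be written down in one line. For a field theory with Lagrangian density \mathcal{L}(\phi, \partial_\mu \phi) whose action is invariant under the infinitesimal global transformation \phi \to \phi + \epsilon \, \delta\phi, the associated current and its conservation law (valid on the equations of motion) read:

```latex
% Noether current for a strictly invariant Lagrangian; if \mathcal{L}
% changes by a total derivative, the current acquires an extra term.
j^{\mu} = \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,\delta\phi,
\qquad
\partial_{\mu} j^{\mu} = 0 .
```

For spacetime translations, this construction yields the energy-momentum tensor, recovering the energy and momentum conservation laws mentioned above.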