

CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers

Pan, Haining, Roggeveen, James V., Berg, Erez, Carrasquilla, Juan, Chowdhury, Debanjan, Ganguli, Surya, Ghimenti, Federico, Hasik, Juraj, Hunt, Henry, Jiang, Hong-Chen, Kamb, Mason, Kao, Ying-Jer, Khatami, Ehsan, Lawler, Michael J., Luo, Di, Neupert, Titus, Qi, Xiaoliang, Brenner, Michael P., Kim, Eun-Ah

arXiv.org Artificial Intelligence

Large language models (LLMs) have shown remarkable progress in coding and math problem-solving, but evaluation on advanced research-level problems in the hard sciences remains scarce. To fill this gap, we present CMT-Benchmark, a dataset of 50 problems covering condensed matter theory (CMT) at the level of an expert researcher. Topics span analytical and computational approaches in quantum many-body physics and classical statistical mechanics. The dataset was designed and verified by a panel of expert researchers from around the world. We built the dataset through a collaborative environment that challenges the panel to write and refine problems they would want a research assistant to solve, including Hartree-Fock, exact diagonalization, quantum/variational Monte Carlo, density matrix renormalization group (DMRG), quantum/classical statistical mechanics, and model building. We evaluate LLMs by programmatically checking solutions against expert-supplied ground truth. We developed machine-grading tools, including symbolic handling of non-commuting operators via normal ordering, that generalize across tasks. Our evaluations show that frontier models struggle with all of the problems in the dataset, highlighting a gap in the physical reasoning skills of current LLMs. Notably, experts identified strategies for creating increasingly difficult problems by interacting with the LLMs and exploiting common failure modes. The best model, GPT-5, solves 30\% of the problems; the average across 17 models (GPT, Gemini, Claude, DeepSeek, Llama) is 11.4$\pm$2.1\%. Moreover, 18 problems are solved by none of the 17 models, and 26 by at most one. These unsolved problems span quantum Monte Carlo, variational Monte Carlo, and DMRG. Answers sometimes violate fundamental symmetries or have unphysical scaling dimensions. We believe this benchmark will guide development toward capable AI research assistants and tutors.
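The abstract's "symbolic handling of non-commuting operators via normal ordering" can be illustrated with a minimal sketch (an assumption for illustration, not the benchmark's actual grader): fermionic ladder operators are represented as tuples, and the anticommutation relation c_i c†_j = δ_ij − c†_j c_i is applied recursively until every creation operator stands to the left of every annihilation operator.

```python
def normal_order(ops, coeff=1):
    """Expand a product of fermionic ladder operators into normal-ordered terms.

    Operators are tuples ("c", i) or ("cdag", i); returns a dict mapping
    normal-ordered operator tuples to integer coefficients.
    """
    ops = list(ops)
    for k in range(len(ops) - 1):
        a, b = ops[k], ops[k + 1]
        if a[0] == "c" and b[0] == "cdag":  # c_i cdag_j = delta_ij - cdag_j c_i
            result = normal_order(ops[:k] + [b, a] + ops[k + 2:], -coeff)
            if a[1] == b[1]:                # add the delta_ij contraction term
                for key, val in normal_order(ops[:k] + ops[k + 2:], coeff).items():
                    result[key] = result.get(key, 0) + val
            return {key: val for key, val in result.items() if val != 0}
    return {tuple(ops): coeff}

# c_0 cdag_0  ->  1 - cdag_0 c_0  (the empty tuple () stands for the identity)
expansion = normal_order([("c", 0), ("cdag", 0)])
```

Grading against such a canonical form makes two operator expressions comparable term by term, regardless of the order in which a model writes them.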


Innate Motivation for Robot Swarms by Minimizing Surprise: From Simple Simulations to Real-World Experiments

Kaiser, Tanja Katharina, Hamann, Heiko

arXiv.org Artificial Intelligence

Applications of large-scale mobile multi-robot systems can be beneficial over monolithic robots because of their higher potential for robustness and scalability. Developing controllers for multi-robot systems is challenging because the multitude of interactions is hard to anticipate and difficult to model. Automatic design using machine learning or evolutionary robotics seems to be an option to avoid that challenge, but it brings the challenge of designing reward or fitness functions. Generic reward and fitness functions seem unlikely to exist, and task-specific rewards often have undesired side effects. Approaches based on so-called innate motivation try to avoid the specific formulation of rewards and work instead with different drivers, such as curiosity. Our approach to innate motivation is to minimize surprise, which we implement by maximizing the accuracy of the swarm robots' sensor predictions using neuroevolution. A unique advantage of the swarm robot case is that swarm members populate the robot's environment and can trigger more active behaviors in a self-referential loop. We summarize our previous simulation-based results concerning behavioral diversity, robustness, scalability, and engineered self-organization, and put them into context. In several new studies, we analyze the influence of the optimizer's hyperparameters, the scalability of evolved behaviors, and the impact of realistic robot simulations. Finally, we present results using real robots that show how the reality gap can be bridged.


A Cyclical Route Linking Fundamental Mechanism and AI Algorithm: An Example from Poisson's Ratio in Amorphous Networks

Zhu, Changliang, Fang, Chenchao, Jin, Zhipeng, Li, Baowen, Shen, Xiangying, Xu, Lei

arXiv.org Artificial Intelligence

Shenzhen JL Computational Science and Applied Research Institute, Shenzhen 518131, People's Republic of China (Dated: December 15, 2023) "AI for science" is widely recognized as a future trend in the development of scientific research. Currently, although machine learning algorithms have played a crucial role in scientific research with numerous successful cases, relatively few instances exist where AI assists researchers in uncovering the underlying physical mechanisms behind a certain phenomenon and subsequently using that mechanism to improve machine learning algorithms' efficiency. This article uses the investigation into the relationship between extreme Poisson's ratio values and the structure of amorphous networks as a case study to illustrate how machine learning methods can assist in revealing underlying physical mechanisms. Upon recognizing that the Poisson's ratio relies on the low-frequency vibrational modes of the dynamical matrix, we can then employ a convolutional neural network, trained on the dynamical matrix instead of traditional images, to predict the Poisson's ratio of amorphous networks with much higher efficiency. Through this example, we aim to showcase the role that artificial intelligence can play in revealing fundamental physical mechanisms, which subsequently improves machine learning algorithms significantly. Using artificial intelligence (AI) to help scientific research and design, reducing the reliance on extensive experimental trial and error, has emerged as a prominent and well-recognized trend. Fueled by the vigorous advancements in computational science, machine learning has experienced unprecedented growth. Such research generally encompasses the following three stages.
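The physical input the abstract highlights, the low-frequency vibrational modes of the dynamical matrix, is straightforward to extract: for a symmetric dynamical matrix D, the mode frequencies are the square roots of its eigenvalues. A toy sketch (a 1D spring chain stands in for the paper's amorphous networks, which is an assumption for illustration):

```python
import numpy as np

def low_frequency_modes(D, n_modes=4):
    """Return the n_modes lowest frequencies and mode vectors of a symmetric D."""
    D = np.asarray(D, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(D)          # eigenvalues in ascending order
    freqs = np.sqrt(np.clip(eigvals, 0.0, None))  # omega = sqrt(lambda)
    return freqs[:n_modes], eigvecs[:, :n_modes]

# Toy dynamical matrix: a fixed-end chain of unit springs (tridiagonal Laplacian).
n = 8
D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
freqs, modes = low_frequency_modes(D)
```

Feeding the matrix (or its low-lying spectrum) to a network, rather than an image of the structure, is what lets the mechanism found by the analysis speed up the learning task.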


A new type of material called a mechanical neural network can learn and change its physical properties to create adaptable, strong structures

Robohub

This connection of springs is a new type of material that can change shape and learn new properties. It can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me. The new material is a type of architected material – like a 3D lattice – which gets its properties mainly from the geometry and specific traits of its design rather than from what it is made out of. Take hook-and-loop fabric closures like Velcro, for example.


Supplementing Recurrent Neural Network Wave Functions with Symmetry and Annealing to Improve Accuracy

Hibat-Allah, Mohamed, Melko, Roger G., Carrasquilla, Juan

arXiv.org Artificial Intelligence

Recurrent neural networks (RNNs) are a class of neural networks that have emerged from the paradigm of artificial intelligence and have enabled many interesting advances in the field of natural language processing. Interestingly, these architectures were shown to be powerful ansätze for approximating the ground state of quantum systems. Here, we build on the results of [Phys. Rev. Research 2, 023358 (2020)] and construct a more powerful RNN wave function ansatz in two dimensions. We use symmetry and annealing to obtain accurate estimates of ground state energies of the two-dimensional (2D) Heisenberg model, on the square lattice and on the triangular lattice. We show that our method is superior to the Density Matrix Renormalisation Group (DMRG) for system sizes larger than or equal to $14 \times 14$ on the triangular lattice.
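The annealing ingredient can be sketched abstractly (a toy distribution stands in for the actual RNN ansatz, so all names here are illustrative assumptions): instead of minimizing the energy directly, one minimizes a pseudo free energy F(T) = ⟨E⟩ − T·S with the temperature T annealed from T₀ down to 0, so the entropy term initially discourages the ansatz from collapsing into local minima.

```python
import numpy as np

def pseudo_free_energy(probs, energies, T):
    """F = sum_s p(s) E(s) - T * S, with S the Shannon entropy of p."""
    probs = np.asarray(probs, dtype=float)
    energy = np.dot(probs, energies)
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # epsilon avoids log(0)
    return energy - T * entropy

def annealing_schedule(T0, n_steps):
    """Linear schedule T_k = T0 * (1 - k / n_steps), ending exactly at T = 0."""
    return [T0 * (1 - k / n_steps) for k in range(n_steps + 1)]
```

At T = 0 the objective reduces to the usual variational energy, so the final annealing step recovers standard ground-state optimization.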


Connection Topology and Dynamics in Lateral Inhibition Networks

Marcus, C. M., Waugh, F. R., Westervelt, R. M.

Neural Information Processing Systems

We show analytically how the stability of two-dimensional lateral inhibition neural networks depends on the local connection topology. For various network topologies, we calculate the critical time delay for the onset of oscillation in continuous-time networks and present analytic phase diagrams characterizing the dynamics of discrete-time networks.
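A stability analysis of this kind hinges on the eigenvalue spectrum of the connection matrix for the chosen topology. A minimal sketch (illustrative only, not the paper's derivation): build the nearest-neighbor lateral inhibition matrix on an n × n square lattice with periodic boundaries and inhibitory weight −w, then read off its extreme eigenvalue, the quantity that governs the onset of delay-induced oscillation.

```python
import numpy as np

def lateral_inhibition_matrix(n, w=1.0):
    """Connection matrix: -w between lattice nearest neighbors, 0 otherwise."""
    N = n * n
    J = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            s = i * n + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                t = ((i + di) % n) * n + (j + dj) % n  # periodic boundaries
                J[s, t] = -w
    return J

J = lateral_inhibition_matrix(8)
lam_min = np.linalg.eigvalsh(J).min()  # most negative eigenvalue: -4w here
```

For this periodic square lattice the spectrum is −w·(2cos(2πp/n) + 2cos(2πq/n)), so the extreme eigenvalue is −4w; changing the local connection topology changes this spectrum and with it the critical delay.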

