

A Sharp Universality Dichotomy for the Free Energy of Spherical Spin Glasses

Kim, Taegyun

arXiv.org Machine Learning

We study the free energy for pure and mixed spherical $p$-spin models with i.i.d.\ disorder. In the mixed case, each $p$-interaction layer is assumed either to have regularly varying tails with exponent $\alpha_p$ or to satisfy a finite $2p$-th moment condition. For the pure spherical $p$-spin model with regularly varying disorder of tail index $\alpha$, we introduce a tail-adapted normalization that interpolates between the classical Gaussian scaling and the extreme-value scale, and we prove a sharp universality dichotomy for the quenched free energy. In the subcritical regime $\alpha<2p$, the thermodynamics is driven by finitely many extremal couplings and the free energy converges to a non-degenerate random limit described by the NIM (non-intersecting monomial) model, depending only on extreme-order statistics. At the critical exponent $\alpha=2p$, we obtain a random one-dimensional TAP-type variational formula capturing the coexistence of an extremal spike and a universal Gaussian bulk on spherical slices. In the supercritical regime $\alpha>2p$ (more generally, under a finite $2p$-th moment assumption), the free energy is universal and agrees with the deterministic Crisanti--Sommers/Parisi value of the corresponding Gaussian model, as established in [Sawhney-Sellke'24]. We then extend the subcritical and critical results to mixed spherical models in which each $p$-layer is either heavy-tailed with $\alpha_p\le 2p$ or has finite $2p$-th moment. In particular, we derive a TAP-type variational representation for the mixed model, yielding a unified universality classification of the quenched free energy across tail exponents and mixtures.
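The objects in this abstract can be illustrated numerically. The sketch below is a toy, not the paper's construction: it draws i.i.d. heavy-tailed (symmetrized Pareto) couplings for a pure 2-spin spherical model and computes a crude annealed free-energy proxy by Monte Carlo over the sphere. The tail index, sample counts, and the classical $n^{-1/2}$ scaling (which the paper replaces with a tail-adapted normalization in the heavy-tailed regimes) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 50, 1.5  # system size; Pareto tail index (subcritical: alpha < 2p for p = 2)

# i.i.d. heavy-tailed couplings (symmetrized Pareto) for a pure 2-spin model
J = rng.pareto(alpha, size=(n, n)) * rng.choice([-1.0, 1.0], size=(n, n))
J = (J + J.T) / 2.0

def hamiltonian(sigma, scale):
    # H(sigma) = scale * sigma^T J sigma; `scale` stands in for the normalization,
    # here the classical Gaussian choice n^{-1/2} rather than the tail-adapted one
    return scale * (sigma @ J @ sigma)

# crude annealed proxy: log-average of exp(-beta * H) over uniform sphere samples
beta, m = 0.5, 2000
scale = n ** (-0.5)
samples = rng.normal(size=(m, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
samples *= np.sqrt(n)  # points on the sphere of radius sqrt(n)
energies = np.array([hamiltonian(s, scale) for s in samples])
# subtract the minimum for numerical stability; the result is then nonpositive
f_proxy = np.log(np.mean(np.exp(-beta * (energies - energies.min())))) / n
```

With heavy tails the empirical energies are dominated by a few extremal couplings, which is the mechanism behind the subcritical regime described above.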


Shallow-circuit Supervised Learning on a Quantum Processor

Candelori, Luca, Majumder, Swarnadeep, Mezzacapo, Antonio, Moreno, Javier Robledo, Musaelian, Kharen, Nagarajan, Santhanam, Pinnamaneni, Sunil, Sharma, Kunal, Villani, Dario

arXiv.org Machine Learning

Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as a steep quantum cost for the loading of classical data and poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. In this work, we show that one can overcome these obstacles by using a linear Hamiltonian-based machine learning method which provides a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained to express classical datasets through local gradients. We demonstrate the efficacy and scalability of the methods by performing experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
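The core linear-algebra idea behind Krylov diagonalization can be sketched classically. The snippet below is a plain (not sample-based, not quantum) Krylov projection for a small random Hermitian stand-in for a data Hamiltonian; the dimension and subspace size are arbitrary illustrative choices. By the Rayleigh-Ritz variational principle, the projected ground energy upper-bounds the exact one.

```python
import numpy as np

rng = np.random.default_rng(1)

# small random symmetric matrix as a classical stand-in for a k-local Hamiltonian
n = 64
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0

# build a Krylov subspace span{v, Hv, H^2 v, ...} and diagonalize H projected onto it
k = 10
K = np.empty((n, k))
w = rng.normal(size=n)
w /= np.linalg.norm(w)
for j in range(k):
    K[:, j] = w
    w = H @ w
    w /= np.linalg.norm(w)

Q, _ = np.linalg.qr(K)        # orthonormal Krylov basis
H_proj = Q.T @ H @ Q          # k x k projected Hamiltonian
e_krylov = np.linalg.eigvalsh(H_proj).min()
e_exact = np.linalg.eigvalsh(H).min()
```

The variational bound `e_krylov >= e_exact` holds for any orthonormal basis, which is what makes Krylov-type ground-state estimates trustworthy even when the subspace is small.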


Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models

Neural Information Processing Systems

In this study, we introduce a unified neural network architecture, the Deep Equilibrium Density Functional Theory Hamiltonian (DEQH) model, which incorporates Deep Equilibrium Models (DEQs) for predicting Density Functional Theory (DFT) Hamiltonians. The DEQH model inherently captures the self-consistent nature of the Hamiltonian, a critical aspect often overlooked by traditional machine learning approaches to Hamiltonian prediction. By employing DEQ within our model architecture, we circumvent the need for DFT calculations during the training phase to introduce the Hamiltonian's self-consistency, thus addressing computational bottlenecks associated with large or complex systems. We propose a versatile framework that combines DEQ with off-the-shelf machine learning models for predicting Hamiltonians. When benchmarked on the MD17 and QH9 datasets, DEQHNet, an instantiation of the DEQH framework, demonstrates a significant improvement in prediction accuracy. Beyond being a predictor, the DEQH model is a Hamiltonian solver, in the sense that it uses the fixed-point solving capability of the deep equilibrium model to iteratively solve for the Hamiltonian.
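The fixed-point mechanism a DEQ relies on can be shown in a few lines. This is a generic toy equilibrium layer, not the DEQH architecture: it solves z = tanh(Wz + Ux) by simple iteration, with the weight scale chosen small so the map is a contraction and the iteration provably converges.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
# small spectral norm => tanh(W z + U x) is a contraction in z
W = 0.3 * rng.normal(size=(d, d)) / np.sqrt(d)
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

def f(z):
    # the "layer" whose fixed point z* = f(z*) plays the role of the model output
    return np.tanh(W @ z + U @ x)

z = np.zeros(d)
for _ in range(200):
    z_next = f(z)
    if np.linalg.norm(z_next - z) < 1e-12:
        z = z_next
        break
    z = z_next

residual = np.linalg.norm(f(z) - z)  # how far z is from being a true fixed point
```

In the DEQH setting the analogue of `f` maps a candidate Hamiltonian to an updated one, so the converged fixed point plays the role of the self-consistent Hamiltonian.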


Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms

Neural Information Processing Systems

A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians. A number of recent works gave provably efficient machine learning (ML) algorithms for learning ground states.


Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data

Neural Information Processing Systems

Hamiltonian mechanics is a well-established theory for modeling the time evolution of systems with a conserved quantity (called the Hamiltonian), such as the total energy of the system. Recent works have parameterized the Hamiltonian by machine learning models (e.g., neural networks), allowing Hamiltonian dynamics to be obtained from state trajectories without explicit mathematical modeling. However, the performance of existing models is limited because in practice we can observe only noisy and sparse trajectories. This paper proposes a probabilistic model that can learn the dynamics of conservative or dissipative systems from noisy and sparse data. We introduce a Gaussian process that incorporates the symplectic geometric structure of Hamiltonian systems, which is used as a prior distribution for estimating Hamiltonian systems with additive dissipation. We then present its spectral representation, Symplectic Spectrum Gaussian Processes (SSGPs), for which we newly derive random Fourier features with symplectic structures. This allows us to construct an efficient variational inference algorithm for training the models while simulating the dynamics via ordinary differential equation solvers. Experiments on several physical systems show that SSGP offers excellent performance in predicting dynamics that follow the energy conservation or dissipation law from noisy and sparse data.
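The random-Fourier-feature building block the abstract extends can be sketched in its plain (non-symplectic) form. The snippet below approximates the RBF kernel by the standard Rahimi-Recht construction; the feature count and test points are arbitrary, and the symplectic structure of SSGPs is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# random Fourier features for the RBF kernel k(x, y) = exp(-||x - y||^2 / 2)
d, D = 2, 20000                       # input dimension, number of random features
Omega = rng.normal(size=(D, d))       # frequencies drawn from the kernel's spectral density
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    # feature map with E[phi(x) . phi(y)] = k(x, y)
    return np.sqrt(2.0 / D) * np.cos(Omega @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
k_exact = np.exp(-np.sum((x - y) ** 2) / 2.0)
k_rff = phi(x) @ phi(y)
err = abs(k_exact - k_rff)            # Monte Carlo error, O(1/sqrt(D))
```

SSGPs replace this generic feature construction with one whose features respect the symplectic form, so that sampled dynamics stay close to Hamiltonian flows.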


Optimal certification of constant-local Hamiltonians

Lee, Junseo, Shin, Myeongjin

arXiv.org Artificial Intelligence

We study the problem of certifying local Hamiltonians from real-time access to their dynamics. Given oracle access to $e^{-itH}$ for an unknown $k$-local Hamiltonian $H$ and a fully specified target Hamiltonian $H_0$, the goal is to decide whether $H$ is exactly equal to $H_0$ or differs from $H_0$ by at least $\varepsilon$ in normalized Frobenius norm, while minimizing the total evolution time. We introduce the first intolerant Hamiltonian certification protocol that achieves optimal performance for all constant-locality Hamiltonians. For general $n$-qubit, $k$-local, traceless Hamiltonians, our procedure uses $O(c^k/\varepsilon)$ total evolution time for a universal constant $c$, and succeeds with high probability. In particular, for $O(1)$-local Hamiltonians, the total evolution time becomes $\Theta(1/\varepsilon)$, matching the known $\Omega(1/\varepsilon)$ lower bounds and achieving the gold-standard Heisenberg-limit scaling. Prior certification methods either relied on implementing inverse evolution of $H$, required controlled access to $e^{-itH}$, or achieved near-optimal guarantees only in restricted settings such as the Ising case ($k=2$). In contrast, our algorithm requires neither inverse evolution nor controlled operations: it uses only forward real-time dynamics and achieves optimal intolerant certification for all constant-locality Hamiltonians.
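The decision quantity in this promise problem is easy to state concretely. The snippet below computes the normalized Frobenius distance directly from matrix form for a toy 2-qubit pair $(H, H_0)$; the actual protocol, of course, only has access to $e^{-itH}$, so this is a classical illustration of the distance being certified, with all matrices and the value of $\varepsilon$ chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(4)

def normalized_frobenius(A, B):
    # ||A - B||_F / sqrt(dim): the distance used in the abstract's promise problem
    return np.linalg.norm(A - B, "fro") / np.sqrt(A.shape[0])

# target Hamiltonian H0: a random traceless symmetric 4 x 4 matrix (2 qubits)
dim = 4
M = rng.normal(size=(dim, dim))
H0 = (M + M.T) / 2.0
H0 -= (np.trace(H0) / dim) * np.eye(dim)

# perturbation with normalized Frobenius norm exactly eps
eps = 0.3
P = rng.normal(size=(dim, dim))
P = (P + P.T) / 2.0
P -= (np.trace(P) / dim) * np.eye(dim)
P *= eps * np.sqrt(dim) / np.linalg.norm(P, "fro")

H = H0 + P
d0 = normalized_frobenius(H, H0)  # equals eps by construction
```

An intolerant certifier must accept when `H` equals `H0` and reject whenever `d0 >= eps`, using total evolution time scaling like $1/\varepsilon$.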


Why is topology hard to learn?

Oriekhov, D. O., Bergkamp, Stan, Jin, Guliuxin, Luna, Juan Daniel Torres, Zouggari, Badr, van der Meer, Sibren, Yazidi, Naoual El, Greplova, Eliska

arXiv.org Artificial Intelligence

Phase classification has become a prototypical benchmark for data-driven analysis of condensed matter physics. The type and complexity of the phase transition dictate the level of complexity of the algorithm one has to employ. This topic has been broadly explored, offering a menu of both supervised and unsupervised techniques ranging from simple clustering [1-3] to more complex machine learning methods [4-7]. The phase classification problem is most commonly posed as follows: we allow our model to view a dataset that is both relevant and straightforwardly obtainable in the scenario we wish to study. We introduce this dataset to a model that has no prior knowledge of the underlying physics.


Fermionic neural Gibbs states

Nys, Jannes, Carrasquilla, Juan

arXiv.org Artificial Intelligence

We introduce fermionic neural Gibbs states (fNGS), a variational framework for modeling finite-temperature properties of strongly interacting fermions. fNGS starts from a reference mean-field thermofield-double state and uses neural-network transformations together with imaginary-time evolution to systematically build strong correlations. Applied to the doped Fermi-Hubbard model, a minimal lattice model capturing essential features of strong electronic correlations, fNGS accurately reproduces thermal energies over a broad range of temperatures and interaction strengths, even at large dopings, for system sizes beyond the reach of exact methods. These results demonstrate a scalable route to studying finite-temperature properties of strongly correlated fermionic systems beyond one dimension with neural-network representations of quantum states.


Noise tolerance via reinforcement: Learning a reinforced quantum dynamics

Ramezanpour, Abolfazl

arXiv.org Artificial Intelligence

The performance of quantum simulations heavily depends on the efficiency of noise mitigation techniques and error correction algorithms. Reinforcement has emerged as a powerful strategy to enhance the efficiency of learning and optimization algorithms. In this study, we demonstrate that a reinforced quantum dynamics can exhibit significant robustness against interactions with a noisy environment. We study a quantum annealing process where, through reinforcement, the system is encouraged to maintain its current state or follow a noise-free evolution. A learning algorithm is employed to derive a concise approximation of this reinforced dynamics, reducing the total evolution time and, consequently, the system's exposure to noisy interactions. This also avoids the complexities associated with implementing quantum feedback in such reinforcement algorithms. The efficacy of our method is demonstrated through numerical simulations of reinforced quantum annealing with one- and two-qubit systems under Pauli noise.


On the role of non-linear latent features in bipartite generative neural networks

Bonnaire, Tony, Catania, Giovanni, Decelle, Aurélien, Seoane, Beatriz

arXiv.org Artificial Intelligence

We investigate the phase diagram and memory retrieval capabilities of bipartite energy-based neural networks, namely Restricted Boltzmann Machines (RBMs), as a function of the prior distribution imposed on their hidden units - including binary, multi-state, and ReLU-like activations. Drawing connections to the Hopfield model and employing analytical tools from statistical physics of disordered systems, we explore how the architectural choices and activation functions shape the thermodynamic properties of these models. Our analysis reveals that standard RBMs with binary hidden nodes and extensive connectivity suffer from reduced critical capacity, limiting their effectiveness as associative memories. To address this, we examine several modifications, such as introducing local biases and adopting richer hidden unit priors. These adjustments restore ordered retrieval phases and markedly improve recall performance, even at finite temperatures. Our theoretical findings, supported by finite-size Monte Carlo simulations, highlight the importance of hidden unit design in enhancing the expressive power of RBMs.
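The bipartite energy function and the block-Gibbs update at the center of this analysis are compact enough to sketch. The snippet below is a generic binary RBM with local biases (the modification the abstract examines corresponds to setting the bias vectors nonzero); sizes and the weight scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# binary RBM energy: E(v, h) = -v^T W h - a^T v - b^T h
nv, nh = 6, 4
W = 0.1 * rng.normal(size=(nv, nh))
a, b = np.zeros(nv), np.zeros(nh)   # local biases; the abstract's variant makes these nonzero

def energy(v, h):
    return -(v @ W @ h) - (a @ v) - (b @ h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# one block-Gibbs sweep: sample hidden units given visible, then visible given hidden
v = rng.integers(0, 2, size=nv).astype(float)
p_h = sigmoid(v @ W + b)             # P(h_j = 1 | v), factorized over hidden units
h = (rng.random(nh) < p_h).astype(float)
p_v = sigmoid(W @ h + a)             # P(v_i = 1 | h), factorized over visible units
v_new = (rng.random(nv) < p_v).astype(float)
e = energy(v_new, h)
```

Integrating out the hidden units of this energy yields a Hopfield-like model over the visible spins, which is the connection the statistical-physics analysis in the abstract exploits.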