Leichenauer, Stefan
Retentive Neural Quantum States: Efficient Ansätze for Ab Initio Quantum Chemistry
Knitter, Oliver, Zhao, Dan, Stokes, James, Ganahl, Martin, Leichenauer, Stefan, Veerapaneni, Shravan
Neural-network quantum states (NQS) have emerged as a powerful application of quantum-inspired deep learning for variational Monte Carlo methods, offering a competitive alternative to existing techniques for identifying ground states of quantum problems. A significant advancement toward improving the practical scalability of NQS has been the incorporation of autoregressive models, most recently transformers, as variational ansätze. Transformers learn sequence information with greater expressiveness than recurrent models, but at the cost of increased time complexity with respect to sequence length. We explore the use of the retentive network (RetNet), a recurrent alternative to transformers, as an ansatz for solving electronic ground state problems in ab initio quantum chemistry. Unlike transformers, RetNets avoid this time complexity bottleneck by processing data in parallel during training and recurrently during inference. We give a simple computational cost estimate for the RetNet and compare it directly with analogous estimates for transformers, establishing a clear threshold ratio of problem size to model size beyond which the RetNet's time complexity outperforms that of the transformer. Though this efficiency can come at the expense of decreased expressiveness relative to the transformer, we close this gap through training strategies that leverage the autoregressive structure of the model, namely variational neural annealing. Our findings support the RetNet as a means of improving the time complexity of NQS without sacrificing accuracy. We provide further evidence that the improvements from neural annealing observed in our ablations extend beyond the RetNet architecture, suggesting it would serve as an effective general training strategy for autoregressive NQS.
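As a rough illustration of the comparison described above (textbook per-layer operation counts, not the paper's own accounting), let $N$ be the sequence length (e.g. the number of spin orbitals) and $d$ the model dimension. Autoregressively sampling a full configuration then costs roughly

$$ C_{\text{transformer}} \;\sim\; O\!\left(N^{2} d + N d^{2}\right), \qquad C_{\text{RetNet}} \;\sim\; O\!\left(N d^{2}\right) $$

per layer, since the transformer's attention over previously generated tokens contributes the $N^{2} d$ term while the RetNet's recurrent update replaces it with a fixed-size state. Under these assumptions the RetNet becomes cheaper once $N$ grows past roughly $d$, which is the kind of problem-to-model-size threshold referred to above.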
Entanglement and Tensor Networks for Supervised Image Classification
Martyn, John, Vidal, Guifre, Roberts, Chase, Leichenauer, Stefan
Tensor networks, originally designed to address computational problems in quantum many-body physics, have recently been applied to machine learning tasks. However, compared to quantum physics, where the reasons for the success of tensor network approaches over the last 30 years are well understood, very little is yet known about why these techniques work for machine learning. The goal of this paper is to investigate the entanglement properties of tensor network models in a current machine learning application, in order to uncover general principles that may guide future developments. We revisit the use of tensor networks for supervised image classification using the MNIST data set of handwritten digits, as pioneered by Stoudenmire and Schwab [Adv. in Neur. Inform. Proc. Sys. 29, 4799 (2016)]. First, we hypothesize about which state the tensor network might be learning during training. For that purpose, we propose a plausible candidate state $|\Sigma_{\ell}\rangle$ (built as a superposition of product states corresponding to images in the training set) and investigate its entanglement properties. We conclude that $|\Sigma_{\ell}\rangle$ is so robustly entangled that it cannot be approximated by the tensor network used in that work, which must therefore be representing a very different state. Second, we use tensor networks with a block product structure, in which entanglement is restricted within small blocks of $n \times n$ pixels/qubits. We find that these states are extremely expressive (e.g. training accuracy of $99.97\%$ already for $n=2$), suggesting that long-range entanglement may not be essential for image classification. However, in our current implementation, optimization leads to over-fitting, resulting in test accuracies that are not competitive with other current approaches.
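To make the candidate state concrete: a natural form for $|\Sigma_{\ell}\rangle$, assuming the standard pixel-wise feature map of Stoudenmire and Schwab (an assumption here, since the abstract does not spell it out), is the equal-weight superposition over the training images $\mathcal{T}_{\ell}$ of class $\ell$,

$$ |\Sigma_{\ell}\rangle \;\propto\; \sum_{x \in \mathcal{T}_{\ell}} \; \bigotimes_{j=1}^{N} \Big( \cos\!\big(\tfrac{\pi}{2} x_{j}\big)\, |0\rangle + \sin\!\big(\tfrac{\pi}{2} x_{j}\big)\, |1\rangle \Big), $$

where $x_{j} \in [0,1]$ is the $j$-th pixel of image $x$ and each pixel is encoded as a single qubit. Each term is a product state, but the superposition over many images is what can generate the robust entanglement discussed above.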
Anomaly Detection with Tensor Networks
Wang, Jinhui, Roberts, Chase, Vidal, Guifre, Leichenauer, Stefan
Originating from condensed matter physics, tensor networks are compact representations of high-dimensional tensors. In this paper, the prowess of tensor networks is demonstrated on the particular task of one-class anomaly detection. We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space whose dimension is exponential in the number of original features. The linearity of our model enables us to ensure a tight fit around training instances by penalizing the model's global tendency to predict normality via its Frobenius norm, a task that is infeasible for most deep learning models. Our method outperforms deep and classical algorithms on tabular datasets and produces competitive results on image datasets, despite not exploiting the locality of images.
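One way to realize the objective sketched above (an illustrative guess at its form, not necessarily the paper's exact loss) is to score a point $x$ by how strongly the learned linear map $W$ responds to its exponentially large feature vector $\phi(x)$, while globally suppressing $W$ through its Frobenius norm:

$$ \mathcal{L}(W) \;=\; -\frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} \log \big\| W \phi(x) \big\|^{2} \;+\; \lambda\, \| W \|_{F}^{2}. $$

The first term pulls the model toward the normal training data $\mathcal{D}$, and the Frobenius penalty keeps it from predicting normality everywhere; both $\|W\phi(x)\|$ and $\|W\|_{F}$ can be contracted efficiently when $W$ is parameterized as a tensor network, which is what makes such a penalty tractable despite the exponential dimension.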
TensorNetwork for Machine Learning
Efthymiou, Stavros, Hidary, Jack, Leichenauer, Stefan
Tensor networks have seen numerous applications in the physical sciences [2-34], but there has been significant progress recently in applying the same methods to problems in machine learning [35-45]. The TensorNetwork library [1] was created to facilitate this research and accelerate the adoption of tensor network methods by the ML community. In a previous paper [46] we showed how TensorNetwork can be used in a physics setting. Here we illustrate how to use a matrix product state (MPS) tensor network to classify MNIST and Fashion-MNIST images. The basic technique was applied to the MNIST dataset by Stoudenmire and Schwab [35], who adapted the DMRG algorithm from physics [3] to train the network.
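The sketch below shows the contraction behind an MPS image classifier in plain NumPy. It is not the TensorNetwork-library code from [46] or the paper's implementation; the shapes, the 14x14 input size, and the placement of the label leg on an extra tensor at the centre bond are illustrative assumptions.

import numpy as np

N, chi, n_labels = 196, 10, 10        # pixels (e.g. 14x14), bond dim, classes

def feature_map(image):
    """Map each pixel in [0, 1] to a 2-dimensional local feature vector."""
    return np.stack([np.cos(np.pi * image / 2),
                     np.sin(np.pi * image / 2)], axis=1)     # shape (N, 2)

rng = np.random.default_rng(0)
# MPS cores with shape (left bond, physical, right bond); boundary bonds = 1.
cores = [rng.normal(scale=0.1, size=(1 if i == 0 else chi, 2,
                                     1 if i == N - 1 else chi))
         for i in range(N)]
# Extra tensor carrying the label index, inserted at the centre bond.
label_core = rng.normal(scale=0.1, size=(chi, n_labels, chi))

def predict(image):
    """Return one (unnormalized) score per class for a flat image array."""
    phi = feature_map(image)
    left = np.ones(1)                                  # left environment
    for i in range(N // 2):
        left = np.einsum('l,lpr,p->r', left, cores[i], phi[i])
    right = np.ones(1)                                 # right environment
    for i in reversed(range(N // 2, N)):
        right = np.einsum('lpr,p,r->l', cores[i], phi[i], right)
    return np.einsum('l,lyr,r->y', left, label_core, right)

scores = predict(rng.random(N))                        # a random stand-in image
print(scores.shape)                                    # -> (10,)

Training then optimizes the cores against a classification loss, either by gradient descent or by DMRG-style sweeps as in [35].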
TensorNetwork on TensorFlow: A Spin Chain Application Using Tree Tensor Networks
Milsted, Ashley, Ganahl, Martin, Leichenauer, Stefan, Hidary, Jack, Vidal, Guifre
TensorNetwork is an open source library for implementing tensor network algorithms in TensorFlow. We describe a tree tensor network (TTN) algorithm for approximating the ground state of either a periodic quantum spin chain (1D) or a lattice model on a thin torus (2D), and implement the algorithm using TensorNetwork. We use a standard energy minimization procedure over a TTN ansatz with bond dimension $\chi$, with a computational cost that scales as $O(\chi^4)$. For bond dimensions $\chi \in [32, 256]$ we compare CPU and GPU execution and observe significant computational speed-ups, up to a factor of $100$, when running on a GPU with the TensorNetwork library.
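As a sketch of where the $O(\chi^4)$ scaling comes from (assuming a binary tree of isometries, which the abstract does not state explicitly): ascending a single-site operator $o$ through an isometry $w : \mathbb{C}^{\chi} \otimes \mathbb{C}^{\chi} \to \mathbb{C}^{\chi}$ amounts to the contraction

$$ \mathcal{A}(o)^{c}{}_{c'} \;=\; \sum_{a, a', b} w^{c}{}_{a b}\; o^{a}{}_{a'}\; \bar{w}^{\,c'}{}_{a' b}, $$

and performing the two pairwise contractions in sequence involves four $\chi$-dimensional indices at a time, i.e. $O(\chi^{4})$ operations each. Repeating this for every isometry in the tree during each energy-minimization sweep gives the $O(\chi^{4})$ per-update cost quoted above.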
TensorNetwork: A Library for Physics and Machine Learning
Roberts, Chase, Milsted, Ashley, Ganahl, Martin, Zalcman, Adam, Fontaine, Bruce, Zou, Yijian, Hidary, Jack, Vidal, Guifre, Leichenauer, Stefan
Tensor networks are sparse data structures engineered for the efficient representation and manipulation of very high-dimensional data. They have largely been developed and used in condensed matter physics [2-19], quantum chemistry [20-23], statistical mechanics [24-27], quantum field theory [28, 29], and even quantum gravity and cosmology [30-34]. Substantial progress has been made recently in applying tensor networks to machine learning.