Sornborger, Andrew
Probabilistic Flux Limiters
Nguyen-Fotiadis, Nga T. T., Chiodi, Robert, McKerns, Michael, Livescu, Daniel, Sornborger, Andrew
The stable numerical integration of shocks in compressible flow simulations relies on the reduction or elimination of Gibbs phenomena (unstable, spurious oscillations). A popular method to virtually eliminate Gibbs oscillations caused by numerical discretization in under-resolved simulations is to use a flux limiter. A wide range of flux limiters has been studied in the literature, with recent interest in their optimization via machine learning methods trained on high-resolution datasets. The common use of flux limiters in numerical codes as plug-and-play blackbox components makes them key targets for design improvement. Moreover, while aleatoric (inherent randomness) and epistemic (lack of knowledge) uncertainties are commonplace in fluid dynamical systems, these effects are generally ignored in the design of flux limiters. Even for deterministic dynamical models, numerical uncertainty is introduced via the coarse-graining required when computational power is insufficient to resolve all scales of motion. Here, we introduce a conceptually distinct type of flux limiter that is designed to handle the effects of randomness in the model and uncertainty in model parameters. This new {\it probabilistic flux limiter}, learned with high-resolution data, consists of a set of flux limiting functions with associated probabilities, which define the frequencies with which each function is selected for use. Using the example of Burgers' equation, we show that a machine-learned, probabilistic flux limiter may be used in a shock capturing code to more accurately capture shock profiles. In particular, we show that our probabilistic flux limiter outperforms standard limiters, and can be successively improved upon (up to a point) by expanding the set of probabilistically chosen flux limiting functions.
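As a rough, illustrative sketch of the selection mechanism described above (not the authors' trained limiters), the following Python snippet draws one limiter per time step from a small family of standard limiters, with minmod, van Leer, and superbee standing in for the machine-learned limiting functions, and with placeholder selection probabilities, grid, and CFL number, inside a simple MUSCL/Rusanov update for Burgers' equation; whether the random draw happens per step, per cell, or per interface is a design choice not fixed here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in limiter family phi(r); a learned probabilistic limiter would
    # replace these with its trained limiting functions.
    def minmod(r):   return np.maximum(0.0, np.minimum(1.0, r))
    def van_leer(r): return (r + np.abs(r)) / (1.0 + np.abs(r))
    def superbee(r): return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                                       np.minimum(r, 2.0)))

    limiters = [minmod, van_leer, superbee]
    probs = [0.5, 0.3, 0.2]  # hypothetical selection frequencies

    def limited_slopes(u):
        """Slope-limited differences, with the limiter drawn at random."""
        phi = rng.choice(limiters, p=probs)  # probabilistic limiter selection
        du = np.diff(u)
        r = np.divide(du[:-1], du[1:], out=np.zeros_like(du[:-1]), where=du[1:] != 0)
        s = np.zeros_like(u)
        s[1:-1] = phi(r) * du[1:]
        return s

    def step(u, dx, dt):
        """One MUSCL/Rusanov step for u_t + (u^2/2)_x = 0, periodic boundaries."""
        s = limited_slopes(u)
        uL = u + 0.5 * s               # left state at interface i+1/2
        uR = np.roll(u - 0.5 * s, -1)  # right state at interface i+1/2
        a = np.maximum(np.abs(uL), np.abs(uR))
        f = 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * a * (uR - uL)
        return u - dt / dx * (f - np.roll(f, 1))

    # Usage: let a sine wave steepen into a shock.
    N = 200
    dx = 1.0 / N
    u = np.sin(2.0 * np.pi * np.arange(N) * dx)
    for _ in range(300):
        u = step(u, dx, dt=0.4 * dx)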
Generalization in quantum machine learning from few training data
Caro, Matthias C., Huang, Hsin-Yuan, Cerezo, M., Sharma, Kunal, Sornborger, Andrew, Cincio, Lukasz, Coles, Patrick J.
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number $N$ of training data points. We show that the generalization error of a quantum machine learning model with $T$ trainable gates scales at worst as $\sqrt{T/N}$. When only $K \ll T$ gates have undergone substantial change in the optimization process, we prove that the generalization error improves to $\sqrt{K / N}$. Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
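Written out schematically (with notation introduced here for illustration, and with constants and any logarithmic factors omitted since the abstract does not state them), the scaling above reads

$$ \mathrm{gen}(\boldsymbol{\alpha}) \;=\; \big|\, R(\boldsymbol{\alpha}) - \hat{R}_N(\boldsymbol{\alpha}) \,\big| \;\in\; O\!\left(\sqrt{T/N}\right), \qquad \mathrm{gen}(\boldsymbol{\alpha}) \;\in\; O\!\left(\sqrt{K/N}\right) \ \text{if only } K \ll T \text{ gates change substantially}, $$

where $R(\boldsymbol{\alpha})$ is the expected (true) risk of the trained model with parameters $\boldsymbol{\alpha}$ and $\hat{R}_N(\boldsymbol{\alpha})$ is the empirical risk on the $N$ training points.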
The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware
Renner, Alpha, Sheldon, Forrest, Zlotnik, Anatoly, Tao, Louis, Sornborger, Andrew
Spike-based learning in plastic neuronal networks is playing increasingly key roles in both theoretical neuroscience and neuromorphic computing. The brain learns in part by modifying the synaptic strengths between neurons and neuronal populations. While specific synaptic plasticity or neuromodulatory mechanisms may vary in different brain regions, it is becoming clear that a significant level of dynamical coordination between disparate neuronal populations must exist, even within an individual neural circuit [1]. Classically, backpropagation (BP, and other learning algorithms) has been essential for supervised learning in artificial neural networks (ANNs). Although the question of whether or not BP operates in the brain is still an outstanding issue [2], BP does solve the problem of how a global objective function can be related to local synaptic modification in a network.

There is particular interest in deep learning, which is a central tool in modern machine learning. Deep learning relies on a layered, feedforward network similar to the early layers of the visual cortex, with threshold nonlinearities at each layer that resemble mean-field approximations of neuronal integrate-and-fire models. While feedforward networks are readily translated to neuromorphic hardware [6-8], the far more computationally intensive training of these networks 'on chip' has proven elusive as the structure of backpropagation makes the algorithm notoriously difficult to implement in a neural circuit [9, 10]. A feasible neural implementation of the backpropagation algorithm has gained renewed scrutiny with the rise of new neuromorphic computational architectures that feature local synaptic plasticity [5, 11-13]. Because of the well-known difficulties, neuromorphic …
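To make the last point concrete, here is a minimal Python sketch of textbook, rate-based backpropagation for a two-layer network, in which a single global objective (squared error) is reduced using only locally available pre- and post-synaptic quantities; this is the standard ANN algorithm, not the spiking, on-chip implementation developed in the paper, and the layer sizes, learning rate, and data are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=(4, 1))   # input vector
    t = np.array([[1.0]])         # target output

    W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    eta = 0.5                                  # learning rate

    for _ in range(100):
        # Forward pass
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # Global objective: squared error between output and target
        loss = 0.5 * float(np.sum((y - t) ** 2))
        # Backward pass: error signals propagated layer by layer (chain rule)
        delta2 = (y - t) * y * (1.0 - y)          # output-layer error
        delta1 = (W2.T @ delta2) * h * (1.0 - h)  # hidden-layer error
        # Local updates: each weight change uses only its own pre/post terms
        W2 -= eta * delta2 @ h.T
        W1 -= eta * delta1 @ x.T

    print(f"final squared-error loss: {loss:.4f}")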
Long-time simulations with high fidelity on quantum hardware
Gibbs, Joe, Gili, Kaitlin, Holmes, Zoë, Commeau, Benjamin, Arrasmith, Andrew, Cincio, Lukasz, Coles, Patrick J., Sornborger, Andrew
Moderate-size quantum computers are now publicly accessible over the cloud, opening the exciting possibility of performing dynamical simulations of quantum systems. However, while rapidly improving, these devices have short coherence times, limiting the depth of algorithms that may be successfully implemented. Here we demonstrate that, despite these limitations, it is possible to implement long-time, high fidelity simulations on current hardware. Specifically, we simulate an XY-model spin chain on the Rigetti and IBM quantum computers, maintaining a fidelity of at least 0.9 for over 600 time steps. This is a factor of 150 longer than is possible using the iterated Trotter method. Our simulations are performed using a new algorithm that we call the fixed state Variational Fast Forwarding (fsVFF) algorithm. This algorithm decreases the circuit depth and width required for a quantum simulation by finding an approximate diagonalization of a short time evolution unitary. Crucially, fsVFF only requires finding a diagonalization on the subspace spanned by the initial state, rather than on the total Hilbert space as with previous methods, substantially reducing the required resources.
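The fast-forwarding idea at the heart of the algorithm can be illustrated classically: once a short-time evolution unitary $U(\Delta t)$ is (approximately) diagonalized as $U(\Delta t) \approx W D W^{-1}$, evolution over $n$ time steps is obtained at fixed depth via $U(n\Delta t) \approx W D^{n} W^{-1}$. The Python sketch below checks this identity by exact linear algebra on a small 3-site XY chain; it illustrates the principle only, not the variational, subspace-restricted diagonalization that fsVFF performs on quantum hardware, and the system size, time step, and initial state are placeholders.

    import numpy as np
    from functools import reduce
    from scipy.linalg import expm

    # 3-site XY chain: H = sum_j (X_j X_{j+1} + Y_j Y_{j+1})
    I2 = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    def kron_all(ops): return reduce(np.kron, ops)
    H = sum(kron_all([P if i in (j, j + 1) else I2 for i in range(3)])
            for j in range(2) for P in (X, Y))

    dt = 0.05
    U_dt = expm(-1j * H * dt)            # short-time evolution unitary

    # Diagonalize the short-time unitary once: U_dt = W diag(d) W^{-1}
    d, W = np.linalg.eig(U_dt)
    W_inv = np.linalg.inv(W)

    # Fast-forward to n steps at fixed cost and compare with n sequential steps
    n = 600
    U_ff = W @ np.diag(d ** n) @ W_inv
    U_ref = np.linalg.matrix_power(U_dt, n)

    psi0 = np.zeros(8, dtype=complex)
    psi0[0] = 1.0                        # initial computational-basis state
    overlap = abs(np.vdot(U_ff @ psi0, U_ref @ psi0))
    print(f"|<psi_ff|psi_ref>| = {overlap:.6f}")   # ~1.0 up to numerical error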