Reservoir Systems


Mixed Delay/Nondelay Embeddings Based Neuromorphic Computing with Patterned Nanomagnet Arrays

Ti, Changpeng, Hassan, Usman, Vatsavai, Sairam Sri, McCarter, Margaret, Vasdev, Aastha, An, Jincheng, Achinuq, Barat, Welp, Ulrich, Cheung, Sen-Ching, Thakkar, Ishan G, Hastings, J. Todd

arXiv.org Artificial Intelligence

Patterned nanomagnet arrays (PNAs) have been shown to exhibit strong geometrically frustrated dipole interactions, and some PNAs also exhibit emergent domain-wall dynamics. Previous works have demonstrated methods to physically probe these magnetization dynamics to realize neuromorphic reservoir systems that exhibit chaotic dynamical behavior and high-dimensional nonlinearity. These prior PNA reservoir systems leverage the echo state property and the linear/nonlinear short-term memory of component reservoir nodes to map and preserve the dynamical information of the input time-series data in nondelay spatial embeddings. Such mappings enable these systems to imitate and predict/forecast the input time series. However, because these prior systems rely solely on the nondelay spatial embeddings obtained at component reservoir nodes, they require a massive number of reservoir nodes, a very large (i.e., high-dimensional) spatial embedding per node, or both, to achieve acceptable imitation and prediction accuracy, which reduces their practical feasibility. To address this shortcoming, we present a mixed delay/nondelay embeddings-based PNA reservoir system that uses a single PNA reservoir node and obtains a mixture of delay and nondelay embeddings of the dynamical information of the input time-series data. Our analysis shows that when these mixed delay/nondelay embeddings are used to train a perceptron at the output layer, our reservoir system outperforms existing PNA-based reservoir systems for the imitation of NARMA 2, NARMA 5, NARMA 7, and NARMA 10 time series, and for the short-term and long-term prediction of the Mackey-Glass time series.
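The mixed delay/nondelay idea in the abstract above can be illustrated in software. The sketch below is a minimal, hypothetical stand-in, not the paper's physical system: a single tanh node replaces the PNA reservoir node, NARMA-2 follows one common definition of that benchmark, and the trained "perceptron" is approximated by a ridge-regression linear readout over the node's current state (nondelay embedding) plus a few delayed copies of it (delay embedding).

```python
import numpy as np

rng = np.random.default_rng(0)

# Input sequence and NARMA-2 target (one common definition of the benchmark).
T = 2000
u = rng.uniform(0.0, 0.5, T)
y = np.zeros(T)
for t in range(1, T - 1):
    y[t + 1] = 0.4 * y[t] + 0.4 * y[t] * y[t - 1] + 0.6 * u[t] ** 3 + 0.1

# A single tanh node stands in for the single PNA reservoir node.
x = np.zeros(T)
for t in range(1, T):
    x[t] = np.tanh(0.9 * x[t - 1] + u[t])

# Mixed embedding: the nondelay readout x[t] plus delayed copies x[t - d].
delays = [0, 1, 2, 3, 4]
D = max(delays)
feats = np.stack([x[D - d: T - d] for d in delays], axis=1)
target = y[D:]

# Ridge-regression readout plays the role of the trained output perceptron.
n_train = 1500
A, b = feats[:n_train], target[:n_train]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ b)
pred = feats[n_train:] @ w
nmse = np.mean((pred - target[n_train:]) ** 2) / np.var(target[n_train:])
print(f"NARMA-2 test NMSE: {nmse:.4f}")
```

With only one node, the delayed copies supply the extra embedding dimensions that would otherwise have to come from many more nodes or a larger per-node spatial embedding.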


Universality of Real Minimal Complexity Reservoir

Fong, Robert Simon, Li, Boyu, Tiňo, Peter

arXiv.org Artificial Intelligence

Reservoir Computing (RC) models, a subclass of recurrent neural networks, are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir, with only the static readout layer being trained. This design circumvents the issues associated with backpropagating error signals through time, thereby enhancing both stability and training efficiency. RC models have been successfully applied across a broad range of application domains. Crucially, they have been demonstrated to be universal approximators of time-invariant dynamic filters with fading memory, under various settings of approximation norms and input driving sources. Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture, characterized by uniform ring connectivity and binary input-to-reservoir weights with an aperiodic sign pattern. For linear reservoirs, given the reservoir size, the reservoir construction has only one degree of freedom -- the reservoir cycle weight. Such architectures are particularly amenable to hardware implementations without significant performance degradation in many practical tasks. In this study we endow these observations with solid theoretical foundations by proving that SCRs operating in real domain are universal approximators of time-invariant dynamic filters with fading memory. Our results supplement recent research showing that SCRs in the complex domain can approximate, to arbitrary precision, any unrestricted linear reservoir with a non-linear readout. We furthermore introduce a novel method to drastically reduce the number of SCR units, making such highly constrained architectures natural candidates for low-complexity hardware implementations. Our findings are supported by empirical studies on real-world time series datasets.
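A Simple Cycle Reservoir is concrete enough to sketch directly. The toy below is an illustrative assumption throughout (sizes, scales, and the memory task are not from the paper): it builds the uniform ring with a single cycle weight, draws binary input weights whose aperiodic sign pattern comes from the digits of pi (one deterministic choice; only aperiodicity matters), and fits a linear readout to recall the input from three steps earlier.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50     # reservoir size
r = 0.8    # the single cycle weight: the one degree of freedom
# Uniform ring connectivity: each unit feeds the next with weight r.
W = r * np.roll(np.eye(N), 1, axis=0)

# Binary input weights with an aperiodic sign pattern taken from pi's digits.
digits = f"{np.pi:.50f}".replace(".", "")[:N]
v = 0.5 * np.array([1.0 if int(d) % 2 else -1.0 for d in digits])

# Drive the SCR and fit a linear readout to recall u[t - 3].
T = 3000
u = rng.uniform(-1, 1, T)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + v * u[t])
    X[t] = x

lag, split = 3, 2000
feats, target = X[lag:], u[:-lag]   # align X[t] with u[t - lag]
w, *_ = np.linalg.lstsq(feats[:split], target[:split], rcond=None)
pred = feats[split:] @ w
nmse = np.mean((pred - target[split:]) ** 2) / np.var(target[split:])
print(f"lag-{lag} recall NMSE: {nmse:.4f}")
```

Only the readout weights `w` are trained; the ring weight `r` and the sign pattern are the entire reservoir specification, which is what makes the architecture attractive for hardware.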


Universality of reservoir systems with recurrent neural networks

Yasumoto, Hiroki, Tanaka, Toshiyuki

arXiv.org Artificial Intelligence

The approximation capability of reservoir systems whose reservoir is a recurrent neural network (RNN) is discussed. In our problem setting, a reservoir system approximates a set of functions just by adjusting its linear readout while the reservoir is fixed. We show what we call uniform strong universality of a family of RNN reservoir systems for a certain class of functions to be approximated: for any ε > 0, we can construct a sufficiently large RNN reservoir system whose approximation error for each function in the class is bounded from above by ε. Such RNN reservoir systems are constructed via parallel concatenation of RNN reservoirs.


Learning strange attractors with reservoir systems

Grigoryeva, Lyudmila, Hart, Allen, Ortega, Juan-Pablo

arXiv.org Artificial Intelligence

This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement according to which, randomly generated linear state-space representations of generic observations of an invertible dynamical system carry in their wake an embedding of the phase space dynamics into the chosen Euclidean state space. This embedding coincides with a natural generalized synchronization that arises in this setup and that yields a topological conjugacy between the state-space dynamics driven by the generic observations of the dynamical system and the dynamical system itself. This result provides additional tools for the representation, learning, and analysis of chaotic attractors and sheds additional light on the reservoir computing phenomenon that appears in the context of recurrent neural networks.


Long-term prediction of chaotic systems with recurrent neural networks

Fan, Huawei, Jiang, Junjie, Zhang, Chun, Wang, Xingang, Lai, Ying-Cheng

arXiv.org Machine Learning

The prediction horizon demonstrated so far has been about half a dozen Lyapunov times. Is it possible to significantly extend the prediction time beyond what has been achieved to date? We articulate a scheme incorporating time-dependent but sparse data inputs into reservoir computing and demonstrate that such rare "updates" of the actual state practically enable an arbitrarily long prediction horizon for a variety of chaotic systems. A physical understanding based on the theory of temporal synchronization is developed. Starting from the same initial condition, a well-trained reservoir system can generate a trajectory that stays close to that of the target system for a finite amount of time, realizing short-term prediction.


Risk bounds for reservoir computing

Gonon, Lukas, Grigoryeva, Lyudmila, Ortega, Juan-Pablo

arXiv.org Machine Learning

We analyze the practices of reservoir computing in the framework of statistical learning theory. In particular, we derive finite sample upper bounds for the generalization error committed by specific families of reservoir computing systems when processing discrete-time inputs under various hypotheses on their dependence structure. Non-asymptotic bounds are explicitly written down in terms of the multivariate Rademacher complexities of the reservoir systems and the weak dependence structure of the signals that are being handled. This allows us, in particular, to determine the minimal number of observations needed in order to guarantee a prescribed estimation accuracy with high probability for a given reservoir family. At the same time, the asymptotic behavior of the devised bounds guarantees the consistency of the empirical risk minimization procedure for various hypothesis classes of reservoir functionals.


Echo state networks are universal

Grigoryeva, Lyudmila, Ortega, Juan-Pablo

arXiv.org Artificial Intelligence

Many recently introduced machine learning techniques in the context of dynamical problems have much in common with system identification procedures developed over the last decades for applications in signal treatment, circuit theory and, in general, systems theory. In these problems, system knowledge is only available in the form of input-output observations, and the task consists in finding or learning a model that approximates it, mainly for forecasting or classification purposes. An important goal in that context is to find a family of transformations that is both computationally feasible and versatile enough to reproduce a rich variety of patterns just by modifying a limited number of procedural parameters. This feature is usually referred to as universality. A first solution to this problem was pioneered in the works of Fréchet [Frec 10] and Volterra [Volt 30] one century ago, when they proved that finite Volterra series can be used to uniformly approximate continuous functionals defined on compact sets of continuous functions. These results were further extended in the 1950s by the MIT school led by N. Wiener [Wien 58, Bril 58, Geor 59], but always under compactness assumptions on the input space and the time interval on which inputs are defined. A major breakthrough was the generalization to infinite time intervals carried out by Boyd and Chua in [Boyd 85] using the so-called fading memory property. In this paper we address that problem for transformations or filters of discrete-time signals of infinite length that have the fading memory property. The approximating set that we use is generated by nonlinear state-space transformations and is referred to as reservoir computers (RC) [Jaeg 10, Jaeg 04, Maas 02, Maas 11, Croo 07, Vers 07, Luko 09] or reservoir systems.
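The fading memory property central to this line of work has a simple operational reading: a state-space filter with it asymptotically forgets its initial condition, so its output depends only on the recent input history. The sketch below (sizes and scales are illustrative choices) shows this for a tanh state-space map whose weight matrix is rescaled to spectral norm below one; since tanh is 1-Lipschitz, the state update is then a contraction and any two initializations converge under the same input.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonlinear state-space filter x_t = tanh(W x_{t-1} + v u_t).  Scaling W to
# spectral norm 0.8 < 1 makes the update a contraction, which guarantees the
# fading-memory / echo-state behavior demonstrated below.
N = 100
W = rng.normal(size=(N, N))
W *= 0.8 / np.linalg.norm(W, 2)
v = rng.normal(size=N)
u = rng.uniform(-1, 1, 300)

# Drive the same filter from two different initial states and track the gap.
xa, xb = rng.normal(size=N), rng.normal(size=N)
gap = []
for ut in u:
    xa = np.tanh(W @ xa + v * ut)
    xb = np.tanh(W @ xb + v * ut)
    gap.append(np.linalg.norm(xa - xb))
print(f"initial gap {gap[0]:.3f}, final gap {gap[-1]:.2e}")
```

The gap shrinks by at least the factor 0.8 per step, so after the run the two trajectories are indistinguishable: the filter's response is determined by the inputs, not the initialization.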


Machine-learning prediction of fluid variables from data using reservoir computing

Nakai, Kengo, Saiki, Yoshitaka

arXiv.org Machine Learning

We predict both microscopic and macroscopic variables of a chaotic fluid flow using reservoir computing. In our prediction procedure, we assume no prior knowledge of the physical model describing the fluid flow, except that its behavior is complex but deterministic. We present two ways of predicting the complex behavior: the first, called partial-prediction, requires continued knowledge of partial time-series data during the prediction as well as past time-series data, while the second, called full-prediction, requires only past time-series data as training data. In the first case, we are able to predict the long-time motion of microscopic fluid variables. In the second case, we show that reservoir dynamics constructed from only past data of energy functions can predict the future behavior of the energy functions and reproduce the energy spectrum. This implies that the obtained reservoir system, constructed without knowledge of microscopic data, is equivalent to the dynamical system describing the macroscopic behavior of the energy functions.


Phoneme Recognition with Large Hierarchical Reservoirs

Triefenbach, Fabian, Jalalvand, Azarakhsh, Schrauwen, Benjamin, Martens, Jean-Pierre

Neural Information Processing Systems

Automatic speech recognition has gradually improved over the years, but the reliable recognition of unconstrained speech is still not within reach. In order to achieve a breakthrough, many research groups are now investigating new methodologies that have the potential to outperform the Hidden Markov Model technology that is at the core of all present commercial systems. In this paper, it is shown that the recently introduced concept of Reservoir Computing might form the basis of such a methodology. In a limited amount of time, a reservoir system that can recognize the elementary sounds of continuous speech has been built. The system already achieves a state-of-the-art performance, and there is evidence that the margin for further improvements is still significant.
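The hierarchical idea can be sketched with two stacked reservoirs: the first is driven by the raw signal and the second by the first layer's states, with a linear readout on top. Everything below is an illustrative assumption rather than the paper's system: the task is a toy framewise classification standing in for phoneme labels, and the sizes and 0.8 spectral-norm scaling are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def reservoir(inputs, n_in, n_res, seed):
    """Drive one tanh reservoir and return its state sequence."""
    g = np.random.default_rng(seed)
    W = g.normal(size=(n_res, n_res))
    W *= 0.8 / np.linalg.norm(W, 2)          # contractive recurrent weights
    V = g.normal(size=(n_res, n_in)) / np.sqrt(n_in)
    X = np.zeros((len(inputs), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(inputs):
        x = np.tanh(W @ x + V @ np.atleast_1d(ut))
        X[t] = x
    return X

# Toy framewise labels standing in for phonemes: is the causal 5-sample
# moving average of the signal positive?
T = 2000
u = rng.uniform(-1, 1, T)
labels = (np.convolve(u, np.ones(5) / 5)[:T] > 0).astype(float)

# Layer 1 sees the raw signal; layer 2 sees layer 1's states (the
# hierarchical coupling used, in spirit, by large hierarchical reservoirs).
X1 = reservoir(u, 1, 80, seed=10)
X2 = reservoir(X1, 80, 80, seed=11)

# Linear readout on the top layer, thresholded into class decisions.
w, *_ = np.linalg.lstsq(X2[:1500], labels[:1500], rcond=None)
acc = np.mean((X2[1500:] @ w > 0.5) == (labels[1500:] > 0.5))
print(f"frame accuracy: {acc:.3f}")
```

Stacking lets the upper reservoir operate on an already-nonlinear, temporally integrated representation, which is the intuition behind using reservoir hierarchies for phoneme-level structure.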