Subramanian, Devika
Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM
Chattopadhyay, Ashesh, Hassanzadeh, Pedram, Palem, Krishna, Subramanian, Devika
In this paper, the performance of three deep learning methods for predicting the short-term evolution and reproducing the long-term statistics of a multi-scale spatio-temporal Lorenz 96 system is examined. The methods are: echo state network (a type of reservoir computing, RC-ESN), deep feed-forward artificial neural network (ANN), and recurrent neural network with long short-term memory (RNN-LSTM). This Lorenz system has three tiers of nonlinearly interacting variables representing slow/large-scale ($X$), intermediate ($Y$), and fast/small-scale ($Z$) processes. For training and testing, only $X$ is available; $Y$ and $Z$ are never known or used. It is shown that RC-ESN substantially outperforms ANN and RNN-LSTM for short-term prediction, e.g., accurately forecasting the chaotic trajectories for hundreds of the numerical solver's time steps, equivalent to several Lyapunov timescales. RNN-LSTM and ANN show some prediction skill as well, with RNN-LSTM outperforming ANN. Furthermore, even after losing the trajectory, the data predicted by RC-ESN and RNN-LSTM have probability density functions (PDFs) that closely match the true PDF, even at the tails; the PDF of the ANN-predicted data, however, deviates from the true PDF. Implications, caveats, and applications to data-driven and inexact, data-assisted surrogate modeling of complex dynamical systems such as weather/climate are discussed.
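Since only the slow variables $X$ are used for training and prediction, the core of the RC-ESN approach described above is a randomly generated, fixed reservoir driven by the observed series, with only a linear readout trained by ridge regression. The following is a minimal, self-contained Python/NumPy sketch of that idea; the reservoir size, spectral radius, leak rate, ridge penalty, and the placeholder training series are illustrative assumptions and do not reproduce the paper's configuration or its multi-scale Lorenz 96 data.

```python
# Minimal echo state network (RC-ESN) sketch for one-step-ahead forecasting.
# Hyperparameters and the synthetic series below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed training data: a multivariate series X[t] of shape (T, 8). In the
# paper this would be the slow variables X of the multi-scale Lorenz 96 system;
# here a normalized placeholder series stands in for illustration.
T, n_in = 5000, 8
X = np.cumsum(rng.standard_normal((T, n_in)), axis=0)
X = (X - X.mean(0)) / X.std(0)

# Sparse random reservoir, rescaled to a target spectral radius.
n_res, rho, leak, ridge = 500, 0.9, 1.0, 1e-6
W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.02)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

# Drive the reservoir with the training series and collect its states.
states = np.zeros((T, n_res))
r = np.zeros(n_res)
for t in range(T - 1):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ X[t])
    states[t + 1] = r

# Train the linear readout by ridge regression: state at time t -> X[t],
# i.e., one-step-ahead prediction from inputs seen up to time t-1.
warmup = 100
R, Y = states[warmup:], X[warmup:]
W_out = np.linalg.solve(R.T @ R + ridge * np.eye(n_res), R.T @ Y).T

# Autonomous (free-running) prediction: feed the readout back as input.
n_steps, preds = 200, []
x = X[-1]
for _ in range(n_steps):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ x)
    x = W_out @ r
    preds.append(x)
preds = np.array(preds)  # predicted trajectory, shape (n_steps, n_in)
```

Only `W_out` is learned; `W` and `W_in` stay fixed after initialization, which is what distinguishes reservoir computing from the fully trained ANN and RNN-LSTM baselines compared in the abstract.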
AI Theory and Practice: A Discussion on Hard Challenges and Opportunities Ahead
Horvitz, Eric (Microsoft Research) | Getoor, Lise (University of Maryland) | Guestrin, Carlos (Carnegie Mellon University) | Hendler, James (Rensselaer Polytechnic Institute) | Konstan, Joseph (University of Minnesota) | Subramanian, Devika (Rice University) | Wellman, Michael (University of Michigan) | Kautz, Henry (University of Rochester)
Eric Horvitz: So, we have a variety of people here with different interests and backgrounds that I asked to talk about not just the key challenges ahead but potential opportunities and promising pathways, trajectories to solving those problems, and their predictions about how R&D might proceed in terms of the timing of various kinds of development over time. I asked the panelists briefly to frame their comments, sharing a little bit about fundamental questions, such as, "What is the research goal?" Not everybody stays up late at night hunched over a computer or a simulation or a robotic system, pondering the foundations of intelligence and human-level AI. We have here today Lise Getoor from the University of Maryland; Devika Subramanian, who comes to us from Rice University; we have Carlos Guestrin from Carnegie Mellon University (CMU); James Hendler from Rensselaer Polytechnic Institute (RPI); Mike Wellman at the University of Michigan; Henry Kautz at the University of Rochester; and Joe Konstan, who comes to us from the Midwest, as our Minneapolis person here on the panel.

Joe Konstan: I was actually surprised when you [...] I think of myself at the core in human-computer interaction. So I went back and started looking at what I knew of artificial intelligence to try to see where the path forward was, and I was inspired by the past. [...] anticipate the liability and insurance industry; and the other one, that it was a human interface problem, that people don't necessarily want to go and type a bunch of yes/no questions into a computer to get an answer, even with a rule-based explanation, that if you'd taken that just a step further and solved the human problem, it might have worked. Related to that, I was remembering a bunch of these smart house projects. And I have to admit I think everyone hates smart spaces. [...] there's nobody there, do you warn people and give them a chance to answer? There's no good answer to this question. I can tell you if that person is in bed asleep, the answer is no, don't wake them up [...]