Shen, Chaopeng
Update hydrological states or meteorological forcings? Comparing data assimilation methods for differentiable hydrologic models
Jamaat, Amirmoez, Song, Yalan, Rahmani, Farshid, Liu, Jiangtao, Lawson, Kathryn, Shen, Chaopeng
Data assimilation (DA) enables hydrologic models to update their internal states using near-real-time observations for more accurate forecasts. With deep neural networks like long short-term memory (LSTM), using either lagged observations as inputs (called "data integration") or variational DA has shown success in improving forecasts. However, it is unclear which methods are performant or optimal for physics-informed machine learning ("differentiable") models, which represent only a small number of physically meaningful states while using deep networks to supply parameters or missing processes. Here we developed variational DA methods for differentiable models, including optimizing adjusters for just precipitation data, just model internal hydrological states, or both. Our results demonstrated that differentiable streamflow models using the CAMELS dataset can benefit from variational DA as strongly as LSTM does, with the one-day lead time median Nash-Sutcliffe efficiency (NSE) elevated from 0.75 to 0.82. The resulting forecasts matched or outperformed LSTM with DA in the eastern, northwestern, and central Great Plains regions of the conterminous United States. Both precipitation and state adjusters were needed to achieve these results, with the latter being substantially more effective on its own and the former adding moderate benefits for high flows. Our DA framework does not need systematic training data and could serve as a practical DA scheme for whole river networks.
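To make the idea of variational DA with adjusters concrete, here is a minimal sketch assuming a toy differentiable bucket model; the model, the additive state/precipitation adjusters, and all names are illustrative assumptions, not the implementation used in the paper. The key point it demonstrates is optimizing the adjusters by backpropagating a misfit over a recent observation window through the differentiable model.

    # Minimal sketch of variational DA for a differentiable hydrologic model (illustrative only).
    import torch

    def bucket_step(storage, precip, k=0.1):
        """One day of a toy linear-reservoir bucket: outflow = k * storage."""
        storage = storage + precip
        flow = k * storage
        return storage - flow, flow

    def assimilate(storage0, precip_window, q_obs, n_iter=200, lr=0.05):
        """Optimize additive adjusters for the state and the precipitation forcing so that
        simulated flow matches recent observations; gradients flow through the model."""
        d_state = torch.zeros((), requires_grad=True)                    # state adjuster
        d_precip = torch.zeros(len(precip_window), requires_grad=True)   # forcing adjuster
        opt = torch.optim.Adam([d_state, d_precip], lr=lr)
        for _ in range(n_iter):
            opt.zero_grad()
            s = storage0 + d_state
            sim = []
            for t, p in enumerate(precip_window):
                s, q = bucket_step(s, torch.relu(p + d_precip[t]))       # keep precip non-negative
                sim.append(q)
            loss = torch.mean((torch.stack(sim) - q_obs) ** 2)           # misfit over the window
            loss.backward()
            opt.step()
        return (storage0 + d_state).detach(), d_precip.detach()

    # Usage: adjust the state and forcings with the last 10 days of observed flow, then forecast.
    precip = torch.rand(10) * 5.0
    q_obs = torch.rand(10) * 2.0
    s_updated, dp = assimilate(torch.tensor(20.0), precip, q_obs)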
Differentiable modeling to unify machine learning and physical models and advance Geosciences
Shen, Chaopeng, Appling, Alison P., Gentine, Pierre, Bandai, Toshiyuki, Gupta, Hoshin, Tartakovsky, Alexandre, Baity-Jesi, Marco, Fenicia, Fabrizio, Kifer, Daniel, Li, Li, Liu, Xiaofeng, Ren, Wei, Zheng, Yi, Harman, Ciaran J., Clark, Martyn, Farthing, Matthew, Feng, Dapeng, Kumar, Praveen, Aboelyazeed, Doaa, Rahmani, Farshid, Beck, Hylke E., Bindas, Tadd, Dwivedi, Dipankar, Fang, Kuai, Höge, Marvin, Rackauckas, Chris, Roy, Tirthankar, Xu, Chonggang, Mohanty, Binayak, Lawson, Kathryn
Process-Based Modeling (PBM) and Machine Learning (ML) are often perceived as distinct paradigms in the geosciences. Here we present differentiable geoscientific modeling as a powerful pathway toward dissolving the perceived barrier between them and ushering in a paradigm shift. For decades, PBM offered benefits in interpretability and physical consistency but struggled to efficiently leverage large datasets. ML methods, especially deep networks, presented strong predictive skills yet lacked the ability to answer specific scientific questions. While various methods have been proposed for ML-physics integration, an important underlying theme -- differentiable modeling -- is not sufficiently recognized. Here we outline the concepts, applicability, and significance of differentiable geoscientific modeling (DG). "Differentiable" refers to accurately and efficiently calculating gradients with respect to model variables, critically enabling the learning of high-dimensional unknown relationships. DG refers to a range of methods connecting varying amounts of prior knowledge to neural networks and training them together, capturing a different scope than physics-guided machine learning and emphasizing first principles. Preliminary evidence suggests DG offers better interpretability and causality than ML, improved generalizability and extrapolation capability, and strong potential for knowledge discovery, while approaching the performance of purely data-driven ML. DG models require less training data while scaling favorably in performance and efficiency with increasing amounts of data. With DG, geoscientists may be better able to frame and investigate questions, test hypotheses, and discover unrecognized linkages.
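To illustrate the core differentiable-modeling pattern described above, here is a minimal sketch in which a neural network supplies a physical parameter to a simple process-based relation and the combined model is trained end to end via automatic differentiation. The tiny network, the placeholder relation Q = c * P, and the random data are assumptions for illustration, not a model from the paper.

    # Minimal sketch of differentiable modeling: NN-learned parameter feeding a physics relation.
    import torch
    import torch.nn as nn

    class DifferentiableModel(nn.Module):
        def __init__(self, n_attr):
            super().__init__()
            # The neural network supplies an unknown parameter (e.g., a runoff coefficient in [0, 1]).
            self.param_net = nn.Sequential(nn.Linear(n_attr, 16), nn.ReLU(),
                                           nn.Linear(16, 1), nn.Sigmoid())

        def forward(self, attributes, precip):
            c = self.param_net(attributes)     # learned physical parameter
            return c * precip                  # placeholder process-based relation: Q = c * P

    model = DifferentiableModel(n_attr=5)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    attrs, precip, q_obs = torch.rand(32, 5), torch.rand(32, 1), torch.rand(32, 1)
    for _ in range(100):
        opt.zero_grad()
        loss = torch.mean((model(attrs, precip) - q_obs) ** 2)
        loss.backward()                        # gradients flow through both the physics and the NN
        opt.step()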
The geometry of flow: Advancing predictions of river geometry with multi-model machine learning
Chang, Shuyu Y, Ghahremani, Zahra, Manuel, Laura, Erfani, Mohammad, Shen, Chaopeng, Cohen, Sagy, Van Meter, Kimberly, Pierce, Jennifer L, Meselhe, Ehab A, Goharian, Erfan
Hydraulic geometry parameters, which describe river hydrogeomorphic characteristics, are important for flood forecasting. Although well-established power-law hydraulic geometry curves have been widely used to understand riverine systems and map flood inundation worldwide for the past 70 years, we have become increasingly aware of the limitations of these approaches. In the present study, we have moved beyond these traditional power-law relationships for river geometry, testing the ability of machine-learning models to provide improved predictions of river width and depth. For this work, we have used an unprecedentedly large river measurement dataset (HYDRoSWOT) as well as a suite of watershed predictor data to develop novel data-driven approaches to better estimate river geometries over the contiguous United States (CONUS). Our Random Forest, XGBoost, and neural network models outperformed the traditional, regionalized power-law hydraulic geometry equations for both width and depth, providing R-squared values as high as 0.75 for width and as high as 0.67 for depth, compared with R-squared values of 0.57 for width and 0.18 for depth from the regional hydraulic geometry equations. Our results also show diverse performance outcomes across stream orders and geographical regions for the different machine-learning models, demonstrating the value of using multi-model approaches to maximize the predictability of river geometry. The developed models have been used to create the newly publicly available STREAM-geo dataset, which provides river width, depth, width/depth ratio, and river and stream surface area (%RSSA) for nearly 2.7 million NHDPlus stream reaches across the contiguous US.
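For context, the power-law hydraulic geometry relations that the machine-learning models are benchmarked against express channel width w and depth d as power functions of discharge Q; the standard form is sketched below (the specific regional coefficients used in the study are not reproduced here):

    w = a Q^b,    d = c Q^f

where a, b, c, and f are empirically fitted coefficients and exponents. With a companion velocity relation v = k Q^m, continuity (Q = w d v) implies b + f + m = 1 and a c k = 1.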
Probing the limit of hydrologic predictability with the Transformer network
Liu, Jiangtao, Bian, Yuchen, Shen, Chaopeng
For a number of years since their introduction to hydrology, recurrent neural networks like long short-term memory (LSTM) have proven remarkably difficult to surpass in terms of daily hydrograph metrics on known, comparable benchmarks. Outside of hydrology, Transformers have now become the model of choice for sequential prediction tasks, making them a curious architecture to investigate. Here, we first show that a vanilla Transformer architecture is not competitive against LSTM on the widely benchmarked CAMELS dataset, lagging especially on the high-flow metrics due to short-term processes. However, a recurrence-free variant of the Transformer can obtain mixed comparisons with LSTM, producing the same Kling-Gupta efficiency coefficient (KGE) along with similar values for other metrics. The lack of advantages for the Transformer is linked to the Markovian nature of the hydrologic prediction problem. Similar to LSTM, the Transformer can also merge multiple forcing datasets to improve model performance. While the Transformer results do not exceed the current state of the art, we still learned some valuable lessons: (1) the vanilla Transformer architecture is not suitable for hydrologic modeling; (2) the proposed recurrence-free modification can improve Transformer performance, so future work can continue to test more such modifications; and (3) the prediction limits on this dataset should be close to those of the current state-of-the-art models. As a non-recurrent model, the Transformer may bear scale advantages for learning from bigger datasets and storing knowledge. This work serves as a reference point for future modifications of the model.
Differentiable, learnable, regionalized process-based models with physical outputs can approach state-of-the-art hydrologic prediction accuracy
Feng, Dapeng, Liu, Jiangtao, Lawson, Kathryn, Shen, Chaopeng
Predictions of hydrologic variables across the entire water cycle have significant value for water resource management as well as downstream applications such as ecosystem and water quality modeling. Recently, purely data-driven deep learning models like long short-term memory (LSTM) showed seemingly insurmountable performance in modeling rainfall-runoff and other geoscientific variables, yet they cannot predict untrained physical variables and remain challenging to interpret. Here we show that differentiable, learnable, process-based models (called δ models here) can approach the performance level of LSTM for the intensively observed variable (streamflow) with regionalized parameterization. We use a simple hydrologic model, HBV, as the backbone and use embedded neural networks, which can only be trained in a differentiable programming framework, to parameterize, enhance, or replace the process-based model's modules. Without using an ensemble or post-processor, δ models can obtain a median Nash-Sutcliffe efficiency of 0.732 for 671 basins across the USA for the Daymet forcing dataset, compared to 0.748 from a state-of-the-art LSTM model with the same setup. For another forcing dataset, the difference is even smaller: 0.715 vs. 0.722. Meanwhile, the resulting learnable process-based models can output a full set of untrained variables, e.g., soil and groundwater storage, snowpack, evapotranspiration, and baseflow, and can later be constrained by their observations. Both the simulated evapotranspiration and the fraction of discharge from baseflow agreed decently with alternative estimates. The general framework can work with models of various process complexity and opens up the path for learning physics from big data.
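For reference, the Nash-Sutcliffe efficiency (NSE) values quoted above follow the standard definition over the streamflow time series, with Q_sim the simulated and Q_obs the observed discharge:

    NSE = 1 - \frac{\sum_t (Q_{sim,t} - Q_{obs,t})^2}{\sum_t (Q_{obs,t} - \overline{Q}_{obs})^2}

NSE = 1 indicates a perfect simulation and NSE <= 0 indicates no more skill than predicting the observed mean; the reported values are medians across the 671 basins.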
The data synergy effects of time-series deep learning models in hydrology
Fang, Kuai, Kifer, Daniel, Lawson, Kathryn, Feng, Dapeng, Shen, Chaopeng
When fitting statistical models to variables in geoscientific disciplines such as hydrology, it is a customary practice to regionalize - to divide a large spatial domain into multiple regions and study each region separately - instead of fitting a single model to the entire dataset (also known as unification). Traditional wisdom in these fields suggests that models built for each region separately will have higher performance because of homogeneity within each region. However, by partitioning the training data, each model has access to fewer data points and cannot learn from commonalities between regions. Here, through two hydrologic examples (soil moisture and streamflow), we argue that unification can often significantly outperform regionalization in the era of big data and deep learning (DL). Common DL architectures, even without bespoke customization, can automatically build models that benefit from regional commonality while accurately learning region-specific differences. We highlight an effect we call data synergy, where the results of the DL models improved when data were pooled together from characteristically different regions. In fact, the performance of the DL models benefited from more diverse rather than more homogeneous training data. We hypothesize that DL models automatically adjust their internal representations to identify commonalities while also retaining sufficient discriminatory information. The results here advocate for pooling together larger datasets and suggest that the academic community should place greater emphasis on data sharing and compilation.
Prediction in ungauged regions with sparse flow duration curves and input-selection ensemble modeling
Feng, Dapeng, Lawson, Kathryn, Shen, Chaopeng
Streamflow data are crucial for calibrating hydrologic models, which quantify the water cycle (Wada et al., 2017) for various purposes ranging from climate modeling (Allen & Ingram, 2002) to climate change impact mitigation (Trabucco et al., 2008), and from water sustainability studies to flood forecasting and humanitarian aid (Coughlan de Perez et al., 2016). Scanning over the Global Runoff Data Centre's worldwide map showing where streamflow data have been tracked (GRDC, 2020), one cannot help but notice vast swaths of land with very few streamflow gauges, e.g., Asia, South America, Oceania, Central America, Africa, and even parts of the southwestern USA (Figure S1 in Supporting Information). In countries like China and Ethiopia, daily observations of streamflow are being recorded but unfortunately are not made openly accessible for various reasons. In many cases, historical data (prior to the 1990s) are available, but it is difficult to obtain high-quality meteorological forcing data for the same periods. Many regions also typically lack data on physiographic attributes such as soil and aquifer properties.
Evaluating aleatoric and epistemic uncertainties of time series deep learning models for soil moisture predictions
Fang, Kuai, Shen, Chaopeng, Kifer, Daniel
Soil moisture is an important variable that determines floods, vegetation health, agricultural productivity, and land surface feedbacks to the atmosphere, among other processes. Accurately modeling soil moisture has important implications for both weather and climate models. The recently available satellite-based observations give us a unique opportunity to build data-driven models to predict soil moisture instead of using land surface models, but such data-driven predictions previously came without uncertainty estimates. We tested Monte Carlo dropout (MCD) with an aleatoric term for our long short-term memory models for this problem and asked whether the uncertainty terms behave as they were argued to. We show that the method successfully captures the predictive error after tuning a hyperparameter on a representative training dataset. We also show that the MCD uncertainty estimate, as previously argued, does detect dissimilarity from the training data.
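To clarify how Monte Carlo dropout with an aleatoric term separates the two uncertainty sources, here is a minimal sketch using a small feed-forward network; the architecture, layer sizes, and settings are illustrative assumptions and not the paper's LSTM configuration. Dropout is kept active at prediction time, epistemic variance comes from the spread across dropout samples, and the aleatoric variance is predicted by a log-variance head trained with a heteroscedastic loss.

    # Minimal sketch of MC dropout + aleatoric term (illustrative, not the paper's model).
    import torch
    import torch.nn as nn

    class MCDropoutNet(nn.Module):
        def __init__(self, n_in, n_hidden=64, p_drop=0.2):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
                                      nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p_drop))
            self.mean_head = nn.Linear(n_hidden, 1)      # predicted soil moisture
            self.logvar_head = nn.Linear(n_hidden, 1)    # aleatoric (data) noise, as log-variance

        def forward(self, x):
            h = self.body(x)
            return self.mean_head(h), self.logvar_head(h)

    def heteroscedastic_loss(mean, logvar, y):
        # Negative Gaussian log-likelihood: error is weighted by the predicted noise level.
        return torch.mean(0.5 * torch.exp(-logvar) * (y - mean) ** 2 + 0.5 * logvar)

    def predict_with_uncertainty(model, x, n_samples=50):
        model.train()                                    # keep dropout active at test time
        means, alea = [], []
        with torch.no_grad():
            for _ in range(n_samples):
                m, lv = model(x)
                means.append(m)
                alea.append(torch.exp(lv))
        means = torch.stack(means)
        epistemic_var = means.var(dim=0)                 # spread across dropout masks
        aleatoric_var = torch.stack(alea).mean(dim=0)    # average predicted data noise
        return means.mean(dim=0), epistemic_var, aleatoric_var

    # Usage with random placeholder inputs.
    model = MCDropoutNet(n_in=8)
    mu, ep_var, al_var = predict_with_uncertainty(model, torch.rand(16, 8))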
A trans-disciplinary review of deep learning research for water resources scientists
Shen, Chaopeng
Deep learning (DL), a new generation of artificial neural network research, has made profound strides in recent years. This review paper is intended to provide water resources scientists with a simple technical overview, a trans-disciplinary progress update, and potential inspirations regarding DL. Effective architectures, more accessible data, advances in regularization, and new computing power enabled the success of DL. A trans-disciplinary review reveals that DL is rapidly transforming myriad scientific disciplines, including high-energy physics, astronomy, chemistry, genomics, and remote sensing, where systematic DL toolkits, innovative customizations, and sub-disciplines have emerged. However, with a few exceptions, its adoption in hydrology has so far been gradual. The literature suggests that novel regularization techniques can effectively prevent high-capacity deep networks from overfitting. As a result, in most scientific disciplines, DL models have demonstrated predictive and generalization performance superior to that of conventional methods. Meanwhile, less noticed is that DL may also serve as a scientific exploratory tool. A new area, termed "AI neuroscience", has been born. This budding sub-discipline is accumulating a significant body of work, e.g., distilling knowledge obtained in DL networks into interpretable models, attributing decisions to inputs via back-propagation of relevance, or visualizing activations. These methods are designed to interpret the decision process of deep networks and derive insights. While scientists have so far mostly been using customized, ad-hoc methods for interpretation, vast opportunities await for DL to propel advancement in water science.
Prolongation of SMAP to Spatio-temporally Seamless Coverage of Continental US Using a Deep Learning Neural Network
Fang, Kuai, Shen, Chaopeng, Kifer, Daniel, Yang, Xiao
The Soil Moisture Active Passive (SMAP) mission has delivered valuable sensing of surface soil moisture since 2015. However, it has a short time span and an irregular revisit schedule. Utilizing a state-of-the-art time-series deep learning neural network, long short-term memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data with atmospheric forcings, model-simulated moisture, and static physiographic attributes as inputs. The system removes most of the bias present in the model simulations and improves the predicted moisture climatology, achieving a small test root-mean-squared error (<0.035) and a high correlation coefficient (>0.87) for over 75% of the Continental United States, including the forested Southeast. As the first application of LSTM in hydrology, we show that the proposed network avoids overfitting and is robust in both temporal and spatial extrapolation tests. LSTM generalizes well across regions with distinct climates and physiography. With high fidelity to SMAP, LSTM shows great potential for hindcasting, data assimilation, and weather forecasting.
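The input setup described above (daily forcings plus static attributes feeding an LSTM that outputs soil moisture) can be sketched as follows; the dimensions, variable names, and random data are illustrative assumptions only, not the study's configuration.

    # Minimal sketch of an LSTM mapping forcings + static attributes to daily soil moisture.
    import torch
    import torch.nn as nn

    class SoilMoistureLSTM(nn.Module):
        def __init__(self, n_forcing, n_static, n_hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(n_forcing + n_static, n_hidden, batch_first=True)
            self.head = nn.Linear(n_hidden, 1)

        def forward(self, forcing, static):
            # Repeat static physiographic attributes along the time axis and
            # concatenate with the daily forcings before the LSTM.
            static_seq = static.unsqueeze(1).expand(-1, forcing.size(1), -1)
            out, _ = self.lstm(torch.cat([forcing, static_seq], dim=-1))
            return self.head(out).squeeze(-1)            # soil moisture for every day

    # Toy usage: 32 pixels, 365 days, 7 forcing variables, 10 static attributes.
    model = SoilMoistureLSTM(n_forcing=7, n_static=10)
    sm_pred = model(torch.rand(32, 365, 7), torch.rand(32, 10))   # shape (32, 365)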