Collaborating Authors: Beucler, Tom


Lightning-Fast Thunderstorm Warnings: Predicting Severe Convective Environments with Global Neural Weather Models

arXiv.org Artificial Intelligence

The recently released suite of AI weather models can produce multi-day, medium-range forecasts within seconds, with a skill on par with state-of-the-art operational forecasts. Traditional AI model evaluation predominantly targets global scores on single levels. Specific prediction tasks, such as severe convective environments, require much more precision on a local scale and with the correct vertical gradients between levels. With a focus on the convective season of global hotspots in 2020, we assess the skill of three top-performing AI models (Pangu-Weather, GraphCast, FourCastNet) for Convective Available Potential Energy (CAPE) and Deep Layer Shear (DLS) at lead-times of up to 10 days against the ERA-5 reanalysis and the IFS operational numerical weather prediction model. Looking at the example of a US tornado outbreak on April 12 and 13, 2020, all models predict elevated CAPE and DLS values multiple days in advance. The spatial structures in the AI models are smoothed in comparison to IFS and ERA-5. The models show differing biases in the prediction of CAPE values, with GraphCast capturing the value distribution the most accurately and FourCastNet showing a consistent underestimation. In seasonal analyses around the globe, we generally see the highest performance by GraphCast and Pangu-Weather, which match or even exceed the performance of IFS. CAPE derived from vertically coarse pressure levels of neural weather models lacks the precision of the vertically fine resolution of numerical models. The promising results here indicate that a direct prediction of CAPE in AI models is likely to be skillful. This would open unprecedented opportunities for fast and inexpensive predictions of severe weather phenomena. By advancing the assessment of AI models towards process-based evaluations we lay the foundation for hazard-driven applications of AI-based weather forecasts.
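As context for how such environment diagnostics can be derived from pressure-level output, below is a minimal Python sketch of a deep layer shear proxy, assuming gridded u/v winds at a near-surface level and at 500 hPa. The variable names and the 500 hPa choice are illustrative assumptions, not the paper's exact recipe (which also evaluates CAPE, requiring a parcel ascent, e.g. via a package such as MetPy).

```python
import numpy as np

def deep_layer_shear(u_sfc, v_sfc, u_500, v_500):
    """Bulk wind shear magnitude (m/s) between a near-surface level and 500 hPa.

    A common pressure-level proxy for 0-6 km deep layer shear when
    height-interpolated winds are unavailable; inputs are 2D lat/lon arrays.
    """
    return np.hypot(u_500 - u_sfc, v_500 - v_sfc)

# Hypothetical example fields (lat x lon), e.g. sliced from a forecast on pressure levels
lat, lon = 181, 360
rng = np.random.default_rng(0)
u10, v10 = rng.normal(0, 5, (2, lat, lon))     # stand-in near-surface winds
u500, v500 = rng.normal(10, 10, (2, lat, lon)) # stand-in 500 hPa winds
dls = deep_layer_shear(u10, v10, u500, v500)
print(dls.shape, float(dls.mean()))
```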


ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation

arXiv.org Artificial Intelligence

Modern climate projections lack adequate spatial and temporal resolution due to computational constraints. A consequence is inaccurate and imprecise predictions of critical processes such as storms. Hybrid methods that combine physics with machine learning (ML) have introduced a new generation of higher-fidelity climate simulators that can sidestep Moore's Law by outsourcing compute-hungry, short, high-resolution simulations to ML emulators. However, this hybrid ML-physics simulation approach requires domain-specific treatment and has been inaccessible to ML experts because of a lack of training data and relevant, easy-to-use workflows. We present ClimSim, the largest-ever dataset designed for hybrid ML-physics research. It comprises multi-scale climate simulations, developed by a consortium of climate scientists and ML researchers. It consists of 5.7 billion pairs of multivariate input and output vectors that isolate the influence of locally-nested, high-resolution, high-fidelity physics on a host climate simulator's macro-scale physical state. The dataset is global in coverage, spans multiple years at high sampling frequency, and is designed such that resulting emulators are compatible with downstream coupling into operational climate simulators. We implement a range of deterministic and stochastic regression baselines to highlight the ML challenges and how they are scored.
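To illustrate what a deterministic regression baseline on such input/output vector pairs could look like, here is a minimal scikit-learn sketch; the array shapes, random data, and layer sizes are placeholders, not the ClimSim specification or the paper's actual baselines.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for ClimSim-style (input, output) vector pairs; real samples pair the
# host model's macro-scale state with the high-resolution physics response.
n_samples, n_in, n_out = 5_000, 124, 128   # illustrative sizes, not the dataset spec
X = rng.normal(size=(n_samples, n_in)).astype(np.float32)
Y = rng.normal(size=(n_samples, n_out)).astype(np.float32)

X_train, X_val = X[:4000], X[4000:]
Y_train, Y_val = Y[:4000], Y[4000:]

# A simple deterministic MLP baseline in the spirit of multivariate regression emulation.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=20, random_state=0)
model.fit(X_train, Y_train)
print("validation R^2:", r2_score(Y_val, model.predict(X_val)))
```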


Next-Generation Earth System Models: Towards Reliable Hybrid Models for Weather and Climate Applications

arXiv.org Artificial Intelligence

Recommendation 1: Develop Hybrid AI-Physical Models: Emphasize the integration of AI and physical modeling for improved reliability, especially for longer prediction horizons, acknowledging the delicate balance between knowledge-based and data-driven components required for optimal performance.
Recommendation 2: Emphasize Robustness in AI Downscaling Approaches: Favor techniques that respect physical laws, preserve inter-variable dependencies and spatial structures, and accurately represent extremes at the local scale.
Recommendation 3: Promote Inclusive Model Development: Ensure Earth System Model development is open and accessible to diverse stakeholders, enabling forecasters, the public, and AI/statistics experts to use, develop, and engage with the model and its predictions/projections.
Figure Caption: Advancements in data collection, data access, hybrid AI-physical Earth system modeling, and downscaling empower stakeholders with increased accessibility to local predictions and projections, encouraging collaborative efforts across disciplines to improve climate change preparedness.
Here, we review how machine learning can serve the double purpose of understanding and prediction in models that can be integrated forward in time; in the ocean, for example, uncertainties persist due to unresolved mesoscale eddies and turbulent processes (Couldrey et al., 2021).


Identifying Three-Dimensional Radiative Patterns Associated with Early Tropical Cyclone Intensification

arXiv.org Artificial Intelligence

Cloud radiative feedback impacts early tropical cyclone (TC) intensification, but limitations in existing diagnostic frameworks make them unsuitable for studying asymmetric or transient radiative heating. We propose a linear Variational Encoder-Decoder (VED) to learn the hidden relationship between radiation and the surface intensification of realistic simulated TCs. Limiting VED model inputs enables using its uncertainty to identify periods when radiation has more importance for intensification. A close examination of the extracted 3D radiative structures suggests that longwave radiative forcing from inner core deep convection and shallow clouds both contribute to intensification, with the deep convection having the most impact overall. We find that deep convection downwind of the shallow clouds is critical to the intensification of Haiyan. Our work demonstrates that machine learning can discover thermodynamic-kinematic relationships without relying on axisymmetric or deterministic assumptions, paving the way towards the objective discovery of processes leading to TC intensification in realistic conditions.
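For readers unfamiliar with the architecture family, the following is a minimal PyTorch sketch of a variational encoder-decoder with a linear decoder; the layer sizes, latent dimension, and KL weight are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LinearVED(nn.Module):
    """Minimal variational encoder-decoder with a linear decoder.

    Maps a flattened radiative-heating snapshot to a scalar surface-intensification
    target; all sizes are placeholders.
    """
    def __init__(self, n_in, n_latent=16):
        super().__init__()
        self.enc_mu = nn.Linear(n_in, n_latent)
        self.enc_logvar = nn.Linear(n_in, n_latent)
        self.dec = nn.Linear(n_latent, 1)  # linear decoder keeps the mapping interpretable

    def forward(self, x):
        mu, logvar = self.enc_mu(x), self.enc_logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def loss_fn(pred, target, mu, logvar, beta=1e-3):
    mse = nn.functional.mse_loss(pred, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return mse + beta * kl

# Hypothetical batch: 32 flattened 3D radiation samples and intensification rates.
x, y = torch.randn(32, 1000), torch.randn(32, 1)
model = LinearVED(n_in=1000)
pred, mu, logvar = model(x)
print(loss_fn(pred, y, mu, logvar).item())
```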


Climate-Invariant Machine Learning

arXiv.org Artificial Intelligence

Projecting climate change is a generalization problem: we extrapolate the recent past using physical models across past, present, and future climates. Current climate models require representations of processes that occur at scales smaller than model grid size, which have been the main source of model projection uncertainty. Recent machine learning (ML) algorithms hold promise to improve such process representations, but tend to extrapolate poorly to climate regimes they were not trained on. To get the best of the physical and statistical worlds, we propose a new framework - termed "climate-invariant" ML - incorporating knowledge of climate processes into ML algorithms, and show that it can maintain high offline accuracy across a wide range of climate conditions and configurations in three distinct atmospheric models. Our results suggest that explicitly incorporating physical knowledge into data-driven models of Earth system processes can improve their consistency, data efficiency, and generalizability across climate regimes.
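One concrete example of such a physically motivated input transformation, in the spirit of this framework, is rescaling specific humidity to relative humidity, which is more comparable across climates. The sketch below uses a standard Bolton-type saturation formula; it is only an illustration of the idea, not the paper's exact transformation set.

```python
import numpy as np

def saturation_vapor_pressure(T):
    """Saturation vapor pressure (Pa) over liquid water, Bolton (1980) approximation.
    T in kelvin."""
    return 611.2 * np.exp(17.67 * (T - 273.15) / (T - 29.65))

def relative_humidity(q, T, p):
    """Relative humidity (0-1) from specific humidity q (kg/kg),
    temperature T (K), and pressure p (Pa)."""
    e_s = saturation_vapor_pressure(T)
    q_sat = 0.622 * e_s / (p - 0.378 * e_s)
    return q / q_sat

# Example: rescale a humidity input before feeding it to an ML parameterization.
q, T, p = 0.012, 295.0, 95_000.0
print(f"RH = {relative_humidity(q, T, p):.2f}")
```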


Lessons Learned: Reproducibility, Replicability, and When to Stop

arXiv.org Artificial Intelligence

While extensive guidance exists for ensuring the reproducibility of one's own study, there is little discussion regarding the reproduction and replication of external studies within one's own research. To initiate this discussion, drawing lessons from our experience reproducing an operational product for predicting tropical cyclogenesis, we present a two-dimensional framework to offer guidance on reproduction and replication. Our framework, representing model fitting on one axis and its use in inference on the other, builds upon three key aspects: the dataset, the metrics, and the model itself. By assessing the trajectories of our studies on this 2D plane, we can better inform the claims made using our research. Additionally, we use this framework to contextualize the utility of benchmark datasets in the atmospheric sciences. Our two-dimensional framework provides a tool for researchers, especially early career researchers, to incorporate prior work in their own research and to inform the claims they can make in this context.


Systematic Sampling and Validation of Machine Learning-Parameterizations in Climate Models

arXiv.org Artificial Intelligence

Progress in hybrid physics-machine learning (ML) climate simulations has been limited by the difficulty of obtaining performant coupled (i.e. online) simulations. While evaluating hundreds of ML parameterizations of subgrid closures (here of convection and radiation) offline is straightforward, online evaluation at the same scale is technically challenging. Our software automation achieves an order-of-magnitude larger sampling of online modeling errors than has previously been examined. Using this, we evaluate the hybrid climate model performance and define strategies to improve it. We show that model online performance improves when incorporating memory, a relative humidity input feature transformation, and additional input variables. We also reveal substantial variation in online error and inconsistencies between offline vs. online error statistics. The implication is that hundreds of candidate ML models should be evaluated online to detect the effects of parameterization design choices. This is considerably more sampling than tends to be reported in the current literature.
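As a small illustration of one design choice mentioned above (incorporating memory), the sketch below appends lagged state variables to the input vector before training; the array shapes and the choice of lagged variables are hypothetical, not the study's configuration.

```python
import numpy as np

def add_memory_feature(X, memory):
    """Append the previous time step's state (or the emulator's previous output)
    to the current input vector -- one way to give the parameterization 'memory'."""
    return np.concatenate([X, memory], axis=-1)

rng = np.random.default_rng(0)
n_time, n_col, n_feat = 100, 48, 60       # hypothetical time steps, columns, features
state = rng.normal(size=(n_time, n_col, n_feat))

X_t = state[1:]                            # inputs at time t
memory = state[:-1, :, :8]                 # e.g. a few lagged variables from t-1
X_aug = add_memory_feature(X_t, memory)
print(X_aug.shape)                         # (99, 48, 68)
```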


Selecting Robust Features for Machine Learning Applications using Multidata Causal Discovery

arXiv.org Artificial Intelligence

Robust feature selection is vital for creating reliable and interpretable Machine Learning (ML) models. When designing statistical prediction models in cases where domain knowledge is limited and underlying interactions are unknown, choosing the optimal set of features is often difficult. To mitigate this issue, we introduce a Multidata (M) causal feature selection approach that simultaneously processes an ensemble of time series datasets and produces a single set of causal drivers. This approach uses the causal discovery algorithms PC1 or PCMCI that are implemented in the Tigramite Python package. These algorithms utilize conditional independence tests to infer parts of the causal graph. Our causal feature selection approach filters out causally-spurious links before passing the remaining causal features as inputs to ML models (Multiple linear regression, Random Forest) that predict the targets. We apply our framework to the statistical intensity prediction of Western Pacific Tropical Cyclones (TC), for which it is often difficult to accurately choose drivers and their dimensionality reduction (time lags, vertical levels, and area-averaging). Using more stringent significance thresholds in the conditional independence tests helps eliminate spurious causal relationships, thus helping the ML model generalize better to unseen TC cases. M-PC1 with a reduced number of features outperforms M-PCMCI, non-causal ML, and other feature selection methods (lagged correlation, random), even slightly outperforming feature selection based on eXplainable Artificial Intelligence. The optimal causal drivers obtained from our causal feature selection help improve our understanding of underlying relationships and suggest new potential drivers of TC intensification.
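A minimal single-dataset sketch of the PCMCI step with the Tigramite package is shown below. The variable names, data, and thresholds are hypothetical, the import path for ParCorr differs between Tigramite releases, and the paper's multidata variant additionally pools an ensemble of datasets, which is not reproduced here.

```python
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr  # newer releases:
# from tigramite.independence_tests.parcorr import ParCorr

rng = np.random.default_rng(0)

# Hypothetical candidate drivers of TC intensity change (columns); the last
# column stands in for the intensity target.
data = rng.normal(size=(500, 6))
var_names = ["shear", "rh_mid", "sst", "div200", "vort850", "dVmax"]

dataframe = pp.DataFrame(data, var_names=var_names)
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())

# A stringent pc_alpha prunes spurious links, mirroring the finding that
# stricter significance thresholds improve generalization to unseen cyclones.
results = pcmci.run_pcmci(tau_max=4, pc_alpha=0.01)
significant = results["p_matrix"] < 0.01
print("significant lagged links:", int(significant.sum()))
```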


Physics-constrained deep learning postprocessing of temperature and humidity

arXiv.org Artificial Intelligence

Weather forecasting centers currently rely on statistical postprocessing methods to minimize forecast error. This improves skill but can lead to predictions that violate physical principles or disregard dependencies between variables, which can be problematic for downstream applications and for the trustworthiness of postprocessing models, especially when they are based on new machine learning approaches. Building on recent advances in physics-informed machine learning, we propose to achieve physical consistency in deep learning-based postprocessing models by integrating meteorological expertise in the form of analytic equations. Applied to the postprocessing of surface weather in Switzerland, we find that constraining a neural network to enforce thermodynamic state equations yields physically consistent predictions of temperature and humidity without compromising performance. Our approach is especially advantageous when data is scarce, and our findings suggest that incorporating domain expertise into postprocessing models allows us to optimize weather forecast information while satisfying application-specific requirements.
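To illustrate the general idea of enforcing thermodynamic consistency analytically (though not the authors' specific network architecture), the sketch below diagnoses relative humidity from post-processed temperature and dew point with the Magnus formula, so that the three variables cannot contradict each other by construction.

```python
import numpy as np

def magnus_vapor_pressure(T_c):
    """Saturation vapor pressure (hPa) from temperature in deg C (Magnus formula)."""
    return 6.112 * np.exp(17.62 * T_c / (243.12 + T_c))

def diagnose_relative_humidity(T_c, Td_c):
    """Relative humidity (%) diagnosed from temperature and dew point, so the
    three variables satisfy the same thermodynamic relation by construction."""
    return 100.0 * magnus_vapor_pressure(Td_c) / magnus_vapor_pressure(T_c)

# Hypothetical post-processed network outputs: temperature and dew point (deg C).
T_post, Td_post = 18.3, 12.1
rh_post = diagnose_relative_humidity(T_post, Td_post)
print(f"consistent RH: {rh_post:.1f}%")
```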


Deep Learning Based Cloud Cover Parameterization for ICON

arXiv.org Artificial Intelligence

A promising approach to improve cloud parameterizations within climate models and thus climate projections is to use deep learning in combination with training data from storm-resolving model (SRM) simulations. The ICOsahedral Non-hydrostatic (ICON) modeling framework permits simulations ranging from numerical weather prediction to climate projections, making it an ideal target to develop neural network (NN)-based parameterizations for sub-grid scale processes. Within the ICON framework, we train NN-based cloud cover parameterizations with coarse-grained data based on realistic regional and global ICON SRM simulations. We set up three different types of NNs that differ in the degree of vertical locality they assume for diagnosing cloud cover from coarse-grained atmospheric state variables. The NNs accurately estimate sub-grid scale cloud cover from coarse-grained data with geographical characteristics similar to those of their training data. Additionally, globally trained NNs can reproduce sub-grid scale cloud cover of the regional SRM simulation. Using the game-theory-based interpretability library SHapley Additive exPlanations (SHAP), we identify an overemphasis on specific humidity and cloud ice as the reason why our column-based NN cannot perfectly generalize from the global to the regional coarse-grained SRM data. The interpretability tool also helps visualize similarities and differences in feature importance between regionally and globally trained column-based NNs, and reveals a local relationship between their cloud cover predictions and the thermodynamic environment. Our results show the potential of deep learning to derive accurate yet interpretable cloud cover parameterizations from global SRMs, and suggest that neighborhood-based models may be a good compromise between accuracy and generalizability.
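As an illustration of SHAP-based attribution for such a column-based model, the sketch below trains a small regressor on synthetic "column" features and ranks them by mean absolute SHAP value. It uses the model-agnostic KernelExplainer rather than whatever explainer the study used, and the features, data, and model are made up for the example.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical coarse-grained column predictors of sub-grid cloud cover.
feature_names = ["qv", "qi", "qc", "T", "p", "rh"]
X = rng.normal(size=(2000, len(feature_names)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=2000)  # synthetic target

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y)

# Model-agnostic SHAP attribution; a small background set keeps KernelExplainer tractable.
explainer = shap.KernelExplainer(model.predict, X[:50])
shap_values = explainer.shap_values(X[:20])
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```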