Trojak, Will
Prithvi WxC: Foundation Model for Weather and Climate
Schmude, Johannes, Roy, Sujit, Trojak, Will, Jakubik, Johannes, Civitarese, Daniel Salles, Singh, Shraddha, Kuehnert, Julian, Ankur, Kumar, Gupta, Aman, Phillips, Christopher E, Kienzler, Romeo, Szwarcman, Daniela, Gaur, Vishal, Shinde, Rajat, Lal, Rohit, Da Silva, Arlindo, Diaz, Jorge Luis Guevara, Jones, Anne, Pfreundschuh, Simon, Lin, Amy, Sheshadri, Aditi, Nair, Udaysankar, Anantharaj, Valentine, Hamann, Hendrik, Watson, Campbell, Maskey, Manil, Lee, Tsengdar J, Moreno, Juan Bernabe, Ramachandran, Rahul
Triggered by the realization that AI emulators can rival the performance of traditional numerical weather prediction models running on HPC systems, a growing number of large AI models now address use cases such as forecasting, downscaling, or nowcasting. While parallel developments in the AI literature focus on foundation models -- models that can be effectively tuned to address multiple, different use cases -- developments on the weather and climate side largely focus on single use cases, with particular emphasis on mid-range forecasting. We close this gap by introducing Prithvi WxC, a 2.3-billion-parameter foundation model developed using 160 variables from the Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). Prithvi WxC employs an encoder-decoder-based architecture, incorporating concepts from various recent transformer models to effectively capture both regional and global dependencies in the input data. The model has been designed to accommodate large token counts so that it can model weather phenomena in different topologies at fine resolutions. Furthermore, it is trained with a mixed objective that combines the paradigms of masked reconstruction and forecasting. We test the model on a set of challenging downstream tasks, namely autoregressive rollout forecasting, downscaling, gravity wave flux parameterization, and extreme event estimation. The pretrained 2.3-billion-parameter model, along with the associated fine-tuning workflows, has been publicly released as an open-source contribution via Hugging Face.
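To make the mixed objective concrete, below is a minimal PyTorch sketch of one plausible reading of it: a randomly masked input state is fed through the model, and the target is either the same state (masked reconstruction) or a later one (forecasting). The model stand-in, tensor shapes, masking scheme, and the `mask_ratio` / `p_forecast` parameters are illustrative assumptions, not the released Prithvi WxC implementation.

```python
import torch
import torch.nn.functional as F

def mixed_objective(model, x_t, x_future, mask_ratio=0.5, p_forecast=0.5):
    """Loss over a masked input: reconstruct x_t or forecast x_future.

    x_t, x_future: states at times t and t + dt, shape (batch, tokens, features).
    """
    b, n, _ = x_t.shape

    # Mask a random subset of the input tokens.
    keep = torch.rand(b, n, device=x_t.device) > mask_ratio
    x_in = x_t * keep.unsqueeze(-1).float()

    # Per batch, train either on masked reconstruction (target = x_t)
    # or on forecasting (target = x_future).
    target = x_future if torch.rand(()).item() < p_forecast else x_t

    return F.mse_loss(model(x_in), target)

# Usage with a token-wise MLP standing in for the encoder-decoder transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(160, 256), torch.nn.GELU(), torch.nn.Linear(256, 160)
)
x_t, x_future = torch.randn(8, 64, 160), torch.randn(8, 64, 160)
loss = mixed_objective(model, x_t, x_future)
loss.backward()
```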
Probing optimisation in physics-informed neural networks
Fonseca, Nayara, Guidetti, Veronica, Trojak, Will
A novel comparison of the effect of optimiser choice on the accuracy of physics-informed neural networks (PINNs) is presented. To give insight into why some optimisers perform better than others, a new approach is proposed that tracks the curvature of the training trajectory and can be evaluated on the fly at low computational cost. The linear advection equation is studied for several advective velocities, and we show that the optimiser choice substantially impacts the performance and accuracy of PINN models. Furthermore, using the curvature measure, we find a negative correlation between the convergence error and the curvature in the optimiser's local reference frame. It is concluded that, in this case, larger local curvature values result in better solutions. Consequently, optimisation of PINNs is made more difficult, as minima lie in highly curved regions of the loss landscape. The idea of solving PDE problems using neural networks (NNs) was put forward by Lagaris et al. (1997; 1998; 2000) in the late 1990s and then revisited in 2017 by Raissi et al. (2017a;b), who named the methodology Physics-Informed Neural Networks (PINNs).
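Below is a minimal PyTorch sketch of the two ingredients the abstract describes: a PINN residual loss for the linear advection equation u_t + c u_x = 0, and a cheap on-the-fly curvature proxy for the training trajectory, here taken as the turning angle between consecutive optimiser steps in parameter space. The curvature proxy, network size, initial condition, and sampling scheme are illustrative assumptions; the paper's exact measure may differ.

```python
import torch

# --- On-the-fly trajectory-curvature proxy (illustrative assumption) ---
def flat_params(model):
    """Flatten all parameters into a single detached vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

class TrajectoryCurvature:
    """Turning angle (radians) between consecutive optimiser steps."""
    def __init__(self, model):
        self.prev_theta = flat_params(model)
        self.prev_step = None

    def update(self, model):
        theta = flat_params(model)
        step = theta - self.prev_theta
        angle = None
        if self.prev_step is not None and step.norm() > 0 and self.prev_step.norm() > 0:
            cos = torch.dot(step, self.prev_step) / (step.norm() * self.prev_step.norm())
            angle = torch.arccos(cos.clamp(-1.0, 1.0)).item()
        self.prev_theta, self.prev_step = theta, step
        return angle

# --- PINN for the linear advection equation u_t + c * u_x = 0 ---
c = 1.0  # advective velocity (one of several values one might study)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt):
    """PDE residual at collocation points xt with columns (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    return u_t + c * u_x

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
tracker = TrajectoryCurvature(net)
for it in range(1000):
    xt = torch.rand(256, 2)                        # collocation points in (x, t)
    x0 = torch.rand(64, 1)
    ic = torch.cat([x0, torch.zeros_like(x0)], 1)  # the t = 0 line
    loss = pde_residual(xt).pow(2).mean() \
         + (net(ic) - torch.sin(2 * torch.pi * x0)).pow(2).mean()  # IC: u(x,0) = sin(2*pi*x); BCs omitted
    opt.zero_grad()
    loss.backward()
    opt.step()
    angle = tracker.update(net)  # None on the first step, then radians
```

The tracker costs one dot product and two norms over the flattened parameter vector per step, which is what makes it cheap enough to evaluate on the fly alongside training.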