MoWE: A Mixture of Weather Experts
Dibyajyoti Chakraborty, Romit Maulik, Peter Harrington, Dallas Foster, Mohammad Amin Nabian, Sanjay Choudhry
Data-driven weather models have achieved state-of-the-art performance in recent years, yet progress has begun to plateau. This paper introduces a Mixture of Weather Experts (MoWE) approach as a novel paradigm to overcome these limitations, not by creating a new forecaster, but by optimally combining the outputs of existing models. The MoWE model is trained with significantly less compute than any of the individual experts. It employs a Vision Transformer-based gating network that dynamically learns to weight the contributions of multiple "expert" models at each grid point, conditioned on forecast lead time. The result is a synthesized deterministic forecast that is more accurate, in terms of Root Mean Squared Error (RMSE), than any individual component. Our results demonstrate the effectiveness of this method: it achieves up to 10% lower RMSE than the best-performing AI weather model on a 2-day forecast horizon, significantly outperforming both the individual experts and a simple average across them. This work presents a computationally efficient and scalable strategy to push the state of the art in data-driven weather prediction by making the most of leading high-quality forecast models.
arXiv.org Artificial Intelligence
Sep-12-2025
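
The abstract describes the core mechanism: a ViT-style gating network takes the stacked expert forecasts plus the lead time and emits a softmax weight for each expert at every grid point, so the final forecast is a per-pixel convex combination of the experts. The sketch below illustrates that idea in PyTorch. It is a minimal, assumption-laden reconstruction, not the authors' implementation: the class name GatingViT, the patch size, the embedding width, and the lead-time MLP are all hypothetical choices.

```python
# Minimal sketch of the MoWE gating idea (illustrative; not the paper's code).
import torch
import torch.nn as nn


class GatingViT(nn.Module):
    """Tiny ViT-style gate: maps channel-stacked expert forecasts (plus a
    lead-time embedding) to per-grid-point softmax weights over the experts."""

    def __init__(self, n_experts: int, n_vars: int, patch: int = 8, dim: int = 64):
        super().__init__()
        self.n_experts, self.patch = n_experts, patch
        # Patch embedding over the expert forecasts stacked along channels.
        self.embed = nn.Conv2d(n_experts * n_vars, dim, kernel_size=patch, stride=patch)
        # Embed the (normalized) forecast lead time and add it to every token.
        self.lead_mlp = nn.Sequential(nn.Linear(1, dim), nn.GELU(), nn.Linear(dim, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Decode each patch token back to per-pixel expert logits.
        self.head = nn.Linear(dim, n_experts * patch * patch)

    def forward(self, experts: torch.Tensor, lead_time: torch.Tensor) -> torch.Tensor:
        # experts: (B, E, V, H, W) stacked expert forecasts
        # lead_time: (B, 1) forecast lead time, normalized to roughly [0, 1]
        B, E, V, H, W = experts.shape
        x = self.embed(experts.reshape(B, E * V, H, W))          # (B, dim, H/p, W/p)
        hp, wp = x.shape[-2:]
        tokens = x.flatten(2).transpose(1, 2)                    # (B, N, dim)
        tokens = tokens + self.lead_mlp(lead_time).unsqueeze(1)  # lead-time conditioning
        tokens = self.encoder(tokens)
        logits = self.head(tokens)                               # (B, N, E*p*p)
        logits = logits.view(B, hp, wp, E, self.patch, self.patch)
        logits = logits.permute(0, 3, 1, 4, 2, 5).reshape(B, E, H, W)
        weights = torch.softmax(logits, dim=1)                   # per-grid-point weights
        # Broadcast weights over variables and blend the experts.
        return (weights.unsqueeze(2) * experts).sum(dim=1)       # (B, V, H, W)


# Usage: blend 3 expert forecasts of 5 variables on a 64x64 grid at a 48 h lead
# time (normalized here by an assumed 240 h maximum horizon).
gate = GatingViT(n_experts=3, n_vars=5)
blend = gate(torch.randn(2, 3, 5, 64, 64), torch.full((2, 1), 48.0 / 240.0))
print(blend.shape)  # torch.Size([2, 5, 64, 64])
```

Because the gate only outputs mixing weights rather than the forecast fields themselves, it can be far smaller and cheaper to train than the expert models it combines, which is consistent with the abstract's claim about training cost.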