Google is throwing the power of its AI and machine-learning algorithms behind developing faster and more accurate weather forecasts. In a blog post, Google describes a new model for short-term weather prediction, or 'nowcasting,' which it says has shown initial success in accurately predicting weather patterns with 'nearly instantaneous' results. According to a new paper, the method can produce forecasts up to six hours in advance in only five to 10 minutes - figures that Google says outperform traditional models even at this early stage. Traditional forecasts generate massive amounts of data, but they can also take hours to complete. 'A significant advantage of machine learning is that inference is computationally cheap given an already-trained model, allowing forecasts that are nearly instantaneous and in the native high resolution of the input data,' Google writes.
Google hopes to tap AI and machine learning to make speedy local weather predictions. In a paper and accompanying blog post, the tech giant detailed an AI system that uses satellite images to produce "nearly instantaneous" and high-resolution forecasts -- on average, with a roughly one kilometer resolution and a latency of only 5-10 minutes. The researchers behind it say it outperforms traditional models "even at these early stages of development." The system takes a data-driven and physics-free approach to weather modeling, meaning it learns to approximate atmospheric physics from examples alone and not by incorporating prior knowledge. Underpinning it is a convolutional neural network that takes as input images of weather patterns and transforms them into new output images.
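The "images in, images out" idea described above can be illustrated with a minimal sketch. This is not Google's model (the paper describes a trained multi-layer CNN, far deeper than this); it is a toy single convolution in plain NumPy, with the radar frame and kernel values invented for illustration, showing how a learned kernel transforms one weather image into a new output image with no physics involved.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode sliding dot product (the 'convolution' used in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is a weighted sum of a local input patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 64x64 "radar reflectivity" frame and a 3x3 kernel; random
# values stand in for weights a real model would learn from examples.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
kernel = rng.standard_normal((3, 3))

prediction = conv2d(frame, kernel)
print(prediction.shape)  # (62, 62): an output image derived from the input image
```

In a real nowcasting network, many such layers are stacked and their kernel weights are fit to historical data, which is what lets the model approximate atmospheric dynamics from examples alone.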
Among the many things we've become addicted to on our smartphones is checking the weather. If you're anything like me, you open a weather app at least twice a day: in the morning to know what to expect for the day ahead, maybe before your commute home so you can prepare for possible rain or snow, and sometimes before bed to get an idea of what to wear or what activities to plan for the next day. Depending on where you live, how much time you spend outside, and how prone your area is to rapid weather changes, maybe you check the forecast even more frequently than that. The fact that our phones now contain hour-by-hour breakdowns of temperature and likelihood of precipitation means we can be well-informed and well-prepared. But these forecasts come at a greater cost than we realize, and they're not always right.
A few weeks ago, Google's artificial intelligence (AI) division used a machine learning model to improve screening for breast cancer, media reported. Now, the company has applied convolutional neural networks (CNNs) to short-term precipitation forecasting. Google AI researchers describe the approach in a paper called "Machine Learning for Precipitation Nowcasting from Radar Images." The results look promising, and according to Google itself, they are better than those of traditional methods: the model focuses on forecasts 0-6 hours ahead, producing predictions at 1 km resolution with a total latency of only 5-10 minutes (including data collection delays). Even at this early stage of development, it outperforms traditional models.
Flooding is a destructive and dangerous hazard and climate change appears to be increasing the frequency of catastrophic flooding events around the world. Physics-based flood models are costly to calibrate and are rarely generalizable across different river basins, as model outputs are sensitive to site-specific parameters and human-regulated infrastructure. In contrast, statistical models implicitly account for such factors through the data on which they are trained. Such models trained primarily from remotely-sensed Earth observation data could reduce the need for extensive in-situ measurements. In this work, we develop generalizable, multi-basin models of river flooding susceptibility using geographically-distributed data from the USGS stream gauge network. Machine learning models are trained in a supervised framework to predict two measures of flood susceptibility from a mix of river basin attributes, impervious surface cover information derived from satellite imagery, and historical records of rainfall and stream height. We report prediction performance of multiple models using precision-recall curves, and compare with performance of naive baselines. This work on multi-basin flood prediction represents a step in the direction of making flood prediction accessible to all at-risk communities.
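The precision-recall evaluation mentioned above can be sketched in a few lines. The scores and flood labels below are invented for illustration, and the threshold sweep is a simplified stand-in for a full precision-recall curve, not the paper's actual evaluation pipeline.

```python
import numpy as np

def precision_recall_points(scores, labels, thresholds):
    """Compute (precision, recall) at each classification threshold."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    points = []
    for t in thresholds:
        preds = scores >= t          # predict "flood-susceptible" above threshold
        tp = np.sum(preds & labels)  # correctly flagged sites
        fp = np.sum(preds & ~labels) # false alarms
        fn = np.sum(~preds & labels) # missed floods
        precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        points.append((precision, recall))
    return points

# Invented example: model scores for four stream-gauge sites and whether
# each actually experienced a flood event.
scores = [0.9, 0.8, 0.4, 0.2]
flooded = [True, True, False, True]

for t, (p, r) in zip([0.5, 0.3], precision_recall_points(scores, flooded, [0.5, 0.3])):
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

Sweeping the threshold over all observed scores traces out the full curve; in practice a library routine such as scikit-learn's `precision_recall_curve` does this, and a naive baseline (e.g. scoring every site by its historical flood rate) gives the reference curve to beat.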