Conditional diffusion models for downscaling & bias correction of Earth system model precipitation
Aich, Michael, Hess, Philipp, Pan, Baoxiang, Bathiany, Sebastian, Huang, Yu, Boers, Niklas
Climate change exacerbates extreme weather events like heavy rainfall and flooding. Because these events cause severe losses of property and life, accurate high-resolution simulation of precipitation is imperative. However, existing Earth System Models (ESMs) struggle to resolve small-scale dynamics and suffer from biases, especially for extreme events. Traditional statistical bias-correction and downscaling methods fall short in improving spatial structure, while recent deep learning methods lack controllability over the output and suffer from unstable training. Here, we propose a novel machine learning framework for simultaneous bias correction and downscaling. We train a generative diffusion model in a supervised way purely on observational data. We map observational and ESM data to a shared embedding space, in which the two are unbiased with respect to each other, and train a conditional diffusion model to reverse the mapping. Our method can be used to correct any ESM field, as the training is independent of the ESM. Our approach ensures statistical fidelity, preserves large-scale spatial patterns, and outperforms existing methods, especially for extreme events and the small-scale spatial features that are crucial for impact assessments.
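The training objective the abstract alludes to can be illustrated with a toy denoising-diffusion step. This is a minimal sketch under assumed conventions (linear beta schedule, noise-prediction objective); the variable names, the placeholder "network", and the block-mean coarsening that stands in for an ESM-like conditioning field are all illustrative, not the paper's actual architecture or embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (assumed convention, not from the paper).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention

def q_sample(x0, t, eps):
    """Forward noising: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

def noise_mse(x0, cond, t):
    """One supervised training target: predict the injected noise eps from
    (x_t, cond, t). The 'network' here is a trivial zero predictor, used
    only to show the shape of the objective."""
    eps = rng.standard_normal(x0.shape)
    x_t = q_sample(x0, t, eps)
    eps_hat = np.zeros_like(x_t)  # placeholder for a conditional denoiser
    return np.mean((eps - eps_hat) ** 2)

x0 = rng.standard_normal((8, 8))             # toy "observed" fine patch
cond = x0.reshape(4, 2, 4, 2).mean((1, 3))   # coarse conditioning field
loss = noise_mse(x0, cond, 500)
```

In a real setup, `eps_hat` would come from a neural network that receives `x_t`, the timestep `t`, and the coarse conditioning field, and sampling would run the learned reverse process to generate high-resolution fields consistent with the coarse input.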
Artificial Intelligence: Good Aim, Wrong Target
Good aim at the wrong target is always a miss. This describes much of the current work in Artificial Intelligence: brilliant minds, clever programmers, and amazing algorithms, all pointed at the wrong target with stupefying aim. Despite their brilliance, cleverness, and coding, someone will get hurt if we continue pursuing the type of AI in vogue. A few weeks ago, Google put out guidance for research on preventing harm from AI. This last week, the Federal Government did the same.