Extreme events evaluation using CRPS distributions

Taillardat, Maxime, Fougères, Anne-Laure, Naveau, Philippe, de Fondeville, Raphaël

arXiv.org, Machine Learning

The quality of a forecast is often summarized by a single scalar. For example, to identify the best forecast, one classically averages a proper scoring rule over a validation period (see, e.g., Matheson and Winkler, 1976; Gneiting and Raftery, 2007; Schervish et al., 2009; Tsyplakov, 2013). Proper scoring rules can be decomposed in terms of reliability, uncertainty, and resolution; examples of such decompositions can be found in Hersbach (2000) and Candille and Talagrand (2005). Bröcker (2015) showed that resolution is strongly linked with discrimination. Resolution and reliability can also be merged into the single notion of calibration, and Gneiting et al. (2007) suggested maximizing sharpness subject to calibration. Sharpness refers to the spread of the forecast and is a property of the forecast alone. In the verification of ensemble forecasts, the most popular scoring rule is the Continuous Ranked Probability Score (CRPS) (see, e.g., Epstein, 1969; Hersbach, 2000; Bröcker, 2012), which can be defined as

\mathrm{CRPS}(F, y) = \int_{-\infty}^{+\infty} \left( F(x) - \mathbb{1}\{y \le x\} \right)^2 \, dx,

where F is the predictive cumulative distribution function and y is the observed value.
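For concreteness, the CRPS also admits the kernel (energy) representation \mathrm{CRPS}(F, y) = \mathbb{E}_F|X - y| - \tfrac{1}{2}\,\mathbb{E}_F|X - X'| (Gneiting and Raftery, 2007), which yields a simple empirical estimator for an m-member ensemble. The following Python sketch illustrates that estimator; the function name crps_ensemble is hypothetical and this is not code from the paper.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast against a scalar observation.

    Uses the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|, estimated by
    (1/m) sum_i |x_i - y| - (1/(2 m^2)) sum_{i,j} |x_i - x_j|.
    (Hypothetical helper, a minimal sketch of the standard estimator.)
    """
    members = np.asarray(members, dtype=float)
    # First term: mean absolute error of the members against the observation.
    term1 = np.mean(np.abs(members - obs))
    # Second term: mean absolute pairwise difference among members.
    term2 = np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - 0.5 * term2

# Example: score a 10-member Gaussian ensemble against one observed value.
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=1.0, scale=2.0, size=10)
print(crps_ensemble(ensemble, obs=0.5))
```

Lower CRPS values indicate a better forecast; averaging such scores over a validation period gives the scalar summary discussed above.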
