Extreme events evaluation using CRPS distributions
Taillardat, Maxime, Fougères, Anne-Laure, Naveau, Philippe, de Fondeville, Raphaël
The quality of a forecast is often summarized by a single scalar. For example, to identify the best forecast, one classically takes the mean of a proper scoring rule over a validation period (see, e.g., Matheson and Winkler, 1976; Gneiting and Raftery, 2007; Schervish et al., 2009; Tsyplakov, 2013). Proper scoring rules can be decomposed in terms of reliability, uncertainty and resolution. Several examples of such decompositions can be found in Hersbach (2000) and Candille and Talagrand (2005). Bröcker (2015) showed that resolution is strongly linked with discrimination. Resolution and reliability can also be merged into the term calibration, and Gneiting et al. (2007) suggested maximizing sharpness subject to calibration. Note that sharpness is the spread of the forecast, and it is a property of the forecast only. In ensemble forecast verification, the most popular scoring rule is the Continuous Ranked Probability Score (CRPS) (see, e.g., Epstein, 1969; Hersbach, 2000; Bröcker, 2012), and it can be defined as CRPS(F, y) = ∫ (F(x) − 1{y ≤ x})² dx, where F is the forecast distribution function and y the observation.
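As a concrete illustration (not part of the original abstract), the CRPS of an ensemble forecast can be estimated from its members through the well-known kernel (energy) representation, CRPS(F, y) = E|X − y| − ½ E|X − X′|, where X and X′ are independent draws from F. A minimal sketch, with hypothetical function and argument names:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast via the kernel form:
    CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|.
    `members` is a 1-D array of ensemble member values, `obs` the observation.
    """
    members = np.asarray(members, dtype=float)
    # Mean absolute deviation of the members from the observation
    term1 = np.mean(np.abs(members - obs))
    # Mean absolute pairwise difference between members (spread penalty)
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Example: a two-member ensemble {0, 1} verified against y = 0.5
print(crps_ensemble([0.0, 1.0], 0.5))  # 0.25
```

Averaging this quantity over a validation period gives the scalar summary discussed above; a lower mean CRPS indicates a better forecast.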
May-10-2019