We present the winning strategy of an electricity demand forecasting competition. This competition was organized to design new forecasting methods for unstable periods such as the one starting in Spring 2020. We rely on state-space models to adapt standard statistical and machine learning models, and we claim that this achieves the right compromise between two extremes. On the one hand, purely time-series models such as autoregressive models are adaptive in essence but fail to capture dependence on exogenous variables. On the other hand, machine learning methods can learn complex dependence on explanatory variables from a historical data set but fail to forecast non-stationary data accurately. The evaluation period of the competition was an occasion for trial and error, and we focus here on the final forecasting procedure. In particular, a recent algorithm to adapt the variances of a state-space model was designed during that same period, and we present the results of the final version only. We nonetheless discuss the day-to-day predictions.
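The state-space adaptation of a fixed model can be sketched, under simplifying assumptions, as a Kalman-filter update of the coefficients of a linear layer on top of given features. This is an illustrative sketch only, not the competition procedure: the function name, the random-walk state equation, and the noise parameters `sigma2` and `q` are all assumptions.

```python
import numpy as np

def kalman_adaptive_regression(X, y, sigma2=1.0, q=1e-3):
    """One-step-ahead forecasts from a linear model y_t = x_t' theta_t + eps_t
    whose coefficients follow a random walk theta_t = theta_{t-1} + eta_t.
    sigma2 (observation noise) and q (state noise) are illustrative values."""
    n, d = X.shape
    theta = np.zeros(d)              # state mean (current coefficients)
    P = np.eye(d)                    # state covariance
    Q = q * np.eye(d)                # state-noise covariance
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        P = P + Q                    # predict: coefficients may have drifted
        preds[t] = x @ theta         # one-step-ahead forecast
        S = x @ P @ x + sigma2       # innovation variance
        K = P @ x / S                # Kalman gain
        theta = theta + K * (y[t] - preds[t])   # update coefficients
        P = P - np.outer(K, x @ P)
    return preds, theta
```

The larger `q` is relative to `sigma2`, the faster the coefficients track regime changes such as the Spring 2020 break, at the cost of noisier estimates.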
We present a simple quantile regression-based forecasting method that was applied in the probabilistic load forecasting framework of the Global Energy Forecasting Competition 2017 (GEFCom2017). The hourly load data is log-transformed and split into a long-term trend component and a remainder term. The key forecasting element is the quantile regression approach for the remainder term, which takes into account weekly and annual seasonalities as well as their interactions. Temperature information is only used to stabilize the forecast of the long-term trend component, and public holiday information is ignored. Still, the forecasting method placed second in the open data track and fourth in the defined data track, which is remarkable given the simplicity of the model. The method also consistently outperforms the Vanilla benchmark.
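The decomposition pipeline can be sketched as follows. This is a heavily simplified stand-in: it uses a one-week moving average for the long-term trend and empirical hour-of-week quantiles of the remainder, whereas the paper fits a quantile regression with weekly and annual seasonalities and their interactions; the function name and all parameter values are illustrative.

```python
import numpy as np

def forecast_quantiles(load, horizon=168, taus=(0.1, 0.5, 0.9)):
    """Log-transform the hourly load, split off a long-term trend, and
    forecast the remainder's quantiles by hour-of-week (simplified)."""
    z = np.log(load)
    kernel = np.ones(168) / 168
    trend = np.convolve(z, kernel, mode="same")     # crude one-week trend
    remainder = z - trend
    how = np.arange(len(z)) % 168                   # hour-of-week index
    future_how = np.arange(len(z), len(z) + horizon) % 168
    fc = {}
    for tau in taus:
        # empirical tau-quantile of the remainder for each hour of the week
        q = np.array([np.quantile(remainder[how == h], tau) for h in range(168)])
        # extrapolate the trend with its last value (simplified), undo the log
        fc[tau] = np.exp(trend[-1] + q[future_how])
    return fc
```

Because all quantiles share the same trend term, only the remainder model carries the distributional information, mirroring the split described above.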
Combination and aggregation techniques can substantially improve forecast accuracy. This also holds for probabilistic forecasting methods, where full predictive distributions are combined. Several time-varying and adaptive weighting schemes exist, such as Bayesian model averaging (BMA). However, the performance of different forecasters may vary not only over time but also across parts of the distribution: one forecaster may be more accurate in the center of the distribution, while others perform better in predicting its tails. Consequently, we introduce a new weighting procedure that considers varying performance across both time and the distribution. We discuss pointwise online aggregation algorithms that optimize with respect to the continuous ranked probability score (CRPS). After analyzing the theoretical properties of a fully adaptive Bernstein online aggregation (BOA) method, we introduce smoothing procedures for pointwise CRPS learning. The properties are confirmed and discussed in simulation studies. Additionally, we illustrate the performance in a forecasting study for carbon markets, where we predict the distribution of European emission allowance prices.
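The idea of pointwise weights can be sketched by keeping a separate expert-weight vector per quantile level and updating each with the pinball loss, whose average over levels approximates the CRPS. This sketch uses a plain exponentially weighted average as a simplified stand-in for the paper's Bernstein online aggregation; the learning rate `eta` and the array shapes are assumptions.

```python
import numpy as np

def pinball(q_pred, y, tau):
    """Pinball (quantile) loss; its average over tau approximates the CRPS."""
    return np.where(y >= q_pred, tau * (y - q_pred), (1 - tau) * (q_pred - y))

def pointwise_ewa(forecasts, y, taus, eta=1.0):
    """Exponentially weighted aggregation with one weight vector per quantile
    level, so experts good in the tails get high weight only there.
    forecasts: (T, K, M) array of K experts' forecasts at M quantile levels."""
    T, K, M = forecasts.shape
    w = np.full((K, M), 1.0 / K)                    # per-level expert weights
    combined = np.empty((T, M))
    for t in range(T):
        combined[t] = np.einsum("km,km->m", w, forecasts[t])
        loss = pinball(forecasts[t], y[t], taus)    # (K, M) pointwise losses
        w *= np.exp(-eta * loss)                    # exponential weight update
        w /= w.sum(axis=0, keepdims=True)           # renormalize per level
    return combined
```

An expert that is accurate only in the center keeps a high weight at central `taus` while losing weight at the tail levels, which is exactly the flexibility motivated above.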
This article proposes a two-dimensional classification methodology to select the relevant forecasting tools developed by the scientific community, based on a classification of load forecasting studies. The inputs of the classifier are articles from the literature, and the outputs are the articles sorted into categories. The classification process relies on two couples of parameters that define a forecasting problem. The temporal couple is the forecasting horizon and the forecasting resolution; the system couple is the system size and the load resolution. Each article is classified with key information about the dataset used and the forecasting tools implemented: the forecasting techniques (probabilistic or deterministic) and methodologies, the data cleansing techniques, and the error metrics. This process is illustrated by reviewing and classifying thirty-four articles.
We present a comparative study of different probabilistic forecasting techniques on the task of predicting the electrical load of secondary substations and cabinets located in a low voltage distribution grid, as well as their aggregated power profile. The methods are evaluated using standard KPIs for deterministic and probabilistic forecasts. We also compare the ability of different hierarchical techniques to improve the bottom-level forecasters' performance. Both the raw and cleaned datasets, including meteorological data, are made publicly available to provide a standard benchmark for evaluating forecasting algorithms for demand-side management applications. The increasing monitoring capacity in low voltage (LV) and medium voltage (MV) distribution systems allows operators to gather power measurements from different levels of aggregation within the power grid. For instance, smart meters provide measurements from single households or buildings, dedicated power meters or phasor measurement units from secondary substations, and remote terminal units from primary substations at the interface between distribution and (sub)transmission systems. For example, in a radial distribution system, the power flow at the grid connection point is, net of grid losses, the sum of the downstream elements. In the case of forecasts, however, the top-level series forecasted using the information at that level of aggregation does not necessarily equal the sum of the bottom-level forecasts, thus invalidating the principle of hierarchy.
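The coherence problem can be illustrated numerically. Bottom-up summation is one standard way to restore it; the paper compares several hierarchical techniques and does not necessarily use this one, and the numbers below are made up.

```python
import numpy as np

def bottom_up(bottom_forecasts):
    """Bottom-up reconciliation: define the upper-level forecast as the sum
    of the bottom-level ones, so the hierarchy holds by construction."""
    return bottom_forecasts.sum(axis=-1)

# Forecasts produced independently at each level are generally incoherent:
bottom = np.array([[10.0, 20.0, 30.0]])   # e.g. three secondary substations
top_own = np.array([58.0])                # top-level model's own forecast
print(np.allclose(top_own, bottom.sum(axis=-1)))   # False: hierarchy violated
print(bottom_up(bottom))                           # [60.]: coherent by construction
```

Bottom-up reconciliation discards the top-level model's own forecast, which may be more accurate than the sum; more refined reconciliation techniques trade off information from both levels.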