Collaborating Authors

 Stevanovic, Dalibor


Can Machine Learning Catch the COVID-19 Recession?

arXiv.org Machine Learning

Forecasting economic developments during crisis times is problematic, since the realizations of the variables are far away from their average values, while econometric models are typically better at explaining and predicting values close to the average, particularly in the case of linear models. The situation is even worse for the Covid-19-induced recession, when typically well-performing econometric models such as Bayesian VARs with stochastic volatility have trouble tracking the unprecedented fall in real activity and labour market indicators; see, for the US, Carriero et al. (2020) and Plagborg-Møller et al. (2020), or An and Loungani (2020) for an analysis of the past performance of the Consensus Forecasts. As a partial solution, Foroni et al. (2020) employ simple mixed-frequency models to nowcast and forecast quarterly GDP growth rates for the US and the rest of the G7, using common monthly indicators such as industrial production, surveys, and the slope of the yield curve. They then adjust the forecasts by a specific form of intercept correction or by estimation via the similarity approach (see Clements and Hendry (1999) and Dendramis et al. (2020)), showing that the former can reduce the extent of the forecast error during the Covid-19 period. Schorfheide and Song (2020) do not include COVID periods in the estimation of a mixed-frequency VAR model, because those observations substantially alter the forecasts. An alternative approach is the specification of sophisticated nonlinear or time-varying models. While this is not without perils when used on short economic time series, it can yield some gains, see e.g.


How is Machine Learning Useful for Macroeconomic Forecasting?

arXiv.org Machine Learning

We move beyond "Is Machine Learning Useful for Macroeconomic Forecasting?" by adding the "how". The current forecasting literature has focused on matching specific variables and horizons with a particularly successful algorithm. In contrast, we study the usefulness of the underlying features driving ML gains over standard macroeconometric methods. We distinguish four so-called features (nonlinearities, regularization, cross-validation and an alternative loss function) and study their behavior in both data-rich and data-poor environments. To do so, we design experiments that allow us to identify the "treatment" effects of interest. We conclude that (i) nonlinearity is the true game changer for macroeconomic prediction, (ii) the standard factor model remains the best regularization, (iii) K-fold cross-validation is the best practice and (iv) the $L_2$ loss is preferred to the $\bar \epsilon$-insensitive loss in-sample. The forecasting gains of nonlinear techniques are associated with high macroeconomic uncertainty, financial stress and housing bubble bursts. This suggests that Machine Learning is useful for macroeconomic forecasting mostly by capturing important nonlinearities that arise in the context of uncertainty and financial frictions.
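As a minimal illustration of the fourth feature, the sketch below (not from the paper; the error grid and the tube width `eps=0.5` are arbitrary choices for illustration) contrasts the squared $L_2$ loss with the $\epsilon$-insensitive loss familiar from support vector regression, which ignores small in-sample errors inside the tube and penalizes larger ones only linearly:

```python
import numpy as np

def l2_loss(e):
    # Squared-error loss: every deviation is penalized, quadratically.
    return e ** 2

def eps_insensitive(e, eps=0.5):
    # Epsilon-insensitive loss (SVR-style): zero inside the eps-tube,
    # linear in |e| - eps outside it.
    return np.maximum(np.abs(e) - eps, 0.0)

# Illustrative in-sample errors of a hypothetical forecast.
errors = np.array([-1.0, -0.3, 0.0, 0.4, 2.0])

print(l2_loss(errors))         # [1.   0.09 0.   0.16 4.  ]
print(eps_insensitive(errors)) # [0.5 0.  0.  0.  1.5]
```

The quadratic loss weights large crisis-time errors much more heavily, which is one intuition for why the choice of in-sample loss can matter for macroeconomic forecasting performance.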


Macroeconomic Data Transformations Matter

arXiv.org Machine Learning

Following the recent enthusiasm for Machine Learning (ML) methods and the widespread availability of big data, macroeconomic forecasting research has gradually evolved further and further away from the traditional tightly specified OLS regression. Rather, nonparametric nonlinearity and regularization of many forms are slowly taking center stage, largely because they can provide sizable forecasting gains with respect to traditional methods (see, among others, Kim and Swanson (2018); Medeiros et al. (2019); Goulet Coulombe et al. (2020); Goulet Coulombe (2020a)). In such environments, different linear transformations of the informational set X can change the prediction, and taking first differences may not be the optimal transformation for many predictors, despite the fact that it guarantees viable frequentist inference. For instance, in penalized regression problems, like Lasso or Ridge, different rotations of X imply different priors on β in the original regressor space.
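A minimal numpy sketch of this point (not from the paper; the simulated data and penalty λ are arbitrary): OLS fitted values are invariant to any invertible linear transformation of X, whereas Ridge fitted values are not, because the L2 penalty, and hence the implied prior on β, is expressed in the transformed coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge_fitted(X, y, lam):
    # Closed-form ridge: beta = (X'X + lam*I)^{-1} X'y; lam=0 gives OLS.
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ beta

lam = 10.0

# An invertible linear transformation of X: rescale the second column.
A = np.diag([1.0, 100.0, 1.0])

# OLS fitted values are unchanged by the transformation...
print(np.allclose(ridge_fitted(X, y, 0.0), ridge_fitted(X @ A, y, 0.0)))  # True

# ...but ridge fitted values are not: the penalty now shrinks
# a rescaled coefficient, i.e. a different prior on beta.
print(np.allclose(ridge_fitted(X, y, lam), ridge_fitted(X @ A, y, lam)))  # False
```

Rescaling is the simplest such transformation; the same logic applies to first-differencing or any other linear rotation of the predictor set.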