Reviews: Practical Deep Learning with Bayesian Principles

Neural Information Processing Systems

The paper demonstrates that the Variational Online Gauss-Newton (VOGN) method of Khan et al. (2018) can be successfully scaled to deep learning architectures, showing that Bayesian methods can handle large-scale data such as ImageNet. Extensive experiments on large-scale datasets and models are provided. The main contribution is an adaptation of an existing method (VOGN) that makes it practical for deep learning.


Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets

Magris, Martin, Shabani, Mostafa, Iosifidis, Alexandros

arXiv.org Machine Learning

The prediction of financial markets is a challenging yet important task. In modern electronically-driven markets, traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics. While recent research has established the effectiveness of traditional machine learning (ML) models in financial applications, their intrinsic inability to deal with uncertainties, which is a great concern in econometrics research and real business applications, constitutes a major drawback. Bayesian methods naturally appear as a suitable remedy, combining the predictive ability of ML methods with the probabilistically-oriented practice of econometric research. By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention, suitable for the challenging time-series task of predicting mid-price movements in ultra-high-frequency limit-order book markets. We thoroughly compare our Bayesian model with traditional ML alternatives by addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts. Our results underline the feasibility of the Bayesian deep-learning approach and its predictive and decisional advantages in complex econometric tasks, prompting future research in this direction.


Practical Deep Learning with Bayesian Principles

Osawa, Kazuki, Swaroop, Siddharth, Jain, Anirudh, Eschenhagen, Runa, Turner, Richard E., Yokota, Rio, Khan, Mohammad Emtiyaz

arXiv.org Machine Learning

Bayesian methods promise to fix many shortcomings of deep learning, but they are impractical and rarely match the performance of standard methods, let alone improve them. In this paper, we demonstrate practical training of deep networks with natural-gradient variational inference. By applying techniques such as batch normalisation, data augmentation, and distributed training, we achieve similar performance in about the same number of epochs as the Adam optimiser, even on large datasets such as ImageNet. Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated and uncertainties on out-of-distribution data are improved. This work enables practical deep learning while preserving benefits of Bayesian principles. A PyTorch implementation will be available as a plug-and-play optimiser.
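To make the natural-gradient variational inference idea concrete, here is a minimal sketch of a VOGN-style update on a toy logistic-regression problem. It follows the general recipe in Khan et al. (2018): maintain a diagonal Gaussian posterior over the weights, sample weights from it, and precondition the gradient with a running average of squared per-example gradients (a Gauss-Newton-style curvature estimate). The hyperparameter names (`lr`, `beta`, `prior_prec`) and the toy data are illustrative assumptions, not the papers' actual setup.

```python
# Hedged sketch of a VOGN-style step (Khan et al., 2018) on toy data.
# Not the papers' implementation; diagonal posterior, one MC sample.
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression data (illustrative, not from the papers).
N, D = 200, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.5, -2.0, 0.5])
y = (1 / (1 + np.exp(-X @ w_true)) > rng.uniform(size=N)).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def nll(w):
    # Mean negative log-likelihood of the logistic model.
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Variational posterior q(w) = N(mu, diag(sigma^2)), with the precision
# built from the curvature estimate s and the prior precision.
mu = np.zeros(D)
s = np.ones(D)                  # running squared-gradient (curvature) estimate
lr, beta, prior_prec = 0.1, 0.9, 1.0

loss_before = nll(mu)
for _ in range(200):
    # Sample weights from the current posterior (one Monte Carlo sample).
    sigma2 = 1.0 / (N * (s + prior_prec / N))
    w = mu + np.sqrt(sigma2) * rng.normal(size=D)
    # Per-example gradients of the negative log-likelihood.
    err = sigmoid(X @ w) - y            # shape (N,)
    g_i = err[:, None] * X              # shape (N, D)
    g = g_i.mean(axis=0)
    # VOGN averages squared *per-example* gradients (unlike Adam, which
    # squares the averaged gradient) as a Gauss-Newton curvature proxy.
    h = (g_i ** 2).mean(axis=0)
    s = (1 - beta) * h + beta * s
    # Preconditioned (natural-gradient-style) update of the mean.
    mu = mu - lr * (g + prior_prec * mu / N) / (s + prior_prec / N)

loss_after = nll(mu)
```

The resulting `mu` and `sigma2` give a Gaussian approximate posterior, so predictive probabilities can be averaged over weight samples rather than taken from a single point estimate, which is the source of the calibration benefits the abstract describes.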