Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset

Galashov, Alexandre, Titsias, Michalis K., György, András, Lyle, Clare, Pascanu, Razvan, Teh, Yee Whye, Sahani, Maneesh

arXiv.org Artificial Intelligence

Neural networks are traditionally trained under the assumption that data come from a stationary distribution. However, settings that violate this assumption are becoming more common; examples include supervised learning under distributional shift, reinforcement learning, continual learning, and non-stationary contextual bandits. In this work we introduce a novel learning approach that automatically models and adapts to non-stationarity, via an Ornstein-Uhlenbeck process with an adaptive drift parameter. The adaptive drift tends to draw the parameters towards the initialisation distribution, so the approach can be understood as a form of soft parameter reset. We show empirically that our approach performs well in non-stationary supervised and off-policy reinforcement learning settings.
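The core dynamic described in the abstract can be sketched in a few lines: after each gradient step, an Ornstein-Uhlenbeck-style drift term pulls the parameters back towards their initialisation. This is a minimal illustrative sketch, not the paper's method — in the paper the drift parameter is adapted automatically, whereas here `drift` is a fixed hypothetical hyperparameter, and `soft_reset_step` is a name invented for illustration.

```python
import numpy as np

def soft_reset_step(theta, theta_init, grad, lr=1e-2, drift=0.1,
                    noise_scale=0.0, rng=None):
    """One training step with an OU-style soft parameter reset.

    After a standard SGD update, a drift term pulls the parameters
    towards their initialisation (the "soft reset"), with optional
    diffusion noise. In the paper the drift strength is adapted
    online; here it is a fixed constant for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    theta = theta - lr * grad                      # standard gradient step
    theta = theta + drift * (theta_init - theta)   # soft reset towards init
    theta = theta + noise_scale * rng.standard_normal(theta.shape)  # OU noise
    return theta

theta_init = np.zeros(4)
theta = np.array([2.0, -1.0, 0.5, 3.0])   # parameters that have drifted far
grad = np.zeros(4)                         # no gradient signal: pure reset dynamics
for _ in range(50):
    theta = soft_reset_step(theta, theta_init, grad, drift=0.1, noise_scale=0.0)
print(np.abs(theta).max())  # parameters have decayed most of the way back to init
```

With zero gradient and zero noise the update is a geometric contraction towards `theta_init` at rate `(1 - drift)` per step, which is why the approach behaves like a *soft* reset rather than a hard re-initialisation.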


Scoring Rules for Performative Binary Prediction

Chan, Alan

arXiv.org Artificial Intelligence

We construct a model of expert prediction where predictions can influence the state of the world. Under this model, we show through theoretical and numerical results that proper scoring rules can incentivize experts to manipulate the world with their predictions. We also construct a simple class of scoring rules that avoids this problem.
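The incentive problem the abstract describes can be reproduced numerically in a toy performative world. This is a hypothetical illustration, not the paper's model: assume the expert's prediction `p` itself sets the event probability (q(p) = p), and score with the proper Brier rule. The expected reward is then -p(1-p), which is maximised at an extreme prediction rather than at any interior belief.

```python
import numpy as np

def expected_brier(p, q_of_p):
    """Expected Brier reward -(y - p)^2 when the prediction p itself
    determines the event probability q(p) (a performative world)."""
    q = q_of_p(p)
    return -(q * (1 - p) ** 2 + (1 - q) * p ** 2)

# Toy fully-performative world: predicting p makes the event occur w.p. p.
q_of_p = lambda p: p

grid = np.linspace(0.0, 1.0, 101)
scores = np.array([expected_brier(p, q_of_p) for p in grid])
best = grid[np.argmax(scores)]
# The proper (Brier) score is maximised at an extreme prediction (0 or 1),
# not at an interior belief: the expert is rewarded for manipulating the world.
print(best, scores.max())
```

This matches the abstract's claim that proper scoring rules, which are honesty-inducing when the world is fixed, can reward manipulation once predictions influence outcomes.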