On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay
Lobacheva, Ekaterina; Kodryan, Maxim; Chirkova, Nadezhda; Malinin, Andrey; Vetrov, Dmitry
Despite the conventional wisdom that using batch normalization with weight decay may improve neural network training, some recent works show that their joint use may cause instabilities at the late stages of training. Other works, in contrast, show convergence to an equilibrium, i.e., the stabilization of training metrics. In this paper, we study this contradiction and show that instead of converging to a stable equilibrium, the training dynamics converge to consistent periodic behavior. That is, the training process regularly exhibits instabilities which, however, do not lead to complete training failure but instead trigger a new period of training. We rigorously investigate the mechanism underlying this discovered periodic behavior from both empirical and theoretical points of view and show that it is indeed caused by the interaction between batch normalization and weight decay.
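As a rough illustration of the mechanism (a toy simulation under the standard scale-invariance argument, not the paper's experiments): batch normalization makes the loss invariant to the scale of the weights, so the gradient is orthogonal to the weight vector with magnitude inversely proportional to its norm, and the effective learning rate scales as eta/||w||^2. Weight decay shrinks the norm, driving the effective learning rate up. All constants in the sketch below are illustrative choices.

```python
# Toy model of weight-norm dynamics under batch norm + weight decay.
# With BN the loss is scale-invariant in w, so the gradient is orthogonal
# to w with magnitude c / ||w|| for some c; SGD with weight decay then gives
# ||w_{t+1}||^2 = (1 - eta*lam)^2 * ||w_t||^2 + (eta*c)^2 / ||w_t||^2.
eta, lam, c = 0.1, 0.01, 1.0   # learning rate, weight decay, gradient scale
norm_sq = 25.0                 # initial squared weight norm ||w||^2
for t in range(1001):
    if t % 200 == 0:
        print(f"step {t:4d}: ||w||^2 = {norm_sq:7.3f}, eff. lr = {eta / norm_sq:.5f}")
    norm_sq = (1 - eta * lam) ** 2 * norm_sq + (eta * c) ** 2 / norm_sq
```

In this one-dimensional model the norm simply settles at an equilibrium; the paper's point is that in real networks training destabilizes once the effective learning rate at small norms grows too large, and the resulting gradient burst inflates the norm again, starting the next period.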
Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits
Guo, Wenshuo; Agrawal, Kumar Krishna; Grover, Aditya; Muthukumar, Vidya; Pananjady, Ashwin
We introduce the "inverse bandit" problem of estimating the rewards of a multi-armed bandit instance from observing the learning process of a low-regret demonstrator. Existing approaches to the related problem of inverse reinforcement learning assume the execution of an optimal policy, and thereby suffer from an identifiability issue. In contrast, our paradigm leverages the demonstrator's behavior en route to optimality, and in particular, the exploration phase, to obtain consistent reward estimates. We develop simple and efficient reward estimation procedures for demonstrations within a class of upper-confidence-based algorithms, showing that reward estimation gets progressively easier as the regret of the algorithm increases. We match these upper bounds with information-theoretic lower bounds that apply to any demonstrator algorithm, thereby characterizing the optimal tradeoff between exploration and reward estimation. Extensive empirical evaluations on both synthetic data and simulated experimental design data from the natural sciences corroborate our theoretical results.
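For intuition about why exploration helps the observer, here is a minimal sketch, not the paper's estimators: the Bernoulli instance, the horizon, and the simplifying assumption that the observer sees the demonstrator's rewards alongside its actions are all hypothetical choices made here. It runs a UCB1 demonstrator and forms plug-in estimates of each arm's mean from the observed trajectory; arms pulled more often during exploration are estimated more accurately.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.9, 0.5, 0.4])       # hypothetical Bernoulli arm means
T = 5000                                # demonstration length

counts = np.zeros(3)                    # demonstrator's pull counts
sums = np.zeros(3)                      # demonstrator's reward sums
for t in range(1, T + 1):
    if t <= 3:
        arm = t - 1                     # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))       # UCB1 index policy
    reward = float(rng.random() < means[arm])
    counts[arm] += 1
    sums[arm] += reward

# Observer's plug-in estimate from the demonstrator's trajectory.
est = sums / counts
print("pulls per arm:     ", counts)
print("estimation error:  ", np.abs(est - means))
```

A lower-regret demonstrator pulls the suboptimal arms less often, so their estimates degrade, which is the exploration/estimation tradeoff the abstract describes.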
Evaluation Metrics for Recommender Systems – Towards Data Science
Recommender systems are growing progressively more popular in online retail because of their ability to offer personalized experiences to unique users. Mean Average Precision at K (MAP@K) is typically the metric of choice for evaluating the performance of a recommender system. However, the use of additional diagnostic metrics and visualizations can offer deeper and sometimes surprising insights into a model's performance. This article explores Mean Average Recall at K (MAR@K), Coverage, Personalization, and Intra-list Similarity, and uses these metrics to compare three simple recommender systems. If you would like to use any of the metrics or plots discussed in this article, I have made them all available in a Python library, recmetrics.
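For concreteness, here is a hand-rolled sketch of two of these metrics; it uses a Jaccard-based definition of personalization chosen for simplicity and is not the recmetrics implementation.

```python
from itertools import combinations

def coverage(recommendations, catalog):
    """Fraction of catalog items that appear in at least one user's list."""
    recommended = {item for recs in recommendations for item in recs}
    return len(recommended & set(catalog)) / len(catalog)

def personalization(recommendations):
    """1 minus the mean pairwise Jaccard overlap between users' lists.

    Higher values mean users receive more dissimilar recommendations.
    """
    overlaps = [
        len(set(a) & set(b)) / len(set(a) | set(b))
        for a, b in combinations(recommendations, 2)
    ]
    return 1 - sum(overlaps) / len(overlaps)

# Toy example: three users' top-3 lists over a ten-item catalog.
recs = [["a", "b", "c"], ["a", "b", "d"], ["e", "f", "g"]]
print(coverage(recs, list("abcdefghij")))   # 0.7
print(personalization(recs))                # ~0.83
```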
Machine Learning Internship in Pune at Edufyy Learning Solutions
Edufyy Smart Learning Technologies is an early-stage startup in education technology. We are building a machine learning platform for competitive examination preparation. Our vision is to provide the best quality on-demand education. Through machine learning technology, we aim to improve the learning outcomes of our learners. Only those candidates who can start the internship between 27th Feb'17 and 29th Mar'17 may apply.