Deep Q-learning from Demonstrations

arXiv.org Artificial Intelligence

Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable in a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages even relatively small sets of demonstration data to massively accelerate the learning process, and that automatically assesses the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN), starting with better scores on the first million steps on 41 of 42 games; on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to outperform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
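
To make the combination of TD updates and supervised classification concrete, here is a minimal NumPy sketch of a per-transition DQfD-style loss. It keeps only the 1-step TD term and the paper's large-margin supervised term; the n-step return and L2 regularization terms from the paper are omitted, and the function name, margin, and loss weight are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def dqfd_loss(q, q_next, action, reward, expert_action=None,
              gamma=0.99, margin=0.8, lambda_e=1.0):
    """Per-transition DQfD-style loss (illustrative sketch).

    q             -- Q-values for the current state, shape (n_actions,)
    q_next        -- Q-values for the next state, shape (n_actions,)
    expert_action -- demonstrator's action, or None for self-generated data
    """
    # Standard 1-step temporal-difference error, squared (as in DQN).
    td_target = reward + gamma * np.max(q_next)
    td_loss = (td_target - q[action]) ** 2

    # Large-margin supervised term, applied only to demonstration data:
    # it pushes the expert's action to score at least `margin` above
    # every other action, so the policy imitates the demonstrator.
    supervised_loss = 0.0
    if expert_action is not None:
        margins = np.full_like(q, margin, dtype=float)
        margins[expert_action] = 0.0  # no margin against the expert's action
        supervised_loss = np.max(q + margins) - q[expert_action]

    return td_loss + lambda_e * supervised_loss
```

On self-generated transitions the supervised term vanishes, so the same replay buffer can mix demonstration and agent data, with prioritized replay deciding the effective ratio between the two.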


Accelerated Training for Matrix-norm Regularization: A Boosting Approach

Neural Information Processing Systems

Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as the trace norm. Although recent developments in sparse approximation have offered promising solution methods, current approaches either apply only to matrix-norm constrained problems or provide suboptimal convergence rates. In this paper, we propose a boosting method for regularized learning that guarantees $\epsilon$ accuracy within $O(1/\epsilon)$ iterations. Performance is further accelerated by interlacing boosting with fixed-rank local optimization---exploiting a simpler local objective than previous work. The proposed method yields state-of-the-art performance on large-scale problems.
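
As a rough illustration of the boosting idea (not the paper's algorithm), the sketch below treats each rank-one matrix built from the top singular pair of the negative gradient as a "weak learner" and blends it in with a diminishing step size. The paper works with the penalized objective, optimizes the weak learners' weights properly, and interlaces fixed-rank local optimization between boosting steps; all of that is simplified away here.

```python
import numpy as np

def rank_one_boosting(grad_fn, shape, n_iters=100):
    """Boosting for trace-norm-regularized learning (illustrative sketch).

    Each iteration adds one rank-one 'weak learner': the best rank-one
    direction under a unit trace-norm budget, which is the outer product
    of the top singular vectors of the negative gradient.
    """
    X = np.zeros(shape)
    for t in range(n_iters):
        G = grad_fn(X)  # gradient of the smooth loss at X
        u, s, vt = np.linalg.svd(-G)
        direction = np.outer(u[:, 0], vt[0, :])
        # A diminishing step size stands in for the paper's weight
        # optimization and interlaced fixed-rank local refinement.
        eta = 2.0 / (t + 2)
        X = (1 - eta) * X + eta * direction
    return X
```

For a squared-loss matrix-completion objective, for instance, `grad_fn` would return the residual on the observed entries and zero elsewhere; each boosting step then grows the solution's rank by at most one.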


Fast Training of Sparse Graph Neural Networks on Dense Hardware

arXiv.org Machine Learning

Graph neural networks have become increasingly popular in recent years due to their ability to naturally encode relational input data and to scale to large graphs by operating on a sparse representation of graph adjacency matrices. As we look to scale up these models using custom hardware, a natural assumption would be that we need hardware tailored to sparse operations and/or dynamic control flow. In this work, we question this assumption by scaling up sparse graph neural networks using a platform targeted at dense computation on fixed-size data. Drawing inspiration from the optimization of numerical algorithms on sparse matrices, we develop techniques that enable training the sparse graph neural network model from Allamanis et al. [2018] in 13 minutes using a 512-core TPUv2 Pod, whereas the original training takes almost a day.
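
The sketch below illustrates the general strategy of mapping sparse graph computation onto hardware built for dense, fixed-size work, not the paper's specific techniques: the graph is zero-padded to a static maximum size, and one message-passing step becomes two dense matrix multiplies with no gather/scatter or dynamic control flow. The `max_nodes` bound, the single layer, and the ReLU are illustrative assumptions.

```python
import numpy as np

def dense_gnn_layer(adj, h, w, max_nodes=128):
    """One message-passing step as dense matmuls on fixed-size tensors (sketch).

    adj -- adjacency matrix of the sparse graph, shape (n, n), n <= max_nodes
    h   -- node features, shape (n, d_in)
    w   -- weight matrix, shape (d_in, d_out)
    """
    n = h.shape[0]
    # Zero-pad to a static shape so every example presents identical
    # dense work to the hardware, regardless of the actual graph size.
    A = np.zeros((max_nodes, max_nodes))
    H = np.zeros((max_nodes, h.shape[1]))
    A[:n, :n] = adj
    H[:n] = h
    # Aggregation and transformation collapse into two dense matmuls;
    # padded rows stay zero and do not affect the real nodes.
    return np.maximum(A @ H @ w, 0.0)  # ReLU(A · H · W)
```

Padding trades wasted FLOPs on zero entries for perfectly regular computation, which is the kind of trade a TPU-style dense accelerator is designed to win.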


4 Benefits of mobile learning for your company MATRIX Blog

#artificialintelligence

Everyone wants the best on the market when it comes to employees; that's why headhunting companies are still in business and HR professionals spend so much time organizing multi-stage interviews and pre-employment tests. But even with the best selection method, they probably won't find professionals who are tailor-made for the company's needs. No matter how good new hires are, they will need to learn about the company and its culture, and they will need to develop their current knowledge in order to adapt it to the needs of their new organization. New employees aren't the only ones who need to keep learning. In the dynamic environment in which we live, all of us will eventually need to learn something new, and learn it fast, if we want to remain sharp and be the most valuable asset of our team.


The role of AR technology in making learners imagine MATRIX Blog

#artificialintelligence

Children have this incredible capability to transform their reality, in a matter of seconds, into something magical: a place where anything can happen and the only limit is one's imagination. No matter how old we are, if we try hard enough we can still remember bits and pieces of this land of beauty where legends came alive, where the lines of the carpet were winding mountain roads for matchbox cars, and where dolls had their own social life. If we look closely, we will realize that Toy Story is rather boring, because our toys had far better lives and adventures. One major effect of growing up -- or side effect, some may argue -- is that we slowly let go of our vivid imagination. Fantasies become scenarios more anchored in our reality, and we name them plans: future plans, business plans, and any other plan you can think of.