Reinforcement Learning


Watch a Robot AI Beat World-Class Curling Competitors

#artificialintelligence

Artificial intelligence still needs to bridge the "sim-to-real" gap. Deep-learning techniques that are all the rage in AI log superlative performances in mastering cerebral games, including chess and Go, both of which can be played on a computer. But translating simulations to the physical world remains a bigger challenge. A robot named Curly that uses "deep reinforcement learning"--making improvements as it corrects its own errors--came out on top in three of four games against top-ranked human opponents from South Korean teams that included a women's team and a reserve squad for the national wheelchair team. One crucial finding was that the AI system demonstrated its ability to adapt to changing ice conditions.


Exploring exploration: comparing children with RL agents in unified environments

AIHub

Despite recent advances in artificial intelligence (AI) research, human children are still by far the best learners we know of, learning impressive skills like language and high-level reasoning from very little data. Children's learning is supported by highly efficient, hypothesis-driven exploration: in fact, they explore so well that many machine learning researchers have been inspired to put videos like the one below in their talks to motivate research into exploration methods. However, because applying results from studies in developmental psychology is difficult, such a video is often the full extent to which this research actually connects with human cognition. Why is it so hard to apply research from developmental psychology directly to problems in AI? For one, the environments in which human children and artificial agents are typically studied can be very different: traditionally, reinforcement learning (RL) research takes place in grid-world-like settings or other 2D games, whereas children act in the rich, three-dimensional real world.


[R] GRAC: Self-Guided and Self-Regularized Actor-Critic

#artificialintelligence

Abstract: Deep reinforcement learning (DRL) algorithms have been successfully demonstrated on a range of challenging decision-making and control tasks. One dominant component of recent DRL algorithms is the target network, which mitigates divergence when learning the Q function. However, target networks can slow down the learning process due to delayed function updates. Another dominant component, especially in continuous domains, is the policy gradient method, which models and optimizes the policy directly. However, when Q functions are approximated with neural networks, their landscapes can be complex and can therefore mislead the local gradient.
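For readers unfamiliar with the target network mentioned above, here is a minimal sketch of the standard setup it refers to (this is not the GRAC code, which proposes learning without this component; the network sizes, update rate, and toy tensor shapes are assumptions for illustration):

```python
import copy
import torch
import torch.nn as nn

# A Q network and a delayed copy of it; the copy supplies bootstrapped targets.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)

def soft_update(target, online, tau=0.005):
    """Polyak averaging: the target lags the online network, which stabilises
    learning but delays the propagation of new value estimates."""
    for t_param, o_param in zip(target.parameters(), online.parameters()):
        t_param.data.mul_(1 - tau).add_(tau * o_param.data)

def td_target(reward, next_obs, done, gamma=0.99):
    """Bootstrapped TD target computed with the delayed network."""
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=-1).values
        return reward + gamma * (1 - done) * next_q

# Toy usage: a batch of 32 transitions with 4-dimensional observations.
targets = td_target(torch.rand(32), torch.rand(32, 4), torch.zeros(32))
soft_update(target_net, q_net)
```

The abstract's point is that this delayed copy, while stabilising, is also what slows learning down through its lagged function updates.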


UC Berkeley Reward-Free RL Beats SOTA Reward-Based RL

#artificialintelligence

End-to-end Deep Reinforcement Learning (DRL) is a trending training approach in the field of computer vision, where it has proven successful at solving a wide range of complex tasks that were previously regarded as out of reach. End-to-end DRL is now being applied in domains ranging from real-world and simulated robotics to sophisticated video games. However, as appealing as end-to-end DRL methods are, most rely heavily on reward functions in order to learn visual features. This means feature learning suffers when rewards are sparse, which is the case in most real-world scenarios. The proposed Augmented Temporal Contrast (ATC) method instead trains a convolutional encoder to associate pairs of observations separated by a short time difference. Random shift, a stochastic data augmentation, is applied to the observations within each training batch.
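As a rough sketch of the two ingredients described above (this is not the authors' implementation; the tensor shapes, shift size, and pairing offset are assumptions):

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Random-shift augmentation: pad with replicated border pixels, then
    crop back to the original size at a random offset per image."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

def temporal_pairs(trajectory, k=3):
    """Anchor/positive pairs: observations k steps apart in one trajectory,
    each independently augmented; the encoder is trained to associate them."""
    anchors, positives = trajectory[:-k], trajectory[k:]
    return random_shift(anchors), random_shift(positives)

# Toy usage: a fake trajectory of 16 RGB frames at 84x84 resolution.
anchors, positives = temporal_pairs(torch.rand(16, 3, 84, 84))
```

A contrastive loss over the encoded anchors and positives would then train the convolutional encoder without any reward signal.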


[D] Quality Contributions Roundup 9/14

#artificialintelligence

Though the community continues to develop new algorithms, state-of-the-art results have stopped improving in the last couple of years. Since RL algorithms that learn from scratch using a tremendous amount of online data are infeasible to apply in the real world, much research has moved to fields such as meta-RL, offline RL, integrating RL with domain knowledge, and integrating RL with planning.

How do you unit test end-to-end ML pipelines?, by u/farmingvillein: As perhaps a bit of tl;dr, once you've got the bare-minimum data-replay testing in place ("yeah, it is probably working, because the results are pretty close to what they were before"), I'd encourage you to focus your energy on thinking of testing as outlier detection (a minimal sketch of this idea follows below). Outliers in real-world ML systems tend to be harbingers of things that are systematically wrong: upstream data problems and logic (pre-/post-processing) problems.

How do you transition from a no-name international college to FAIR/Brain?, by u/r-sync: Coming from a no-name Indian engineering college with meh grades, you do have to get a bit creative, be very persistent, and build credibility for yourself. The examples above are one way to do so, but you can also articulate your thoughts as really good blog posts and arXiv papers, or show great software engineering skills in open source (i.e.
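The "testing as outlier detection" suggestion from u/farmingvillein can be illustrated with a minimal, hypothetical sketch (the reference statistics, tolerance, and feature batch are made up for illustration):

```python
import numpy as np

# Reference statistics recorded from a known-good pipeline run.
REFERENCE_STATS = {"mean": 0.0, "std": 1.0}

def check_feature_batch(features, tolerance=0.2):
    """Instead of asserting exact outputs, assert that batch statistics stay
    within historical bounds, so upstream data and pre-/post-processing bugs
    surface as outliers."""
    mean, std = float(np.mean(features)), float(np.std(features))
    assert abs(mean - REFERENCE_STATS["mean"]) < tolerance, f"mean drifted: {mean:.3f}"
    assert abs(std - REFERENCE_STATS["std"]) < tolerance, f"std drifted: {std:.3f}"

# Toy usage with a well-behaved batch; a shifted or corrupted batch would fail.
check_feature_batch(np.random.randn(10_000))
```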


How do we know AI is ready to be in the wild? Maybe a critic is needed

ZDNet

Mischief can happen when AI is let loose in the world, just like any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which disproportionately mismatched members of some ethnic groups with criminal mugshots. Given the risk, how can society know whether a technology has been refined to a level where it is safe to deploy? "This is a really good question, and one we are actively working on," Sergey Levine, assistant professor in the University of California at Berkeley's department of electrical engineering and computer science, told ZDNet by email this week. Levine and colleagues have been working on an approach to machine learning in which the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially.
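In the spirit of that description, here is a generic, hypothetical sketch of a critic acting as a gatekeeper on an actor's decisions (this is not Levine's method; the stand-in linear scorers, threshold, and shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
actor_weights = rng.normal(size=(4, 2))     # state -> preference over 2 actions
critic_weights = rng.normal(size=(6,))      # (state, action one-hot) -> confidence

def propose_action(state):
    """The actor proposes the action it prefers for this state."""
    return int(np.argmax(state @ actor_weights))

def critic_score(state, action):
    """The critic scores the proposed decision; low scores signal distrust."""
    return float(np.concatenate([state, np.eye(2)[action]]) @ critic_weights)

def act_or_defer(state, threshold=0.0):
    """Act only when the critic's score clears the threshold; otherwise defer
    to a human operator or a safe default."""
    action = propose_action(state)
    return action if critic_score(state, action) >= threshold else None

print(act_or_defer(rng.normal(size=4)))
```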


A Machine of Few Words -- Interactive Speaker Recognition with Reinforcement Learning

#artificialintelligence

Speaker recognition is a well-known and well-studied task in the speech processing domain. It has many applications, for example in security or in speaker adaptation of personal devices. In this paper, we present a new paradigm for automatic speaker recognition that we call Interactive Speaker Recognition (ISR). In this paradigm, the recognition system aims to incrementally build a representation of the speakers by requesting personalized utterances to be spoken, in contrast to the standard text-dependent or text-independent schemes. To do so, we cast the speaker recognition task as a sequential decision-making problem that we solve with Reinforcement Learning. Using a standard dataset, we show that our method achieves excellent performance while using only small amounts of speech signal. This method could also be applied as an utterance selection mechanism for building speech synthesis systems.
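As a hedged sketch of the interactive loop described in the abstract (not the paper's model: the random embeddings, vocabulary, and request budget are stand-ins for a real enquirer policy and acoustic encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["zero", "one", "two", "three", "four"]   # words the system may request
N_SPEAKERS, DIM, BUDGET = 5, 16, 3

enrolled = rng.normal(size=(N_SPEAKERS, DIM))     # reference speaker embeddings

def utterance_embedding(speaker_id, word):
    """Stand-in for an acoustic encoder: the speaker's embedding plus noise."""
    return enrolled[speaker_id] + rng.normal(scale=0.3, size=DIM)

def recognise(true_speaker):
    """Request a few personalized utterances, pool them into a representation,
    then guess the closest enrolled speaker. In ISR, an RL policy would choose
    which word to request at each step."""
    representation = np.zeros(DIM)
    for _ in range(BUDGET):
        word = VOCAB[rng.integers(len(VOCAB))]    # random here, learned in ISR
        representation += utterance_embedding(true_speaker, word)
    scores = enrolled @ (representation / BUDGET)
    return int(np.argmax(scores))

print(recognise(true_speaker=2))
```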


Can RL from pixels be as efficient as RL from state?

Robohub

A remarkable characteristic of human intelligence is our ability to learn tasks quickly. Most humans can learn reasonably complex skills like tool use and gameplay within just a few hours, and understand the basics after only a few attempts. This suggests that data-efficient learning may be a meaningful part of developing broader intelligence. On the other hand, deep reinforcement learning (RL) algorithms can achieve superhuman performance on games like Atari, StarCraft, Dota, and Go, but require large amounts of data to get there. Achieving superhuman performance on Dota took over 10,000 human years of gameplay. Unlike in simulation, skill acquisition in the real world is constrained by wall-clock time.


Learning Compositional Neural Programs for Continuous Control

#artificialintelligence

We propose a novel solution to challenging sparse-reward, continuous control problems that require hierarchical planning at multiple levels of abstraction. Our solution, dubbed AlphaNPI-X, involves three separate stages of learning. First, we use off-policy reinforcement learning algorithms with experience replay to learn a set of atomic goal-conditioned policies, which can be easily repurposed for many tasks. Second, we learn self-models describing the effect of the atomic policies on the environment. Third, the self-models are harnessed to learn recursive compositional programs with multiple levels of abstraction. The key insight is that the self-models enable planning by imagination, obviating the need for interaction with the world when learning higher-level compositional programs. To accomplish the third stage of learning, we extend the AlphaNPI algorithm, which applies AlphaZero to learn recursive neural programmer-interpreters. We empirically show that AlphaNPI-X can effectively learn to tackle challenging sparse manipulation tasks, such as stacking multiple blocks, where powerful model-free baselines fail.
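The "planning by imagination" idea in the third stage can be sketched roughly as follows (this is not the AlphaNPI-X code: the linear self-models, atomic policy names, and exhaustive search over short programs are assumptions for illustration; the paper extends AlphaNPI, which applies an AlphaZero-style search):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
ATOMIC_POLICIES = ["reach", "grasp", "lift", "place"]   # assumed atomic skills

# Self-model: for each atomic policy, a predictor of its effect on the state;
# here a toy linear map s' = A s + b stands in for a learned network.
self_model = {name: (np.eye(4) + 0.01 * rng.normal(size=(4, 4)), rng.normal(size=4))
              for name in ATOMIC_POLICIES}

def imagine(state, program):
    """Roll a candidate program forward entirely inside the self-model,
    without interacting with the environment."""
    for name in program:
        A, b = self_model[name]
        state = A @ state + b
    return state

def plan(state, goal, horizon=3):
    """Score every short program in imagination and return the best one
    (a stand-in for a search over compositional programs)."""
    best_program, best_dist = None, np.inf
    for program in itertools.product(ATOMIC_POLICIES, repeat=horizon):
        dist = np.linalg.norm(imagine(state, program) - goal)
        if dist < best_dist:
            best_program, best_dist = program, dist
    return best_program

print(plan(state=np.zeros(4), goal=np.ones(4)))
```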


8 Real-World Applications of Reinforcement Learning

#artificialintelligence

When it comes to reinforcement learning, the first application that comes to mind is AI playing games. Thanks to popularization by some very successful game-playing reinforcement learning models, this is the perception we have all built. But if we break out of this notion, we will find many practical use cases of reinforcement learning. In this article, we will look at some of the most amazing applications of reinforcement learning that you did not know existed. We already know how useful robots are in industrial and manufacturing settings.