Reinforcement Learning


Using deep learning to improve traffic signal performance Penn State University

#artificialintelligence

Traffic signals regulate the worst bottlenecks in highly populated areas, but they are not always very effective. Researchers at Penn State are hoping to use deep reinforcement learning to improve traffic signal efficiency in urban areas, thanks to a one-year, $22,443 Penn State Institute for CyberScience Seed Grant. Urban traffic congestion currently costs the U.S. economy $160 billion in lost productivity and causes 3.1 billion gallons of wasted fuel and 56 billion pounds of harmful CO2 emissions, according to the 2015 Urban Mobility Scorecard. Vikash Gayah, associate professor of civil engineering, and Zhenhui "Jessie" Li, associate professor of information sciences and technology, aim to tackle this issue by first identifying machine learning algorithms that produce results consistent with traditional (theoretical) solutions for simple scenarios, and then building on those algorithms by introducing complexities that cannot readily be addressed through traditional means. "Typically, we would go out and do traffic counts for an hour at certain peak times of day and that would determine signal timings for the next year, but not every day looks like that hour, and so we get inefficiency," Gayah said.
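
The article gives no implementation details, but the general idea of learning signal timings from observed traffic can be sketched with a toy example: a tabular Q-learning agent that chooses which approach gets the green phase based on current queue lengths. The two-approach intersection, arrival process, and queue-based reward below are hypothetical simplifications, not the researchers' actual formulation.

```python
import random
from collections import defaultdict

# Hypothetical toy intersection: two approaches (N-S and E-W) with queue lengths 0-9.
# Action 0 gives green to N-S, action 1 gives green to E-W.
def step(state, action):
    ns, ew = state
    arrivals = (random.randint(0, 2), random.randint(0, 2))
    if action == 0:
        ns = max(ns - 3, 0)              # discharge part of the N-S queue
    else:
        ew = max(ew - 3, 0)              # discharge part of the E-W queue
    ns = min(ns + arrivals[0], 9)
    ew = min(ew + arrivals[1], 9)
    reward = -(ns + ew)                  # fewer queued vehicles = higher reward
    return (ns, ew), reward

# Tabular Q-learning over signal phase choices.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1
state = (0, 0)
for t in range(50_000):
    if random.random() < epsilon:
        action = random.randint(0, 1)                      # explore
    else:
        action = max((0, 1), key=lambda a: Q[state, a])    # exploit learned values
    next_state, reward = step(state, action)
    best_next = max(Q[next_state, a] for a in (0, 1))
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
    state = next_state
```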


A technique to improve machine learning inspired by the behavior of human infants

#artificialintelligence

From their first years of life, human beings have the innate ability to learn continuously and build mental models of the world, simply by observing and interacting with things or people in their surroundings. Cognitive psychology studies suggest that humans make extensive use of this previously acquired knowledge, particularly when they encounter new situations or when making decisions. Despite the significant recent advances in the field of artificial intelligence (AI), most virtual agents still require hundreds of hours of training to achieve human-level performance in several tasks, while humans can learn how to complete these tasks in a few hours or less. Recent studies have highlighted two key contributors to humans' ability to acquire knowledge so quickly--namely, intuitive physics and intuitive psychology. These intuition models, which have been observed in humans from early stages of development, might be the core facilitators of future learning.


Researchers' deep learning algorithm solves Rubik's Cube faster than any human

#artificialintelligence

Since its invention by a Hungarian architect in 1974, the Rubik's Cube has furrowed the brows of many who have tried to solve it, but the 3-D logic puzzle is no match for an artificial intelligence system created by researchers at the University of California, Irvine. DeepCubeA, a deep reinforcement learning algorithm programmed by UCI computer scientists and mathematicians, can find the solution in a fraction of a second, without any specific domain knowledge or in-game coaching from humans. This is no simple task considering that the cube has completion paths numbering in the billions but only one goal state--each of six sides displaying a solid color--which apparently can't be found through random moves. For a study published today in Nature Machine Intelligence, the researchers demonstrated that DeepCubeA solved 100 percent of all test configurations, finding the shortest path to the goal state about 60 percent of the time. The algorithm also works on other combinatorial games such as the sliding tile puzzle, Lights Out and Sokoban.
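
DeepCubeA combines a learned cost-to-go estimate with a weighted A*-style search. The sketch below shows only that general search pattern; `neighbors`, `heuristic`, and `is_goal` are placeholders, and the actual system's details (the deep network, training by approximate value iteration, batched expansion) are not reproduced here.

```python
import heapq
import itertools

def solve(start, neighbors, heuristic, is_goal, weight=0.6):
    """Weighted best-first search guided by a learned cost-to-go estimate."""
    counter = itertools.count()          # tie-breaker so states are never compared directly
    frontier = [(heuristic(start), next(counter), 0, start, [])]
    best_cost = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path                  # sequence of moves reaching the goal state
        for move, nxt in neighbors(state):
            ng = g + 1
            if ng < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = ng
                priority = weight * ng + heuristic(nxt)   # weighted A*: f = w*g + h
                heapq.heappush(frontier, (priority, next(counter), ng, nxt, path + [move]))
    return None
```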


Artificial Intelligence (AI) in Machine Learning

#artificialintelligence

Reinforcement learning is a learning method in which an agent interacts with its environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are its most relevant characteristics. This method allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance. Machine learning enables the analysis of huge quantities of data, and while it generally delivers faster, more accurate results for identifying profitable opportunities or dangerous risks, it may also require additional time and resources to train properly. Combining machine learning with AI and cognitive technologies can make it even more effective at processing large volumes of data.
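
The trial-and-error interaction described above follows a standard loop, sketched minimally below; the `env` and `agent` objects are hypothetical and assumed to expose the usual reset/step and act/learn interface.

```python
# A minimal sketch of the agent-environment interaction loop, assuming a hypothetical
# `env` with reset()/step() and a hypothetical `agent` with act()/learn().
def run_episode(env, agent):
    state = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = agent.act(state)                    # produce an action in the environment
        next_state, reward, done = env.step(action)  # observe the (possibly delayed) reward
        agent.learn(state, action, reward, next_state, done)
        total_reward += reward
        state = next_state
    return total_reward
```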


21 Open-Source Machine Learning Tools for Every Data Scientist

#artificialintelligence

I love the open-source machine learning community. The majority of my learning as an aspiring and then as an established data scientist came from open-source resources and tools. If you haven't yet embraced the beauty of open-source tools in machine learning, you're missing out! The open-source community is massive, incredibly supportive of new tools, and embraces the idea of democratizing machine learning. You must already know the popular open-source tools like R, Python, Jupyter notebooks, and so on.


Coherent transport of quantum states by deep reinforcement learning

#artificialintelligence

Some problems in physics are solved as a result of the discovery of an ansatz solution, namely a successful test guess, but unfortunately there is no general method to generate one. Recently, machine learning has increasingly proved to be a viable tool for modeling hidden features and effective rules in complex systems. Among the classes of machine learning algorithms, deep reinforcement learning (DRL) [1] is providing some of the most spectacular results due to its ability to identify strategies for achieving a goal in a complex space of solutions without prior knowledge of the system [2,3,4,5,6,7]. Contrary to supervised learning, which has already been applied to quantum systems, such as in the determination of high-fidelity gates and the optimization of quantum memories by dynamic decoupling [8], DRL has only very recently been proposed for the control of quantum systems [9,10,11,12,13,14,15,16], along with a strictly quantum reinforcement learning implementation [14,17]. To show the power of DRL, we apply DRL to the problem of coherent transport by adiabatic passage (CTAP), where an electron (encoding the quantum state) is transferred through an array of quantum dots.


Python Programming Tutorials

#artificialintelligence

Deep Q Networks are the deep learning/neural network versions of Q-Learning. With DQNs, instead of a Q table to look up values, you have a model that you run inference on (make predictions with), and rather than updating the Q table, you fit (train) your model. The DQN neural network model is a regression model that outputs a value for each of our possible actions. These values are continuous floats, and they are directly our Q values. As we interact with the environment, we will do a .predict() to figure out our next move (or move randomly).
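
That description maps directly onto a small model definition. The sketch below assumes a flat observation of size STATE_SIZE and N_ACTIONS discrete actions (both placeholder values) and uses Keras, matching the .predict() call mentioned above; it is an illustrative setup, not the tutorial's exact code.

```python
import numpy as np
import tensorflow as tf

STATE_SIZE, N_ACTIONS = 8, 4   # placeholder sizes for an assumed environment

# Regression model: one continuous output per action, and those outputs are the Q values.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(STATE_SIZE,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_ACTIONS, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")   # fit() against Q-value targets

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy: move randomly sometimes, otherwise take the argmax of .predict()."""
    if np.random.random() < epsilon:
        return np.random.randint(N_ACTIONS)
    q_values = model.predict(state.reshape(1, -1), verbose=0)
    return int(np.argmax(q_values[0]))
```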


Google's AI picks which machine learning models will produce the best results

#artificialintelligence

Leave it to the folks at Google to devise AI capable of predicting which machine learning models will produce the best results. In a newly-published paper ("Off-Policy Evaluation via Off-Policy Classification") and blog post, a team of Google AI researchers propose what they call "off-policy classification," or OPC, which evaluates the performance of AI-driven agents by treating evaluation as a classification problem. The team notes that their approach -- a variant of reinforcement learning, which employs rewards to drive software policies toward goals -- works with image inputs and scales to tasks including vision-based robotic grasping. "Fully off-policy reinforcement learning is a variant in which an agent learns entirely from older data, which is appealing because it enables model iteration without requiring a physical robot," writes Robotics at Google software engineer Alexa Irpan. "With fully off-policy RL, one can train several models on the same fixed dataset collected by previous agents, then select the best one."
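
As a rough illustration of treating evaluation as classification, the sketch below scores a candidate Q function by how well its values separate state-action pairs from successful episodes from the rest of a fixed validation set. This ranking-style score and the function names are simplified stand-ins, not the exact OPC metric from the paper.

```python
import numpy as np

def opc_style_score(q_fn, states, actions, labels):
    """labels[i] is 1 if (states[i], actions[i]) came from a successful trajectory."""
    labels = np.asarray(labels)
    q_values = np.array([q_fn(s, a) for s, a in zip(states, actions)])
    pos, neg = q_values[labels == 1], q_values[labels == 0]
    # Fraction of (effective, ineffective) pairs ranked correctly by Q (an AUC-style score).
    return float(np.mean(pos[:, None] > neg[None, :]))

# Model selection: pick the checkpoint whose Q function best classifies the validation set,
# e.g. best_model = max(candidate_models, key=lambda m: opc_style_score(m.q, S, A, y))
```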


QXplore: Q-learning Exploration by Maximizing Temporal Difference Error

arXiv.org Machine Learning

A major challenge in reinforcement learning for continuous state-action spaces is exploration, especially when reward landscapes are very sparse. Several recent methods provide an intrinsic motivation to explore by directly encouraging RL agents to seek novel states. A potential disadvantage of pure state novelty-seeking behavior is that unknown states are treated equally regardless of their potential for future reward. In this paper, we propose that the temporal difference error of predicting primary reward can serve as a secondary reward signal for exploration. This leads to novelty-seeking in the absence of primary reward, and at the same time accelerates exploration of reward-rich regions in sparse (but nonzero) reward landscapes compared to state novelty-seeking. This objective draws inspiration from dopaminergic pathways in the brain that influence animal behavior. We implement this idea with an adversarial method in which Q and Qx are the action-value functions for primary and secondary rewards, respectively. Secondary reward is given by the absolute value of the TD-error of Q. Training is off-policy, based on a replay buffer containing a mixture of trajectories induced by Q and Qx. We characterize performance on a suite of continuous control benchmark tasks against recent state-of-the-art exploration methods and demonstrate comparable or better performance on all tasks, with much faster convergence for Q.
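
A minimal sketch of the secondary-reward mechanism described in the abstract: the absolute TD error of the primary value function Q becomes the reward stored for training the exploration value function Qx, with both learners sampling from a shared replay buffer. The `q`, `qx`, and environment interfaces below are hypothetical placeholders, not the authors' implementation.

```python
import random
from collections import deque

GAMMA = 0.99
replay = deque(maxlen=100_000)   # shared buffer mixing trajectories from both policies

def td_error(q, s, a, r, s_next, done):
    # Standard one-step TD error of the primary Q (hypothetical q.value / q.max_value API).
    target = r + (0.0 if done else GAMMA * q.max_value(s_next))
    return target - q.value(s, a)

def store_transition(q, s, a, r, s_next, done):
    r_x = abs(td_error(q, s, a, r, s_next, done))   # secondary reward for exploration
    replay.append((s, a, r, r_x, s_next, done))

def train_step(q, qx, batch_size=64):
    batch = random.sample(replay, min(batch_size, len(replay)))
    q.update(batch, reward_index=2)    # primary Q learns from the task reward r
    qx.update(batch, reward_index=3)   # Qx learns from the TD-error reward r_x
```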


Wasserstein Adversarial Imitation Learning

arXiv.org Machine Learning

Imitation Learning describes the problem of recovering an expert policy from demonstrations. While inverse reinforcement learning approaches are known to be very sample-efficient in terms of expert demonstrations, they usually require problem-dependent reward functions or a (task-)specific reward-function regularization. In this paper, we show a natural connection between inverse reinforcement learning approaches and Optimal Transport that enables more general reward functions with desirable properties (e.g., smoothness). Based on our observation, we propose a novel approach called Wasserstein Adversarial Imitation Learning. Our approach considers the Kantorovich potentials as a reward function and further leverages regularized optimal transport to enable large-scale applications. In several robotic experiments, our approach outperforms the baselines in terms of average cumulative rewards and shows a significant improvement in sample efficiency, requiring just one expert demonstration.
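
As a rough sketch of using a potential-style critic as a reward, the code below trains a WGAN-style network to separate expert from agent state-action pairs and exposes it as the imitation reward. The paper's regularized optimal-transport objective and constraints on the potential are omitted, and the network sizes and dimensions are illustrative assumptions only.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 17, 6   # placeholder sizes, e.g. a MuJoCo-style task

# Potential network over concatenated (state, action) pairs.
potential = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(potential.parameters(), lr=3e-4)

def critic_step(expert_sa, agent_sa):
    """One dual-objective update: raise the potential on expert pairs, lower it on agent pairs."""
    loss = potential(agent_sa).mean() - potential(expert_sa).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

def reward(state_action):
    """The learned potential serves as the imitation reward for the RL policy update."""
    with torch.no_grad():
        return potential(state_action).squeeze(-1)
```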