

NIPS 2016: A survey of tutorials, papers, and workshops - Two Sigma

#artificialintelligence

Following their previous Insights post on ICML 2016, Two Sigma researchers Vinod Valsalam and Firdaus Janoos discuss the notable advances in deep learning, optimization algorithms, Bayesian techniques, and time-series analysis presented at NIPS 2016. One tutorial, by David Blei (Columbia), Shakir Mohamed (DeepMind), and Rajesh Ranganath (Princeton), covered variational inference (VI) methods for approximating probability distributions through optimization. Towards the end of the tutorial, the presenters described some of the newer advances in VI, such as Monte Carlo gradient estimation, black-box variational inference, stochastic approximation, and variational auto-encoders. The post also highlights a few selected papers on deep learning, covering topics in reinforcement learning, training techniques, generative modeling, and recurrent networks.
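For context, the optimization at the heart of VI maximizes the evidence lower bound (ELBO); the expressions below are the standard textbook forms, not equations quoted from the Two Sigma post:

```latex
\log p(x) \ge \mathrm{ELBO}(\lambda)
            = \mathbb{E}_{q_\lambda(z)}\left[\log p(x, z) - \log q_\lambda(z)\right],
\qquad
\nabla_\lambda \mathrm{ELBO}
            = \mathbb{E}_{q_\lambda(z)}\left[\nabla_\lambda \log q_\lambda(z)
              \left(\log p(x, z) - \log q_\lambda(z)\right)\right].
```

The second expression is the score-function gradient estimator that black-box variational inference evaluates with Monte Carlo samples from q_lambda, which is what lets the method be applied without model-specific derivations.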


Curiosity could help artificially intelligent machines advance

#artificialintelligence

A computer algorithm equipped with a form of artificial curiosity can learn to solve tricky problems even when it isn't immediately clear what actions might help it reach this goal. Researchers at the University of California, Berkeley, developed an "intrinsic curiosity model" to make their learning algorithm work even when there isn't a strong feedback signal. The researchers tried the approach, in combination with reinforcement learning, within two simple video games: Mario Bros., a classic platform game, and VizDoom, a basic 3-D shooter title. Pierre-Yves Oudeyer, a research director at the French Institute for Research in Computer Science and Automation, has pioneered, over the past several years, the development of computer programs and robots that exhibit simple forms of inquisitiveness.
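As a rough sketch of the idea (not the Berkeley team's exact formulation, which works in a learned feature space), a curiosity bonus can be computed as the prediction error of the agent's own forward model; the function and names below are hypothetical:

```python
import numpy as np

def intrinsic_reward(forward_model, state, action, next_state, scale=0.5):
    """Curiosity bonus: the squared error of the agent's forward model.
    The worse the agent predicts what happens next, the more 'surprised'
    (and rewarded) it is, so it seeks out unfamiliar situations."""
    predicted_next = forward_model(state, action)  # hypothetical learned model
    return scale * float(np.sum((predicted_next - next_state) ** 2))

# The agent is then trained on extrinsic_reward + intrinsic_reward, which
# provides a learning signal even when the game's own score is sparse or absent.
```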


What Can We Expect From Artificial Intelligence In The Future?

#artificialintelligence

Driverless car technology is well advanced, and in some countries, notably China, AI research and development is booming. Computers can learn in a similar fashion, and scientists are developing systems that learn from data using complex algorithms to produce new and more intelligent applications. Google already uses reinforcement learning to boost efficiency in its data centers, but the technique can also be applied to driverless cars and industrial robotics. Researchers in Japan developed PARO, an AI-powered therapeutic robot.


5 Machine Learning Projects You Can No Longer Overlook, May

@machinelearnbot

More overlooked machine learning and/or machine learning-related projects? OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. From Intel comes another deep learning framework: BigDL, a distributed deep learning library optimized for Apache Spark. With BigDL, users can write their deep learning applications as standard Spark programs, which run directly on top of existing Spark or Hadoop clusters.
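As a quick illustration of the Gym toolkit mentioned above, here is a minimal interaction loop; it assumes the classic (pre-0.26) Gym API, in which reset() returns an observation and step() returns a 4-tuple:

```python
import gym

env = gym.make("CartPole-v1")
obs = env.reset()                       # classic API: returns the first observation
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random policy, a stand-in for a learned agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode return:", total_reward)
```

A reinforcement learning algorithm would replace the random action with one chosen by its policy and update that policy from the observed rewards.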


How to Learn Machine Learning in 10 Days

@machinelearnbot

If you have not checked this session out in full, I suggest you consider doing so.


The 10 Algorithms Machine Learning Engineers Need to Know

@machinelearnbot

Some of the most common examples of machine learning are Netflix's algorithms, which make movie suggestions based on movies you have watched in the past, or Amazon's algorithms, which recommend books based on books you have bought before. The textbook we used is one of the AI classics, Russell and Norvig's Artificial Intelligence: A Modern Approach, which covers major topics including intelligent agents, problem-solving by searching, adversarial search, probability theory, multi-agent systems, social AI, and the philosophy, ethics, and future of AI. Machine learning algorithms can be divided into three broad categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is useful in cases where a property (label) is available for a certain dataset (the training set) but is missing and needs to be predicted for other instances.
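For instance, a minimal supervised-learning example with scikit-learn (not taken from the article; the dataset and model are just illustrative) fits a classifier on labeled data and scores it on held-out instances:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labels (y) are known for the training set; the model predicts them for unseen data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```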


Reinforcement Learning and AI

@machinelearnbot

Although RL has been around for many years, it has become the third leg of the machine learning stool, and it is increasingly important for data scientists to know when and how to implement it. If you had polled a group of data scientists just a few years back about how many machine learning problem types there are, you would almost certainly have gotten a binary response: problem types were clearly divided into supervised and unsupervised. The distinction between supervised and unsupervised problem types was immediately clear from both the problem definition and the data that is available. Both would require running tests to gather training data, then modeling and applying the results.
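By contrast with those two, reinforcement learning learns from interaction rather than from a fixed labeled dataset; a minimal sketch of the tabular Q-learning update (illustrative only, with arbitrary hyperparameters) looks like this:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Nudge the value of taking action a in state s toward the observed
    reward plus the discounted value of the best action in the next state."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Q is a (num_states x num_actions) array updated after every interaction,
# e.g. Q = q_update(Q, s, a, r, s_next) inside an environment loop.
```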


Data Science for Internet of Things (IoT): Ten Differences From Traditional Data Science

@machinelearnbot

Thus, the concept of IoT analytics (data science for IoT) is expected to drive the business models for IoT. Deep learning algorithms play an important role in IoT analytics, as we alluded to previously, because machine data is sparse and/or has a temporal element to it. But for me, the most exciting development is the fact that IoT powers new greenfield domains such as drones, self-driving cars, enterprise AI, cloud robotics, and many more.


The Three Ages of AI – Figuring Out Where We Are

@machinelearnbot

While our ability to utilize machine learning statistical algorithms like regression, SVMs, random forests, and neural nets expanded rapidly starting in roughly the 1990s, the application of these handcrafted systems didn't entirely go away. There is very little in common among convolutional neural nets, recurrent neural nets, generative adversarial nets, the evolutionary neural nets used in reinforcement learning, and all of their variants. They all also need massive parallel processing across extremely large computational arrays, often requiring specialized chips like GPUs and FPGAs to get anything done on a human time scale. Models That Explain Their Reasoning: although our deep neural nets are good at classifying things like images, they are completely inscrutable in how they do so.