Q Learning - Ashwin Vaidya


Before I explain what Q Learning is, I will quickly explain the basic principle of reinforcement learning. Reinforcement learning is a category of machine learning algorithms where systems learn on their own by interacting with the environment. The idea is that a reward is provided to the agent if the action it takes is correct; otherwise, a penalty is assigned to discourage the action. It is similar to how we train a dog to perform tricks: give it a snack for successfully doing a roll, and rebuke it for dirtying your carpet.
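The reward/penalty loop above can be sketched in a few lines. This is a hypothetical toy example (not from the article): an agent repeatedly picks one of two actions and keeps a running value estimate per action, nudged up by rewards and down by penalties.

```python
import random

def train(steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0, 0.0]              # estimated value of each action
    for _ in range(steps):
        # explore occasionally, otherwise exploit the best-known action
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])
        # toy environment: action 1 is "correct" (+1), action 0 is penalized (-1)
        reward = 1.0 if action == 1 else -1.0
        # move the estimate a small step toward the observed reward
        values[action] += alpha * (reward - values[action])
    return values

values = train()
print(values)  # action 1's estimate ends up near +1, action 0's below 0
```

After enough interactions, the rewarded action dominates the agent's choices; this is the same principle the snack-and-rebuke analogy describes.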

Simple Reinforcement Learning with Tensorflow Part 0: Q-Learning with Tables and Neural Networks


For this tutorial in my Reinforcement Learning series, we are going to be exploring a family of RL algorithms called Q-Learning algorithms. These are a little different from the policy-based algorithms that will be looked at in the following tutorials (Parts 1–3). Instead of starting with a complex and unwieldy deep neural network, we will begin by implementing a simple lookup-table version of the algorithm, and then show how to implement a neural-network equivalent using Tensorflow. Given that we are going back to basics, it may be best to think of this as Part-0 of the series. It will hopefully give an intuition into what is really happening in Q-Learning that we can then build on going forward when we eventually combine the policy gradient and Q-learning approaches to build state-of-the-art RL agents (If you are more interested in Policy Networks, or already have a grasp on Q-Learning, feel free to start the tutorial series here instead).
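A minimal lookup-table version of Q-Learning can be sketched as follows. The tutorial itself uses an OpenAI Gym environment; here a tiny hypothetical 5-state chain stands in for it so the example is self-contained, and the hyperparameters are illustrative, not the tutorial's.

```python
import random

N_STATES = 5                  # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]            # move left or move right along the chain

def step(state, action_idx):
    """One environment transition: returns (next_state, reward, done)."""
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

def greedy(q_row, rng):
    """Argmax over actions, breaking ties randomly."""
    best = max(q_row)
    return rng.choice([i for i, v in enumerate(q_row) if v == best])

def q_learning(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]      # the lookup table
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore sometimes, otherwise exploit the table
            a = rng.randrange(2) if rng.random() < epsilon else greedy(Q[s], rng)
            s2, r, done = step(s, a)
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# The learned greedy policy is "move right" in every non-terminal state.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

The neural-network version replaces the `Q` table with a network that maps a state to per-action Q-values, trained against the same bootstrapped target shown in the update comment.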

Scaling Configuration of Energy Harvesting Sensors with Reinforcement Learning Artificial Intelligence

With the advent of the Internet of Things (IoT), an increasing number of energy harvesting methods are being used to supplement or supplant battery-based sensors. Energy harvesting sensors need to be configured according to the application, hardware, and environmental conditions to maximize their usefulness. As of today, the configuration of sensors is either manual or heuristics-based, requiring valuable domain expertise. Reinforcement learning (RL) is a promising approach to automate configuration and efficiently scale IoT deployments, but it is not yet adopted in practice. We propose solutions to bridge this gap: reduce the training phase of RL so that nodes are operational within a short time after deployment, and reduce the computational requirements to scale to large deployments. We focus on configuration of the sampling rate of indoor solar-panel-based energy harvesting sensors. We created a simulator based on 3 months of data collected from 5 sensor nodes subject to different lighting conditions. Our simulation results show that RL can effectively learn energy availability patterns and configure the sampling rate of the sensor nodes to maximize the sensing data while ensuring that energy storage is not depleted. The nodes can be operational within the first day by using our methods. We show that it is possible to reduce the number of RL policies by using a single policy for nodes that share similar lighting conditions.
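The trade-off the abstract describes, maximizing sensing data while ensuring energy storage is not depleted, is typically encoded in the reward function an RL agent optimizes. The one-day sketch below is purely illustrative; every name and number in it is an assumption, not taken from the paper.

```python
def simulate_day(sampling_rate, harvested_energy, battery=100.0,
                 cost_per_sample=1.0, depletion_penalty=100.0):
    """Return (reward, final_battery) for one day at a fixed sampling rate.

    Hypothetical dynamics: each sample drains energy, harvesting refills
    the store (capped at 100), and draining it to zero kills the node.
    """
    battery = min(battery + harvested_energy - sampling_rate * cost_per_sample,
                  100.0)
    if battery <= 0.0:
        return -depletion_penalty, 0.0       # node died: heavy penalty
    return float(sampling_rate), battery     # reward grows with data sensed

# A brighter day supports a sampling rate that would deplete the node on a dim one.
print(simulate_day(sampling_rate=120, harvested_energy=80))   # survives
print(simulate_day(sampling_rate=120, harvested_energy=10))   # depleted
```

An RL agent trained against a simulator like this learns to raise the sampling rate when harvested energy is plentiful and lower it when lighting is poor, which is the behavior the paper reports.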

July 2017 – RealThinks


Today we're going to be learning about reinforcement learning. The ultimate goal of this endeavor is to create an artificial intelligence that is a strong Othello player, and can teach you how to become stronger yourself. I explained the rules of Othello, my motivation, and how to create a playable game in Step 1 of this series. I created some basic artificial intelligence in Step 2 of this series. The next thing I want to do is to use machine learning to create an even better artificial intelligence, but before I can even do that, I need to learn how to implement reinforcement learning.

Guest Post (Part I): Demystifying Deep Reinforcement Learning - Intel AI


Two years ago, a small company in London called DeepMind uploaded their pioneering paper "Playing Atari with Deep Reinforcement Learning" to arXiv. In this paper they demonstrated how a computer learned to play Atari 2600 video games by observing just the screen pixels and receiving a reward when the game score increased. The result was remarkable because the games and the goals in every game were very different and designed to be challenging for humans. The same model architecture, without any change, was used to learn seven different games, and in three of them the algorithm performed even better than a human!