Quantile Reinforcement Learning

#artificialintelligence

I've been exploring reinforcement learning that takes advantage of uncertainty. In particular, I have implemented a basic version of QR-DQN-1 from Distributional Reinforcement Learning with Quantile Regression. Doing so required filling in some practical details from the paper, which I'm going to explain here. The approach is an extension of Deep Q-learning, which attempts to learn the value of being in a given state and to take the action that maximizes this value (for more background, see this post). Rather than learning only an expected value, we think of the value of being in a state as a random variable drawn from some unknown distribution.
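The core training signal in QR-DQN is the quantile Huber loss, which compares predicted quantiles against Bellman targets. Below is a minimal sketch of that loss, assuming PyTorch; the tensor names, shapes, and default threshold are my own choices for illustration, not details taken from the post.

```python
import torch

def quantile_huber_loss(pred_quantiles, target_quantiles, kappa=1.0):
    """Quantile Huber loss in the style of QR-DQN (sketch, not the author's code).

    pred_quantiles:   (batch, N) predicted quantile values for the chosen actions
    target_quantiles: (batch, N) Bellman targets built from the next state's quantiles
    """
    batch, n = pred_quantiles.shape
    # Quantile midpoints tau_hat_i = (2i + 1) / (2N)
    tau = (torch.arange(n, dtype=torch.float32) + 0.5) / n  # shape (N,)

    # Pairwise TD errors: td[b, i, j] = target[b, j] - pred[b, i]
    td = target_quantiles.unsqueeze(1) - pred_quantiles.unsqueeze(2)  # (batch, N, N)

    # Huber loss on each pairwise error
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))

    # Asymmetric quantile weighting |tau_i - 1{td < 0}|
    weight = (tau.view(1, -1, 1) - (td.detach() < 0).float()).abs()

    # Average over target samples, sum over quantiles, average over the batch
    return (weight * huber / kappa).mean(dim=2).sum(dim=1).mean()
```

In the paper, QR-DQN-1 corresponds to the Huber threshold κ = 1, which is why that value is used as the default here.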


The Value Equivalence Principle for Model-Based Reinforcement Learning

arXiv.org Artificial Intelligence

Learning models of the environment from data is often viewed as an essential component to building intelligent reinforcement learning (RL) agents. The common practice is to separate the learning of the model from its use, by constructing a model of the environment's dynamics that correctly predicts the observed state transitions. In this paper we argue that the limited representational resources of model-based RL agents are better used to build models that are directly useful for value-based planning. As our main contribution, we introduce the principle of value equivalence: two models are value equivalent with respect to a set of functions and policies if they yield the same Bellman updates. We propose a formulation of the model learning problem based on the value equivalence principle and analyze how the set of feasible solutions is impacted by the choice of policies and functions. Specifically, we show that, as we augment the set of policies and functions considered, the class of value equivalent models shrinks, until eventually collapsing to a single point corresponding to a model that perfectly describes the environment. In many problems, directly modelling state-to-state transitions may be both difficult and unnecessary. By leveraging the value-equivalence principle one may find simpler models without compromising performance, saving computation and memory. We illustrate the benefits of value-equivalent model learning with experiments comparing it against more traditional counterparts like maximum likelihood estimation. More generally, we argue that the principle of value equivalence underlies a number of recent empirical successes in RL, such as Value Iteration Networks, the Predictron, Value Prediction Networks, TreeQN, and MuZero, and provides a first theoretical underpinning of those results.
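For a concrete reading of the definition the abstract describes, here is a hedged sketch in my own notation (which may differ from the paper's): two models are value equivalent with respect to a set of policies and a set of functions if their induced Bellman operators agree on those functions.

```latex
% Notation is mine, not necessarily the paper's.
% T^{m}_{\pi} denotes the Bellman operator induced by model m and policy \pi:
%   (T^{m}_{\pi} v)(s) = \mathbb{E}_{a \sim \pi,\; (r, s') \sim m}\left[ r + \gamma\, v(s') \right].
% Models \tilde{m} and m are value equivalent with respect to a set of policies \Pi
% and a set of functions \mathcal{V} if
\[
  T^{\tilde{m}}_{\pi} v \,=\, T^{m}_{\pi} v
  \qquad \text{for all } \pi \in \Pi,\ v \in \mathcal{V}.
\]
```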


Don't let missing values ruin your analysis output, Deal with them!

#artificialintelligence

Missing values, or their replacement values, can lead to huge errors in your analysis output, whether it is a machine learning model, KPIs, or a report. Analysts often deal with missing values as if there were only one type of them. That is not the case: there are three types of missing values, and there are ways of dealing with each one of them. Missing at random (MAR): the presence of a null value in a variable is not random but rather depends on a known or unknown characteristic of the record.
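As a minimal sketch of the workflow, the snippet below measures how much is missing per column and applies simple per-column imputation with pandas; the dataset, column names, and fill strategies are hypothetical choices for illustration, not taken from the article.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset; the columns and values are made up for illustration.
df = pd.DataFrame({
    "age":    [34, np.nan, 52, 41, np.nan],
    "income": [48000, 61000, np.nan, 52000, 58000],
    "city":   ["Oslo", "Lyon", None, "Lyon", "Oslo"],
})

# Step 1: measure the share of missing values per column before choosing a strategy.
print(df.isna().mean())

# Step 2: apply a simple strategy per column (median for numerics, mode for
# categoricals). Whether this is appropriate depends on *why* the values are
# missing (the MAR and related types discussed above).
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())
df["city"] = df["city"].fillna(df["city"].mode().iloc[0])

print(df)
```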


Everything you need to know about Neural Networks – Hacker Noon

#artificialintelligence

Forward propagation is the process of feeding input values to the neural network and getting an output, which we call the predicted value. Sometimes we refer to forward propagation as inference. When we feed the input values to the neural network's first layer, they pass through without any operations. The second layer takes the values from the first layer, applies multiplication, addition, and activation operations, and passes the result to the next layer. The same process repeats for the subsequent layers, and finally we get an output value from the last layer.
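To make the layer-by-layer description concrete, here is a minimal NumPy sketch of a forward pass through a small fully connected network; the layer sizes, sigmoid activation, and random weights are my own assumptions for illustration, not taken from the article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Forward propagation: each layer multiplies by its weights, adds a bias,
    applies an activation, and passes the result to the next layer."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)   # multiplication, addition, activation
    return a                     # the predicted value

# Tiny example: 3 inputs -> 4 hidden units -> 1 output (shapes chosen for illustration).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
biases = [np.zeros(4), np.zeros(1)]
print(forward(np.array([0.5, -1.2, 3.0]), weights, biases))
```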


Static and Dynamic Values of Computation in MCTS

arXiv.org Artificial Intelligence

Monte-Carlo Tree Search (MCTS) is one of the most widely used methods for planning, and has powered many recent advances in artificial intelligence. In MCTS, one typically performs computations (i.e., simulations) to collect statistics about the possible future consequences of actions, and then chooses accordingly. Many popular MCTS methods such as UCT and its variants decide which computations to perform by trading off exploration and exploitation. In this work, we take a more direct approach, and explicitly quantify the value of a computation based on its expected impact on the quality of the action eventually chosen. Our approach goes beyond the "myopic" limitations of existing computation-value-based methods in two senses: (i) we are able to account for the impact of non-immediate (i.e., future) computations, and (ii) their impact on non-immediate actions. We show that policies that greedily optimize computation values are optimal under certain assumptions and obtain results that are competitive with the state-of-the-art.
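For reference, the UCT rule that the abstract contrasts with computation-value-based selection looks like the sketch below; the node representation and exploration constant are my own assumptions for illustration, not taken from the paper.

```python
import math

def uct_select(children, c=1.4):
    """UCT child selection: the exploration/exploitation trade-off used by UCT
    and its variants. 'children' is a list of dicts with 'visits' (int) and
    'total_value' (float); this data layout is assumed for illustration."""
    total_visits = sum(child["visits"] for child in children)
    best, best_score = None, -math.inf
    for child in children:
        if child["visits"] == 0:
            return child  # always try unvisited children first
        exploit = child["total_value"] / child["visits"]           # average return
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        score = exploit + explore
        if score > best_score:
            best, best_score = child, score
    return best
```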