Inferring the Optimal Policy using Markov Chain Monte Carlo

arXiv.org Artificial Intelligence

This paper investigates methods for estimating the optimal stochastic control policy for a Markov Decision Process with unknown transition dynamics and an unknown reward function. This form of model-free reinforcement learning encompasses many real-world systems such as playing video games, simulated control tasks, and real robot locomotion. Existing methods for estimating the optimal stochastic control policy rely on high-variance estimates of the policy gradient. These methods are not guaranteed to find the optimal stochastic policy, and their high-variance gradient estimates make convergence unstable. To resolve these problems, we propose a technique that uses Markov Chain Monte Carlo to generate samples from the posterior distribution of the policy parameters conditioned on being optimal. Our method provably converges to the globally optimal stochastic policy and empirically exhibits variance comparable to that of policy gradient methods.
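The abstract does not spell out the sampler, but the idea of drawing policy parameters from a posterior conditioned on optimality can be pictured with a minimal sketch, assuming a pseudo-posterior proportional to exp(expected return / temperature) and a random-walk Metropolis sampler; the toy bandit environment, the estimate_return helper, and the temperature value are illustrative assumptions, not the paper's construction.

```python
# Sketch: sampling policy parameters from a pseudo-posterior
# p(theta | optimal) proportional to exp(J(theta) / temperature),
# using random-walk Metropolis. The bandit environment, return
# estimator, and constants are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
TRUE_MEANS = np.array([0.1, 0.5, 0.9])   # toy 3-armed bandit reward means

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def estimate_return(theta, n_rollouts=200):
    """Monte Carlo estimate of expected reward under the softmax policy."""
    probs = softmax(theta)
    actions = rng.choice(len(theta), size=n_rollouts, p=probs)
    rewards = rng.normal(TRUE_MEANS[actions], 0.1)
    return rewards.mean()

def metropolis_policy_posterior(n_samples=2000, step=0.3, temperature=0.05):
    theta = np.zeros(3)
    log_target = estimate_return(theta) / temperature
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.shape)
        log_target_prop = estimate_return(proposal) / temperature
        # Accept with probability min(1, exp(log_target_prop - log_target)).
        if np.log(rng.uniform()) < log_target_prop - log_target:
            theta, log_target = proposal, log_target_prop
        samples.append(theta.copy())
    return np.array(samples)

if __name__ == "__main__":
    chain = metropolis_policy_posterior()
    # Averaged over the chain, the policy should favour the best arm.
    print(softmax(chain[-500:].mean(axis=0)))
```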


Identifying reasoning patterns in games

arXiv.org Artificial Intelligence

We present an algorithm that identifies the reasoning patterns of agents in a game, by iteratively examining the graph structure of its Multi-Agent Influence Diagram (MAID) representation. If the decision of an agent participates in no reasoning patterns, then we can effectively ignore that decision for the purpose of calculating a Nash equilibrium for the game. In some cases, this can lead to exponential time savings in the process of equilibrium calculation. Moreover, our algorithm can be used to enumerate the reasoning patterns in a game, which can be useful for constructing more effective computerized agents interacting with humans.
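As a rough illustration of the pruning idea only (not the paper's reasoning-pattern tests), the sketch below keeps a decision node only if some directed path connects it to a utility node of the deciding agent; the graph, node names, and this simplified relevance criterion are assumptions made for the example.

```python
# Toy sketch: dropping decisions that, under a crude proxy criterion, cannot
# matter for equilibrium computation. A real implementation would check the
# paper's full set of reasoning patterns on the MAID graph; here a decision
# is kept only if some directed path reaches a utility node of its own agent.
from collections import deque

def reachable(graph, start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def prune_irrelevant_decisions(graph, decisions, utilities):
    """graph: dict node -> iterable of children.
    decisions: dict decision node -> owning agent.
    utilities: dict utility node -> owning agent."""
    kept = {}
    for d, agent in decisions.items():
        own_utils = {u for u, a in utilities.items() if a == agent}
        if reachable(graph, d) & own_utils:
            kept[d] = agent
    return kept

# Illustrative MAID-like graph: D1 influences U1 (agent 1); D2 reaches nothing.
graph = {"D1": ["C1"], "C1": ["U1"], "D2": []}
print(prune_irrelevant_decisions(graph,
                                 {"D1": 1, "D2": 2},
                                 {"U1": 1, "U2": 2}))   # -> {'D1': 1}
```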


On measuring the usefulness of modeling in a competitive and cooperative environment

AAAI Conferences

Leonardo Garrido and Ramón Brena, Centro de Inteligencia Artificial, Tecnológico de Monterrey; Katia Sycara, The Robotics Institute, Carnegie Mellon University

Abstract: This paper presents recent results of our experimental work in quantifying exactly how useful it is to build models of other agents using no more than observations of their behavior. The testbed we used in our experiments is an abstraction of the meeting scheduling problem, called the Meeting Scheduling Game, which has competitive as well as cooperative features. The agents are selfish and use a rational, decision-theoretic approach based on the probabilistic models that each agent is learning. We view agent modeling as an iterative and gradual process, in which every new piece of information about a particular agent is analyzed so that the model of that agent is further refined. We propose a framework for measuring the performance of different modeling strategies and establish quantified lower and upper limits for the performance of any modeling strategy. Finally, we contrast the performance of a modeler from an individual and from a collective point of view, comparing the benefits for the modeler itself as well as for the group as a whole.

Introduction: Several approaches in the field of multiagent systems (MAS) (Durfee 1991; Wooldridge & Jennings 1995) make heavy use of beliefs as an internal model of the world (Bratman 1987). One form of belief of particular importance in multiagent systems is an agent's beliefs about other agents (Vidal & Durfee 1997b). This kind of belief could come from a preexisting knowledge base (a kind of "prejudice"), or could be inferred from observing others' behavior. The purpose of a modeling activity could be to benefit a specific agent, in the case of "selfish" agents, or to improve the performance of a group as a whole, in the case of cooperative agents, or even a combination of both.
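One way to picture the iterative refinement of an opponent model, under assumptions that are mine rather than the paper's, is Dirichlet-style counting over the other agent's observed choices followed by an expected-utility best response; the option names and payoff table below are illustrative and do not reproduce the Meeting Scheduling Game.

```python
# Sketch of iterative opponent modeling: Dirichlet-multinomial counts over the
# other agent's observed choices, refined after every observation, then used
# for an expected-utility decision. The payoff table and option names are
# illustrative assumptions, not taken from the Meeting Scheduling Game.
import numpy as np

class OpponentModel:
    def __init__(self, options, prior=1.0):
        self.options = list(options)
        self.counts = np.full(len(self.options), prior)   # Dirichlet prior

    def observe(self, choice):
        """Refine the model with one more observed choice."""
        self.counts[self.options.index(choice)] += 1.0

    def probabilities(self):
        return self.counts / self.counts.sum()

def best_response(model, payoff):
    """payoff[my_option] -> utilities against each opponent option; maximize expectation."""
    probs = model.probabilities()
    expected = {mine: float(np.dot(row, probs)) for mine, row in payoff.items()}
    return max(expected, key=expected.get)

model = OpponentModel(options=["morning", "afternoon"])
for obs in ["morning", "morning", "afternoon", "morning"]:
    model.observe(obs)

payoff = {"propose_morning": np.array([1.0, 0.0]),
          "propose_afternoon": np.array([0.0, 1.0])}
print(model.probabilities(), best_response(model, payoff))
```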


How a Bayesian Approaches Games Like Chess

AAAI Conferences

Eric B. Baum, NEC Research Institute, 4 Independence Way, Princeton NJ 08540, eric@research.NJ.NEC.COM

Abstract: The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full-width tree as deep as time allows, and then value the tree as if the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. A Bayesian would suggest instead training a model of one's uncertainty. This model adds extra information beyond the standard evaluation function. Within such a formal model, there is an optimal tree growth procedure and an optimal method of valuing the tree. We describe how to optimally value the tree, and how to approximate online the optimal tree to search.
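A minimal sketch of the distribution-valued backup this abstract alludes to, assuming independent children, a shared discrete value grid, and Gaussian-shaped leaf uncertainty; none of these specifics are taken from the paper.

```python
# Sketch: backing up *distributions* over node values instead of point values.
# Leaves carry discrete distributions on a shared grid; at a MAX node the
# backed-up distribution is that of the maximum of independent children
# (the minimum at MIN nodes), computed through CDFs. The grid, toy tree, and
# independence assumption are illustrative simplifications.
import numpy as np

VALUES = np.linspace(-1.0, 1.0, 21)          # shared support for all nodes

def max_of_independent(dists):
    """Distribution of the max of independent discrete r.v.s on the grid."""
    cdf = np.ones_like(VALUES)
    for p in dists:
        cdf *= np.cumsum(p)                   # CDF of max = product of CDFs
    return np.diff(np.concatenate(([0.0], cdf)))

def min_of_independent(dists):
    """Distribution of the min, via the survival function P(X >= v)."""
    sf = np.ones_like(VALUES)
    for p in dists:
        sf *= 1.0 - np.cumsum(p) + p          # P(X >= v)
    return -np.diff(np.concatenate((sf, [0.0])))

def leaf(mean, std=0.2):
    """Toy leaf: discretised Gaussian around the static evaluator's score."""
    w = np.exp(-0.5 * ((VALUES - mean) / std) ** 2)
    return w / w.sum()

# Two-ply toy tree: root is MAX over two MIN nodes, each over two leaves.
min_a = min_of_independent([leaf(0.3), leaf(0.6)])
min_b = min_of_independent([leaf(-0.2), leaf(0.8)])
root = max_of_independent([min_a, min_b])
print("expected root value:", float(np.dot(VALUES, root)))
```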


How a Bayesian Approaches Games Like Chess

AAAI Conferences

Now the whole point of search (as opposed to just picking whichever child looks best to an evaluation function) is to insulate oneself from errors in the evaluation function. When one searches below a node, one gains more information, and one's opinion of the value of that node may change. Such "opinion changes" are inherently probabilistic. They occur because one's information or computational abilities are unable to distinguish different states; e.g., a node with a given set of features might have different values. In this paper we adopt a probabilistic model of opinion changes. (This excerpt is a super-abbreviated discussion of [Baum and Smith, 1993] written by EBB for this conference.)
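As one possible concrete reading of such a model (an illustrative assumption, not the construction in [Baum and Smith, 1993]), nodes could be grouped by a feature signature and the empirical distribution of search-induced revisions to the static evaluation stored per group.

```python
# Sketch: a probabilistic model of "opinion changes". Nodes with the same
# feature signature are grouped, and the observed amounts by which a deeper
# look revised the static evaluation are stored per group. The bucketing
# scheme and the training pairs below are illustrative assumptions.
from collections import defaultdict
import random

class OpinionChangeModel:
    def __init__(self):
        self.deltas = defaultdict(list)   # feature signature -> observed revisions

    def record(self, features, static_value, searched_value):
        """Store how much search changed our opinion of a node like this."""
        self.deltas[features].append(searched_value - static_value)

    def sample_revised_value(self, features, static_value):
        """Draw a plausible post-search value for a node seen only statically."""
        observed = self.deltas.get(features)
        if not observed:                   # unseen signature: no revision
            return static_value
        return static_value + random.choice(observed)

model = OpinionChangeModel()
model.record(("open_file", "ahead_material"), 0.4, 0.7)
model.record(("open_file", "ahead_material"), 0.5, 0.3)
print(model.sample_revised_value(("open_file", "ahead_material"), 0.45))
```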