Collaborating Authors

 Banerjee, Bikramjit


Latent Interactive A2C for Improved RL in Open Many-Agent Systems

arXiv.org Artificial Intelligence

Multiagent reinforcement learning (MARL) methods that engage in centralized training are prevalent, but such training involves obtaining various types of information from the other agents, which may not be feasible in competitive or adversarial settings. A recent method, the interactive advantage actor critic (IA2C), engages in decentralized training coupled with decentralized execution, aiming to predict the other agents' actions from possibly noisy observations. In this paper, we present the latent IA2C, which utilizes an encoder-decoder architecture to learn a latent representation of the hidden state and the other agents' actions. Our experiments in two domains, each populated by many agents, reveal that the latent IA2C significantly improves sample efficiency by reducing variance and converging faster. Additionally, we introduce open versions of these domains, where the agent population may change over time, and evaluate our method on these instances as well.
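
The abstract names the architecture only at this level of detail, so the following is a minimal sketch, assuming hypothetical dimensions and layer choices, of how an encoder-decoder could map a noisy observation to a latent code that both reconstructs the observation and predicts the other agents' actions; that latent code would then feed an A2C actor and critic in place of the raw observation.

    # Hypothetical sketch: encoder-decoder for inferring a latent state and the other
    # agents' actions. Dimensions, layer sizes, and the wiring are illustrative
    # assumptions, not the architecture reported in the Latent IA2C paper.
    import torch
    import torch.nn as nn

    class LatentEncoderDecoder(nn.Module):
        def __init__(self, obs_dim=16, latent_dim=8, n_other_agents=5, n_actions=4):
            super().__init__()
            # Encoder: noisy observation -> latent representation of the hidden state.
            self.encoder = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, latent_dim),
            )
            # Decoder head 1: reconstruct the observation from the latent code.
            self.obs_decoder = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(),
                nn.Linear(64, obs_dim),
            )
            # Decoder head 2: predict each other agent's action distribution.
            self.action_decoder = nn.Linear(latent_dim, n_other_agents * n_actions)
            self.n_other_agents, self.n_actions = n_other_agents, n_actions

        def forward(self, obs):
            z = self.encoder(obs)
            obs_recon = self.obs_decoder(z)
            logits = self.action_decoder(z).view(-1, self.n_other_agents, self.n_actions)
            return z, obs_recon, logits

    # The latent code z would be fed to the actor and critic instead of the raw observation.
    obs = torch.randn(32, 16)  # batch of noisy observations
    z, obs_recon, action_logits = LatentEncoderDecoder()(obs)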


Many Agent Reinforcement Learning Under Partial Observability

arXiv.org Artificial Intelligence

Recent renewed interest in multi-agent reinforcement learning (MARL) has generated an impressive array of techniques that leverage deep reinforcement learning, primarily actor-critic architectures, and can be applied to a limited range of settings in terms of observability and communication. However, a continuing limitation of much of this work is the curse of dimensionality when it comes to representations based on joint actions, which grow exponentially with the number of agents. In this paper, we squarely focus on this challenge of scalability. We apply the key insight of action anonymity, which leads to permutation invariance of joint actions, to two recently presented deep MARL algorithms, MADDPG and IA2C, and compare these instantiations to another recent technique that leverages action anonymity, viz., mean-field MARL. We show that our instantiations can learn the optimal behavior in a broader class of agent networks than the mean-field method, using a recently introduced pragmatic domain.
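
The key step behind action anonymity is that, under permutation invariance, the joint action of the other agents can be summarized by a count vector over actions, whose size depends on the number of actions rather than growing exponentially with the number of agents. A minimal sketch follows; the action-space size and names are illustrative, not taken from the paper.

    # Illustrative sketch of action anonymity: replace the ordered joint action of the
    # other agents with a permutation-invariant count (histogram) over actions.
    from collections import Counter

    N_ACTIONS = 4  # hypothetical action-space size

    def joint_action_to_counts(other_agents_actions):
        """Map a tuple of other agents' actions to a count vector of length N_ACTIONS."""
        counts = Counter(other_agents_actions)
        return [counts.get(a, 0) for a in range(N_ACTIONS)]

    # Two permutations of the same joint action map to the same representation,
    # so a critic conditioned on counts sees one input instead of many equivalent ones.
    assert joint_action_to_counts((2, 0, 2, 1)) == joint_action_to_counts((0, 2, 1, 2))
    # The representation size is N_ACTIONS, independent of the number of agents.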


A Framework and Method for Online Inverse Reinforcement Learning

arXiv.org Artificial Intelligence

Inverse reinforcement learning (IRL) is the problem of learning the preferences of an agent from observations of its behavior on a task. While this problem has been well investigated, the related problem of online IRL, where the observations are accrued incrementally yet the demands of the application often prohibit a full rerun of an IRL method, has received relatively less attention. We introduce the first formal framework for online IRL, called incremental IRL (I2RL), and a new method that advances maximum entropy IRL with hidden variables to this setting. Our formal analysis shows that the new method has monotonically improving performance with more demonstration data, as well as probabilistically bounded error, under both full and partial observability. Experiments in a simulated robotic application of penetrating a continuous patrol under occlusion show the relatively improved performance and speed-up of the new method and validate the utility of online IRL.
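
As a rough illustration of the incremental setting (a simplification, not the paper's I2RL method), the expert feature expectations used by maximum entropy IRL can be maintained as a running mean that is updated as each new demonstration session arrives, so earlier sessions never need to be reprocessed.

    import numpy as np

    class RunningFeatureExpectation:
        """Illustrative running average of expert feature expectations for online IRL.
        This is a simplified sketch, not the I2RL formulation from the paper."""

        def __init__(self, n_features):
            self.mean = np.zeros(n_features)
            self.count = 0

        def update(self, trajectory_features):
            """trajectory_features: array of per-trajectory (e.g., discounted) feature sums."""
            for phi in trajectory_features:
                self.count += 1
                # Incremental mean update: no need to store or replay past sessions.
                self.mean += (phi - self.mean) / self.count
            return self.mean

    # Usage: feed each newly observed batch of demonstrations as it arrives.
    fe = RunningFeatureExpectation(n_features=3)
    fe.update(np.array([[1.0, 0.0, 2.0], [0.5, 1.5, 1.0]]))   # session 1
    fe.update(np.array([[0.0, 1.0, 1.0]]))                    # session 2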


Detection of Plan Deviation in Multi-Agent Systems

AAAI Conferences

Plan monitoring in a collaborative multi-agent system requires an agent to not only monitor the execution of its own plan, but also to detect possible deviations or failures in the plan execution of its teammates. In domains featuring partial observability and uncertainty in the agents’ sensing and actuation, especially where communication among agents is sparse (as a part of a cost-minimized plan), plan monitoring can be a significant challenge. We design an Expectation Maximization (EM) based algorithm for detection of plan deviation of teammates in such a multi-agent system. However, a direct implementation of this algorithm is intractable, so we also design an alternative approach grounded on the agents’ plans, for tractability. We establish its equivalence to the intractable version, and evaluate these techniques in some challenging tasks.
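
The abstract names the algorithm family but not its details; the toy sketch below, with made-up observation-model parameters, shows the general EM flavor in this setting: treat deviation as a hidden variable, compute its posterior from how well noisy action observations match the teammate's plan (E-step), and re-estimate the deviation rate (M-step).

    import numpy as np

    def em_deviation_estimate(obs_matches, p_match_if_on_plan=0.9,
                              p_match_if_deviated=0.3, n_iters=20):
        """Toy EM sketch for estimating how much observed behavior is explained by deviation.

        obs_matches: array of 0/1 flags, 1 if the noisily observed action matched the
        action the teammate's plan prescribes at that step. The observation-model
        parameters are made-up numbers, not values from the paper.
        """
        obs = np.asarray(obs_matches, dtype=float)
        pi = 0.5  # mixing weight of the "deviated" hypothesis, re-estimated each iteration
        for _ in range(n_iters):
            # E-step: posterior responsibility of the "deviated" hypothesis per step.
            lik_dev = np.where(obs == 1, p_match_if_deviated, 1 - p_match_if_deviated)
            lik_plan = np.where(obs == 1, p_match_if_on_plan, 1 - p_match_if_on_plan)
            resp = pi * lik_dev / (pi * lik_dev + (1 - pi) * lik_plan)
            # M-step: update the deviation weight.
            pi = resp.mean()
        return pi

    print(em_deviation_estimate([1, 1, 0, 0, 0, 1, 0, 0]))  # mostly mismatches -> high estimate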


Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs

AAAI Conferences

Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. Recently a Monte Carlo based distributed reinforcement learning approach was proposed, where agents take turns to learn best responses to each other's policies. This promotes decentralization of the policy computation problem, and relaxes reliance on full knowledge of the problem parameters. However, this Monte Carlo approach has a large sample complexity, which we address in this paper. In particular, we propose and analyze a modified version of the previous algorithm that adaptively eliminates parts of the experience tree from further exploration, thus requiring fewer samples while ensuring unchanged confidence in the learned value function. Experiments demonstrate significant reductions in sample complexity (the maximum reductions ranging from 61% to 91% across different benchmark Dec-POMDP problems), with the final policies often being better due to more focused exploration.
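
The abstract does not spell out the pruning criterion; the sketch below shows the general flavor of a confidence-based elimination test using a Hoeffding bound, with hypothetical names and parameters, where a branch of the experience tree is dropped once its value estimate provably trails the best branch at the desired confidence level.

    import math

    def can_prune(best_mean, candidate_mean, n_samples, value_range, delta=0.05):
        """Illustrative Hoeffding-style pruning test (not the paper's exact criterion).

        Prune a candidate branch of the experience tree when, with confidence 1 - delta,
        its value estimate cannot catch up to the current best branch.
        """
        # Hoeffding bound on the deviation of an n-sample mean of values in [0, value_range].
        radius = value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n_samples))
        return candidate_mean + radius < best_mean - radius

    # Example: with 500 samples each and values in [0, 10], a branch trailing by 2.0
    # can be eliminated from further exploration.
    print(can_prune(best_mean=7.5, candidate_mean=5.5, n_samples=500, value_range=10.0))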


Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs

AAAI Conferences

Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. We propose a distributed reinforcement learning approach, where agents take turns to learn best responses to each other’s policies. This promotes decentralization of the policy computation problem, and relaxes reliance on the full knowledge of the problem parameters. We derive the relation between the sample complexity of best response learning and error tolerance. Our key contribution is to show that sample complexity could grow exponentially with the problem horizon. We show empirically that even if the sample requirement is set lower than what theory demands, our learning approach can produce (near) optimal policies in some benchmark Dec-POMDP problems.
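
A back-of-the-envelope illustration of why the sample requirement can grow exponentially with the horizon (this is not the paper's derived bound): each step of a local history can branch over every action-observation pair, so the number of distinct length-T histories that may need to be sampled is already (|A| x |O|)^T.

    # Back-of-the-envelope illustration (not the paper's derived bound): the number
    # of distinct local histories an agent may need samples for grows exponentially
    # in the horizon, since every step can branch over each (action, observation) pair.
    def num_local_histories(n_actions, n_observations, horizon):
        return (n_actions * n_observations) ** horizon

    for T in range(1, 6):
        # Even with 2 actions and 2 observations: 4, 16, 64, 256, 1024 ...
        print(T, num_local_histories(n_actions=2, n_observations=2, horizon=T))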


Informed Initial Policies for Learning in Dec-POMDPs

AAAI Conferences

Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a formal model for planning in cooperative multiagent systems where agents operate with noisy sensors and actuators, and local information. Prevalent Dec-POMDP solution techniques have mostly been centralized and have assumed knowledge of the model. In real world scenarios, however, solving centrally may not be an option and model parameters may be unknown. To address this, we propose a distributed, model-free algorithm for learning Dec-POMDP policies, in which agents take turns learning, with each agent not currently learning following a static policy. For agents that have not yet learned a policy, this static policy must be initialized. We propose a principled method for learning such initial policies through interaction with the environment. We show that by using such informed initial policies, our alternate learning algorithm can find near-optimal policies for two benchmark problems.
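
The turn-taking scheme can be sketched as a simple outer loop: one agent learns a best response while all others execute fixed policies, and before any agent has had a learning turn its static policy comes from the proposed informed initialization rather than being arbitrary. The function and interface names below are assumptions made for illustration, not the paper's code.

    # Hypothetical sketch of the turn-taking scheme: `env`, `informed_initial_policy`,
    # and `learn_best_response` stand in for the paper's environment and learners;
    # their names and interfaces are assumptions.
    def alternating_learning(env, agents, informed_initial_policy, learn_best_response,
                             n_rounds=5, episodes_per_turn=1000):
        # Before any agent has had a learning turn, every static policy is an informed
        # initial policy learned from interaction with the environment.
        policies = {a: informed_initial_policy(env, a) for a in agents}
        for _ in range(n_rounds):
            for learner in agents:
                # All agents except `learner` follow their current static policies;
                # `learner` improves its own policy against these fixed teammates.
                fixed = {a: p for a, p in policies.items() if a != learner}
                policies[learner] = learn_best_response(
                    env, learner, fixed, episodes=episodes_per_turn
                )
        return policies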


Branch and Price for Multi-Agent Plan Recognition

AAAI Conferences

The problem of identifying the (dynamic) team structures and team behaviors from the observed activities of multiple agents is called Multi-Agent Plan Recognition (MAPR). We extend a recent formalization of this problem to accommodate a compact, partially ordered, multi-agent plan language, as well as complex plan execution models, particularly plan abandonment and activity interleaving. We adopt a branch and price approach to solve MAPR in such a challenging setting, and fully instantiate the (generic) pricing problem for MAPR. We show experimentally that this approach outperforms a recently proposed hypothesis pruning algorithm in two domains: multi-agent blocks world, and intrusion detection. The key benefit of the branch and price approach is its ability to grow the necessary component (occurrence) space from which the hypotheses are constructed, rather than begin with a fully enumerated component space of intractable size and search it with pruning. Our formulation of MAPR has the broad objective of bringing mature Operations Research methodologies to bear upon MAPR, envisaged to have an impact similar to that of mature SAT solvers on planning.
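
At its core, branch and price wraps branch and bound around column generation: a restricted master LP is solved over the occurrences generated so far, its dual values are passed to a pricing problem that searches for a new occurrence with negative reduced cost, and the loop stops when no such column exists. The sketch below shows only that column-generation core, with a pricing step that merely scans a hypothetical finite candidate pool; the paper's MAPR-specific pricing problem and branching rules are not reproduced.

    # Illustrative column-generation core of a branch-and-price scheme. The pricing
    # step here merely scans a hypothetical finite pool of candidate occurrences.
    import numpy as np
    from scipy.optimize import linprog

    def solve_restricted_master(columns, costs, n_rows):
        """LP relaxation: explain each observed row exactly once with current columns."""
        A_eq = np.array(columns, dtype=float).T   # rows = observations, cols = occurrences
        res = linprog(c=costs, A_eq=A_eq, b_eq=np.ones(n_rows),
                      bounds=[(0, None)] * len(costs), method="highs")
        return res.fun, res.x, res.eqlin.marginals  # objective, primal, dual values

    def column_generation(initial_columns, initial_costs, candidate_pool, n_rows, tol=1e-7):
        # `initial_columns` must make the master feasible (e.g., high-cost artificial
        # columns); `candidate_pool` is a list of (coverage_vector, cost) pairs.
        columns, costs = list(initial_columns), list(initial_costs)
        while True:
            obj, x, duals = solve_restricted_master(columns, costs, n_rows)
            # Pricing: pick the candidate column with the most negative reduced cost.
            best = min(((c - np.dot(duals, col), col, c) for col, c in candidate_pool),
                       key=lambda t: t[0])
            if best[0] >= -tol:
                return obj, x, columns   # no improving column: LP relaxation solved
            columns.append(best[1])
            costs.append(best[2])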


Multi-Agent Plan Recognition: Formalization and Algorithms

AAAI Conferences

Multi-Agent Plan Recognition (MAPR) seeks to identify the dynamic team structures and team behaviors from observations of the activity sequences of a set of intelligent agents, based on a library of known team activities (plan library). It has important applications in analyzing data from automated monitoring, surveillance, and intelligence analysis in general. In this paper, we formalize MAPR using a basic model that explicates the cost of abduction in single-agent plan recognition by "flattening" or decompressing the (usually compact, hierarchical) plan library. We show that single-agent plan recognition with a decompressed library can be solved in time polynomial in the input size, while it is known to be NP-complete with a library compressed by partial-ordering constraints. This leads to an important insight: although the compactness of the plan library plays an important role in the hardness of single-agent plan recognition (as recognized in the existing literature), that is not the case with multiple agents. We show, for the first time, that MAPR is NP-complete even when the (multi-agent) plan library is fully decompressed. As with previous solution approaches, we break the problem into two stages: hypothesis generation and hypothesis search. We show that Knuth's Algorithm X (with the efficient "dancing links" representation) is particularly suited for our model, and can be adapted to perform a branch and bound search for the second stage in this model. We show empirically that this new approach leads to significant pruning of the hypothesis space in MAPR.
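
For the hypothesis-search stage, the abstract points to Knuth's Algorithm X over an exact-cover formulation, in which every observed activity must be explained by exactly one hypothesized occurrence. The sketch below is a compact dict-based rendering of Algorithm X (without the dancing-links data structure or the paper's branch-and-bound extension); the occurrence names in the toy example are hypothetical.

    def algorithm_x(X, Y, solution=None):
        """Knuth's Algorithm X for exact cover, using dicts instead of dancing links.

        X: dict mapping each element to the set of row-names that cover it.
        Y: dict mapping each row-name to the list of elements it covers.
        Yields lists of row-names that together cover every element exactly once.
        """
        if solution is None:
            solution = []
        if not X:
            yield list(solution)
            return
        # Choose the element covered by the fewest rows (Knuth's heuristic).
        c = min(X, key=lambda e: len(X[e]))
        for r in list(X[c]):
            solution.append(r)
            cols = _select(X, Y, r)
            yield from algorithm_x(X, Y, solution)
            _deselect(X, Y, r, cols)
            solution.pop()

    def _select(X, Y, r):
        cols = []
        for e in Y[r]:
            for other_row in X[e]:
                for e2 in Y[other_row]:
                    if e2 != e:
                        X[e2].remove(other_row)
            cols.append(X.pop(e))
        return cols

    def _deselect(X, Y, r, cols):
        for e in reversed(Y[r]):
            X[e] = cols.pop()
            for other_row in X[e]:
                for e2 in Y[other_row]:
                    if e2 != e:
                        X[e2].add(other_row)

    # Toy example: observations {1,2,3,4} must each be explained by exactly one occurrence.
    Y = {"occ_A": [1, 2], "occ_B": [3, 4], "occ_C": [2, 3], "occ_D": [1], "occ_E": [4]}
    X = {e: {r for r in Y if e in Y[r]} for e in {1, 2, 3, 4}}
    print(list(algorithm_x(X, Y)))   # e.g. [['occ_A', 'occ_B'], ...]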