How to Do Things with Words: A Bayesian Approach

Journal of Artificial Intelligence Research

Communication changes the beliefs of the listener and of the speaker. The value of a communicative act stems from the valuable belief states which result from this act. To model this, we build on the Interactive POMDP (IPOMDP) framework, which extends POMDPs to allow agents to model others in multi-agent settings, and we include communication that can take place between the agents to formulate Communicative IPOMDPs (CIPOMDPs). We treat communication as a type of action; therefore, decisions regarding communicative acts are based on decision-theoretic planning using the Bellman optimality principle and value iteration, just as they are for all other rational actions. As in any form of planning, the results of actions need to be precisely specified. We use Bayes' theorem to derive how agents update their beliefs in CIPOMDPs; updates are due to agents' actions, observations, messages they send to other agents, and messages they receive from others. The Bayesian decision-theoretic approach frees us from the commonly made assumption of cooperative discourse: we consider agents that are free to be dishonest while communicating and are guided only by their selfish rationality. We use a simple Tiger game to illustrate the belief update, and to show that the ability to rationally communicate allows agents to improve the efficiency of their interactions.
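
The belief update at the core of this approach can be illustrated in miniature. Below is a hedged Python sketch of a Bayesian update in the Tiger game that treats a received message as just another observation; the 0.85 sensor accuracy is the standard Tiger value, while the 0.7 "honesty" likelihood for the message is a purely illustrative assumption, not a value from the paper.

```python
STATES = ("tiger-left", "tiger-right")

def bayes_update(belief, likelihood):
    """Bayes' theorem: posterior = likelihood * prior, renormalized."""
    posterior = {s: likelihood[s] * belief[s] for s in STATES}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

# P(hear growl on the left | state): the standard 0.85-accurate sensor.
GROWL_LEFT = {"tiger-left": 0.85, "tiger-right": 0.15}

# P(receive message "tiger is left" | state), for a speaker the listener
# judges honest with probability 0.7 (a hypothetical trust level).
MSG_LEFT = {"tiger-left": 0.7, "tiger-right": 0.3}

belief = {"tiger-left": 0.5, "tiger-right": 0.5}
belief = bayes_update(belief, GROWL_LEFT)  # after hearing a growl
belief = bayes_update(belief, MSG_LEFT)    # after receiving the message
print(belief)  # belief mass concentrates on "tiger-left"
```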


Planning in Stochastic Environments with Goal Uncertainty

arXiv.org Artificial Intelligence

We present the Goal Uncertain Stochastic Shortest Path (GUSSP) problem -- a general framework to model path planning and decision making in stochastic environments with goal uncertainty. The framework extends the stochastic shortest path (SSP) model to dynamic environments in which it is impossible to determine the exact goal states ahead of plan execution. GUSSPs introduce flexibility in goal specification by allowing a belief over possible goal configurations. The unique observations at potential goals help the agent identify the true goal during plan execution. The partial observability is restricted to goals, facilitating the reduction to an SSP with a modified state space. We formally define a GUSSP and discuss its theoretical properties. We then propose an admissible heuristic that reduces the planning time when using FLARES -- a state-of-the-art probabilistic planner. We also propose a determinization approach for solving this class of problems. Finally, we present empirical results on a search and rescue mobile robot and three other problem domains in simulation.
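
To make the goal-belief mechanism concrete, here is a minimal Python sketch (not the authors' implementation) of the Bayesian update an agent might perform after sensing at a candidate goal location; the sensor rates p_true and p_false are illustrative assumptions. In the full model, planning would then proceed over augmented states pairing the physical location with this belief.

```python
def update_goal_belief(belief, location, saw_goal, p_true=0.9, p_false=0.1):
    """Bayesian update of the goal belief after sensing at `location`.
    p_true / p_false are hypothetical true/false-positive sensor rates."""
    new = {}
    for g, p in belief.items():
        if g == location:
            lik = p_true if saw_goal else 1.0 - p_true
        else:
            lik = p_false if saw_goal else 1.0 - p_false
        new[g] = lik * p
    z = sum(new.values())
    return {g: p / z for g, p in new.items()}

# Uniform belief over three candidate goal locations.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
belief = update_goal_belief(belief, "A", saw_goal=False)
print(belief)  # probability mass shifts away from A toward B and C
```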


The Temporal Dynamics of Belief-based Updating of Epistemic Trust: Light at the End of the Tunnel?

arXiv.org Artificial Intelligence

We start with the distinction between outcome-based and belief-based Bayesian models of the sequential update of agents' beliefs and of the subjective reliability of sources (trust). We then focus on discussing the influential Bayesian model of belief-based trust update by Eric Olsson, which models dichotomic events and explicitly represents anti-reliability. After sketching some disastrous recent results for this perhaps most promising model of belief update, we show new simulation results for the temporal dynamics of belief learning with and without trust update, and with and without communication. The results seem to shed at least a somewhat more positive light on the communicating and trust-updating agents. This may be a light at the end of the tunnel for belief-based models of trust update, but the interpretation of the clear findings is much less clear.
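
For readers unfamiliar with belief-based trust update, the following Python sketch shows one way a joint update of a belief and the subjective reliability of a source can work for a dichotomic hypothesis, in the spirit of (but not identical to) Olsson-style models; the reliability grid, the prior independence of belief and trust, and all numbers are illustrative assumptions.

```python
# Reliability values below 0.5 encode anti-reliability: such a source
# tends to assert the opposite of the truth.
RELIABILITIES = (0.1, 0.3, 0.5, 0.7, 0.9)

def update(p_h, trust, report_says_h):
    """Jointly update P(h) and the trust distribution P(r) after a
    yes/no report on h; a source of reliability r asserts the truth
    with probability r. Assumes h and r are independent a priori."""
    joint = {}
    for r, p_r in trust.items():
        lik_if_h = r if report_says_h else 1.0 - r
        lik_if_not_h = (1.0 - r) if report_says_h else r
        joint[(True, r)] = lik_if_h * p_h * p_r
        joint[(False, r)] = lik_if_not_h * (1.0 - p_h) * p_r
    z = sum(joint.values())
    p_h_new = sum(v for (h, _), v in joint.items() if h) / z
    trust_new = {r: (joint[(True, r)] + joint[(False, r)]) / z
                 for r in trust}
    return p_h_new, trust_new

trust = {r: 1.0 / len(RELIABILITIES) for r in RELIABILITIES}
p_h, trust = update(0.6, trust, report_says_h=True)
print(round(p_h, 3))  # belief in h rises; trust shifts to higher r
```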


Active Goal Recognition

arXiv.org Artificial Intelligence

To coordinate with other systems, agents must be able to determine what those systems are currently doing and predict what they will be doing in the future: this is the task of plan and goal recognition. There are many methods for plan and goal recognition, but they assume a passive observer that continually monitors the target system. Real-world domains, where information gathering has a cost (e.g., moving a camera or a robot, or time taken away from another task), will often require a more active observer. We propose to combine goal recognition with other observer tasks in order to obtain active goal recognition (AGR). We discuss this problem and provide a model and preliminary experimental results for one form of this composite problem. As expected, the results show that optimal behavior in AGR problems balances information gathering with other actions (e.g., task completion) so as to achieve all tasks jointly and efficiently. We hope that our formulation opens the door for extensive further research on this interesting and realistic problem.
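
The information-gathering trade-off can be caricatured in a few lines. The myopic decision rule below is not the paper's model; the costs, the entropy-based value-of-information proxy, and all parameter values are illustrative assumptions.

```python
from math import log2

def entropy(belief):
    """Shannon entropy of the goal posterior, in bits."""
    return -sum(p * log2(p) for p in belief.values() if p > 0)

def choose_action(belief, observe_cost=1.0, task_value=0.5, info_value=2.0):
    """Observe only while the (entropy-proxied) expected information
    gain outweighs both the observation cost and the forgone task."""
    if info_value * entropy(belief) - observe_cost > task_value:
        return "observe-target"
    return "work-on-own-task"

print(choose_action({"goal-A": 0.5, "goal-B": 0.5}))    # uncertain: observe
print(choose_action({"goal-A": 0.95, "goal-B": 0.05}))  # confident: own task
```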


Memory Management in Resource-Bounded Agents

arXiv.org Artificial Intelligence

Memory in an agent system is a process of reasoning: it is the learning process of strengthening a concept. The interaction between an agent and the environment can play an important role in constructing its memory and may affect its future behaviour. In fact, through memory an agent is potentially able to recall and to learn from experiences so that its beliefs and its future course of action are grounded in these experiences. In computational logic, [2] introduces DLEK (Dynamic Logic of Explicit beliefs and Knowledge) as a logical formalization of short-term and long-term memory. The underlying idea is to represent reasoning about the formation of beliefs through perception and inference in non-omniscient, resource-bounded agents. DLEK has, however, no notion of time, while agents' actual perceptions are inherently timed and so are many of the inferences drawn from such perceptions. In this paper we present an extension of LEK/DLEK to T-LEK/T-DLEK ("Timed LEK" and "Timed DLEK") obtained by introducing a special function which associates to each belief its arrival time and controls timed inferences. Through this function it is easier to keep the evolution of the surrounding world under control, and the representation is more complete. This abstract is an extended version of [3], where we introduced explicit time instants and time intervals in formulas, and it is extracted from [4].
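
A rough Python sketch of the timing idea (the data structures and the recency window are illustrative assumptions, not the DLEK formalism): each belief is stamped with its arrival time, and an inference rule fires only when all of its premises are recent enough.

```python
class TimedMemory:
    def __init__(self, recency_window):
        self.window = recency_window
        self.arrival = {}  # the timing function: belief atom -> arrival time

    def perceive(self, atom, now):
        self.arrival[atom] = now

    def infer(self, premises, conclusion, now):
        """Fire a rule only if every premise arrived within the recency
        window; the conclusion is stamped with the current time."""
        recent = all(now - self.arrival.get(p, float("-inf")) <= self.window
                     for p in premises)
        if recent:
            self.arrival[conclusion] = now
        return recent

m = TimedMemory(recency_window=5)
m.perceive("door_open", now=1)
m.perceive("alarm_on", now=3)
print(m.infer(["door_open", "alarm_on"], "intrusion", now=4))  # True
print(m.infer(["door_open"], "stale_conclusion", now=20))      # False
```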


A Temporal Module for Logical Frameworks

arXiv.org Artificial Intelligence

In the literature, different kinds of timed logical frameworks exist, where time is specified directly using hybrid logics (cf., e.g., [2]), temporal epistemic logic (cf., e.g., [4]) or simply by using Linear Temporal Logic. We propose a temporal module which can be adopted to "temporalize" many logical frameworks. This module is in practice a particular kind of function that assigns a "timing" to atoms. We have exploited this T function in two different settings. The first one is the formalization of reasoning on the formation of beliefs and the interaction with background knowledge in non-omniscient agents' memory.
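
A minimal sketch of what such a T function might look like in Python, assuming timings are either instants or closed intervals (a representational choice made here purely for illustration):

```python
def T(atom, timing):
    """Attach a timing to an atom: an int instant or a (lo, hi) interval."""
    return (atom, timing)

def holds_at(timed_atom, t):
    """Check whether a timed atom holds at time t."""
    _, timing = timed_atom
    if isinstance(timing, tuple):
        lo, hi = timing
        return lo <= t <= hi
    return timing == t

a = T("light_on", 10)       # atom timed at a single instant
b = T("door_open", (3, 8))  # atom timed over an interval
print(holds_at(a, 10), holds_at(b, 5), holds_at(b, 9))  # True True False
```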


Exploiting Belief Bases for Building Rich Epistemic Structures

arXiv.org Artificial Intelligence

We introduce a semantics for epistemic logic that exploits a belief base abstraction. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and epistemic alternative are primitive, in the proposed semantics they are not primitive but are defined from the concept of belief base. We show that this semantics allows us to define the universal epistemic model in a simpler and more compact way than existing inductive constructions of it. We provide (i) a number of semantic equivalence results for both the basic epistemic language with "individual belief" operators and its extension with the notion of "only believing", and (ii) a lower-bound complexity result for epistemic logic model checking relative to the universal epistemic model.
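
The derived notion of epistemic alternative can be sketched in a few lines of Python, restricting belief bases to sets of atoms for brevity (the paper allows arbitrary formulas): a world pairs a valuation with a belief base, and the alternatives at a world are exactly the worlds whose valuation satisfies that base.

```python
from itertools import combinations, product

ATOMS = ("p", "q")

def subsets(xs):
    return [frozenset(c) for n in range(len(xs) + 1)
            for c in combinations(xs, n)]

# A world pairs a valuation (true atoms) with a single agent's belief
# base (believed atoms); one agent keeps the sketch small.
WORLDS = list(product(subsets(ATOMS), subsets(ATOMS)))

def alternatives(world):
    """Epistemic alternatives of a world: all worlds whose valuation
    makes everything in the agent's belief base true."""
    _, base = world
    return [w for w in WORLDS if base <= w[0]]

w = (frozenset({"p"}), frozenset({"p", "q"}))  # p true; agent believes p, q
# The agent believes q: q holds at every epistemic alternative.
print(all("q" in valuation for valuation, _ in alternatives(w)))  # True
```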


An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types

arXiv.org Artificial Intelligence

Many multiagent applications require an agent to learn quickly how to interact with previously unknown other agents. To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents. The posterior belief is complemented by the prior belief, which specifies the subjective likelihood of policies before any actions are observed. In this paper, we present the first comprehensive empirical study on the practical impact of prior beliefs over policies in repeated interactions. We show that prior beliefs can have a significant impact on the long-term performance of such methods, and that the magnitude of the impact depends on the depth of the planning horizon. Moreover, our results demonstrate that automatic methods can be used to compute prior beliefs with consistent performance effects. This indicates that prior beliefs could be eliminated as a manual parameter and instead be computed automatically.
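
The posterior computation these methods share is simple to state. The toy policies and uniform prior below are illustrative assumptions; real uses would condition policies on the full interaction history.

```python
# Each toy policy is a fixed action distribution; pi(a) below stands in
# for pi(a | history) in the general case.
POLICIES = {
    "mostly-cooperate": {"C": 0.9, "D": 0.1},
    "mostly-defect":    {"C": 0.1, "D": 0.9},
    "uniform-random":   {"C": 0.5, "D": 0.5},
}

def posterior(prior, observed_actions):
    """P(pi | a_1..a_t) is proportional to P(pi) * prod_t pi(a_t)."""
    post = dict(prior)
    for a in observed_actions:
        post = {pi: post[pi] * POLICIES[pi][a] for pi in post}
        z = sum(post.values())
        post = {pi: p / z for pi, p in post.items()}
    return post

uniform_prior = {pi: 1.0 / len(POLICIES) for pi in POLICIES}
print(posterior(uniform_prior, ["C", "C", "D"]))
```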


Cost-Based Goal Recognition in Navigational Domains

Journal of Artificial Intelligence Research

Goal recognition is the problem of determining an agent's intent by observing her behaviour. Contemporary solutions for general task-planning relate the probability of a goal to the cost of reaching it. We adapt this approach to goal recognition in the strict context of path-planning. We show (1) that a simpler formula provides an identical result to current state-of-the-art in less than half the time under all but one set of conditions. Further, we prove (2) that the probability distribution based on this technique is independent of an agent's past behaviour and present a revised formula that achieves goal recognition by reference to the agent's starting point and current location only. Building on this, we demonstrate (3) that a Radius of Maximum Probability (i.e., the distance from a goal within which that goal is guaranteed to be the most probable) can be calculated from relative cost-distances between the candidate goals and a start location, without needing to calculate any actual probabilities. In this extended version of earlier work, we generalise our framework to the continuous domain and discuss our results, including the conditions under which our findings can be generalised back to goal recognition in general task-planning.
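
The flavour of result (2) can be illustrated with a hedged sketch: a goal posterior computed from the start point and current location alone, using cost differences. The Manhattan-distance cost function, the Boltzmann form, and the beta parameter are illustrative assumptions rather than the paper's exact formula.

```python
from math import exp

def cost(a, b):
    """Optimal path cost on an obstacle-free grid: Manhattan distance."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_posterior(start, current, goals, beta=1.0):
    """P(g | start, current) from the extra cost that passing through
    `current` adds to the cheapest start-to-goal path."""
    weights = {g: exp(-beta * (cost(start, current) + cost(current, g)
                               - cost(start, g))) for g in goals}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

goals = [(5, 0), (0, 5)]
print(goal_posterior(start=(0, 0), current=(3, 0), goals=goals))
# Having moved right, (5, 0) is now far more probable than (0, 5).
```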


Rethinking Epistemic Logic with Belief Bases

arXiv.org Artificial Intelligence

We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of multi-agent belief base. Unlike existing Kripke-style semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are not primitive but are defined from the concept of belief base. We provide a complete axiomatization and prove decidability for our logic via a finite model argument. We also provide a polynomial embedding of our logic into Fagin & Halpern's logic of general awareness and establish a complexity result for our logic via the embedding.
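
The explicit/implicit distinction that this semantics supports can be sketched as follows (an illustrative simplification: formulas are encoded as Python predicates over valuations, and implicit belief is checked by brute-force model enumeration):

```python
from itertools import product

ATOMS = ("p", "q")

def valuations():
    for bits in product((False, True), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def implicitly_believes(base, formula):
    """A formula is implicitly believed iff it holds in every valuation
    that satisfies the whole belief base."""
    return all(formula(v) for v in valuations()
               if all(f(v) for f in base))

p = lambda v: v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]

base = [p, p_implies_q]              # explicit beliefs: p and p -> q
print(q in base)                     # explicitly believed? False
print(implicitly_believes(base, q))  # implicitly believed? True
```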