General Game Playing: Overview of the AAAI Competition

AI Magazine

A general game playing system is one that can accept a formal description of a game and play the game effectively without human intervention. Unlike specialized game players, such as Deep Blue, general game players do not rely on algorithms designed in advance for specific games; as a result, they are able to play many different kinds of games. In order to promote work in this area, the AAAI is sponsoring an open competition at this summer's Twentieth National Conference on Artificial Intelligence. This article is an overview of the technical issues and logistics associated with this summer's competition, as well as the relevance of general game playing to the long-range goals of artificial intelligence.
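
For a sense of what "playing without human intervention" can mean in practice, here is a minimal sketch of a game-independent player. It assumes the formal game description has been compiled into a generic state-machine interface and selects moves by flat Monte Carlo playouts, using only the rules themselves and no game-specific heuristics. All class, method, and function names are illustrative assumptions, not the competition's actual protocol.

```python
import random

class Game:
    """Hypothetical interface a formal game description compiles into.
    Method names here are assumptions for illustration only."""
    def roles(self): ...                      # all players in the game
    def initial_state(self): ...
    def legal_moves(self, state, role): ...
    def next_state(self, state, joint_move): ...  # joint_move: {role: move}
    def is_terminal(self, state): ...
    def goal(self, state, role): ...          # payoff at a terminal state

def random_playout(game, state, role):
    """Play uniformly random legal moves to a terminal state; return payoff."""
    while not game.is_terminal(state):
        joint = {r: random.choice(game.legal_moves(state, r))
                 for r in game.roles()}
        state = game.next_state(state, joint)
    return game.goal(state, role)

def choose_move(game, state, role, playouts_per_move=50):
    """Game-independent move selection: average payoff of random playouts.
    Only the rules are used, so the same code plays any compiled game."""
    def value(move):
        total = 0
        for _ in range(playouts_per_move):
            joint = {r: (move if r == role
                         else random.choice(game.legal_moves(state, r)))
                     for r in game.roles()}
            total += random_playout(game, game.next_state(state, joint), role)
        return total / playouts_per_move
    return max(game.legal_moves(state, role), key=value)
```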


Towards a Theory of Intentions for Human-Robot Collaboration

arXiv.org Artificial Intelligence

The architecture described in this paper encodes a theory of intentions based on the key principles of non-procrastination, persistence, and automatically limiting reasoning to relevant knowledge and observations. The architecture reasons with transition diagrams of any given domain at two different resolutions, with the fine-resolution description defined as a refinement of, and hence tightly coupled to, a coarse-resolution description. Non-monotonic logical reasoning with the coarse-resolution description computes an activity (i.e., plan) comprising abstract actions for any given goal. Each abstract action is implemented as a sequence of concrete actions by automatically zooming to and reasoning with the part of the fine-resolution transition diagram relevant to the current coarse-resolution transition and the goal. Each concrete action in this sequence is executed using probabilistic models of the uncertainty in sensing and actuation, and the corresponding fine-resolution outcomes are used to infer coarse-resolution observations that are added to the coarse-resolution history. The architecture's capabilities are evaluated in the context of a simulated robot assisting humans in an office domain, on a physical robot (Baxter) manipulating tabletop objects, and on a wheeled robot (Turtlebot) moving objects to particular places or people. The experimental results indicate improvements in reliability and computational efficiency compared with an architecture that does not include the theory of intentions, and an architecture that does not include zooming for fine-resolution reasoning.
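
A minimal sketch of the coarse-to-fine control loop this abstract describes, assuming the reasoning components are supplied as callables. In the paper these steps are realized with non-monotonic logical reasoning and probabilistic models of uncertainty, none of which appears here; all parameter names are hypothetical stand-ins.

```python
from typing import Any, Callable, List

def achieve(goal: Any,
            plan_coarse: Callable[[Any], List[Any]],       # coarse-resolution activity for goal
            zoom: Callable[[Any, Any], Any],               # relevant fine-resolution fragment
            plan_fine: Callable[[Any, Any], List[Any]],    # concrete actions for one abstract action
            execute: Callable[[Any], Any],                 # execution under sensing/actuation noise
            lift_observation: Callable[[Any], Any],        # fine outcome -> coarse observation
            history: List[Any]) -> None:
    """One pass of the two-resolution scheme: plan abstractly, then refine
    each abstract action by zoomed fine-resolution reasoning and execution."""
    for abstract_action in plan_coarse(goal):
        # Zoom: restrict fine-resolution reasoning to the part of the
        # transition diagram relevant to this coarse transition and the goal.
        zoomed = zoom(abstract_action, goal)
        for concrete_action in plan_fine(zoomed, abstract_action):
            outcome = execute(concrete_action)
            # Lift the fine-resolution outcome to a coarse-resolution
            # observation and record it in the coarse-resolution history.
            history.append(lift_observation(outcome))
```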


Dynamic Term-Modal Logics for Epistemic Planning

arXiv.org Artificial Intelligence

Classical planning frameworks are built on first-order languages. The first-order expressive power is desirable for compactly representing actions via schemas, and for specifying goal formulas such as $\neg\exists x\mathsf{blocks\_door}(x)$. In contrast, several recent epistemic planning frameworks build on propositional modal logic. The modal expressive power is desirable for investigating planning problems with epistemic goals such as $K_{a}\neg\mathsf{problem}$. The present paper presents an epistemic planning framework with the first-order expressiveness of classical planning, extended fully to the epistemic operators. In this framework, e.g., $\exists x K_{x}\exists y\mathsf{blocks\_door}(y)$ is a formula. Logics with this expressive power are called "term-modal" in the literature. This paper presents a rich but well-behaved semantics for term-modal logic. The semantics are given a dynamic extension using first-order "action models" allowing for epistemic planning, and it is shown how corresponding "action schemas" allow for a very compact action representation. Concerning metatheory, the paper defines axiomatic normal term-modal logics, shows a Canonical Model Theorem-like result, presents non-standard frame characterization formulas, shows decidability for the finite-agent case, and shows a general completeness result for the dynamic extension by reduction axioms.
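
To illustrate what "term-modal" buys, the following sketch (an assumed representation, not the paper's formalism) encodes formulas as a small Python AST in which the knowledge operator is indexed by a term rather than a fixed agent name, so the agent position itself can be quantified over.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Atom:
    predicate: str
    args: Tuple[Var, ...]

@dataclass(frozen=True)
class Exists:
    var: Var
    body: "Formula"

@dataclass(frozen=True)
class Knows:
    agent: Var          # the term-modal point: the agent index is a term,
    body: "Formula"     # so it can be bound by a quantifier, unlike the
                        # fixed subscript in propositional K_a.

Formula = Union[Atom, Exists, Knows]

# "Someone knows that something blocks the door":
# the formula from the abstract, Exists x. K_x Exists y. blocks_door(y).
x, y = Var("x"), Var("y")
phi = Exists(x, Knows(x, Exists(y, Atom("blocks_door", (y,)))))
```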


Answer Set Planning Under Action Costs

Journal of Artificial Intelligence Research

Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language Kc, which extends the declarative planning language K by action costs. Kc provides the notions of admissible and optimal plans: plans whose overall action costs are within a given limit, and plans whose overall action costs are minimal over all plans that achieve the goal, respectively.
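
The admissible/optimal distinction can be made concrete on an enumerated set of candidate plans. The sketch below is only that illustration, with hypothetical names and data; it is not how Kc itself works, Kc being a logic-based planning language evaluated by answer set solvers.

```python
from typing import Dict, List, Tuple

Plan = Tuple[str, ...]

def plan_cost(plan: Plan, cost: Dict[str, int]) -> int:
    """Overall cost of a plan: the sum of its action costs."""
    return sum(cost[action] for action in plan)

def admissible_plans(plans: List[Plan], cost: Dict[str, int], limit: int) -> List[Plan]:
    """Plans whose overall action cost is within the given limit."""
    return [p for p in plans if plan_cost(p, cost) <= limit]

def optimal_plans(plans: List[Plan], cost: Dict[str, int]) -> List[Plan]:
    """Plans of minimal overall action cost among all goal-achieving plans."""
    best = min(plan_cost(p, cost) for p in plans)
    return [p for p in plans if plan_cost(p, cost) == best]

# Hypothetical example: two ways to achieve the same goal.
cost = {"walk": 1, "load": 1, "drive": 3}
plans = [("walk", "load"), ("drive",)]
assert admissible_plans(plans, cost, limit=3) == [("walk", "load"), ("drive",)]
assert optimal_plans(plans, cost) == [("walk", "load")]
```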


Grounding Value Alignment with Ethical Principles

arXiv.org Artificial Intelligence

An important step in the development of value alignment (VA) systems in AI is understanding how values can interrelate with facts. Designers of future VA systems will need to use a hybrid approach in which ethical reasoning and empirical observation interrelate successfully in machine behavior. In this article we identify two problems with this interrelation that have been overlooked by AI discussants and designers. The first problem is that many AI designers inadvertently commit a version of what moral philosophers call the "naturalistic fallacy," that is, they attempt to derive an "ought" from an "is." We illustrate when and why this occurs. The second problem is that AI designers adopt training routines that fail to fully simulate human ethical reasoning in the integration of ethical principles and facts. Using concepts of quantified modal logic, we proceed to offer an approach that promises to simulate ethical reasoning in humans by connecting ethical principles on the one hand and propositions about states of affairs on the other.