Trading off Utility, Informativeness, and Complexity in Emergent Communication

Neural Information Processing Systems

Emergent communication (EC) research often focuses on optimizing task-specific utility as a driver for communication. However, there is increasing evidence that human languages are shaped by task-general communicative constraints and evolve under pressure to optimize the Information Bottleneck (IB) tradeoff between the informativeness and complexity of the lexicon. Here, we integrate these two approaches by trading off utility, informativeness, and complexity in EC. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to encode inputs into discrete signals embedded in a continuous space. We evaluate our approach in multi-agent reinforcement learning settings and in color reference games and show that: (1) VQ-VIB agents can continuously adapt to changing communicative needs and, in the color domain, align with human languages; (2) the emergent VQ-VIB embedding spaces are semantically meaningful and perceptually grounded; and (3) encouraging informativeness leads to faster convergence rates and improved utility, both in VQ-VIB and in prior neural architectures for symbolic EC, with VQ-VIB achieving higher utility for any given complexity. This work offers a new framework for EC that is grounded in information-theoretic principles that are believed to characterize human language evolution and that may facilitate human-agent interaction.
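The paper specifies the full VQ-VIB architecture and loss; the following is only a minimal NumPy sketch of the vector-quantization step the abstract describes (discrete signals embedded in a continuous space). The codebook size, dimensionality, and function names (`K`, `d`, `vq_encode`) are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: K discrete "signals", each embedded in R^d.
K, d = 8, 4
codebook = rng.normal(size=(K, d))

def vq_encode(z):
    """Quantize a continuous encoding z to its nearest codebook vector.

    Returns the index of the chosen discrete signal and its embedding:
    agents communicate a discrete token that still lives in a continuous
    space, which is what lets the embedding space carry semantic structure.
    """
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

# A speaker encodes an observation and sends the discrete index; the
# listener looks up the shared embedding.
z = rng.normal(size=d)
idx, embedding = vq_encode(z)
```

In training, this quantized channel would sit inside a loss that weighs task utility against IB-style informativeness and complexity terms; those weights and terms are the paper's contribution and are not reproduced here.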


The Utility of Explainable AI in Ad Hoc Human-Machine Teaming

Neural Information Processing Systems

Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the promise of enhancing team situational awareness (SA) and shared mental model development, which are the key characteristics of effective human-machine teams. Rapidly developing such mental models is especially critical in ad hoc human-machine teaming, where agents do not have a priori knowledge of others' decision-making strategies.


Reviews: On the Utility of Learning about Humans for Human-AI Coordination

Neural Information Processing Systems

Summary: The paper investigates the usefulness of modeling human behavior in human-AI collaborative tasks. To study this question, the paper introduces an experimental framework that consists of: (a) modeling human behavior using imitation learning, (b) training RL agents in several modes (self-play, training against the human imitator, etc.), and (c) measuring the joint performance of human-AI collaboration. Using both simulation-based experiments and a user study, the paper showcases the importance of accounting for human behavior when designing collaborative RL agents. Comments: The topic of the paper is interesting and important for modern hybrid human-AI decision-making systems. This seems like a well-written paper with solid contributions: to the best of my knowledge, no prior work has systematically investigated the utility of human modeling in the context of human-AI collaboration in RL.


Reviews: On the Utility of Learning about Humans for Human-AI Coordination

Neural Information Processing Systems

The paper proposes a new evaluation framework and benchmark for multi-agent learning settings where coordination with teammates is required to complete a task, and carefully evaluates state-of-the-art learning approaches in this novel setting, including evaluation with human players. All reviewers agreed that the paper's contributions are significant and are likely to influence future work in this field. The initial reviews noted several areas for improvement, including the need to precisely explain the relationship of this work to the substantial body of prior work in human-robot and human-AI interaction, several requests for clarification, and suggestions for further experimentation. The reviewers were satisfied with the author response, and in particular with the provided clarification of the relationship to prior work and of the paper's overall contribution. I encourage the authors to carefully consider all reviewer comments when preparing the camera-ready version.


Expert Systems: Techniques, Tools, and Applications

AI Magazine

The book is edited by Philip Klahr and the late Donald A. Waterman, both of Rand Corporation. The papers are selected from RAND technical reports published from 1977 to 1985. The book is most valuable to people learning knowledge engineering. Four of the papers provide interesting glimpses at the problems involved in transforming knowledge about a domain into computer representations. In addition, the book contains one or two interesting papers for researchers in each of the areas of knowledge acquisition, reasoning with uncertainty, and distributed problem solving.


Decision-Theoretic Planning

AI Magazine

The recent advances in computer speed and algorithms for probabilistic inference have led to a resurgence of work on planning under uncertainty. The aim is to design AI planners for environments where there might be incomplete or faulty information, where actions might not always have the same results, and where there might be tradeoffs between the different possible outcomes of a plan. Addressing uncertainty in AI planning algorithms will greatly increase the range of potential applications, but there is plenty of work to be done before we see practical decision-theoretic planning systems. This article outlines some of the challenges that need to be overcome and surveys some of the recent work in the area. In problems where actions can lead to a number of different possible outcomes, or where the benefits of executing a plan must be weighed against the costs, the framework of decision theory can be used to compare alternative plans.
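The decision-theoretic comparison the last sentence describes reduces to ranking plans by expected utility. A toy illustration with made-up outcome probabilities and utilities (the plan names and numbers are invented for the example):

```python
# Each plan maps to a list of (probability, utility) pairs over its
# possible outcomes; the probabilities for a plan should sum to 1.
plans = {
    "cautious": [(0.9, 5.0), (0.1, 0.0)],    # reliable, modest payoff
    "risky":    [(0.5, 12.0), (0.5, -4.0)],  # high payoff, real downside
}

def expected_utility(outcomes):
    """Weigh the benefit of each outcome by its probability."""
    return sum(p * u for p, u in outcomes)

best = max(plans, key=lambda name: expected_utility(plans[name]))
print(best, expected_utility(plans[best]))  # → cautious 4.5
```

Here the risky plan's higher best-case payoff (12.0) does not compensate for its downside, so the cautious plan wins (4.5 vs. 4.0): exactly the kind of tradeoff between possible outcomes the article says decision theory makes explicit.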



Toward a Unified Approach for Conceptual Knowledge Acquisition

AI Magazine

Among other issues, Michalski stressed the importance of unifying terminology and extracting general principles. Amarel suggested we need both theory and application, even within a single project (one supports the other). Another indicator: Chapter XIV of the Handbook of Artificial Intelligence (Dietterich, London, Clarkson & Dromey, 1982) compares and contrasts generalization methods. AI is being related to cognitive science (Pylyshyn, 1982) and to control theory and pattern recognition (Buchanan, Mitchell & Smith, 1978). This article suggests new ideas and attempts to relate them to some existing ones in AI. While the focus is heuristic learning in search, it is examined with broader intent. The author would like to thank Dave Coles for his comments on a draft of this article.


554

AI Magazine

An important task in postal automation technology is determining the position and orientation of the destination address block in the image of a mail piece such as a letter, magazine, or parcel. The corresponding subimage is then presented to a human operator or a machine reader (optical character reader) that can read the zip code and, if necessary, other address information, and direct the mail piece to the appropriate sorting bin. Analysis of the physical characteristics of mail pieces indicates that, in order to automate the address-finding task, several different image analysis operations are necessary. Some examples are locating a rectangular white address label on a multicolor background, progressively grouping characters into text lines and text lines into text blocks, eliminating candidate regions with specialized detectors (for example, detecting regions such as postage stamps), and identifying handwritten regions. A typical mail piece has several regions or blocks that are meaningful to mail processing, for example, address blocks (destination and return) and postage (meter mark or stamp), as well as extraneous blocks. The heuristics listed in the previous section suggest that the design of ABLS consist of several specialized tools that are appropriately deployed. Rule R2 suggests the need for a tool to detect postage fluorescence, rule R3 a tool for isolating blocks of a certain color, rule R4 a tool for discriminating between handwriting and print, and so on.
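The rule-to-tool mapping described above can be sketched as a dispatch table. The rule names R2–R4 follow the article, but the predicates, region representation, and tool implementations below are placeholder stand-ins, not the actual ABLS image operators:

```python
# Placeholder tools; in ABLS each would be a specialized image-analysis
# operator running on a subimage rather than on a dict of attributes.
def detect_fluorescence(region):   # rule R2: postage fluorescence
    return {"fluorescent": region.get("fluoresces", False)}

def isolate_color_block(region):   # rule R3: blocks of a certain color
    return {"white_label": region.get("color") == "white"}

def classify_script(region):       # rule R4: handwriting vs. print
    return {"handwritten": region.get("script") == "hand"}

RULE_TOOLS = {"R2": detect_fluorescence,
              "R3": isolate_color_block,
              "R4": classify_script}

def apply_rules(region, rules=("R2", "R3", "R4")):
    """Deploy the specialized tool associated with each rule and
    accumulate the evidence about the candidate region."""
    evidence = {}
    for r in rules:
        evidence.update(RULE_TOOLS[r](region))
    return evidence

print(apply_rules({"color": "white", "script": "hand"}))
# → {'fluorescent': False, 'white_label': True, 'handwritten': True}
```

The point of the dispatch-table design is the one the article makes: each rule names a need, and the system deploys the matching specialized detector rather than a single monolithic analyzer.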


Logical and Decision-Theoretic Methods for Planning under Uncertainty

AI Magazine

Decision theory and nonmonotonic logics are formalisms that can be employed to represent and solve problems of planning under uncertainty. We analyze the usefulness of these two approaches by establishing a simple correspondence between the two formalisms. The analysis indicates that planning using nonmonotonic logic comprises two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of preference for planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of nonmonotonic reasoning: (1) decision theory and nonmonotonic logics are intended to solve different components of the planning problem; (2) when considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical (monotonic) logic; and (3) because certain nonmonotonic programming paradigms (for example, frame-based inheritance, nonmonotonic logics) are inherently problem specific, they might be inappropriate for use in solving certain types of planning problems. We discuss how these conclusions affect several current AI research issues.
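The correspondence the abstract draws, degrees of belief plus degrees of preference, can be shown in miniature. A nonmonotonic default ("act on the hypothesis unless contradicted") behaves like a decision-theoretic choice once stakes are attached; the numbers and the decision rule below are an illustrative toy, not the authors' formal correspondence:

```python
def act_on_hypothesis(belief, gain_if_true, loss_if_false):
    """Decision-theoretic reading of a planning default: act on the
    hypothesis only if its expected utility beats doing nothing (0).

    belief        -- degree of belief in the planning hypothesis
    gain_if_true  -- utility of acting when the hypothesis holds
    loss_if_false -- disutility of acting when it does not
    """
    return belief * gain_if_true - (1.0 - belief) * loss_if_false > 0.0

# A confident default with mild error cost is endorsed...
print(act_on_hypothesis(belief=0.95, gain_if_true=1.0, loss_if_false=1.0))
# ...but the very same belief is withdrawn when the stakes change,
# which illustrates lesson (2): the "default" is not domain independent.
print(act_on_hypothesis(belief=0.95, gain_if_true=1.0, loss_if_false=100.0))
```

With belief 0.95 the expected utilities are 0.90 and -4.05 respectively, so the first call prints `True` and the second `False`.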