Utility


Trading off Utility, Informativeness, and Complexity in Emergent Communication

Neural Information Processing Systems

Emergent communication (EC) research often focuses on optimizing task-specific utility as a driver for communication. However, there is increasing evidence that human languages are shaped by task-general communicative constraints and evolve under pressure to optimize the Information Bottleneck (IB) tradeoff between the informativeness and complexity of the lexicon. Here, we integrate these two approaches by trading off utility, informativeness, and complexity in EC. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to encode inputs into discrete signals embedded in a continuous space. We evaluate our approach in multi-agent reinforcement learning settings and in color reference games and show that: (1) VQ-VIB agents can continuously adapt to changing communicative needs and, in the color domain, align with human languages; (2) the emergent VQ-VIB embedding spaces are semantically meaningful and perceptually grounded; and (3) encouraging informativeness leads to faster convergence rates and improved utility, both in VQ-VIB and in prior neural architectures for symbolic EC, with VQ-VIB achieving higher utility for any given complexity. This work offers a new framework for EC that is grounded in information-theoretic principles that are believed to characterize human language evolution and that may facilitate human-agent interaction.
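As a rough illustration of the "discrete signals embedded in a continuous space" idea underlying VQ-VIB, the following sketch (using NumPy, with an invented codebook size and embedding dimension; not the authors' implementation) shows the basic vector-quantization step: a continuous encoding is snapped to its nearest codebook vector, whose index serves as the discrete signal and whose coordinates serve as the signal's embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16 discrete signals embedded in a 4-D continuous space.
num_codes, dim = 16, 4
codebook = rng.normal(size=(num_codes, dim))

def quantize(z):
    """Map a continuous encoding z to its nearest codebook vector.

    Returns the discrete signal index and its continuous embedding,
    mirroring the discrete-symbol-in-continuous-space idea behind
    vector quantization.
    """
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

z = rng.normal(size=dim)          # speaker's continuous encoding of an input
idx, embedding = quantize(z)      # emitted discrete signal and its embedding
```

In the full method, the codebook is learned jointly with the agents and the objective additionally trades off utility, informativeness, and complexity; this sketch only shows the quantization step itself.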


The Utility of Explainable AI in Ad Hoc Human-Machine Teaming

Neural Information Processing Systems

Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the promise of enhancing team situational awareness (SA) and shared mental model development, which are the key characteristics of effective human-machine teams. Rapidly developing such mental models is especially critical in ad hoc human-machine teaming, where agents do not have a priori knowledge of others' decision-making strategies.


Reviews: On the Utility of Learning about Humans for Human-AI Coordination

Neural Information Processing Systems

Summary: The paper investigates the usefulness of modeling human behavior in human-AI collaborative tasks. To study this question, the paper introduces an experimental framework that consists of: (a) modeling human behavior using imitation learning, (b) training RL agents in several modes (self-play, training against a human imitator, etc.), and (c) measuring the joint performance of human-AI collaboration. Using both simulation-based experiments and a user study, the paper showcases the importance of accounting for human behavior in designing collaborative RL agents. Comments: The topic of the paper is interesting and important for modern hybrid human-AI decision-making systems. This seems like a well-written paper with solid contributions: to the best of my knowledge, no prior work has systematically investigated the utility of human modeling in the context of human-AI collaboration in RL.


Reviews: On the Utility of Learning about Humans for Human-AI Coordination

Neural Information Processing Systems

The paper proposes a new evaluation framework and benchmark for multi-agent learning settings where coordination with teammates is required to complete a task, and carefully evaluates state-of-the-art learning approaches in this novel setting, including evaluation with human players. All reviewers agreed that the contributions made by the paper are significant and are likely to influence future work in this field. In the initial reviews, several areas for improvement were noted, including the need to precisely explain the relationship of this work to the substantial amount of prior work in human-robot and human-AI interaction, several requests for clarification, and suggestions for further experimentation. The reviewers were content with the author response, in particular the provided clarification of the relationship to prior work and of the overall contribution of the paper. I encourage the authors to carefully consider all reviewer comments when preparing the camera-ready version.


The Utility of Machine Learning in Diagnostic Healthcare

#artificialintelligence

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. The last decade has seen a significant rise in the application of AI and machine learning in the field of medicine.


Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility

#artificialintelligence

Bottleneck problems are an important class of optimization problems that have recently gained increasing attention in machine learning and information theory. They are widely used in generative models, fair machine learning algorithms, and the design of privacy-assuring mechanisms, and they appear as information-theoretic performance bounds in various multi-user communication problems. In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework that generalizes most of the state-of-the-art literature on information-theoretic privacy models, (ii) establishes a new interpretation of popular generative and discriminative models, (iii) provides new insights into generative compression models, and (iv) can be used for fair generative models. We first formulate the CLUB model as a complexity-constrained privacy-utility optimization problem. We then connect it with closely related bottleneck problems, namely the information bottleneck (IB), privacy funnel (PF), deterministic IB (DIB), conditional entropy bottleneck (CEB), and conditional PF (CPF).
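Since CLUB is framed as a generalization of the information bottleneck, it may help to recall the standard IB objective it builds on: a stochastic representation $T$ of the data $X$ is sought that stays informative about a target $Y$ while remaining compressed, with a multiplier $\beta$ controlling the tradeoff.

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```

Here $I(\cdot\,;\cdot)$ denotes mutual information: $I(X;T)$ measures the complexity of the representation and $I(T;Y)$ its utility. The CLUB model, as the abstract indicates, additionally accounts for leakage; the precise formulation is given in the paper itself.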


Expert Systems: Techniques, Tools, and Applications

AI Magazine

The book is edited by Philip Klahr and the late Donald A. Waterman, both of Rand Corporation. The papers are selected from RAND technical reports published from 1977 to 1985. The book is most valuable to people learning knowledge engineering. Four of the papers provide interesting glimpses at the problems involved in transforming knowledge about a domain into computer representations. In addition, the book contains one or two interesting papers for researchers in each of the areas of knowledge acquisition, reasoning with uncertainty, and distributed problem solving.


Decision-Theoretic Planning

AI Magazine

Recent advances in computer speed and in algorithms for probabilistic inference have led to a resurgence of work on planning under uncertainty. The aim is to design AI planners for environments where there might be incomplete or faulty information, where actions might not always have the same results, and where there might be tradeoffs between the different possible outcomes of a plan. Addressing uncertainty in AI planning algorithms will greatly increase the range of potential applications, but there is plenty of work to be done before we see practical decision-theoretic planning systems. This article outlines some of the challenges that need to be overcome and surveys some of the recent work in the area. In problems where actions can lead to a number of different possible outcomes, or where the benefits of executing a plan must be weighed against the costs, the framework of decision theory can be used to compare alternative plans.
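As a toy illustration of the decision-theoretic comparison described above (the plans, outcome probabilities, and utilities here are invented for the example), each candidate plan is scored by its expected utility, i.e. the probability-weighted sum of outcome utilities, and the best-scoring plan is selected:

```python
# Hypothetical plans: each maps to a list of (probability, utility) outcomes.
plans = {
    "cautious": [(0.9, 10), (0.1, -5)],    # likely small gain, small downside
    "risky":    [(0.5, 40), (0.5, -20)],   # large gain, large possible loss
}

def expected_utility(outcomes):
    """Sum the probability-weighted utilities over a plan's outcomes."""
    return sum(p * u for p, u in outcomes)

# Decision-theoretic choice: prefer the plan with highest expected utility.
best = max(plans, key=lambda name: expected_utility(plans[name]))
```

Practical decision-theoretic planners must of course do far more than this, such as reasoning about action sequences and faulty observations, but the underlying comparison of alternatives is this expected-utility calculation.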



Toward a Unified Approach for Conceptual Knowledge Acquisition

AI Magazine

Among other issues, Michalski stressed the importance of unifying terminology and extracting general principles. Amarel suggested we need both theory and application, even within a single project (one supports the other). Another indicator: Chapter XIV of the Handbook of Artificial Intelligence (Dietterich, London, Clarkson & Dromey, 1982) compares and contrasts generalization methods. AI is being related to cognitive science (Pylyshyn, 1982) and to control theory and pattern recognition (Buchanan, Mitchell & Smith, 1978). This article suggests new ideas and attempts to relate them to some existing ones in AI. While the focus is heuristic learning in search, it is examined with broader intent. The author would like to thank Dave Coles for his comments on a draft of this article.