On the Utility of Learning about Humans for Human-AI Coordination

Neural Information Processing Systems

While we would like agents that can coordinate with humans, current algorithms such as self-play and population-based training create agents that can coordinate with themselves. Agents that assume their partner to be optimal or similar to them can converge to coordination protocols that fail to understand humans and to be understood by them. To demonstrate this, we introduce a simple environment that requires challenging coordination, based on the popular game Overcooked, and we learn a simple model that mimics human play. We evaluate the performance of agents trained via self-play and population-based training. These agents perform very well when paired with themselves, but when paired with our human model they perform significantly worse than agents explicitly designed to play with the human model.
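As a rough illustration of the evaluation protocol described above, the sketch below pairs a policy with either a copy of itself or a human proxy model in a generic two-player, shared-reward environment and compares average returns. The `TwoPlayerEnv` interface and the `self_play_agent` and `human_proxy` names are hypothetical stand-ins, not the paper's code.

```python
# Minimal cross-play evaluation sketch (not the authors' implementation).
import numpy as np

def evaluate_pair(env, agent_a, agent_b, episodes=100):
    """Average joint reward when agent_a (player 0) is paired with agent_b (player 1)."""
    returns = []
    for _ in range(episodes):
        obs_a, obs_b = env.reset()
        done, total = False, 0.0
        while not done:
            action_a = agent_a.act(obs_a)          # policy trained via self-play or PBT
            action_b = agent_b.act(obs_b)          # either a copy of itself or a human model
            (obs_a, obs_b), reward, done = env.step(action_a, action_b)
            total += reward                        # shared reward in a common-payoff game
        returns.append(total)
    return float(np.mean(returns))

# The paper's key comparison: performance with itself vs. with the human proxy.
# score_self  = evaluate_pair(env, self_play_agent, self_play_agent)
# score_human = evaluate_pair(env, self_play_agent, human_proxy)
```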


KLM Using Artificial Intelligence Within Its Social Media Service - insideBIGDATA

#artificialintelligence

KLM Royal Dutch Airlines is taking the next step in using artificial intelligence (AI) within its social media service. KLM worked with AI frontrunner DigitalGenius to add automated answers to repetitive general questions from customers, without the intervention of a human service agent. This gives KLM agents more time to focus on customer questions that require a human approach. KLM is the first airline to offer a combination of human agents and artificial intelligence in a single conversation on Twitter, Messenger and WhatsApp.


Azaria

AAAI Conferences

This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions. These models were incorporated into several agent designs to repeatedly generate offers to people playing the game. These agents were evaluated in extensive empirical investigations involving hundreds of subjects who interacted with computers in different choice selection processes. The results revealed that an agent combining a hyperbolic discounting model of human behavior with a social utility function was able to outperform alternative agent designs. We show that this approach generalized to new people as well as to choice selection processes that were not used for training. Our results demonstrate that combining computational approaches with behavioral economics models of people in repeated interactions facilitates the design of advice provision strategies for a large class of real-world settings.
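The winning agent design combines a hyperbolic discounting model of human behavior with a social utility function. The sketch below shows one plausible way such a score could be assembled; the functional forms and parameters (k, alpha) are assumptions for illustration, not the paper's exact specification.

```python
# Illustrative scoring of offers: social utility weighted across both players,
# discounted hyperbolically by how many rounds away the payoff is. All names and
# defaults here are assumptions, not the paper's model.

def hyperbolic_discount(value, delay, k=0.5):
    """Hyperbolically discounted value of a payoff received after `delay` rounds."""
    return value / (1.0 + k * delay)

def social_utility(own_payoff, other_payoff, alpha=0.3):
    """Weighted combination of the agent's payoff and the human partner's payoff."""
    return (1.0 - alpha) * own_payoff + alpha * other_payoff

def score_offer(own_payoff, other_payoff, delay, k=0.5, alpha=0.3):
    """Score an offer by its socially weighted, hyperbolically discounted value."""
    return hyperbolic_discount(social_utility(own_payoff, other_payoff, alpha), delay, k)

# Example: rank two candidate offers to present to the person.
offers = [(10.0, 2.0, 1), (6.0, 6.0, 3)]  # (agent payoff, human payoff, delay in rounds)
best = max(offers, key=lambda o: score_offer(*o))
```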


Azaria

AAAI Conferences

This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. This work models such settings as a family of repeated bilateral games of incomplete information called "choice selection processes", in which players may share certain goals but are essentially self-interested. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions.
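For concreteness, a choice selection process can be thought of as a repeated loop in which the agent proposes an option and the person responds. The toy sketch below is a hypothetical rendering of that interaction structure, with illustrative policies and payoffs rather than the models studied in the paper.

```python
# Hypothetical repeated bilateral interaction: the agent proposes, the person accepts
# or rejects, and both condition on the history of past rounds.
import random

def play_choice_selection_process(agent_propose, human_respond, rounds=10):
    """Run a repeated interaction; each round yields (offer, accepted, agent/human payoffs)."""
    history = []
    for t in range(rounds):
        offer = agent_propose(history)              # agent conditions on past rounds
        accepted = human_respond(offer, history)    # human model conditions on the offer
        agent_payoff = offer["agent"] if accepted else 0.0
        human_payoff = offer["human"] if accepted else 0.0
        history.append((offer, accepted, agent_payoff, human_payoff))
    return history

# Toy policies: the agent proposes a random split; the "human" accepts fair-enough offers.
agent = lambda hist: {"agent": random.uniform(0, 10), "human": random.uniform(0, 10)}
human = lambda offer, hist: offer["human"] >= 4.0
log = play_choice_selection_process(agent, human)
```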


Learning from Demonstration to Be a Good Team Member in a Role Playing Game

AAAI Conferences

We present an approach that uses learning from demonstration in a computer role playing game to create a controller for a companion team member. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human playing decisions and produces a sequence of actions which can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study we conducted, in which we assess both the quantitative and qualitative performance differences between human-only teams and hybrid human-agent teams. The results of our study show that hybrid human-agent teams were more successful at task completion and that, along some qualitative dimensions, hybrid teams were perceived more favorably than human-only teams.
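A minimal sketch of the case-based reasoning idea behind such a behavior engine, assuming demonstrations are stored as (observation-feature, action) pairs and actions are chosen by 1-nearest-neighbour retrieval over those features; the feature representation and distance metric are assumptions, not the authors' implementation.

```python
# Store (observation, action) pairs from human demonstration traces, then act by
# retrieving the action of the most similar stored observation.
import numpy as np

class CaseBasedController:
    def __init__(self):
        self.cases = []  # list of (feature_vector, action) from demonstration traces

    def add_trace(self, trace):
        """trace: iterable of (observation_features, action) from a human play session."""
        for features, action in trace:
            self.cases.append((np.asarray(features, dtype=float), action))

    def act(self, observation_features):
        """Return the action of the nearest stored case (1-nearest-neighbour retrieval)."""
        query = np.asarray(observation_features, dtype=float)
        distances = [np.linalg.norm(query - feats) for feats, _ in self.cases]
        return self.cases[int(np.argmin(distances))][1]

# Usage: feed it traces recorded from human players, then query it in-game.
controller = CaseBasedController()
controller.add_trace([([0.0, 1.0, 3.0], "follow_leader"), ([2.0, 0.0, 1.0], "attack")])
action = controller.act([0.1, 0.9, 2.8])  # -> "follow_leader"
```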