Inherent human limitations in teaming environments, coupled with complex planning problems, spur the integration of intelligent decision support (IDS) systems for human-agent planning. However, prior research in human-agent planning has been limited to dyadic interaction between a single human and a single planning agent. In this paper, we highlight an emerging research area of IDS for human team planning, i.e., environments where the agent works with a team of human planners to enhance the quality of their plans and the ease of making them. We review prior work in human-agent planning and identify research challenges for an agent participating in human team planning.
Silva, Michael (PARC, A Xerox Company) | McCroskey, Silas (PARC, A Xerox Company) | Rubin, Jonathan (PARC, A Xerox Company) | Youngblood, Michael (PARC, A Xerox Company) | Ram, Ashwin (PARC, A Xerox Company)
We present an approach that uses learning from demonstration in a computer role-playing game to create a controller for a companion team member. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human players' decisions and produces a sequence of actions that can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role-playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study we conducted in which we assess both the quantitative and qualitative performance differences between human-only teams and hybrid human-agent teams. The results of our study show that human-agent teams were more successful at task completion and, along some qualitative dimensions, hybrid teams were perceived more favorably than human-only teams.
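The core retrieval step of such a case-based behavior engine can be sketched as nearest-neighbor lookup over recorded cases: given the agent's current state, find the most similar state a human demonstrator was in and replay the action taken there. This is an illustrative sketch only; the paper's engine is richer (case adaptation, team-level behaviors), and all names and the distance function here are hypothetical.

```python
# Minimal case-based action selection from observation traces.
# Each trace is a list of {"state": ..., "action": ...} cases
# recorded while a human played.

def retrieve_action(traces, state, distance):
    """Return the action of the recorded case whose state is closest."""
    best_case = min(
        (case for trace in traces for case in trace),
        key=lambda case: distance(case["state"], state),
    )
    return best_case["action"]

def euclidean(a, b):
    # Simple illustrative similarity measure over numeric state vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# One hypothetical trace: advance when healthy, retreat when hurt.
traces = [
    [{"state": (0.0, 1.0), "action": "advance"},
     {"state": (0.9, 0.1), "action": "retreat"}],
]

print(retrieve_action(traces, (0.8, 0.2), euclidean))  # → retreat
```

In practice the retrieved case would typically be adapted to the current situation rather than replayed verbatim, but retrieval is the step that turns demonstration traces into agent behavior.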
Most human-computer interfaces can be classified according to two dominant metaphors: (1) agent and (2) environment. In the agent metaphor, the system acts as an intermediary that performs tasks on the user's behalf; in the environment metaphor, a model of the task domain is presented for the user to interact with directly. Norman's 1984 model of HCI is introduced as a reference frame to organize and evaluate research in human-agent interaction (HAI). A wide variety of heterogeneous research involving HAI is shown to reflect automation of one of the stages of execution or evaluation within Norman's model.
Nonverbal communication is a crucial element of this – in both regular and fallback experiences – as it helps the agent convey information in a way that feels more instinctive and familiar to us. Dogs do an amazing job of this: they nonverbally communicate with us in a way that is clear and easy to understand, and it is important for the agent to do the same. Nonverbal communication can be a far less intrusive interface than voice alone, and can attract the user's attention in a subtle yet effective way. For example, when a user calls out ElliQ's wake word, ElliQ's face lights up and its head bends forward, leaning in to indicate listening. This endearing behavior not only draws the user in, it also explicitly conveys that the request or attempted interaction was indeed successful and that the user should proceed accordingly.
Smart speakers and robots are becoming ever more prevalent in our daily lives. These agents are able to execute a wide range of tasks and actions and therefore need systems to control their execution. Current state-of-the-art approaches such as (deep) reinforcement learning, however, require vast amounts of training data, which is often hard to come by when interacting with humans. To overcome this issue, most systems still rely on Finite State Machines. We introduce Petri Net Machines, which provide a formal definition for state machines based on Petri Nets that are able to execute concurrent actions reliably, execute and interleave several plans at the same time, and provide an easy-to-use modelling language.
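The concurrency the abstract refers to comes from standard Petri net firing semantics: a transition fires when all of its input places hold tokens, so a fork transition can enable several branches at once and a join transition waits for all of them. The sketch below shows only these generic semantics, not the authors' Petri Net Machine formalism; the place and transition names are hypothetical.

```python
# Minimal Petri net: marking maps places to token counts; a transition
# is enabled when every input place holds at least one token.

class PetriNet:
    def __init__(self):
        self.marking = {}        # place -> token count
        self.transitions = {}    # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        # Consume one token from each input, produce one on each output.
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical robot behavior: speak and gesture run concurrently
# after a fork, and the join waits until both have finished.
net = PetriNet()
net.marking = {"start": 1}
net.add_transition("fork", ["start"], ["speak_ready", "gesture_ready"])
net.add_transition("speak", ["speak_ready"], ["spoke"])
net.add_transition("gesture", ["gesture_ready"], ["gestured"])
net.add_transition("join", ["spoke", "gestured"], ["done"])

net.fire("fork")
# Both branches are now enabled and may fire in either order.
net.fire("gesture")
net.fire("speak")
net.fire("join")
assert net.marking["done"] == 1
```

Because enabledness depends only on local token counts, independent branches can interleave freely, which is what makes this representation a natural fit for running and interleaving several plans at once.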