Most human-computer interfaces can be classified according to one of two dominant metaphors: (1) the agent and (2) the environment. In the environment metaphor, a model of the task domain is presented for the user to interact with directly. Norman's 1984 model of HCI is introduced as a reference for organizing and evaluating research in human-agent interaction (HAI). A wide variety of heterogeneous HAI research is shown to reflect automation of one of the stages of action or evaluation within Norman's model.
Smart speakers and robots are becoming ever more prevalent in our daily lives. These agents can execute a wide range of tasks and actions and therefore need systems to control their execution. Current state-of-the-art approaches such as (deep) reinforcement learning, however, require vast amounts of training data, which is often hard to come by when interacting with humans. To overcome this issue, most systems still rely on Finite State Machines. We introduce Petri Net Machines, a formal definition of state machines based on Petri Nets that can execute concurrent actions reliably, execute and interleave several plans at the same time, and provide an easy-to-use modelling language.
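The concurrency property claimed above comes from standard place/transition semantics: a transition fires by consuming a token from each input place and producing one in each output place, so independent branches can proceed in any order and synchronize at a join. A minimal sketch of that mechanism (our own illustrative Python with hypothetical names, not the paper's implementation):

```python
class PetriNet:
    """Minimal place/transition net: tokens in places enable transitions."""

    def __init__(self):
        self.marking = {}      # place name -> token count
        self.transitions = {}  # transition name -> (input places, output places)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1   # consume one token per input place
        for p in outputs:
            self.marking[p] += 1   # produce one token per output place


# Fork/join pattern: two robot actions run concurrently, then synchronize.
net = PetriNet()
for place in ["start", "speak_ready", "gesture_ready", "spoken", "gestured", "done"]:
    net.add_place(place)
net.marking["start"] = 1
net.add_transition("fork", ["start"], ["speak_ready", "gesture_ready"])
net.add_transition("speak", ["speak_ready"], ["spoken"])
net.add_transition("gesture", ["gesture_ready"], ["gestured"])
net.add_transition("join", ["spoken", "gestured"], ["done"])

net.fire("fork")
# Both actions are now enabled simultaneously; the firing order is free.
net.fire("gesture")
net.fire("speak")
net.fire("join")
```

The "join" transition can only fire once both concurrent branches have deposited their tokens, which is what lets such nets interleave several plans and still synchronize; an explicit FSM would instead need a product state for every possible interleaving.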
Much research has applied auctions, markets, and negotiation mechanisms to the multiagent task-allocation problem. However, there has been very little work on human-agent group task allocation. We believe that the notion of bounty hunting has good properties for human-agent group interaction in dynamic task-allocation problems. We use previous experimental results comparing bounty hunting with auction-like methods to argue that it would be particularly adept at handling scenarios with unreliable collaborators and unexpectedly hard tasks: scenarios we believe highlight the difficulties involved in working with human collaborators.
Inherent human limitations in teaming environments, coupled with complex planning problems, spur the integration of intelligent decision support (IDS) systems for human-agent planning. However, prior research in human-agent planning has been limited to dyadic interaction between a single human and a single planning agent. In this paper, we highlight an emerging research area, IDS for human team planning, i.e., environments where the agent works with a team of human planners to enhance both the quality of their plans and the ease of making them. We review prior work in human-agent planning and identify research challenges for an agent participating in human team planning.