Atkinson, David J.
Ambient Personal Environment Experiment (APEX): A Cyber-Human Prosthesis for Mental, Physical and Age-Related Disabilities
Atkinson, David J. (Institute for Human and Machine Cognition) | Dorr, Bonnie J. (Institute for Human and Machine Cognition) | Clark, Micah H. (Institute for Human and Machine Cognition) | Clancey, William J. (Institute for Human and Machine Cognition) | Wilks, Yorick (Institute for Human and Machine Cognition)
We present an emerging research project in our laboratory to extend ambient intelligence (AmI) through what we refer to as “extreme personalization,” meaning that an instance of ambient intelligence is focused on one, or at most a few, individuals over a very long period of time. Over a lifetime of co-activity, it senses and adapts to a person’s preferences and experiences and, crucially, his or her (changing) special needs; needs that differ significantly from the normal baseline. We refer to our agent-based cyber-physical system as the Ambient Personal Environment eXperiment (APEX). It aims to serve as a Companion, a Coach, and a Caregiver: crucial support for individuals with mental, physical, and age-related disabilities and for the people who help them. We propose that an instance of APEX, interacting socially with each of these people, is both a social actor and a cyber-human prosthetic device. APEX is an ambitious integration of multiple technologies from Artificial Intelligence (AI) and other disciplines; its successful development can be viewed as a grand challenge for AI. We discuss in this paper three research thrusts that lead toward our vision: robust intelligent agents, semantically rich human-machine interaction, and reasoning from comprehensive multi-modal behavior data.
Emerging Cyber-Security Issues of Autonomy and the Psychopathology of Intelligent Machines
Atkinson, David J. (Institute for Human and Machine Cognition)
The central thesis of this paper is that the technology of intelligent, autonomous machines gives rise to novel fault modes that are not seen in other types of automation. As a consequence, autonomous systems provide new vectors for cyber-attack, with the potential consequences of subversion, degraded behavior, or outright failure of the autonomous system. While we can pursue the analogy only so far, maladaptive behavior and the other symptoms of these fault modes may in some cases resemble those found in humans. The term “psychopathology” is applied to fault modes of the human mind, but as yet we have no equivalent area of study for intelligent, autonomous machines. Such study is needed in order to document and explain the symptoms of faults unique to intelligent systems, whether they occur under nominal conditions or as the result of an outside, purposeful attack. By analyzing algorithms, architectures, and what can go wrong with autonomous machines, we may a) gain insight into mechanisms of intelligence; b) learn how to design out, work around, or otherwise mitigate these new failure modes; c) identify potential new cyber-security risks; and d) increase the trustworthiness of machine intelligence. Vigilance and attention management mechanisms are identified as specific areas of risk.
Shared Awareness, Autonomy and Trust in Human-Robot Teamwork
Atkinson, David J. (Institute for Human and Machine Cognition) | Clancey, William J. (Institute for Human and Machine Cognition) | Clark, Micah H. (Institute for Human and Machine Cognition)
Teamwork requires mutual trust among team members. Establishing and maintaining trust depends upon the alignment of mental models, an aspect of shared awareness. We present a theory of how maintenance of model alignment is integral to fluid changes in relative control authority (i.e., adaptive autonomy) in human-robot teamwork.