Collaborating Authors

Simulation of Human Behavior: Overviews

AI Research Considerations for Human Existential Safety (ARCHES) Artificial Intelligence

Framed in positive terms, this report examines how technical AI research might be steered in a manner that is more attentive to humanity's long-term prospects for survival as a species. In negative terms, we ask what existential risks humanity might face from AI development in the next century, and by what principles contemporary technical research might be directed to address those risks. A key property of hypothetical AI technologies is introduced, called "prepotence", which is useful for delineating a variety of potential existential risks from artificial intelligence, even as AI paradigms might shift. A set of contemporary research directions is then examined for their potential benefit to existential safety. Each research direction is explained with a scenario-driven motivation, and examples of existing work from which to build. The research directions present their own risks and benefits to society that could occur at various scales of impact, and in particular are not guaranteed to benefit existential safety if major developments in them are deployed without adequate forethought and oversight. As such, each direction is accompanied by a consideration of potentially negative side effects.

Modelling Bushfire Evacuation Behaviours Artificial Intelligence

Bushfires pose a significant threat to Australia's regional areas. To minimise risk and increase resilience, communities need robust evacuation strategies that account for people's likely behaviour both before and during a bushfire. Agent-based modelling (ABM) offers a practical way to simulate a range of bushfire evacuation scenarios. However, the ABM should reflect the diversity of possible human responses in a given community. The Belief-Desire-Intention (BDI) cognitive model captures behaviour in a compact representation that is understandable by domain experts. Within a BDI-ABM simulation, individual BDI agents can be assigned profiles that determine their likely behaviour. Over a population of agents their collective behaviour will characterise the community response. These profiles are drawn from existing human behaviour research and consultation with emergency services personnel and capture the expected behaviours of identified groups in the population, both prior to and during an evacuation. A realistic representation of each community can then be formed, and evacuation scenarios within the simulation can be used to explore the possible impact of population structure on outcomes. It is hoped that this will give an improved understanding of the risks associated with evacuation, and lead to tailored evacuation plans for each community to help them prepare for and respond to bushfire.
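The abstract above describes assigning behaviour profiles to BDI agents so that the population mix determines the collective community response. A minimal sketch of that idea, assuming an illustrative profile structure and threshold values (the profile names, fields, and the `BDIAgent` class are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch: BDI-style agents whose evacuation behaviour is
# parameterised by a profile. Profile names and thresholds are illustrative.
PROFILES = {
    "leave_early":     {"evacuate_threshold": 0.2, "prepares": True},
    "wait_and_see":    {"evacuate_threshold": 0.6, "prepares": False},
    "stay_and_defend": {"evacuate_threshold": 0.9, "prepares": True},
}

class BDIAgent:
    def __init__(self, name, profile):
        self.name = name
        self.beliefs = {"threat_level": 0.0}   # beliefs about the world
        self.profile = PROFILES[profile]       # desire-shaping parameters
        self.intention = "monitor"             # currently committed plan

    def perceive(self, threat_level):
        # Update beliefs from (simulated) observations.
        self.beliefs["threat_level"] = threat_level

    def deliberate(self):
        # A single BDI deliberation step: beliefs + profile -> intention.
        if self.beliefs["threat_level"] >= self.profile["evacuate_threshold"]:
            self.intention = "evacuate"
        elif self.profile["prepares"]:
            self.intention = "prepare"
        else:
            self.intention = "monitor"
        return self.intention

# A toy population: the collective response emerges from the profile mix.
population = [BDIAgent(f"agent{i}", p)
              for i, p in enumerate(["leave_early", "wait_and_see",
                                     "stay_and_defend"])]
for agent in population:
    agent.perceive(0.5)          # moderate fire danger
print([a.deliberate() for a in population])  # → ['evacuate', 'monitor', 'prepare']
```

Varying the proportion of each profile in the population, as informed by behaviour research and consultation with emergency services, is what lets such a simulation explore how population structure affects evacuation outcomes.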

MLR (Memory, Learning and Recognition): A General Cognitive Model -- applied to Intelligent Robots and Systems Control Artificial Intelligence

This paper introduces a new perspective on the control of intelligent robots and systems. The proposed cognitive model, Memory, Learning and Recognition (MLR), is an effort to bridge the gap between Robotics, AI, Cognitive Science, and Neuroscience. This gap currently prevents us from integrating the advances and achievements of these four research fields, each of which is actively trying to define intelligence in either an application-specific or a generic way. The MLR model defines intelligence more specifically, parametrically, and in greater detail. It helps us create a general control model for robots and systems that is independent of their application domains and platforms, since it is based mainly on the dataset provided for robot and system control. This paper introduces the concept and attempts to validate it at a small scale, initially through experimentation. The proposed concept is also applicable to other platforms, in real time as well as in simulation.

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI Artificial Intelligence

This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. The pertinent literatures are vast, so this review is necessarily selective; that said, most of the key concepts and issues are covered in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems), expresses the explainability issues and challenges in modern AI, and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent Variables (operational definitions of the measures and metrics), Independent Variables (conditions), and Control Conditions.

Unifying Decision-Making: a Review on Evolutionary Theories on Rationality and Cognitive Biases Artificial Intelligence

In this paper, we review the concept of rationality across several fields, namely economics, psychology, evolutionary biology, and behavioural ecology. We review how processes such as natural selection can help us understand the evolution of cognition and how cognitive biases might be a consequence of this selection. We conclude by arguing that humans are not irrational but rather rationally bounded, and we complement the discussion with how quantum cognitive models can contribute to the modelling and prediction of paradoxical human decisions.

A review of possible effects of cognitive biases on interpretation of rule-based machine learning models Machine Learning

This paper investigates the extent to which cognitive biases affect human understanding of interpretable machine learning models, in particular of rules discovered from data. Twenty cognitive biases (illusions, effects) are covered, along with possibly effective debiasing techniques that can be adopted by designers of machine learning algorithms and software. While there appears to be no universal approach for eliminating all of the identified cognitive biases, our analysis suggests that the effect of most biases can be ameliorated by making rule-based models more concise. Owing to the lack of previous research, our review transfers general results from cognitive psychology to the domain of machine learning; it should be followed by empirical studies aimed specifically at the machine learning domain.

Techniques and Methodology

AI Magazine

Should Artificial Intelligence strive to model and understand human cognitive and perceptual systems? Should it operate at a more abstract mathematical level of characterizing possible intelligent action, independent of human performance? Or should it focus on building working programs that exhibit increasingly expert behavior, irrespective of theoretical or psychological concerns? These questions lie at the heart of most current debate on whether AI is a science, an art, or a new branch of engineering. In fact, some researchers believe it is all three, and consequently build systems that perform some interesting task while arguing for the "theoretical significance" and "psychological validity" of the approach. This article assumes the cognitive psychology paradigm as central and suggests that AI research would benefit from closer adherence to the data and methods of psychological research. We welcome contributions in support of other research methodologies in AI, as well as comparative discussions. Research for this paper was conducted at the University of Chicago Center for Cognitive Science under a grant.

Kognit: Intelligent Cognitive Enhancement Technology by Cognitive Models and Mixed Reality for Dementia Patients

AAAI Conferences

With advancements in technology, smartphones can already serve as memory aids. Electronic calendars are of great use in time-based memory tasks. In this project, we enter the mixed reality realm for helping dementia patients. Dementia is a general term for a decline in mental ability severe enough to interfere with daily life. Memory loss is an example. Here, mixed reality refers to the merging of real and virtual worlds to produce new episodic memory visualisations where physical and digital objects co-exist and interact in real-time. Cognitive models are approximations of a patient's mental abilities and limitations involving conscious mental activities (such as thinking, understanding, learning, and remembering). External representations of episodic memory help patients and caregivers coordinate their actions with one another. We advocate distributed cognition, which involves the coordination between individuals, artefacts and the environment, in four main implementations of artificial intelligence technology in the Kognit storyboard: (1) speech dialogue and episodic memory retrieval; (2) monitoring medication management and tracking an elder's behaviour (e.g., drinking water); (3) eye tracking and modelling cognitive abilities; and (4) serious game development towards active memory training. We discuss the storyboard, use cases and usage scenarios, and some implementation details of cognitive models and mixed reality hardware for the patient. The purpose of future studies is to determine the extent to which cognitive enhancement technology can be used to decrease caregiver burden.