
Collaborating Authors

 Hoey, Jesse


Intelligent and Affectively Aligned Evaluation of Online Health Information for Older Adults

AAAI Conferences

Online health resources aimed at older adults can have a significant impact on patient-physician relationships and on health outcomes. High-quality online resources that are delivered in an ethical, emotionally aligned way can increase trust and reduce negative health outcomes such as anxiety. In contrast, low-quality or misaligned resources can lead to harmful consequences such as inappropriate use of health care services and poor health decision-making. This paper investigates mechanisms for ensuring both quality and alignment of online health resources and interventions. First, the recently proposed QUEST evaluation instrument is examined. QUEST assesses the quality of online health information along six validated dimensions (authorship, attribution, conflict of interest, currency, complementarity, tone). A decision tree classifier is learned that is able to predict one criterion of the QUEST tool, complementarity, with an F1-score of 0.9 on a manually annotated dataset of 50 articles giving advice about Alzheimer disease. A social-psychological theory of affective (emotional) alignment is then presented and demonstrated by gauging older adults' emotional interpretations of eight examples of health recommendation systems related to Alzheimer disease (online memory tests). The paper concludes with a synthesizing view and a vision for the future of this important societal challenge.
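To make the classification step concrete, here is a minimal sketch of a decision-tree-style classifier (a depth-1 stump) evaluated with F1, in the spirit of the QUEST complementarity experiment. The features, thresholds, and toy data below are invented for illustration; the paper's actual feature set is not specified in this abstract.

```python
# Hedged sketch: a depth-1 decision "tree" (stump) scored with F1.
# Feature names and data are hypothetical, not taken from the paper.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def fit_stump(X, y):
    """Find the single (feature, threshold) split maximizing training F1."""
    best = (0, 0.0, -1.0)  # (feature index, threshold, F1)
    for j in range(len(X[0])):
        for thr in sorted({row[j] for row in X}):
            pred = [1 if row[j] >= thr else 0 for row in X]
            score = f1_score(y, pred)
            if score > best[2]:
                best = (j, thr, score)
    return best

# Toy data: feature 0 = count of "see your doctor"-style phrases (a
# hypothetical complementarity proxy), feature 1 = length in kilo-words.
X = [[0, 1.2], [3, 0.8], [2, 2.0], [0, 0.5], [4, 1.1], [1, 0.9]]
y = [0, 1, 1, 0, 1, 1]
j, thr, score = fit_stump(X, y)
```

A full decision tree would recurse on each side of the best split; the stump shows the split-selection and F1-evaluation machinery in isolation.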


Detecting Falls with X-Factor Hidden Markov Models

arXiv.org Artificial Intelligence

Identification of falls while performing normal activities of daily living (ADL) is important to ensure personal safety and well-being. However, falling is a short-term activity that occurs infrequently. This poses a challenge to traditional classification algorithms, because there may be very little training data for falls (or none at all). This paper proposes an approach for the identification of falls using a wearable device in the absence of training data for falls but with plentiful data for normal ADL. We propose three 'X-Factor' Hidden Markov Model (XHMM) approaches. The XHMMs model unseen falls using "inflated" output covariances (observation models). To estimate the inflated covariances, we propose a novel cross-validation method that removes "outliers" from the normal ADL data to serve as proxies for the unseen falls, allowing the XHMMs to be learned using only normal activities. We tested the proposed XHMM approaches on two activity recognition datasets and show high detection rates for falls in the absence of fall-specific training data. We show that the traditional method of choosing a threshold based on the maximum of the negative log-likelihood to identify unseen falls is ill-posed for this problem. We also show that supervised classification methods perform poorly when very limited fall data are available during the training phase.
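The core "X-Factor" idea can be sketched very compactly: the fall model is a copy of the normal-activity observation model with an inflated covariance, and a window is labelled a fall when the inflated model explains it better. The sketch below uses a single 1-D Gaussian in place of a full HMM observation model, and the sensor values and inflation factor are toy assumptions.

```python
import math

# Hedged sketch of the X-Factor idea: model unseen falls with a copy of the
# normal-activity observation model whose covariance is inflated. A 1-D
# Gaussian stands in for the HMM observation model; numbers are toy values.

def gauss_loglik(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(x, mean, var, inflation=10.0):
    """Return 'ADL' if the normal model explains x better than the X-factor."""
    ll_adl = gauss_loglik(x, mean, var)
    ll_fall = gauss_loglik(x, mean, var * inflation)  # inflated covariance
    return "ADL" if ll_adl > ll_fall else "fall"

# Normal accelerometer magnitude is near 1 g with small variance; a fall
# produces an extreme reading that the broad, inflated Gaussian wins on.
mean, var = 1.0, 0.04
near_normal = classify(1.1, mean, var)   # close to the ADL mean
extreme = classify(3.0, mean, var)       # far outside normal ADL
```

Because the inflated Gaussian is flatter, it loses near the mean (its peak is lower) but wins in the tails, which is exactly the behaviour wanted for an "everything else" fall class.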


Bayesian Affect Control Theory of Self

AAAI Conferences

Notions of identity and of the self have long been studied in social psychology and sociology as key guiding elements of social interaction and coordination. In the AI of the future, these notions will also play a role in producing natural, socially appropriate artificially intelligent agents that encompass subtle and complex human social and affective skills. We propose here a Bayesian generalization of the sociological affect control theory of self as a theoretical foundation for socio-affectively skilled artificial agents. This theory posits that each human maintains an internal model of his or her deep sense of "self" that captures their emotional, psychological, and socio-cultural sense of being in the world. The "self" is then externalised as an identity within any given interpersonal and institutional situation, and this situational identity is the person's local (in space and time) representation of the self. Situational identities govern the actions of humans according to affect control theory. Humans will seek situations that allow them to enact identities consistent with their sense of self. This consistency is cumulative over time: if some parts of a person's self are not actualized regularly, the person will have a growing feeling of inauthenticity that they will seek to resolve. In our present generalisation, the self is represented as a probability distribution, allowing it to be multi-modal (a person can maintain multiple different identities), uncertain (a person can be unsure about who they really are), and learnable (agents can learn the identities and selves of other agents). We show how the Bayesian affect control theory of self can underpin artificial agents that are socially intelligent.
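The "self as a probability distribution" idea can be illustrated with a toy discrete belief over candidate identities, updated by Bayes' rule from an observed behaviour. The identities and likelihood values below are invented for illustration; the paper's actual model is a continuous distribution over affective (EPA) space.

```python
# Hedged sketch: a multi-modal "self" as a discrete belief over identities,
# updated by Bayes' rule. Identities and likelihoods are toy assumptions.

def bayes_update(prior, likelihood):
    """prior: {identity: p}; likelihood: {identity: P(obs | identity)}."""
    post = {i: prior[i] * likelihood.get(i, 0.0) for i in prior}
    z = sum(post.values())
    return {i: p / z for i, p in post.items()}

# Multi-modal self: this agent is partly "mentor", partly "joker".
self_belief = {"mentor": 0.5, "joker": 0.5}

# An observed behaviour ("explains patiently") is far likelier for a mentor,
# so belief mass shifts toward that identity without eliminating the other.
self_belief = bayes_update(self_belief, {"mentor": 0.8, "joker": 0.2})
```

The same update lets one agent learn another agent's identity from observed behaviour, which is the "learnable" property the theory emphasizes.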


Affect Control Processes: Intelligent Affective Interaction using a Partially Observable Markov Decision Process

arXiv.org Artificial Intelligence

This paper describes a novel method for building affectively intelligent human-interactive agents. The method is based on a key sociological insight that has been developed and extensively verified over the last twenty years, but has yet to make an impact in artificial intelligence. The insight is that resource bounded humans will, by default, act to maintain affective consistency. Humans have culturally shared fundamental affective sentiments about identities, behaviours, and objects, and they act so that the transient affective sentiments created during interactions confirm the fundamental sentiments. Humans seek and create situations that confirm or are consistent with, and avoid and suppress situations that disconfirm or are inconsistent with, their culturally shared affective sentiments. This "affect control principle" has been shown to be a powerful predictor of human behaviour. In this paper, we present a probabilistic and decision-theoretic generalisation of this principle, and we demonstrate how it can be leveraged to build affectively intelligent artificial agents. The new model, called BayesAct, can maintain multiple hypotheses about sentiments simultaneously as a probability distribution, and can make use of an explicit utility function to make value-directed action choices. This allows the model to generate affectively intelligent interactions with people by learning about their identity, predicting their behaviours using the affect control principle, and taking actions that are simultaneously goal-directed and affect-sensitive. We demonstrate this generalisation with a set of simulations. We then show how our model can be used as an emotional "plug-in" for artificially intelligent systems that interact with humans in two different settings: an exam practice assistant (tutor) and an assistive device for persons with a cognitive disability.
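The affect control principle itself reduces to a minimization: among candidate behaviours, pick the one whose resulting transient impression stays closest to the fundamental sentiments. The sketch below uses 3-D evaluation-potency-activity (EPA) vectors and a toy impression model; affect control theory's real impression-formation equations are empirically estimated and mix actor, behaviour, and object terms, which this sketch does not attempt.

```python
# Hedged sketch of the affect control principle: choose the behaviour that
# minimizes deflection (squared distance) between fundamental sentiments and
# the transient impression. EPA values and the impression model are toy
# stand-ins, not ACT's empirically estimated equations.

def deflection(fundamental, transient):
    return sum((f - t) ** 2 for f, t in zip(fundamental, transient))

def choose_behaviour(fundamental, behaviours, impression):
    """Pick the behaviour minimizing post-update deflection."""
    return min(behaviours, key=lambda b: deflection(fundamental, impression(b)))

def impression(behaviour_epa):
    # Toy impression model: the behaviour's own EPA pulled slightly toward
    # neutral. A real model combines actor, behaviour, and object sentiments.
    return [0.9 * x for x in behaviour_epa]

# Fundamental sentiment for a "tutor" identity: good, potent, fairly calm.
tutor = [1.5, 1.5, 0.2]
behaviours = {"encourage": [1.8, 1.4, 0.5], "scold": [-0.9, 1.2, 0.8]}
best = choose_behaviour(tutor, behaviours.values(), impression)
```

BayesAct generalises this deterministic minimization to a posterior over sentiments plus an explicit utility, so the chosen action trades off deflection against task reward rather than minimizing deflection alone.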


SPUDD: Stochastic Planning using Decision Diagrams

arXiv.org Artificial Intelligence

Markov decision processes (MDPs) are becoming increasingly popular as models of decision theoretic planning. While traditional dynamic programming methods perform well for problems with small state spaces, structured methods are needed for large problems. We propose and examine a value iteration algorithm for MDPs that uses algebraic decision diagrams (ADDs) to represent value functions and policies. An MDP is represented using Bayesian networks and ADDs, and dynamic programming is applied directly to these ADDs. We demonstrate our method on large MDPs (up to 63 million states) and show that significant gains can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions).
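The reason ADDs beat trees is subgraph sharing: identical sub-diagrams are stored once via a "unique table" (hash-consing), so states that share a value share structure. The toy reduced-ADD builder below shows that mechanism only; it is not SPUDD's planner, and the variable ordering and value function are invented for illustration.

```python
# Hedged sketch of ADD compression: merge redundant tests and share duplicate
# nodes through a unique table. Terminals are real-valued leaves.

UNIQUE = {}

def make_node(var, low, high):
    """Return a canonical node; drop redundant tests, share duplicates."""
    if low == high:            # both branches equal -> the test is redundant
        return low
    key = (var, id(low), id(high))
    if key not in UNIQUE:
        UNIQUE[key] = (var, low, high)
    return UNIQUE[key]

def from_table(values, var=0):
    """Build a reduced ADD from a value function given as a 2^n-entry list."""
    if len(values) == 1:
        return values[0]       # terminal leaf
    half = len(values) // 2
    return make_node(var, from_table(values[:half], var + 1),
                          from_table(values[half:], var + 1))

def count_nodes(node, seen=None):
    seen = set() if seen is None else seen
    if not isinstance(node, tuple) or id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + count_nodes(node[1], seen) + count_nodes(node[2], seen)

# Value function over 3 binary state variables (8 states) where only the
# first variable matters: the reduced ADD collapses to one decision node.
v = from_table([10.0] * 4 + [2.0] * 4)
```

A flat table here needs 8 entries and a full decision tree 7 internal nodes, while the reduced diagram needs just one; SPUDD exploits exactly this kind of collapse at the scale of millions of states.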


Smart Home, The Next Generation: Closing the Gap between Users and Technology

AAAI Conferences

In this paper we discuss the gap that exists between the caregivers of older adults attempting to age-in-place and sophisticated "smart-home" systems that can sense the environment and provide assistance when needed. We argue that smart-home systems need to be customizable by end-users, and we present a general-purpose model for cognitive assistive technology that can be adapted to suit many different tasks, users and environments. Although we can provide mechanisms for engineers and designers to build and adapt smart-home systems based on this general-purpose model, these mechanisms are not easily understood by or sufficiently user-friendly for actual end users such as older adults and their caregivers. Our goal is therefore to study how to bridge the gap between the end-users and this technology. In this paper, we discuss our work on this problem from both sides: developing technology that is customizable and general-purpose, and studying users' abilities and needs when it comes to building smart-home systems to help with activities of daily living. We show how a large gap still exists, and propose ideas for how to bridge the gap.


An Ontological Representation Model to Tailor Ambient Assisted Interventions for Wandering

AAAI Conferences

Wandering is a problematic behavior that is common among people with dementia (PwD), and is highly influenced by the elders' background and by contextual factors specific to the situation. We have developed the Ambient Augmented Memory System (AAMS) to support the caregiver in implementing interventions based on providing external memory aids to the PwD. To provide a successful intervention, it is necessary to use an individualized approach that considers the context of the PwD's situation. To this end, we extended the AAMS system to include an ontological model to support the context-aware tailoring of interventions for wandering. In this paper, we illustrate the ontology's flexibility in personalizing the AAMS system to support direct and indirect interventions for wandering through mobile devices.


A Market-Based Coordination Mechanism for Resource Planning Under Uncertainty

AAAI Conferences

Multiagent Resource Allocation (MARA) distributes a set of resources among a set of intelligent agents in order to respect the preferences of the agents and to maximize some measure of global utility, which may include minimizing total costs or maximizing total return. We are interested in MARA solutions that provide optimal or close-to-optimal allocation of resources in terms of maximizing a global welfare function with low communication and computation cost, with respect to the priority of agents, and temporal dependencies between resources. We propose an MDP approach for resource planning in multiagent environments. Our approach formulates each individual agent's internal preference model and success as a single MDP; then, to optimize global utility, we apply a market-based solution that coordinates these decentralized MDPs.
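The coordination layer can be pictured as a simple auction: each agent reports the value its own (separately solved) MDP assigns to each resource, and each resource goes to the highest bidder. The sketch below uses a first-price sealed-bid rule and toy valuations; the paper's actual market mechanism and priority handling are not detailed in this abstract.

```python
# Hedged sketch of market-based coordination: allocate each resource to the
# agent whose decentralized MDP values it most. Agent names, resources, and
# valuations are toy assumptions.

def auction(bids):
    """bids: {agent: {resource: value}} -> {resource: winning agent}."""
    allocation = {}
    resources = {r for agent_bids in bids.values() for r in agent_bids}
    for r in sorted(resources):
        # First-price rule: the highest reported value wins the resource.
        allocation[r] = max(bids, key=lambda a: bids[a].get(r, 0.0))
    return allocation

# Toy valuations, standing in for values computed by each agent's own MDP.
bids = {
    "agent1": {"cpu": 5.0, "link": 1.0},
    "agent2": {"cpu": 2.0, "link": 4.0},
}
alloc = auction(bids)
```

The appeal of this decomposition is that agents only exchange scalar bids, keeping communication cost low while each agent's planning stays local.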


Distributed Control of Situated Assistance in Large Domains with Many Tasks

AAAI Conferences

This paper tackles the problem of building situated prompting and assistance systems for guiding a human with a cognitive disability through a large domain containing multiple tasks. This problem is challenging because the target population has difficulty maintaining goals, recalling necessary steps and recognizing objects and potential actions (affordances), and therefore may not appear to be acting rationally. Prompts or cues from an automated system can be very helpful in this regard, but the domain is inherently partially observable due to sensor noise and uncertain human behaviours, making the task of selecting an appropriate prompt very challenging. Prior work has shown how such automated assistance for a single task can be modeled as a partially observable Markov decision process (POMDP). In this paper, we generalise this to multiple tasks, and show how to build a scalable, distributed and hierarchical controller. We demonstrate the algorithm in a set of simulated domains and show it can perform as well as the full model in many cases, and can give solutions to large problems (over 10^15 states and 10^9 observations) for which the full model fails to find a policy.
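The hierarchical structure can be sketched as one small controller per task plus a top-level arbiter that routes each step to the task controller currently expecting the most value from prompting. Everything below (class names, values, the arbitration rule) is a toy stand-in for the POMDP machinery, included only to show the shape of the decomposition.

```python
# Hedged sketch of distributed, hierarchical control: per-task controllers
# plus a top-level arbiter. Tasks and values are toy assumptions.

class TaskController:
    """Stand-in for a single-task POMDP controller."""

    def __init__(self, name, prompt_value):
        self.name = name
        self.prompt_value = prompt_value  # expected value of prompting now

    def act(self):
        return "prompt:" + self.name

def arbitrate(controllers):
    """Delegate to the task controller with the highest expected value."""
    best = max(controllers, key=lambda c: c.prompt_value)
    return best.act()

controllers = [TaskController("handwashing", 0.3), TaskController("tea", 0.7)]
action = arbitrate(controllers)
```

The key scalability property is that each task controller only reasons over its own (small) state space, so the joint problem is never enumerated.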


APRICODD: Approximate Policy Construction Using Decision Diagrams

Neural Information Processing Systems

We propose a method of approximate dynamic programming for Markov decision processes (MDPs) using algebraic decision diagrams (ADDs). We produce near-optimal value functions and policies with much lower time and space requirements than exact dynamic programming. Our method reduces the sizes of the intermediate value functions generated during value iteration by replacing the values at the terminals of the ADD with ranges of values. Our method is demonstrated on a class of large MDPs (with up to 34 billion states), and we compare the results with the optimal value functions.
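The approximation step described above can be sketched directly: leaf values of the value function are grouped into ranges of width at most epsilon, and each group is represented by one number (here, the range's midpoint). Merging leaves lets more ADD subgraphs coincide, shrinking the diagram at a bounded cost in value error. The bucketing rule and numbers below are illustrative assumptions, not APRICODD's exact procedure.

```python
# Hedged sketch of APRICODD-style leaf merging: replace each leaf value with
# the midpoint of an epsilon-wide bucket, so many leaves become identical and
# the decision diagram above them can collapse.

def merge_leaves(values, epsilon):
    """Map each distinct value to the midpoint of its epsilon-wide bucket."""
    merged = {}
    for v in sorted(set(values)):
        # Start a new bucket when v is more than epsilon above the bucket base.
        if not merged or v - lo > epsilon:
            lo = v
        merged[v] = lo + epsilon / 2.0
    return [merged[v] for v in values]

# Five leaf values collapse to three representatives; each replacement is
# within epsilon/2 of the original value.
approx = merge_leaves([0.0, 0.4, 1.0, 9.75, 10.0], epsilon=0.5)
```

After merging, leaves 0.0 and 0.4 share one terminal and 9.75 and 10.0 share another, which is what allows the intermediate value functions in value iteration to stay small.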