Human Behaviour


Social Media Data Mining of Human Behaviour during Bushfire Evacuation

Wu, Junfeng, Zhou, Xiangmin, Kuligowski, Erica, Singh, Dhirendra, Ronchi, Enrico, Kinateder, Max

arXiv.org Artificial Intelligence

Traditional data sources on bushfire evacuation behaviour, such as quantitative surveys and manual observations, have severe limitations. Mining social media data related to bushfire evacuations promises to close this gap by allowing the collection and processing of large amounts of behavioural data that are low-cost, accurate, and potentially enriched with location and contextual information. However, social media data have limitations of their own: they can be scattered, incomplete, and informal. Together, these limitations pose several challenges to their usefulness for better understanding bushfire evacuation. To overcome these challenges and provide guidance on which social media data can be used and how, this scoping review of the literature reports on recent advances in relevant data mining techniques. In addition, future applications and open problems are discussed. We envision future applications such as evacuation model calibration and validation, emergency communication, personalised evacuation training, and resource allocation for evacuation preparedness. We identify open problems such as data quality, bias and representativeness, geolocation accuracy, contextual understanding, crisis-specific lexicon and semantics, and multimodal data interpretation.
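As a rough illustration of the kind of processing such a review covers, the sketch below filters a hypothetical collection of social media posts by evacuation-related keywords and keeps only geotagged items. The Post structure, keyword list, and sample data are illustrative assumptions, not material from the paper.

```python
# Minimal sketch (hypothetical data layout): keyword and geotag filtering of
# social media posts, one of the basic mining steps surveyed in reviews like this.
from dataclasses import dataclass
from typing import Optional, Tuple, List

@dataclass
class Post:
    text: str
    geotag: Optional[Tuple[float, float]]  # (lat, lon) if the post is geolocated
    timestamp: str

EVACUATION_KEYWORDS = {"bushfire", "evacuate", "evacuation", "fire front", "leave now"}

def is_evacuation_related(post: Post) -> bool:
    """Very rough lexical filter; real pipelines would use trained classifiers."""
    text = post.text.lower()
    return any(kw in text for kw in EVACUATION_KEYWORDS)

def mine_evacuation_posts(posts: List[Post]) -> List[Post]:
    """Keep keyword-matched, geolocated posts for downstream behaviour analysis."""
    return [p for p in posts if is_evacuation_related(p) and p.geotag is not None]

if __name__ == "__main__":
    sample = [
        Post("We need to evacuate before the bushfire reaches town", (-37.81, 144.96), "2020-01-03T14:00"),
        Post("Lovely beach day", None, "2020-01-03T14:05"),
    ]
    print(mine_evacuation_posts(sample))
```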



Situating AI Agents in their World: Aspective Agentic AI for Dynamic Partially Observable Information Systems

Bentley, Peter J., Lim, Soo Ling, Ishikawa, Fuyuki

arXiv.org Artificial Intelligence

Agentic LLM agents are often little more than autonomous chatbots: actors following scripts, often controlled by an unreliable director. This work introduces a bottom-up framework that situates AI agents in their environment, with all behaviours triggered by changes in that environment. It introduces the notion of aspects, similar to the idea of umwelt, where sets of agents perceive their environment differently from each other, enabling clearer control of information. We provide an illustrative implementation and show that, compared to a typical architecture, which leaks information up to 83% of the time, aspective agentic AI enables zero information leakage. We anticipate that this concept of specialist agents working efficiently in their own information niches can improve both security and efficiency.
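The abstract does not give implementation details, but the general idea of aspects can be sketched as per-agent views of a shared environment: each agent is triggered only by changes inside its own aspect, so information outside that slice cannot leak into its behaviour. All class and field names below are illustrative assumptions.

```python
# Minimal sketch of the aspect idea: agents subscribe to a named slice (aspect) of
# the shared state and are only ever handed that slice. Names are illustrative.
class Agent:
    def __init__(self, name: str):
        self.name = name

    def on_change(self, field: str, view: dict) -> None:
        print(f"{self.name} reacts to '{field}' with view {view}")

class Environment:
    def __init__(self):
        self.state = {}        # full world state, keyed by field name
        self.subscribers = []  # (agent, aspect) pairs

    def register(self, agent: Agent, aspect: set) -> None:
        self.subscribers.append((agent, aspect))

    def update(self, field: str, value) -> None:
        self.state[field] = value
        # Trigger only agents whose aspect contains the changed field,
        # and pass them nothing beyond their own aspect of the state.
        for agent, aspect in self.subscribers:
            if field in aspect:
                view = {k: self.state[k] for k in aspect if k in self.state}
                agent.on_change(field, view)

env = Environment()
env.register(Agent("triage_agent"), aspect={"patient_symptoms"})
env.register(Agent("billing_agent"), aspect={"invoice_total"})
env.update("patient_symptoms", "fever")  # only triage_agent is triggered
env.update("invoice_total", 120.0)       # only billing_agent is triggered
```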


Humans expect rationality and cooperation from LLM opponents in strategic games

Barak, Darija, Costa-Gomes, Miguel

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment examining differences in human behaviour in a multi-player p-beauty contest against other humans and against LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, which is largely explained by the increased prevalence of 'zero' Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs' perceived reasoning ability and, unexpectedly, their propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects' behaviour and their beliefs about LLMs' play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.
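For readers unfamiliar with the game, the sketch below spells out the p-beauty contest arithmetic and why iterated reasoning leads towards the zero Nash equilibrium. The value p = 2/3 and the level-0 anchor of 50 are illustrative assumptions, not necessarily the parameters used in the experiment.

```python
# Minimal sketch of p-beauty contest arithmetic (p = 2/3 and anchor = 50 chosen for
# illustration). The winner is whoever is closest to p times the average choice;
# iterating best responses (ignoring one's own effect on the mean) drives choices
# towards the zero Nash equilibrium.
def winning_target(choices, p=2/3):
    """The number the group is judged against: p times the mean choice."""
    return p * sum(choices) / len(choices)

def level_k_choice(k, anchor=50.0, p=2/3):
    """Level-0 plays the anchor; level-k best-responds to level-(k-1)."""
    choice = anchor
    for _ in range(k):
        choice = p * choice
    return choice

print(winning_target([10, 25, 40]))                     # 16.67, so 10 wins here
print([round(level_k_choice(k), 2) for k in range(6)])  # 50, 33.3, 22.2, ... towards 0
```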


Model-based optimisation for the personalisation of robot-assisted gait training

Christou, Andreas, Gordon, Daniel F. N., Stouraitis, Theodoros, Moreno, Juan C., Vijayakumar, Sethu

arXiv.org Artificial Intelligence

Personalised rehabilitation can be key to promoting gait independence and quality of life. Robots can enhance therapy by systematically delivering support in gait training, but often use one-size-fits-all control methods, which can be suboptimal. Here, we describe a model-based optimisation method for designing and fine-tuning personalised robotic controllers. As a case study, we formulate the objective of providing assistance as needed as an optimisation problem, and we demonstrate how musculoskeletal modelling can be used to develop personalised interventions. Eighteen healthy participants (age 26 ± 4) were recruited, and personalised control parameters were obtained for each to provide assistance as needed during a unilateral tracking task. A comparison was carried out between the personalised controller and the non-personalised controller. In simulation, a significant improvement was predicted when the personalised parameters were used. Experimentally, responses varied: six subjects showed significant improvements with the personalised parameters, eight subjects showed no obvious change, while four subjects performed worse. High inter-personal and intra-personal variability was observed with both controllers. This study highlights the importance of personalised control in robot-assisted gait training, and the need for a better estimation of human-robot interaction and human behaviour to realise the benefits of model-based optimisation.
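The musculoskeletal modelling and controller described in the abstract are not reproduced here; the sketch below is a deliberately toy stand-in that shows only the optimisation pattern: simulate a model of the assisted task, then tune an assistance parameter against a cost that trades tracking error off against assistance effort. The 1-DoF plant, gains, and weights are assumptions for illustration.

```python
# Hypothetical, simplified stand-in for model-based controller personalisation:
# a toy 1-DoF tracking model replaces the musculoskeletal simulation, and the
# robot assistance gain is tuned to trade tracking error against assistance effort.
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_tracking(assist_gain, human_gain=2.0, dt=0.01, duration=5.0):
    """Toy plant: human + robot effort track a sinusoidal reference; returns RMS error."""
    t = np.arange(0.0, duration, dt)
    ref = 0.5 * np.sin(2 * np.pi * 0.5 * t)
    x, v = 0.0, 0.0
    err = np.zeros_like(t)
    for i in range(len(t)):
        e = ref[i] - x
        effort = (human_gain + assist_gain) * e - 0.5 * v  # PD-like combined effort
        v += effort * dt
        x += v * dt
        err[i] = e
    return float(np.sqrt(np.mean(err ** 2)))

def objective(assist_gain, effort_weight=0.05):
    # Assist-as-needed flavour: penalise tracking error plus assistance magnitude.
    return simulate_tracking(assist_gain) + effort_weight * abs(assist_gain)

res = minimize_scalar(objective, bounds=(0.0, 20.0), method="bounded")
print(f"personalised assistance gain (toy model): {res.x:.2f}")
```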


AToM: Adaptive Theory-of-Mind-Based Human Motion Prediction in Long-Term Human-Robot Interactions

Liao, Yuwen, Cao, Muqing, Xu, Xinhang, Xie, Lihua

arXiv.org Artificial Intelligence

Humans learn from observations and experiences to adjust their behaviour towards better performance. Interacting with such dynamic humans is challenging, as the robot needs to predict human motion accurately for safe and efficient operation. Long-term interactions with dynamic humans have not been extensively studied in prior work. We propose an adaptive human prediction model based on the Theory-of-Mind (ToM), a fundamental social-cognitive ability that enables humans to infer others' behaviours and intentions. We formulate the human's internal belief about others using a game-theoretic model, which predicts the future motions of all agents in a navigation scenario. To estimate this evolving belief, we use an Unscented Kalman Filter to update the behavioural parameters in the human internal model. Our formulation provides unique interpretability of dynamic human behaviours by inferring how the human predicts the robot. We demonstrate through long-term experiments in both simulations and real-world settings that our prediction effectively promotes safety and efficiency in downstream robot planning. Code will be available at https://github.com/centiLinda/AToM-human-prediction.git.
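The game-theoretic internal model from the abstract is not reproduced here; as a hedged illustration of the estimation step only, the sketch below uses filterpy's Unscented Kalman Filter to track two hypothetical behavioural parameters (preferred speed and goal bias) from an observed pedestrian velocity. The motion model, parameter names, and noise values are assumptions.

```python
# Minimal sketch of the estimation pattern: latent behavioural parameters tracked
# with an Unscented Kalman Filter. The toy observation model below stands in for
# the paper's game-theoretic internal model; all values are illustrative.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

GOAL = np.array([5.0, 0.0])      # assumed pedestrian goal
POSITION = np.array([0.0, 0.0])  # current observed pedestrian position

def fx(x, dt):
    # Behavioural parameters [preferred_speed, goal_bias] drift slowly (random walk).
    return x

def hx(x):
    # Predicted velocity: head towards the goal at preferred_speed, scaled by goal_bias.
    preferred_speed, goal_bias = x
    direction = GOAL - POSITION
    direction = direction / np.linalg.norm(direction)
    return preferred_speed * goal_bias * direction

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=2, dt=0.1, hx=hx, fx=fx, points=points)
ukf.x = np.array([1.2, 1.0])   # initial guess of the behavioural parameters
ukf.P *= 0.5
ukf.R = np.eye(2) * 0.05       # measurement noise on observed velocity
ukf.Q = np.eye(2) * 1e-3       # slow drift of the latent parameters

observed_velocity = np.array([1.0, 0.1])  # e.g. from a pedestrian tracker
ukf.predict()
ukf.update(observed_velocity)
print("updated behavioural parameters:", ukf.x)
```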


Fully Data-driven but Interpretable Human Behavioural Modelling with Differentiable Discrete Choice Model

Makinoshima, Fumiyasu, Mitomi, Tatsuya, Makihara, Fumiya, Segawa, Eigo

arXiv.org Artificial Intelligence

Discrete choice models are essential for modelling various decision-making processes in human behaviour. However, the specification of these models has depended heavily on domain knowledge from experts, and the fully automated but interpretable modelling of complex human behaviours has been a long-standing challenge. In this paper, we introduce the differentiable discrete choice model (Diff-DCM), a fully data-driven method for the interpretable modelling, learning, prediction, and control of complex human behaviours, realised through differentiable programming. Solely from input features and choice outcomes, without any prior knowledge, Diff-DCM can estimate interpretable closed-form utility functions that reproduce observed behaviours. Comprehensive experiments with both synthetic and real-world data demonstrate that Diff-DCM can be applied to various types of data and requires only modest computational resources: estimation can be completed within tens of seconds on a laptop without any accelerators. In these experiments, we also demonstrate that, using its differentiability, Diff-DCM can provide useful insights into human behaviours, such as an optimal intervention path for effective behavioural change. This study provides a strong basis for the fully automated and reliable modelling, prediction, and control of human behaviours.
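Diff-DCM itself relies on differentiable programming and is not reproduced here; the sketch below illustrates only the underlying idea of a discrete choice model fitted by gradient descent, using a hand-rolled multinomial logit in NumPy. The simulated data, learning rate, and linear utility form are assumptions for illustration.

```python
# Minimal sketch of a differentiable discrete choice idea: a multinomial logit with
# linear utilities, fitted by full-batch gradient descent on the negative
# log-likelihood. Not the authors' Diff-DCM implementation; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_alt, n_feat = 500, 3, 2
X = rng.normal(size=(n_obs, n_alt, n_feat))   # features of each alternative
true_w = np.array([1.5, -0.8])                # ground-truth utility weights
u_true = X @ true_w
p_true = np.exp(u_true) / np.exp(u_true).sum(axis=1, keepdims=True)
y = np.array([rng.choice(n_alt, p=p) for p in p_true])  # simulated choices

w = np.zeros(n_feat)
lr = 0.5
for _ in range(300):
    u = X @ w                                  # utilities, shape (n_obs, n_alt)
    p = np.exp(u - u.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # choice probabilities (softmax)
    onehot = np.eye(n_alt)[y]
    grad = ((p - onehot)[:, :, None] * X).sum(axis=1).mean(axis=0)  # d(NLL)/dw
    w -= lr * grad

print("recovered utility weights:", w)  # should be close to true_w
```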


Do LLM Personas Dream of Bull Markets? Comparing Human and AI Investment Strategies Through the Lens of the Five-Factor Model

Borman, Harris, Leontjeva, Anna, Pizzato, Luiz, Jiang, Max Kun, Jermyn, Dan

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated the ability to adopt a personality and behave in a human-like manner. There is a large body of research that investigates the behavioural impacts of personality in less obvious areas such as investment attitudes or creative decision making. In this study, we investigated whether an LLM persona with a specific Big Five personality profile would perform an investment task similarly to a human with the same personality traits. We used a simulated investment task to determine if these results could be generalised into actual behaviours. In this simulated environment, our results show these personas produced meaningful behavioural differences in all assessed categories, with these behaviours generally being consistent with expectations derived from human research. We found that LLMs are able to generalise traits into expected behaviours in three areas: learning style, impulsivity and risk appetite while environmental attitudes could not be accurately represented. In addition, we showed that LLMs produce behaviour that is more reflective of human behaviour in a simulation environment compared to a survey environment.
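The paper's prompts and task design are not reproduced here; as a hypothetical illustration of how a Big Five profile might be encoded for such a study, the sketch below builds a persona system prompt for a simulated investment decision. The call_llm function is a placeholder, and the trait scale and wording are assumptions.

```python
# Hypothetical sketch: encode a Big Five profile as an LLM persona prompt for a
# simulated investment decision. `call_llm` is a placeholder, not a real API.
def persona_prompt(traits: dict) -> str:
    lines = [f"- {name}: {score}/5" for name, score in traits.items()]
    return (
        "You are a retail investor with the following Big Five personality profile:\n"
        + "\n".join(lines)
        + "\nStay in character for every decision you make."
    )

def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("Placeholder: wire up an LLM client of your choice here.")

traits = {"Openness": 4, "Conscientiousness": 2, "Extraversion": 3,
          "Agreeableness": 3, "Neuroticism": 5}
task = ("You hold $10,000 in savings. A volatile tech stock you own just dropped 15%. "
        "Do you buy more, hold, or sell? Answer with one word and a short reason.")

print(persona_prompt(traits))
# decision = call_llm(persona_prompt(traits), task)
```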


Semantics in Robotics: Environmental Data Can't Yield Conventions of Human Behaviour

Freestone, Jamie Milton

arXiv.org Artificial Intelligence

The word semantics, in robotics and AI, has no canonical definition. It usually serves to denote additional data provided to autonomous agents to aid human-robot interaction (HRI). Most researchers seem, implicitly, to understand that such data cannot simply be extracted from environmental data. I try to make explicit why this is so and argue that so-called semantics are best understood as data comprising conventions of human behaviour. This includes labels, most obviously, but also places, ontologies, and affordances. Object affordances are especially problematic because they require not only semantics that are not in the environmental data (conventions of object use) but also an understanding of physics and object combinations that would, if achieved, constitute artificial superintelligence.