prospect theory
Prospect Theory in Physical Human-Robot Interaction: A Pilot Study of Probability Perception
Lin, Yixiang, Yang, Tiancheng, Eden, Jonathan, Tan, Ying
Understanding how humans respond to uncertainty is critical for designing safe and effective physical human-robot interaction (pHRI), as physically working with robots introduces multiple sources of uncertainty, including trust, comfort, and perceived safety. Conventional pHRI control frameworks typically build on optimal control theory, which assumes that human actions minimize a cost function; however, human behavior under uncertainty often departs from such optimal patterns. Addressing this gap requires a better understanding of how humans actually behave under uncertainty. This pilot study implemented a physically coupled target-reaching task in which the robot delivered assistance or disturbances with systematically varied probabilities (10% to 90%). Analysis of participants' force inputs and decision-making strategies revealed two distinct behavioral clusters: a "trade-off" group that modulated their physical responses according to disturbance likelihood, and an "always-compensate" group characterized by strong risk aversion irrespective of probability. These findings provide empirical evidence that human decision-making in pHRI is highly individualized and that the perception of probability can differ from its true value. Accordingly, the study highlights the need for more interpretable behavioral models, such as cumulative prospect theory (CPT), to more accurately capture these behaviors and inform the design of future adaptive robot controllers.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.68)
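The gap between true and perceived probability reported in this study is exactly what CPT's probability weighting function formalizes. A minimal sketch, using the standard one-parameter Tversky-Kahneman (1992) form with their estimated γ = 0.61 (an illustrative choice, not a fit to this study's data):

```python
import math

def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function.

    Small probabilities are overweighted and large ones underweighted,
    so the perceived likelihood w(p) differs from the true value p.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Perceived vs. true probability across the 10%-90% range used in the study
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"true p = {p:.1f}  perceived w(p) = {tk_weight(p):.3f}")
```

With these parameters, a 10% disturbance probability is perceived as roughly 19%, while a 90% probability is perceived as roughly 71%, which is consistent with the "always-compensate" pattern of overreacting to unlikely disturbances.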
Individual utilities of life satisfaction reveal inequality aversion unrelated to political alignment
Cooper, Crispin, Fredrich, Ana, Reggiani, Tommaso, Poortinga, Wouter
How should well-being be prioritised in society, and what trade-offs are people willing to make between fairness and personal well-being? We investigate these questions using a stated preference experiment with a nationally representative UK sample (n = 300), in which participants evaluated life satisfaction outcomes for both themselves and others under conditions of uncertainty. Individual-level utility functions were estimated using an Expected Utility Maximisation (EUM) framework and tested for sensitivity to the overweighting of small probabilities, as characterised by Cumulative Prospect Theory (CPT). A majority of participants displayed concave (risk-averse) utility curves and showed stronger aversion to inequality in societal life satisfaction outcomes than to personal risk. These preferences were unrelated to political alignment, suggesting a shared normative stance on fairness in well-being that cuts across ideological boundaries. The results challenge the use of average life satisfaction as a policy metric, and support the development of nonlinear utility-based alternatives that more accurately reflect collective human values. Implications for public policy, well-being measurement, and the design of value-aligned AI systems are discussed.
- North America > United States (0.28)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Government > Regional Government (0.93)
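The link this paper draws between a concave utility curve, risk aversion, and inequality aversion can be sketched numerically. The CRRA form and ρ = 0.5 below are illustrative assumptions, not the utility functions estimated in the paper:

```python
def crra_utility(x: float, rho: float = 0.5) -> float:
    """Concave (risk-averse) CRRA utility; larger rho bends the curve more."""
    return x ** (1 - rho) / (1 - rho)

# Personal risk: a 50/50 lottery over life satisfaction 4 vs 8,
# against its expected value of 6 for certain
lottery_eu = 0.5 * crra_utility(4) + 0.5 * crra_utility(8)
certain_eu = crra_utility(6)

# Societal fairness: an unequal society (2, 10) vs an equal one (6, 6)
# with the same average life satisfaction
unequal_avg_utility = (crra_utility(2) + crra_utility(10)) / 2
equal_avg_utility = crra_utility(6)

print(certain_eu > lottery_eu)                   # risk aversion
print(equal_avg_utility > unequal_avg_utility)   # inequality aversion
```

The same concavity drives both effects: averaging utilities, rather than outcomes, penalises spread, whether the spread is across a person's possible futures or across members of a society. This is why a mean life-satisfaction metric can mask distributional preferences.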
Does Calibration Affect Human Actions?
Nizri, Meir, Azaria, Amos, Gupta, Chirag, Hazon, Noam
Calibration has been proposed as a way to enhance the reliability and adoption of machine learning classifiers. We study a particular aspect of this proposal: how does calibrating a classification model affect the decisions made by non-expert humans consuming the model's predictions? We perform a Human-Computer Interaction (HCI) experiment to ascertain the effect of calibration on (i) trust in the model, and (ii) the correlation between decisions and predictions. We also propose further corrections to the reported calibrated scores based on Kahneman and Tversky's prospect theory from behavioral economics, and study the effect of these corrections on trust and decision-making. We find that calibration is not sufficient on its own; the prospect theory correction is crucial for increasing the correlation between human decisions and the model's predictions. While this increased correlation suggests higher trust in the model, responses to "Do you trust the model more?" are unaffected by the method used.
- Oceania > Australia (0.46)
- North America > United States (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Government (0.68)
- Banking & Finance (0.46)
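One way to sketch a prospect-theory correction of this kind (not necessarily the authors' exact transform) is to report the score q = w⁻¹(p), so that a reader who mentally distorts q through the weighting function w perceives the calibrated probability p. Since w is strictly increasing on [0, 1] for the parameter used here, the inverse can be found by bisection:

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting (perceived probability)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def corrected_score(p_target: float, gamma: float = 0.61, tol: float = 1e-9) -> float:
    """Report q = w^-1(p_target): after the reader distorts q through w,
    the perceived probability lands on the calibrated value p_target."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tk_weight(mid, gamma) < p_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Low calibrated scores get reported lower (to offset overweighting of small probabilities) and high scores get reported higher, which is the qualitative direction such a correction must take.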
Prospect Theory Fails for LLMs: Revealing Instability of Decision-Making under Epistemic Uncertainty
Wang, Rui, Lin, Qihan, Liu, Jiayu, Zong, Qing, Zheng, Tianshi, Wang, Weiqi, Song, Yangqiu
Prospect Theory (PT) models human decision-making under uncertainty, while epistemic markers (e.g., "maybe") serve to express uncertainty in language. However, it remains largely unexplored whether Prospect Theory applies to contemporary Large Language Models (LLMs) and whether epistemic markers, which express human uncertainty, affect their decision-making behaviour. To address these research gaps, we design a three-stage experiment based on economic questionnaires. We propose a more general and precise evaluation framework to model LLMs' decision-making behaviour under PT, introducing uncertainty through the empirical probability values associated with commonly used epistemic markers in comparable contexts. We then incorporate epistemic markers into the evaluation framework based on their corresponding probability values to examine their influence on LLM decision-making behaviours. Our findings suggest that modelling LLMs' decision-making with PT is not consistently reliable, particularly when uncertainty is expressed in diverse linguistic forms. Our code is released at https://github.com/HKUST-KnowComp/MarPT.
- Europe > Austria (0.28)
- North America > United States (0.28)
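The pipeline described above (mapping epistemic markers to probability values, then scoring prospects under PT) can be sketched as follows; the marker probabilities and PT parameters are illustrative placeholders, not the paper's empirical values:

```python
# Hypothetical marker-to-probability map: illustrative only; the paper
# derives empirical probability values for markers from comparable contexts
MARKER_PROB = {"definitely": 0.95, "probably": 0.70, "maybe": 0.40, "unlikely": 0.15}

def pt_value(outcome: float, p: float,
             alpha: float = 0.88, lam: float = 2.25, gamma: float = 0.61) -> float:
    """PT value of a single-outcome prospect: w(p) * v(outcome)."""
    w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    v = outcome**alpha if outcome >= 0 else -lam * (-outcome) ** alpha
    return w * v

def prospect_value_from_marker(marker: str, outcome: float) -> float:
    """Score 'you <marker> win/lose <outcome>' via the marker's probability."""
    return pt_value(outcome, MARKER_PROB[marker])
```

Under this scoring, "you definitely win 100" carries a higher subjective value than "you maybe win 100"; the paper's finding is that LLM choices do not track such PT values consistently once uncertainty is phrased through markers.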
An analysis of AI Decision under Risk: Prospect theory emerges in Large Language Models
Judgment of risk is key to decision-making under uncertainty. As Daniel Kahneman and Amos Tversky famously discovered, humans do so in a distinctive way that departs from mathematical rationalism. Specifically, they demonstrated experimentally that humans accept more risk when they feel themselves at risk of losing something than when they might gain. I report the first tests of Kahneman and Tversky's landmark 'prospect theory' with Large Language Models, including today's state of the art chain-of-thought 'reasoners'. In common with humans, I find that prospect theory often anticipates how these models approach risky decisions across a range of scenarios. I also demonstrate that context is key to explaining much of the variance in risk appetite. The 'frame' through which risk is apprehended appears to be embedded within the language of the scenarios tackled by the models. Specifically, I find that military scenarios generate far larger 'framing effects' than do civilian settings, ceteris paribus. My research suggests, therefore, that language models the world, capturing our human heuristics and biases. But also that these biases are uneven - the idea of a 'frame' is richer than simple gains and losses. Wittgenstein's notion of 'language games' explains the contingent, localised biases activated by these scenarios. Finally, I use my findings to reframe the ongoing debate about reasoning and memorisation in LLMs.
- Research Report > Experimental Study (0.66)
- Research Report > New Finding (0.48)
- Government > Military (1.00)
- Banking & Finance (0.93)
- Government > Foreign Policy (0.93)
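The gain/loss asymmetry at the heart of these framing effects can be reproduced with the standard Tversky-Kahneman value function (parameters from their 1992 estimates; probabilities treated linearly here for simplicity):

```python
def v(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """PT value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

# Gain frame: a sure +50 vs a 50% chance of +100 (equal expected value)
sure_gain, gamble_gain = v(50), 0.5 * v(100)
# Loss frame: a sure -50 vs a 50% chance of -100 (equal expected value)
sure_loss, gamble_loss = v(-50), 0.5 * v(-100)

print(sure_gain > gamble_gain)   # risk-averse in the gain frame -> True
print(gamble_loss > sure_loss)   # risk-seeking in the loss frame -> True
```

This is the baseline asymmetry the LLM experiments probe; the article's contribution is that the size of this effect varies with context (e.g., military vs civilian wording), which a fixed value function alone does not predict.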
Seeing Through Risk: A Symbolic Approximation of Prospect Theory
Yousaf, Ali Arslan, Rehman, Umair, Danish, Muhammad Umair
We propose a novel symbolic modeling framework for decision-making under risk that merges interpretability with the core insights of Prospect Theory. Our approach replaces opaque utility curves and probability weighting functions with transparent, effect-size-guided features. We mathematically formalize the method, demonstrate its ability to replicate well-known framing and loss-aversion phenomena, and provide an end-to-end empirical validation on synthetic datasets. The resulting model achieves competitive predictive performance while yielding clear coefficients mapped onto psychological constructs, making it suitable for applications ranging from AI safety to economic policy analysis.
GPT's Judgements Under Uncertainty
Saeedi, Payam, Goodarzi, Mahsa
We investigate the presence of cognitive biases in three large language models (LLMs): GPT-4o, Gemma 2, and Llama 3.1. The study uses 1,500 experiments across nine established cognitive biases to evaluate the responses and consistency of the models. GPT-4o demonstrated the strongest overall performance. Gemma 2 showed strengths in addressing the sunk cost fallacy and prospect theory; however, its performance varied across different biases. Llama 3.1 consistently underperformed, relying on heuristics and exhibiting frequent inconsistencies and contradictions. The findings highlight the challenges of achieving robust and generalizable reasoning in LLMs, and underscore the need for further development to mitigate biases in artificial general intelligence (AGI). The study emphasizes the importance of integrating statistical reasoning and ethical considerations in future AI development.
Cognitive biases and heuristics are well-established phenomena of the human mind, shaping how individuals process information, make judgments, and make decisions. These biases emerge from heuristics -- mental shortcuts that simplify complex tasks by substituting them with cognitively easier alternatives [1]. While heuristics enable quick and efficient reasoning, they also introduce systematic errors that impact judgment and decision-making [2]-[4]. Understanding whether such biases, embedded in the data and interactions that shape Large Language Models (LLMs), are reflected in their outputs is not only critical for evaluating their alignment with human cognition but also vital for the development of Artificial General Intelligence (AGI). AGI, envisioned as systems capable of performing any intellectual task a human can, must navigate the intricacies of human-like reasoning while avoiding harmful or irresponsible biases.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Switzerland (0.04)
- Health & Medicine (0.46)
- Education (0.46)
ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics
Ramos, Eduardo da C., Campos, Maria Luiza M., Baião, Fernanda
Organizational decision-making is crucial for success, yet cognitive biases can significantly affect risk preferences, leading to suboptimal outcomes. Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences, including financial losses. This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences during decision-making. It makes a novel contribution by automating the identification and explanation of risk-seeking preferences using Cumulative Prospect Theory (CPT) from behavioral economics. The ABI approach transforms theoretical insights into actionable, real-time guidance, making them accessible to a broader range of organizations and decision-makers without requiring specialized personnel. By translating CPT concepts into business language, the approach facilitates widespread adoption and enhances decision-making processes with deep behavioral insights. Our systematic literature review identified significant gaps in existing methods, especially the lack of automated solutions with a concrete mechanism for identifying risk-seeking preferences, and the absence of formal knowledge representation, such as ontologies, for identifying and explaining risk preferences. The ABI approach addresses these gaps, offering a significant contribution to decision-making research and practice. Furthermore, it enables automatic collection of historical decision data with risk preferences, providing valuable insights for enhancing strategic management and long-term organizational performance. An experiment provided preliminary evidence of its effectiveness in helping decision-makers recognize their risk-seeking preferences during decision-making in the loss domain.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York (0.04)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Banking & Finance (1.00)
- Government (0.92)
- Education (0.67)
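Stripped of the ontology and CPT-based explanation layer, the core detection rule such an approach automates can be sketched as a simple check on loss-domain choices (a deliberately minimal illustration, not the ABI mechanism itself):

```python
def flags_risk_seeking(chose_gamble: bool,
                       sure_loss: float,
                       gamble_loss: float,
                       gamble_prob: float) -> bool:
    """Flag a loss-domain choice as risk-seeking: the decision-maker took a
    gamble whose expected loss is no better than the sure loss on offer.
    Losses are negative numbers, e.g. sure_loss=-50, gamble_loss=-100."""
    expected_gamble = gamble_prob * gamble_loss
    return chose_gamble and expected_gamble <= sure_loss

# A sure -50 vs a 50% chance of -100: equal expected value, so choosing
# the gamble reveals risk seeking, the pattern CPT predicts for losses
print(flags_risk_seeking(True, -50.0, -100.0, 0.5))
```

In practice, the expected-value comparison would be replaced by a CPT valuation per decision-maker; the point of the rule is only to show where automation can hook into recorded choices.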
Calibration of Quantum Decision Theory: Aversion to Large Losses and Predictability of Probabilistic Choices
Kovalenko, T., Vincent, S., Yukalov, V. I., Sornette, D.
We present the first calibration of quantum decision theory (QDT) to a dataset of binary risky choices. We quantitatively account for the fraction of choice reversals between two repetitions of the experiment, using a probabilistic choice formulation in the simplest form without model assumptions or adjustable parameters. The prediction of choice reversal is then refined by introducing heterogeneity between decision makers through their differentiation into two groups: "majoritarian" and "contrarian" (in proportion 3:1). This supports the first fundamental tenet of QDT, which models choice as an inherently probabilistic process, where the probability of a prospect can be expressed as the sum of its utility and attraction factors. We propose to parameterise the utility factor with a stochastic version of cumulative prospect theory (logit-CPT), and the attraction factor with a constant absolute risk aversion (CARA) function. For this dataset, and penalising the larger number of QDT parameters via the Wilks test of nested hypotheses, the QDT model is found to perform significantly better than logit-CPT at both the aggregate and individual levels, and for all considered fit criteria, both for the first experiment iteration and for predictions of the second, "out-of-sample" iteration. The distinctive QDT effect captured by the attraction factor is most appreciable (i.e., most relevant and strongest in amplitude) for prospects with large losses. Our quantitative analysis of the experimental results supports the existence of an intrinsic limit of predictability, which is associated with the inherent probabilistic nature of choice. The results of the paper can find applications both in predicting the choices of human decision makers and in organizing the operation of artificial intelligence.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report (1.00)
- Overview (1.00)
- Health & Medicine (0.46)
- Leisure & Entertainment (0.46)
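The first tenet described above, choice probability as the sum of a utility factor and an attraction factor, can be sketched as follows. The logit utility factor and a fixed quarter-law attraction are simplifications of the paper's logit-CPT and CARA parameterisations:

```python
import math

def qdt_choice_probabilities(u_a: float, u_b: float,
                             beta: float = 1.0, q_a: float = 0.25):
    """QDT sketch: p(A) = f(A) + q(A), where f is a logit utility factor
    and q_a is the attraction toward prospect A.  The 'quarter law'
    suggests |q| of roughly 0.25; attractions of the two prospects have
    opposite signs, so the probabilities still sum to one."""
    f_a = math.exp(beta * u_a) / (math.exp(beta * u_a) + math.exp(beta * u_b))
    p_a = min(1.0, max(0.0, f_a + q_a))
    return p_a, 1.0 - p_a
```

With equal utilities the logit factor alone would give a 50/50 split; the attraction factor shifts this to roughly 75/25, which is how QDT can explain systematic preferences that utilities alone cannot.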
Prospect Theory-inspired Automated P2P Energy Trading with Q-learning-based Dynamic Pricing
Timilsina, Ashutosh, Silvestri, Simone
The widespread adoption of distributed energy resources and the advent of smart grid technologies have allowed traditionally passive power system users to become actively involved in energy trading. Recognizing that traditional centralized grid-driven energy markets offer minimal profitability to these users, recent research has shifted focus towards decentralized peer-to-peer (P2P) energy markets. In these markets, users trade energy with each other, with higher benefits than buying from or selling to the grid. However, most research on P2P energy trading largely overlooks user perception in the trading process, assuming constant availability, participation, and full compliance. As a result, these approaches may lead to negative attitudes and reduced engagement over time. In this paper, we design an automated P2P energy market that takes user perception into account. We employ prospect theory to model user perception and formulate an optimization framework to maximize the buyer's perception while matching demand and production. Given the non-linear and non-convex nature of the optimization problem, we propose a Differential Evolution-based Algorithm for Trading Energy, called DEbATE. Additionally, we introduce a risk-sensitive Q-learning algorithm, named Pricing mechanism with Q-learning and Risk-sensitivity (PQR), which learns the optimal price for sellers considering their perceived utility. Results based on real traces of energy consumption and production, as well as realistic prospect theory functions, show that our approach achieves a 26% higher perceived value for buyers and generates 7% more reward for sellers, compared to a recent state-of-the-art approach.
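The role prospect theory can play in a risk-sensitive Q-learning update can be sketched minimally: the agent learns on the perceived (PT-transformed) reward rather than the raw profit, so losses loom larger in the learned prices. This is an illustration of the general idea under standard Tversky-Kahneman parameters, not the paper's PQR algorithm:

```python
def pt_reward(r: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory transform of a raw reward: losses loom larger."""
    return r**alpha if r >= 0 else -lam * (-r) ** alpha

def q_update(q: dict, state, action, reward: float, next_max: float,
             lr: float = 0.1, gamma_rl: float = 0.95) -> dict:
    """One tabular Q-learning step on the PT-perceived reward: the seller's
    perceived utility, not the raw profit, drives the value update."""
    old = q.get((state, action), 0.0)
    target = pt_reward(reward) + gamma_rl * next_max
    q[(state, action)] = old + lr * (target - old)
    return q
```

Because `pt_reward` amplifies losses by the loss-aversion factor λ, a -10 profit moves the Q-value more than a +10 profit does, steering the learned pricing policy away from outcomes sellers perceive as losses.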