Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics
Ayato Kitadai, Yusuke Fukasawa, Nariaki Nishino
arXiv.org Artificial Intelligence
Large language models (LLMs) are increasingly used to simulate human decision-making, but their intrinsic biases often diverge from real human behavior, limiting their ability to reflect population-level diversity. We address this challenge with a persona-based approach that leverages individual-level data from behavioral economics to adjust model biases. Applying this method to the ultimatum game, a standard but difficult benchmark for LLMs, we observe improved alignment between simulated and empirical behavior, particularly on the responder side. While further refinement of trait representations is needed, our results demonstrate the promise of persona-conditioned LLMs for simulating human-like decision patterns at scale.
Aug-27-2025
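The persona-conditioning described in the abstract can be sketched as a prompt-construction step: individual-level behavioral traits are injected into the responder's instructions before the offer is presented. This is a minimal illustration under assumed names; the trait fields, function names, and prompt wording are hypothetical, not the authors' actual interface.

```python
# Hypothetical sketch of persona-conditioned prompting for the ultimatum
# game (responder side). Trait keys and prompt text are assumptions.

def build_persona_prompt(traits: dict, offer: int, total: int = 10) -> str:
    """Compose a responder-side prompt conditioned on individual-level
    behavioral traits (e.g., elicited in prior experiments)."""
    persona = "; ".join(f"{k}: {v}" for k, v in traits.items())
    return (
        f"You are a participant with these traits: {persona}. "
        f"In an ultimatum game, the proposer offers you {offer} of {total} units. "
        "Reply with ACCEPT or REJECT."
    )

def parse_response(text: str) -> bool:
    """Map a free-text model reply to an accept/reject decision."""
    return "ACCEPT" in text.upper()

# Usage: the prompt would be sent to an LLM; only construction is shown here.
prompt = build_persona_prompt(
    {"inequity_aversion": "high", "risk_tolerance": "low"}, offer=2
)
```

Alignment with empirical behavior would then be assessed by comparing the simulated accept/reject rates across offers against the human data used to build the personas.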