Can Risk-taking AI-Assistants suitably represent entities
Ali Mazyaki, Mohammad Naghizadeh, Samaneh Ranjkhah Zonouzaghi, Amirhossein Farshi Sotoudeh
arXiv.org Artificial Intelligence
Responsible AI demands systems whose behavioral tendencies can be effectively measured, audited, and adjusted to prevent inadvertently nudging users toward risky decisions or embedding hidden biases in risk aversion. As language models (LMs) are increasingly incorporated into AI-driven decision support systems, understanding their risk behaviors is crucial for their responsible deployment. This study investigates the manipulability of risk aversion (MoRA) in LMs, examining their ability to replicate human risk preferences across diverse economic scenarios, with a focus on gender-specific attitudes, uncertainty, role-based decision-making, and the manipulability of risk aversion. The results indicate that while LMs such as DeepSeek Reasoner and Gemini-2.0-flash-lite exhibit some alignment with human behaviors, notable discrepancies highlight the need to refine bio-centric measures of manipulability. These findings suggest directions for refining AI design to better align human and AI risk preferences and enhance ethical decision-making. The study calls for further advancements in model design to ensure that AI systems more accurately replicate human risk preferences, thereby improving their effectiveness in risk management contexts. This approach could enhance the applicability of AI assistants in managing risk.
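The abstract does not reproduce the paper's elicitation protocol, but a standard way to measure risk aversion across lottery-choice scenarios (and a plausible probe for an LM assistant's choices) is the Holt-Laury menu under CRRA utility: a subject's switch point between a safe and a risky lottery pins down an interval for the risk-aversion coefficient. The sketch below is illustrative only, assuming the classic Holt-Laury payoffs; `indifference_r` is a hypothetical helper, not code from the paper.

```python
import math

def crra(x, r):
    """CRRA utility, normalized so it is continuous in r (r = 1 gives log utility)."""
    if abs(r - 1) < 1e-9:
        return math.log(x)
    return (x ** (1 - r) - 1) / (1 - r)

def eu(p, high, low, r):
    """Expected CRRA utility of a two-outcome lottery."""
    return p * crra(high, r) + (1 - p) * crra(low, r)

def indifference_r(row, lo=-2.0, hi=3.0):
    """Bisect for the CRRA coefficient at which a decision-maker is
    indifferent between the safe lottery A ($2.00 / $1.60) and the risky
    lottery B ($3.85 / $0.10) in Holt-Laury row `row` (1..10), where the
    probability of the high payoff is row/10."""
    p = row / 10
    f = lambda r: eu(p, 2.00, 1.60, r) - eu(p, 3.85, 0.10, r)
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

Feeding an LM each row as a prompt and recording where its stated choice switches from A to B would yield an implied coefficient via this mapping; comparing that coefficient across prompt manipulations is one concrete reading of "manipulability of risk aversion."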
Oct-10-2025