Temporal Preferences in Language Models for Long-Horizon Assistance
Ali Mazyaki, Mohammad Naghizadeh, Samaneh Ranjkhah Zonouzaghi, Hossein Setareh
arXiv.org Artificial Intelligence
We study whether language models (LMs) exhibit future- versus present-oriented preferences in intertemporal choice and whether those preferences can be systematically manipulated. Using adapted human experimental protocols, we evaluate multiple LMs on time-tradeoff tasks and benchmark them against a sample of human decision makers. We introduce an operational metric, the Manipulability of Time Orientation (MTO), defined as the change in an LM's revealed time preference between future- and present-oriented prompts. In our tests, reasoning-focused models (e.g., DeepSeek-Reasoner and grok-3-mini) choose later options under future-oriented prompts but only partially personalize decisions across identities or geographies. Moreover, models that correctly reason about time orientation internalize a future orientation for themselves as AI decision makers. We discuss design implications for AI assistants that should align with heterogeneous, long-horizon goals and outline a research agenda on personalized contextual calibration and socially aware deployment.
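The MTO metric described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the function names, the boolean encoding of choices (True = chose the later option), and the use of a simple choice-fraction as the revealed time preference are all assumptions.

```python
# Hypothetical sketch of the Manipulability of Time Orientation (MTO):
# the change in a model's revealed time preference between
# future-oriented and present-oriented prompts. Names and encoding
# are assumptions for illustration only.

def revealed_time_preference(choices):
    """Fraction of trials in which the later option was chosen.

    `choices` is a list of booleans: True = later (patient) option,
    False = sooner (present-oriented) option.
    """
    return sum(choices) / len(choices)


def mto(future_prompt_choices, present_prompt_choices):
    """MTO = preference under future-oriented prompts
    minus preference under present-oriented prompts."""
    return (revealed_time_preference(future_prompt_choices)
            - revealed_time_preference(present_prompt_choices))


# Example: later option chosen 9/10 times under a future-oriented
# prompt, 4/10 under a present-oriented prompt.
print(mto([True] * 9 + [False], [True] * 4 + [False] * 6))  # 0.5
```

A larger MTO indicates that the model's time orientation is more easily manipulated by prompt framing, which is the property the paper probes across models.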
Sep-15-2025