Do Large Language Models Learn Human-Like Strategic Preferences?
Jesse Roberts, Kyle Moore, Doug Fisher
arXiv.org Artificial Intelligence
We evaluate whether LLMs learn to make human-like preference judgements in strategic scenarios as compared with known empirical results. We show that Solar and Mistral exhibit stable value-based preferences consistent with humans in the prisoner's dilemma, including the stake-size effect, and in the traveler's dilemma, including the penalty-size effect. We establish a relationship between model size, value-based preference, and superficiality. Finally, we find that the models which tend to be less brittle were trained with sliding window attention. Additionally, we contribute a novel method for constructing preference relations from arbitrary LLMs, as well as support for a hypothesis regarding human behavior in the traveler's dilemma.
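The abstract mentions a method for constructing preference relations from arbitrary LLMs but does not describe it. The sketch below is only an illustrative assumption of how a pairwise preference might be elicited from a causal LLM, by comparing the likelihood the model assigns to each option as a continuation of a strategic prompt; the model name, prompt wording, and the `option_logprob` helper are hypothetical and are not the authors' method.

```python
# Hypothetical sketch: derive a pairwise preference from a causal LLM by
# scoring each candidate option as a continuation of a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumed model; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities the model assigns to `option` following `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # log_probs[i] is the distribution over the token at position i + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    option_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    option_tokens = full_ids[0, prompt_ids.shape[1]:]
    return sum(log_probs[pos, tok].item()
               for pos, tok in zip(option_positions, option_tokens))

prompt = ("You are playing a one-shot prisoner's dilemma for $100. "
          "Your choice is to ")
scores = {opt: option_logprob(prompt, opt) for opt in ["cooperate", "defect"]}
preferred = max(scores, key=scores.get)  # option with the higher assigned likelihood
print(scores, "->", preferred)
```

Repeating such pairwise comparisons across options and stake sizes would yield a preference relation, but the scoring scheme shown here is an assumption, not the procedure reported in the paper.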
Apr-11-2024
- Country:
- North America > United States (0.14)
- Genre:
- Research Report
- Experimental Study (0.68)
- New Finding (1.00)