Revisiting the Reliability of Psychological Scales on Large Language Models
Huang, Jen-tse, Wang, Wenxuan, Lam, Man Ho, Li, Eric John, Jiao, Wenxiang, Lyu, Michael R.
arXiv.org Artificial Intelligence
Recent research has extended beyond assessing the performance of Large Language Models (LLMs) to examining their characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral traits. The administration of personality tests to LLMs has emerged as a noteworthy area in this context. However, the suitability of employing psychological scales, initially devised for humans, on LLMs remains a matter of ongoing debate. Our study aims to determine the reliability of applying personality assessments to LLMs, explicitly investigating whether LLMs demonstrate consistent personality traits. Analyzing responses under 2,500 settings reveals that gpt-3.5-turbo exhibits consistent personality traits. Furthermore, our research explores the potential of gpt-3.5-turbo to emulate diverse personalities and represent various groups, a capability increasingly sought after in the social sciences for substituting human participants with LLMs to reduce costs. Our findings reveal that LLMs can represent different personalities given specific prompt instructions. By shedding light on the personalization of LLMs, our study endeavors to pave the way for future explorations in this field. Wenxiang Jiao is the corresponding author.
Dec-28-2023