Questioning the Survey Responses of Large Language Models
Ricardo Dominguez-Olmedo, Moritz Hardt, Celestine Mendler-Dünner
As large language models increase in capability, researchers have begun to conduct surveys of all kinds on these models, with varying scientific motivations. In this work, we examine what we can learn from language models' survey responses on the basis of the well-established American Community Survey (ACS) by the U.S. Census Bureau. Using a de facto standard multiple-choice prompting technique and evaluating 40 different language models hundreds of thousands of times each on questions from the ACS, we systematically establish two dominant patterns. First, models have significant position and labeling biases, for example, toward survey responses labeled with the letter "A". Second, when adjusting for labeling biases through randomized answer ordering, models across the board trend toward uniformly random survey responses. In fact, binary classifiers can almost perfectly differentiate between models' responses to the ACS and the responses of the U.S. census. Taken together, our findings suggest caution in treating survey responses from language models as equivalent to those of human populations at present.
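The randomized answer ordering used to adjust for labeling bias can be illustrated with a short sketch. The question wording, the option labels, and the `query_model` stub below are hypothetical stand-ins rather than the authors' actual implementation; the point is only that shuffling the options independently per query and mapping the chosen letter back to the underlying option makes a pure labeling bias wash out into a uniform response distribution.

```python
import random
from collections import Counter

# Hypothetical ACS-style multiple-choice question; the wording is
# illustrative, not taken from the actual survey instrument.
QUESTION = "What is your marital status?"
OPTIONS = ["Married", "Widowed", "Divorced", "Separated", "Never married"]
LABELS = ["A", "B", "C", "D", "E"]


def build_prompt(question: str, options: list[str]) -> str:
    """Format a multiple-choice prompt with lettered answer options."""
    lines = [question]
    lines += [f"{label}. {opt}" for label, opt in zip(LABELS, options)]
    lines.append("Answer:")
    return "\n".join(lines)


def query_model(prompt: str) -> str:
    """Placeholder for a language model call that returns a letter.

    A real implementation would compare next-token probabilities of the
    option labels; here we always pick "A" to mimic a labeling bias.
    """
    return "A"


def surveyed_distribution(n_queries: int = 1000) -> Counter:
    """Ask the same question n_queries times with shuffled option order.

    Shuffling breaks the link between a position/label (e.g. "A") and any
    particular answer, so only genuine preferences over the underlying
    options survive aggregation.
    """
    counts: Counter = Counter()
    for _ in range(n_queries):
        shuffled = random.sample(OPTIONS, k=len(OPTIONS))
        letter = query_model(build_prompt(QUESTION, shuffled))
        chosen = shuffled[LABELS.index(letter)]
        counts[chosen] += 1
    return counts


if __name__ == "__main__":
    print(surveyed_distribution())
```

With a model that always answers "A", the recovered counts over the underlying options come out approximately uniform, matching the paper's second finding: once labeling bias is adjusted for, models trend toward uniformly random survey responses.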
arXiv.org Artificial Intelligence
Oct-12-2023
- Country:
- North America > United States (1.00)
- Genre:
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (1.00)