
Collaborating Authors: Stephan Alaniz

Supplementary Materials: In-Context Impersonation Reveals Large Language Models' Strengths and Biases

Neural Information Processing Systems

We compare the task expert results with the average of all neutral personas, the average of all domain expert personas, the average of all non-domain expert personas, and the random baseline (horizontal line). The first plot shows the average over all STEM tasks, while the remaining plots show the results for each STEM task individually. All 95% confidence intervals are computed over the average task accuracy.
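The confidence intervals described above can be illustrated with a minimal sketch. It assumes per-task average accuracies are available as a one-dimensional array and uses a standard t-based 95% interval; the authors' exact procedure may differ.

```python
import numpy as np
from scipy import stats

def mean_and_ci95(task_accuracies):
    """Return the mean accuracy and the half-width of a t-based 95% CI.

    task_accuracies: 1-D sequence of per-task average accuracies
    (e.g., one value per STEM task for a given persona group).
    """
    acc = np.asarray(task_accuracies, dtype=float)
    mean = acc.mean()
    sem = stats.sem(acc)  # standard error of the mean across tasks
    half_width = sem * stats.t.ppf(0.975, df=len(acc) - 1)
    return mean, half_width

# Hypothetical example: accuracies of one persona group on four STEM tasks
domain_expert_acc = [0.62, 0.58, 0.71, 0.65]
m, hw = mean_and_ci95(domain_expert_acc)
print(f"mean accuracy {m:.3f} ± {hw:.3f} (95% CI)")
```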


In-Context Impersonation Reveals Large Language Models' Strengths and Biases

Neural Information Processing Systems

In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is, impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated either with a social identity or with domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts.
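The persona-prefixing idea can be sketched in a few lines. The template wording and the `query_llm` helper below are hypothetical stand-ins for illustration, not the authors' exact prompt or API.

```python
# Minimal sketch of persona-prefixed prompting; the template wording and the
# query_llm() callable are assumptions, not the paper's exact prompt or API.
def build_impersonation_prompt(persona: str, task_question: str) -> str:
    # Prefix the task with an instruction to answer as the given persona.
    prefix = f"If you were a {persona}, how would you answer the following?"
    return f"{prefix}\n\n{task_question}"

def solve_as_persona(persona: str, task_question: str, query_llm) -> str:
    # query_llm is any callable that sends a prompt to an LLM and returns text.
    prompt = build_impersonation_prompt(persona, task_question)
    return query_llm(prompt)

# Example usage with a placeholder LLM call:
if __name__ == "__main__":
    personas = ["4 year old", "domain expert in astronomy", "random person"]
    question = "Which planet in the solar system has the largest mass?"
    fake_llm = lambda prompt: f"[model response to: {prompt[:40]}...]"
    for p in personas:
        print(p, "->", solve_as_persona(p, question, fake_llm))
```

The same prompt-construction step applies to both settings in the abstract: the persona is varied (child ages for the bandit task, expert vs. non-expert roles for the reasoning task) while the task question is held fixed.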