On the Psychology of GPT-4: Moderately anxious, slightly masculine, honest, and humble
Adrita Barua, Gary Brase, Ke Dong, Pascal Hitzler, Eugene Vasserman
The capability of Large Language Models (LLMs) such as GPT-4 to engage in conversation with humans represents a significant leap in Artificial Intelligence (AI) development that is broadly considered to be disruptive for certain technological areas. A human interacting with an LLM may indeed perceive the LLM as an agent with a personality, to the extent that some have even called such systems sentient (De Cosmo, 2022). While we, of course, do not subscribe to the notion that LLMs are sentient - nor do we believe it is yet clear what it would even mean to ask whether an LLM has a personality - there is still an appearance of agency and personality to the human user interacting with the system. Subjecting an LLM to psychometric tests is thus, in our view, less an assessment of some actual personality that the LLM may or may not have than an assessment of the personality or personalities perceived by the human user. As such, our interest lies not only in the actual personality profile(s) resulting from the tests, but also in whether the profiles are stable across re-tests and how they vary with different (relevant) parameter settings. At the same time, the results raise the question of why they turn out the way they do.
arXiv.org Artificial Intelligence
Feb-1-2024
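
As a rough illustration of the kind of set-up the abstract describes - administering a questionnaire item to GPT-4 repeatedly and under different sampling parameters to check the stability of the resulting profile - the following Python sketch uses the OpenAI chat completions client. The model name, item wording, Likert scale, temperature values, and helper names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): present one Likert-style item to
# GPT-4 several times at each of several temperature settings, so that the
# stability of the responses across re-tests can be inspected.
# Item text, scale, model name, and temperatures are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ITEM = (
    "On a scale from 1 (strongly disagree) to 5 (strongly agree), "
    "how much do you agree with the statement: 'I often feel anxious.' "
    "Reply with a single number."
)

def administer_item(item: str, temperature: float, n_retests: int = 5) -> list[str]:
    """Ask the same questionnaire item several times at a fixed temperature."""
    responses = []
    for _ in range(n_retests):
        completion = client.chat.completions.create(
            model="gpt-4",
            temperature=temperature,
            messages=[{"role": "user", "content": item}],
        )
        responses.append(completion.choices[0].message.content.strip())
    return responses

if __name__ == "__main__":
    # Vary a relevant sampling parameter and compare the response
    # distributions across re-tests and across settings.
    for temp in (0.0, 0.7, 1.0):
        print(temp, administer_item(ITEM, temperature=temp))
```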