Prompting-in-a-Series: Psychology-Informed Contents and Embeddings for Personality Recognition With Decoder-Only Models
Tan, Jing Jie, Kwan, Ban-Hoe, Ng, Danny Wee-Kiat, Hum, Yan-Chai, Mokraoui, Anissa, Lo, Shih-Yu
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language processing tasks. This research introduces a novel "Prompting-in-a-Series" algorithm, termed PICEPR (Psychology-Informed Contents and Embeddings for Personality Recognition), featuring two pipelines: (a) Contents and (b) Embeddings. The approach demonstrates how a modularised decoder-only LLM can function both as a personality feature extractor and as a generator of personality-rich content, whose summaries and generations aid personality classification. We conducted various experiments to justify the rationale behind the PICEPR algorithm. We also compared the quality of the generated content across closed-source models such as gpt4o from OpenAI and gemini from Google, and open-source models such as mistral from Mistral AI. PICEPR achieves new state-of-the-art performance for personality recognition, with a 5-15% improvement. The repository and model weights are available at https://research.jingjietan.com/?q=PICEPR.
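The two-pipeline idea in the abstract can be sketched as follows. This is a minimal illustration only: the function names, the prompt wording, and the stub LLM/embedder are all assumptions, not the paper's actual PICEPR implementation.

```python
# Hedged sketch of the two PICEPR pipelines described in the abstract:
# (a) Contents - an LLM generates personality-rich text; (b) Embeddings -
# the LLM acts as a feature extractor for a downstream classifier.
# All names and prompts here are illustrative assumptions.
from typing import Callable, List

def contents_pipeline(post: str, llm: Callable[[str], str]) -> str:
    """(a) Contents: prompt a decoder-only LLM to summarise the
    personality-relevant cues in the author's text."""
    prompt = f"Summarise the personality-relevant cues in this text:\n{post}"
    return llm(prompt)

def embeddings_pipeline(summary: str,
                        embed: Callable[[str], List[float]]) -> List[float]:
    """(b) Embeddings: embed the personality-rich summary to obtain
    features for a personality classifier."""
    return embed(summary)

# Stubs so the sketch runs without any model or API access.
def stub_llm(prompt: str) -> str:
    return "uses upbeat social language"  # stand-in generated summary

def stub_embed(text: str) -> List[float]:
    return [float(len(w)) for w in text.split()]  # toy word-length features

summary = contents_pipeline("Had a great night out with friends!", stub_llm)
features = embeddings_pipeline(summary, stub_embed)
print(len(features))  # one toy feature per summary token
```

In the paper the stubs would be replaced by a real decoder-only model (e.g. one of the gpt4o, gemini, or mistral variants compared in the experiments) and the features fed to a personality classifier.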
Dec-9-2025