Less but Better: Parameter-Efficient Fine-Tuning of Large Language Models for Personality Detection
Lingzhi Shen, Yunfei Long, Xiaohao Cai, Guanming Chen, Imran Razzak, Shoaib Jameel
arXiv.org Artificial Intelligence
Shoaib Jameel, University of Southampton, Southampton, United Kingdom, M.S.Jameel@southampton.ac.uk

Abstract—Personality detection automatically identifies an individual's personality from data sources such as social media text. However, as the parameter scale of language models continues to grow, the computational cost becomes increasingly difficult to manage. Fine-tuning also grows more complex, making it harder to justify the effort and to reliably predict outcomes. We introduce PersLLM, a novel parameter-efficient fine-tuning framework, to address these challenges. By storing features in a memory layer, we eliminate the need for repeated, expensive computation by the LLM. Meanwhile, a lightweight output network serves as a proxy for evaluating the overall effectiveness of the framework, improving the predictability of results. Experimental results on key benchmark datasets, Kaggle and Pandora, show that PersLLM significantly reduces computational cost while maintaining competitive performance and strong adaptability.

INTRODUCTION

Personality refers to the stable traits in an individual's emotions, thoughts, and behaviours that shape how they perceive, interpret, and interact with the world [1], [2]. It is a complex and multifaceted construct that combines various characteristics to form a person's unique identity.
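The abstract's core idea — cache frozen-LLM features in a memory layer so the expensive encoder runs at most once per text, then train only a lightweight output network on the cached features — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_encode` is a hypothetical stand-in for a frozen LLM encoder, and the output head is a simple logistic-regression classifier (e.g. one binary personality trait).

```python
import numpy as np

def llm_encode(text: str, dim: int = 16) -> np.ndarray:
    """Hypothetical stand-in for an expensive, frozen LLM encoder.
    Deterministically maps a text to a feature vector."""
    seed = sum(ord(c) for c in text)  # toy deterministic "embedding"
    return np.random.default_rng(seed).standard_normal(dim)

class MemoryLayer:
    """Caches frozen-LLM features so each unique text is encoded only once,
    avoiding repeated forward passes during multi-epoch training."""
    def __init__(self):
        self._cache: dict[str, np.ndarray] = {}
        self.encoder_calls = 0
    def get(self, text: str) -> np.ndarray:
        if text not in self._cache:
            self.encoder_calls += 1
            self._cache[text] = llm_encode(text)
        return self._cache[text]

class OutputHead:
    """Lightweight trainable head over cached features (logistic regression)."""
    def __init__(self, dim: int = 16):
        self.w = np.zeros(dim)
        self.b = 0.0
    def predict_proba(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))
    def train_step(self, x: np.ndarray, y: int, lr: float = 0.1) -> None:
        grad = self.predict_proba(x) - y  # gradient of log-loss
        self.w -= lr * grad * x
        self.b -= lr * grad

memory = MemoryLayer()
head = OutputHead()
# Toy binary trait labels (e.g. 1 = extravert, 0 = introvert); 50 epochs.
texts = ["i love big parties", "i prefer quiet evenings"] * 50
labels = [1, 0] * 50
for text, y in zip(texts, labels):
    head.train_step(memory.get(text), y)

print(memory.encoder_calls)  # 2: one encoder call per unique text, not per step
```

Only the head's parameters are updated; 100 training steps trigger just two encoder calls, which is the computational saving the memory layer is meant to provide.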
Apr-9-2025