psychological profile
The Basic B*** Effect: The Use of LLM-based Agents Reduces the Distinctiveness and Diversity of People's Choices
Matz, Sandra C., Horton, C. Blaine, Goethals, Sofie
Large language models (LLMs) increasingly act on people's behalf: they write emails, buy groceries, and book restaurants. While the outsourcing of human decision-making to AI can be both efficient and effective, it raises a fundamental question: how does delegating identity-defining choices to AI reshape who people become? We study the impact of agentic LLMs on two identity-relevant outcomes: interpersonal distinctiveness - how unique a person's choices are relative to others - and intrapersonal diversity - the breadth of a single person's choices over time. Using real choices drawn from social-media behavior of 1,000 U.S. users (110,000 choices in total), we compare a generic and personalized agent to a human baseline. Both agents shift people's choices toward more popular options, reducing the distinctiveness of their behaviors and preferences. While the use of personalized agents tempers this homogenization (compared to the generic AI), it also more strongly compresses the diversity of people's preference portfolios by narrowing what they explore across topics and psychological affinities. Understanding how AI agents might flatten human experience, and how using generic versus personalized agents involves distinctiveness-diversity trade-offs, is critical for designing systems that augment rather than constrain human agency, and for safeguarding diversity in thought, taste, and expression.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
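The abstract's two outcome measures lend themselves to a small illustration. The scoring below is a hypothetical sketch, not the paper's exact operationalization: interpersonal distinctiveness as the mean rarity of a person's choices within the population, and intrapersonal diversity as the Shannon entropy of that person's choice distribution.

```python
import math
from collections import Counter

def distinctiveness(person_choices, all_choices):
    """Mean rarity of a person's picks: 1 minus each option's population share."""
    pop = Counter(all_choices)
    total = len(all_choices)
    return sum(1 - pop[c] / total for c in person_choices) / len(person_choices)

def diversity(person_choices):
    """Shannon entropy (bits) of one person's choice distribution over time."""
    counts = Counter(person_choices)
    n = len(person_choices)
    return sum((k / n) * math.log2(n / k) for k in counts.values())

population = ["pizza"] * 8 + ["ramen", "tapas"]
conformist = ["pizza", "pizza", "pizza"]  # always the most popular option
explorer = ["pizza", "ramen", "tapas"]    # spreads choices across options

print(distinctiveness(conformist, population))  # low: pizza is an 80% pick
print(distinctiveness(explorer, population))    # higher: includes rare picks
print(diversity(conformist))  # 0.0 bits, no variety
print(diversity(explorer))    # log2(3) bits, maximal over three choices
```

On these toy numbers, an agent that nudges everyone toward pizza lowers both scores, which is the homogenization pattern the paper reports.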
Sentiment Simulation using Generative AI Agents
Tia, Melrose, Lanuzo, Jezreel Sophia, Baltazar, Lei Rigi, Lopez-Relente, Marie Joy, Quiñones, Diwa Malaya, Albia, Jason
Traditional sentiment analysis relies on surface-level linguistic patterns and retrospective data, limiting its ability to capture the psychological and contextual drivers of human sentiment. These limitations constrain its effectiveness in applications that require predictive insight, such as policy testing, narrative framing, and behavioral forecasting. We present a robust framework for sentiment simulation using generative AI agents embedded with psychologically rich profiles. Agents are instantiated from a nationally representative survey of 2,485 Filipino respondents, combining sociodemographic information with validated constructs of personality traits, values, beliefs, and socio-political attitudes. The framework includes three stages: (1) agent embodiment via categorical or contextualized encodings, (2) exposure to real-world political and economic scenarios, and (3) generation of sentiment ratings accompanied by explanatory rationales. Using Quadratic Weighted Accuracy (QWA), we evaluated alignment between agent-generated and human responses. Contextualized encoding achieved 92% alignment in replicating original survey responses. In sentiment simulation tasks, agents reached 81%--86% accuracy against ground truth sentiment, with contextualized profile encodings significantly outperforming categorical (p < 0.0001, Cohen's d = 0.70). Simulation results remained consistent across repeated trials (+/-0.2--0.5% SD) and resilient to variation in scenario framing (p = 0.9676, Cohen's d = 0.02). Our findings establish a scalable framework for sentiment modeling through psychographically grounded AI agents. This work signals a paradigm shift in sentiment analysis from retrospective classification to prospective and dynamic simulation grounded in psychology of sentiment formation.
- Asia > Philippines > Luzon > National Capital Region > City of Manila (0.14)
- Asia > Indonesia (0.14)
- Pacific Ocean > North Pacific Ocean > Philippine Sea (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Government (1.00)
- Marketing (0.93)
- Information Technology > Services (0.68)
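The abstract does not define Quadratic Weighted Accuracy; a plausible construction (assumed here, in the spirit of quadratically weighted agreement metrics) scores ordinal ratings so that the penalty grows with the squared distance between human and agent responses:

```python
def quadratic_weighted_accuracy(truth, pred, levels=5):
    """Agreement on an ordinal 1..levels scale with quadratic penalties:
    1.0 for perfect agreement, 0.0 if every pair is maximally distant.
    NOTE: an assumed formulation; the paper may define QWA differently."""
    max_sq = (levels - 1) ** 2
    penalty = sum((t - p) ** 2 for t, p in zip(truth, pred)) / max_sq
    return 1.0 - penalty / len(truth)

human = [5, 4, 2, 1, 3]
agent = [5, 3, 2, 2, 3]  # three exact matches, two off-by-one ratings
print(quadratic_weighted_accuracy(human, agent))  # → 0.975
```

Under this reading, off-by-one disagreements on a 5-point scale cost little, while opposite-extreme answers dominate the penalty.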
Eeyore: Realistic Depression Simulation via Supervised and Preference Optimization
Liu, Siyang, Brie, Bianca, Li, Wenda, Biester, Laura, Lee, Andrew, Pennebaker, James, Mihalcea, Rada
Large Language Models (LLMs) have been previously explored for mental healthcare training and therapy client simulation, but they still fall short in authentically capturing diverse client traits and psychological conditions. We introduce Eeyore, an 8B model optimized for realistic depression simulation through a structured alignment framework, incorporating expert input at every stage. First, we systematically curate real-world depression-related conversations, extracting depressive traits to guide data filtering and psychological profile construction, and use this dataset to instruction-tune Eeyore for profile adherence. Next, to further enhance realism, Eeyore undergoes iterative preference optimization -- first leveraging model-generated preferences and then calibrating with a small set of expert-annotated preferences. Throughout the entire pipeline, we actively collaborate with domain experts, developing interactive interfaces to validate trait extraction and iteratively refine structured psychological profiles for clinically meaningful role-play customization. Despite its smaller model size, the Eeyore depression simulation outperforms GPT-4o with SOTA prompting strategies, both in linguistic authenticity and profile adherence.
- North America > United States > Texas (0.14)
- North America > United States > Michigan (0.14)
- Europe > Czechia (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Personal > Interview (0.67)
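The abstract names "iterative preference optimization" without specifying the objective; Direct Preference Optimization (DPO) is one common choice for learning from preference pairs, sketched below on a single pair with made-up log-probabilities:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair. logp_w / logp_l are the policy's
    summed token log-probs of the preferred and rejected responses;
    ref_logp_* are the same quantities under the frozen reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The policy favors the expert-preferred response more than the reference
# model does, so the implicit reward margin is positive and the loss dips
# below log(2), its value at a zero margin.
loss = dpo_loss(logp_w=-12.0, logp_l=-20.0, ref_logp_w=-14.0, ref_logp_l=-18.0)
print(loss)
```

In a pipeline like the one described, the "preferred" responses would first come from model-generated rankings and later from the small expert-annotated set.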
PsychAdapter: Adapting LLM Transformers to Reflect Traits, Personality and Mental Health
Vu, Huy, Nguyen, Huy Anh, Ganesan, Adithya V, Juhng, Swanie, Kjell, Oscar N. E., Sedoc, Joao, Kern, Margaret L., Boyd, Ryan L., Ungar, Lyle, Schwartz, H. Andrew, Eichstaedt, Johannes C.
Artificial intelligence-based language generators are now a part of most people's lives. However, by default, they tend to generate "average" language without reflecting the ways in which people differ. Here, we propose a lightweight modification to the standard language model transformer architecture - "PsychAdapter" - that uses empirically derived trait-language patterns to generate natural language for specified personality, demographic, and mental health characteristics (with or without prompting). We applied PsychAdapters to modify OpenAI's GPT-2, Google's Gemma, and Meta's Llama 3 and found generated text to reflect the desired traits. For example, expert raters evaluated PsychAdapter's generated text output and found it matched intended trait levels with 87.3% average accuracy for Big Five personalities, and 96.7% for depression and life satisfaction. PsychAdapter is a novel method to introduce psychological behavior patterns into language models at the foundation level, independent of prompting, by influencing every transformer layer. This approach can create chatbots with specific personality profiles, clinical training tools that mirror language associated with psychological conditions, and machine translations that match an author's reading or education level without taking up LLM context windows. PsychAdapter also allows for the exploration of psychological constructs through natural language expression, extending the natural language processing toolkit to study human psychology.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (16 more...)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.86)
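The core adapter idea, conditioning every transformer layer on a trait vector, can be sketched in miniature. This is a hypothetical re-implementation with toy dimensions, not the authors' code: a linear map projects the trait vector into hidden space and adds it to each token's hidden state.

```python
def make_adapter(weights):
    """weights: one row per hidden dimension, one column per trait."""
    def adapt(hidden, traits):
        # shift = W @ traits, then added to every token's hidden vector
        shift = [sum(w * t for w, t in zip(row, traits)) for row in weights]
        return [[h + s for h, s in zip(token, shift)] for token in hidden]
    return adapt

# Toy sizes: 2 traits (say, extraversion and openness), hidden width 3.
adapter = make_adapter([[0.5, 0.0],
                        [0.0, 1.0],
                        [0.2, 0.2]])
hidden = [[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]    # two token positions
shifted = adapter(hidden, traits=[1.0, -1.0])  # high trait 1, low trait 2
print(shifted)  # → [[1.5, 0.0, 1.0], [0.5, -1.0, 0.0]]
```

In a real model one such map would be learned per layer, so the trait signal steers every transformer block rather than occupying the prompt or context window.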
Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features
Tshimula, Jean Marie, Nkashama, D'Jeff K., Muabila, Jean Tshibangu, Galekwa, René Manassé, Kanda, Hugues, Dialufuma, Maximilien V., Didier, Mbuyi Mukendi, Kalonji, Kalala, Mundele, Serge, Lenye, Patience Kinshie, Basele, Tighana Wenge, Ilunga, Aristarque, Mayemba, Christian N., Kasoro, Nathanaël M., Kasereka, Selain K., Mikese, Hardy, Tardif, Pierre-Martin, Frappier, Marc, Kabanza, Froduald, Chikhaoui, Belkacem, Wang, Shengrui, Sumbu, Ali Mulenda, Ndona, Xavier, Intudi, Raoul Kienge-Kienge
The increasing sophistication of cyber threats necessitates innovative approaches to cybersecurity. In this paper, we explore the potential of psychological profiling techniques, particularly focusing on the utilization of Large Language Models (LLMs) and psycholinguistic features. We investigate the intersection of psychology and cybersecurity, discussing how LLMs can be employed to analyze textual data for identifying psychological traits of threat actors. We explore the incorporation of psycholinguistic features, such as linguistic patterns and emotional cues, into cybersecurity frameworks. Our research underscores the importance of integrating psychological perspectives into cybersecurity practices to bolster defense mechanisms against evolving threats.
- Africa > Democratic Republic of the Congo > Kinshasa Province > Kinshasa (0.05)
- North America > Canada > Quebec > Montreal (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (7 more...)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
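Psycholinguistic features of the kind the paper discusses, linguistic patterns and emotional cues, reduce in the simplest case to per-text rates over word lists. The lists below are illustrative placeholders, not a validated lexicon such as LIWC:

```python
# Illustrative word lists, standing in for a validated psycholinguistic lexicon.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}
NEGATIVE_EMOTION = {"angry", "hate", "afraid", "destroy", "threat"}

def psycholinguistic_features(text):
    """Per-text rates of simple linguistic markers."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    n = len(words) or 1
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "neg_emotion_rate": sum(w in NEGATIVE_EMOTION for w in words) / n,
        "avg_word_len": sum(len(w) for w in words) / n,
    }

feats = psycholinguistic_features("I hate how they ignore my warnings.")
print(feats["first_person_rate"])  # 2 of 7 words are first-person
print(feats["neg_emotion_rate"])   # 1 of 7 words is a negative-emotion cue
```

Feature vectors like these are what an LLM-assisted profiling pipeline would aggregate over a threat actor's writings before any trait inference.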
Towards a Client-Centered Assessment of LLM Therapists by Client Simulation
Wang, Jiashuo, Xiao, Yang, Li, Yanran, Song, Changhe, Xu, Chunpu, Tan, Chenhao, Li, Wenjie
Although there is a growing belief that LLMs can be used as therapists, exploration of LLMs' capabilities and shortcomings, particularly from the client's perspective, remains limited. This work focuses on a client-centered assessment of LLM therapists with the involvement of simulated clients, a standard approach in clinical medical education. However, there are two challenges when applying the approach to assess LLM therapists at scale. Ethically, asking humans to frequently mimic clients and exposing them to potentially harmful LLM outputs can be risky and unsafe. Technically, it can be difficult to consistently compare the performances of different LLM therapists interacting with the same client. To this end, we adopt LLMs to simulate clients and propose ClientCAST, a client-centered approach to assessing LLM therapists by client simulation. Specifically, the simulated client is utilized to interact with LLM therapists and complete questionnaires related to the interaction. Based on the questionnaire results, we assess LLM therapists from three client-centered aspects: session outcome, therapeutic alliance, and self-reported feelings. We conduct experiments to examine the reliability of ClientCAST and use it to evaluate LLM therapists implemented by Claude-3, GPT-3.5, LLaMA3-70B, and Mixtral 8x7B. Codes are released at https://github.com/wangjs9/ClientCAST.
- Europe > Ireland (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- (3 more...)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.46)
- Research Report > New Finding (0.46)
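The assessment step can be sketched as follows: after a dialogue with one LLM therapist, the simulated client answers Likert-style questionnaire items grouped by the three client-centered aspects, and the therapist is summarized by per-aspect means. The items and 1-7 scale here are assumptions, not the paper's instruments:

```python
def score_session(responses):
    """responses: {aspect: [Likert ratings]} given by the simulated client
    after interacting with one LLM therapist."""
    return {aspect: sum(r) / len(r) for aspect, r in responses.items()}

# One simulated session, rated on an assumed 1-7 scale.
session = {
    "session_outcome": [5, 6, 5],
    "therapeutic_alliance": [6, 6, 7],
    "self_reported_feelings": [4, 5, 4],
}
scores = score_session(session)
print(scores["therapeutic_alliance"])  # mean of the alliance items
```

Because the same simulated client can face every candidate therapist, these per-aspect scores become directly comparable across models, which is the consistency the abstract highlights.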
The high dimensional psychological profile and cultural bias of ChatGPT
Yuan, Hang, Che, Zhongyue, Li, Shao, Zhang, Yue, Hu, Xiaomeng, Luo, Siyang
Given the rapid advancement of large-scale language models, artificial intelligence (AI) models, like ChatGPT, are playing an increasingly prominent role in human society. However, to ensure that artificial intelligence models benefit human society, we must first fully understand the similarities and differences between the human-like characteristics exhibited by artificial intelligence models and real humans, as well as the cultural stereotypes and biases that artificial intelligence models may exhibit in the process of interacting with humans. This study first measured ChatGPT on 84 dimensions of psychological characteristics, revealing differences between ChatGPT and human norms in most dimensions as well as in high-dimensional psychological representations. Additionally, through the measurement of ChatGPT on 13 dimensions of cultural values, it was revealed that ChatGPT's cultural value patterns are dissimilar to those of various countries/regions worldwide. Finally, an analysis of ChatGPT's performance in eight decision-making tasks involving interactions with humans from different countries/regions revealed that ChatGPT exhibits clear cultural stereotypes in most decision-making tasks and shows significant cultural bias in third-party punishment and ultimatum games. The findings indicate that, compared to humans, ChatGPT exhibits a distinct psychological profile and cultural value orientation, and it also shows cultural biases and stereotypes in interpersonal decision-making. Future research endeavors should emphasize enhanced technical oversight and augmented transparency in the database and algorithmic training procedures to foster more efficient cross-cultural communication and mitigate social disparities.
- North America > United States > Iowa (0.04)
- Asia > Japan (0.04)
- South America > Colombia (0.04)
- (22 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.93)
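The profile comparison the abstract describes amounts to locating a model's scale scores relative to human norms. A minimal sketch, with invented numbers, expresses each dimension as a z-score against the human mean and SD and summarizes overall deviation:

```python
import math

def profile_z_scores(model_scores, norms):
    """norms: {dimension: (human_mean, human_sd)}; returns z-scores."""
    return {d: (model_scores[d] - mean) / sd for d, (mean, sd) in norms.items()}

# Invented numbers for two of the many dimensions such a study might cover.
norms = {"agreeableness": (3.6, 0.5), "risk_aversion": (2.8, 0.7)}
model = {"agreeableness": 4.6, "risk_aversion": 2.8}

z = profile_z_scores(model, norms)
overall = math.sqrt(sum(v * v for v in z.values()))  # Euclidean deviation
print(z["agreeableness"])  # about 2 SDs above the human norm
```

Repeated over dozens of validated scales, a deviation profile like this is what would reveal where a model's "psychological profile" departs from human norms.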
Israeli Researchers Develop AI Tech To Detect Early Signs Of Suicide Risk
Israeli researchers have developed AI-based technology they say could contribute to the development of a more effective screening tool for the early detection of suicidal tendencies and risks. The technology, based on the automatic text analysis of social network content, was detailed in research published in the academic journal Scientific Reports last month. The researchers created a system that combines machine learning and natural language processing (NLP) algorithms with theoretical and analytical tools from the realm of psychology and psychiatry, and uses layered neural networks. The tools developed by the group can be used for early detection of those in the population at risk of suicidal thoughts or inclinations. It is not limited to people already being treated for mental health issues, a study from both universities said.
David Icke Socioemotional "Thought Crimes" in American Schools: Tracking Student SEL Data for Precrime
As a result of federal initiatives to "get tough on crime," such as the Reagan Administration's War on Drugs and the Clinton Administration's "Three Strikes" laws, the total number of incarcerated Americans more than quadrupled from roughly 500,000 inmates in 1980 to 2.2 million inmates in 2015. During these decades, black Americans were incarcerated at a rate five times higher than that of white Americans. Despite a new 2019 US Bureau of Justice Statistics (BJS) report, which suggests that the racial disparity between white and black incarceration rates is "narrowing," a Pew Research Center review of BJS stats reveals that this 2019 report "counts only inmates sentenced to more than a year. Moreover, Whites accounted for 64% of adults but 30% of prisoners. . . . In 2017, there were 1,549 black prisoners for every 100,000 black adults--nearly six times the imprisonment rate for whites (272 per 100,000)."
- North America > United States > California (0.14)
- North America > United States > Pennsylvania (0.04)
- North America > United States > North Dakota > Ward County > Minot (0.04)
- (6 more...)
- Instructional Material (0.93)
- Research Report (0.70)
The Rise of Dataism: A Threat to Freedom or a Scientific Revolution?
What would happen if we made all of our data public--everything from wearables monitoring our biometrics, all the way to smartphones monitoring our location, our social media activity, and even our internet search history? Would such insights into our lives simply provide companies and politicians with greater power to invade our privacy and manipulate us by using our psychological profiles against us? A burgeoning new philosophy called dataism doesn't think so. In fact, this trending ideology believes that liberating the flow of data is the supreme value of the universe, and that it could be the key to unleashing the greatest scientific revolution in the history of humanity. First mentioned by David Brooks in his 2013 New York Times article "The Philosophy of Data," dataism is an ethical system that has been most heavily explored and popularized by the renowned historian Yuval Noah Harari.
- North America > United States (0.15)
- Europe > United Kingdom (0.05)
- Health & Medicine > Epidemiology (0.71)
- Health & Medicine > Therapeutic Area > Immunology (0.52)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.32)