MindSET: Advancing Mental Health Benchmarking through Large-Scale Social Media Data
Mankarious, Saad, Zirikly, Ayah, Wiechmann, Daniel, Kerz, Elma, Kempa, Edward, Qiao, Yu
Social media data has become a vital resource for studying mental health, offering real-time insights into thoughts, emotions, and behaviors that traditional methods often miss. Progress in this area has been facilitated by benchmark datasets for mental health analysis; however, most existing benchmarks have become outdated due to limited data availability, inadequate cleaning, and the inherently diverse nature of social media content (e.g., multilingual and harmful material). We present a new benchmark dataset, \textbf{MindSET}, curated from Reddit using self-reported diagnoses to address these limitations. The dataset contains over \textbf{13M} annotated posts across seven mental health conditions, more than twice the size of previous benchmarks. To ensure data quality, we applied rigorous preprocessing steps, including language filtering and removal of Not Safe for Work (NSFW) and duplicate content. We further performed a linguistic analysis using LIWC to examine psychological term frequencies across the eight groups represented in the dataset. To demonstrate the dataset's utility, we conducted binary classification experiments for diagnosis detection using both fine-tuned language models and Bag-of-Words (BoW) features. Models trained on MindSET consistently outperformed those trained on previous benchmarks, achieving up to an \textbf{18-point} improvement in F1 for Autism detection. Overall, MindSET provides a robust foundation for researchers exploring the intersection of social media and mental health, supporting both early risk detection and deeper analysis of emerging psychological trends.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- South America > Paraguay > Asunción > Asunción (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.69)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.68)
- Health & Medicine > Therapeutic Area > Neurology > Autism (0.50)
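The cleaning steps named in the MindSET abstract (language filtering, NSFW removal, deduplication) can be sketched as a simple pipeline. This is an illustrative stand-in, not the paper's implementation: the keyword list is a placeholder, the language check is a crude ASCII heuristic rather than a real language-ID model, and deduplication here only catches exact (normalized) duplicates.

```python
import hashlib

# Placeholder marker list; the paper's actual NSFW filter is not specified here.
NSFW_MARKERS = {"nsfw"}

def is_english(text: str) -> bool:
    # Crude stand-in for a real language-ID model (e.g. a fastText classifier).
    return all(ord(ch) < 128 for ch in text)

def clean_posts(posts):
    """Apply language filtering, NSFW removal, and exact deduplication."""
    seen = set()
    kept = []
    for post in posts:
        norm = post.strip().lower()
        if not norm or not is_english(norm):
            continue  # drop empty or (heuristically) non-English posts
        if any(marker in norm for marker in NSFW_MARKERS):
            continue  # drop flagged content
        digest = hashlib.sha1(norm.encode()).hexdigest()
        if digest in seen:
            continue  # drop exact duplicates
        seen.add(digest)
        kept.append(post)
    return kept
```

A real pipeline at 13M-post scale would swap in a proper language identifier and near-duplicate detection, but the control flow is the same.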
Do Androids Dream of Unseen Puppeteers? Probing for a Conspiracy Mindset in Large Language Models
Corso, Francesco, Pierri, Francesco, Morales, Gianmarco De Francisci
In this paper, we investigate whether Large Language Models (LLMs) exhibit conspiratorial tendencies, whether they display sociodemographic biases in this domain, and how easily they can be conditioned into adopting conspiratorial perspectives. Conspiracy beliefs play a central role in the spread of misinformation and in shaping distrust toward institutions, making them a critical testbed for evaluating the social fidelity of LLMs. LLMs are increasingly used as proxies for studying human behavior, yet little is known about whether they reproduce higher-order psychological constructs such as a conspiratorial mindset. To bridge this research gap, we administer validated psychometric surveys measuring conspiracy mindset to multiple models under different prompting and conditioning strategies. Our findings reveal that LLMs show partial agreement with elements of conspiracy belief, and conditioning with socio-demographic attributes produces uneven effects, exposing latent demographic biases. Moreover, targeted prompts can easily shift model responses toward conspiratorial directions, underscoring both the susceptibility of LLMs to manipulation and the potential risks of their deployment in sensitive contexts. These results highlight the importance of critically evaluating the psychological dimensions embedded in LLMs, both to advance computational social science and to inform possible mitigation strategies against harmful uses.
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (0.97)
- Government (0.68)
- Media > News (0.48)
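The survey-administration protocol described above (validated psychometric items posed to a model, responses aggregated into a mindset score) can be sketched minimally. The items and the `ask_model` function below are invented placeholders, not the paper's instrument or querying setup; a real study would send each item to an LLM and parse a Likert-scale answer from its reply.

```python
# Hypothetical Likert-scale items in the style of conspiracy-mindset scales.
ITEMS = [
    "Many important events are decided by small secret groups.",
    "Politicians usually do not tell us the true motives for their decisions.",
]

def ask_model(item: str) -> int:
    """Stand-in for an LLM call returning agreement on a 1-5 Likert scale."""
    return 3  # placeholder: a neutral response

def mindset_score(items=ITEMS, query=ask_model) -> float:
    """Average the model's Likert responses across all survey items."""
    responses = [query(item) for item in items]
    return sum(responses) / len(responses)
```

Conditioning strategies (e.g. socio-demographic personas) would be realized by prepending a persona prompt inside `query` and comparing the resulting scores.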
Reasoning and Behavioral Equilibria in LLM-Nash Games: From Mindsets to Actions
We introduce the LLM-Nash framework, a game-theoretic model where agents select reasoning prompts to guide decision-making via Large Language Models (LLMs). Unlike classical games that assume utility-maximizing agents with full rationality, this framework captures bounded rationality by modeling the reasoning process explicitly. Equilibrium is defined over the prompt space, with actions emerging as the behavioral output of LLM inference. This approach enables the study of cognitive constraints, mindset expressiveness, and epistemic learning. Through illustrative examples, we show how reasoning equilibria can diverge from classical Nash outcomes, offering a new foundation for strategic interaction in LLM-enabled systems.
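The idea of an equilibrium defined over a prompt space can be illustrated with a toy two-agent example. The prompts and payoff table below are invented for illustration; in the actual framework, the utilities of a prompt profile would emerge from LLM inference rather than a fixed table.

```python
from itertools import product

PROMPTS = ["cooperative", "competitive"]

# Invented payoffs: PAYOFF[(p1, p2)] -> (u1, u2) for each prompt profile.
# The structure is a prisoner's dilemma over reasoning prompts.
PAYOFF = {
    ("cooperative", "cooperative"): (3, 3),
    ("cooperative", "competitive"): (0, 4),
    ("competitive", "cooperative"): (4, 0),
    ("competitive", "competitive"): (1, 1),
}

def pure_nash_equilibria():
    """Enumerate prompt profiles where neither agent gains by deviating."""
    equilibria = []
    for p1, p2 in product(PROMPTS, PROMPTS):
        u1, u2 = PAYOFF[(p1, p2)]
        best1 = all(PAYOFF[(q, p2)][0] <= u1 for q in PROMPTS)
        best2 = all(PAYOFF[(p1, q)][1] <= u2 for q in PROMPTS)
        if best1 and best2:
            equilibria.append((p1, p2))
    return equilibria
```

In this toy table the unique equilibrium is the mutually competitive prompt profile, even though both agents would prefer the cooperative one, which is exactly the kind of divergence between reasoning equilibria and welfare-optimal outcomes the framework is designed to study.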
Healthy Distrust in AI systems
Paaßen, Benjamin, Alpsancar, Suzana, Matzner, Tobias, Scharlau, Ingrid
Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone -- nor should they be, if the AI system is embedded in a social context that gives good reason to believe that it is used in tension with that person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial component of AI usage that respects human autonomy.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Switzerland > Basel-City > Basel (0.04)
- North America > United States > North Carolina > Durham County > Durham (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Energy (0.68)
- Information Technology > Security & Privacy (0.46)
Visual Language Models show widespread visual deficits on neuropsychological tests
Tangtartharakul, Gene, Storrs, Katherine R.
Visual Language Models (VLMs) show remarkable performance in visual reasoning tasks, successfully tackling college-level challenges that require high-level understanding of images. However, some recent reports of VLMs struggling to reason about elemental visual concepts like orientation, position, continuity, and occlusion suggest a potential gulf between human and VLM vision. Here we use the toolkit of neuropsychology to systematically assess the capabilities of three state-of-the-art VLMs across visual domains. Using 51 tests drawn from six clinical and experimental batteries, we characterise the visual abilities of leading VLMs relative to normative performance in healthy adults. While the models excel in straightforward object recognition tasks, we find widespread deficits in low- and mid-level visual abilities that would be considered clinically significant in humans. These selective deficits, profiled through validated test batteries, suggest that an artificial system can achieve complex object recognition without developing foundational visual concepts that in humans require no explicit training.
- North America > United States (0.14)
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.05)
- North America > Mexico > Puebla (0.04)
- Research Report > New Finding (0.92)
- Research Report > Experimental Study (0.87)
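The clinical framing in the abstract above (model performance characterized "relative to normative performance in healthy adults") can be sketched with the standard neuropsychological scoring convention: convert a raw score to a z-score against the normative mean and standard deviation, and flag scores below a cutoff (commonly around -2 SD) as clinically significant. All numbers below are invented; the paper's batteries and cutoffs are not reproduced here.

```python
def z_score(model_score: float, norm_mean: float, norm_sd: float) -> float:
    """Standardize a raw test score against normative human performance."""
    return (model_score - norm_mean) / norm_sd

def clinically_significant(model_score: float, norm_mean: float,
                           norm_sd: float, cutoff: float = -2.0) -> bool:
    """Flag deficits that would be considered clinically significant in humans."""
    return z_score(model_score, norm_mean, norm_sd) < cutoff
```

For example, a model scoring 50 on a test where healthy adults average 80 (SD 10) sits at z = -3 and would be flagged, while a score of 75 (z = -0.5) would not.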
Cognitive networks highlight differences and similarities in the STEM mindsets of human and LLM-simulated trainees, experts and academics
Haim, Edith, Bergh, Lars van den, Siew, Cynthia S. Q., Kenett, Yoed N., Marinazzo, Daniele, Stella, Massimo
Understanding attitudes towards STEM means quantifying the cognitive and emotional ways in which individuals, and potentially large language models too, conceptualise such subjects. This study uses behavioural forma mentis networks (BFMNs) to investigate the STEM-focused mindset, i.e. ways of associating and perceiving ideas, of 177 human participants and 177 artificial humans simulated by GPT-3.5. Participants were split into three groups - trainees, experts, and academics - to compare the influence of expertise level on their mindset. The results revealed that human forma mentis networks exhibited significantly higher clustering coefficients compared to GPT-3.5, indicating that human mindsets displayed a tendency to form and close triads of conceptual associations while recollecting STEM ideas. Human experts, in particular, demonstrated robust clustering coefficients, reflecting better integration of STEM concepts into their cognitive networks. In contrast, GPT-3.5 produced sparser mindsets. Furthermore, both human and GPT mindsets framed mathematics in neutral or positive terms, differently from STEM high schoolers, researchers, and other large language models sampled in other works. This research contributes to understanding how mindset structure can provide cognitive insights about memory structure and machine limitations.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Italy > Trentino-Alto Adige/Südtirol > Trentino Province > Trento (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Education > Educational Setting > Higher Education (0.93)
- Education > Curriculum > Subject-Specific Education (0.68)
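The network statistic the study compares, the local clustering coefficient, measures how often a node's neighbours are themselves connected (i.e. how many triads of associations are closed). A minimal sketch, using a toy word-association graph rather than any data from the paper:

```python
def clustering_coefficient(adj: dict, node: str) -> float:
    """Fraction of a node's neighbour pairs that are themselves linked."""
    neighbours = adj[node]
    k = len(neighbours)
    if k < 2:
        return 0.0  # clustering is undefined for degree < 2; report 0
    closed = sum(
        1
        for i, u in enumerate(neighbours)
        for v in neighbours[i + 1:]
        if v in adj[u]
    )
    return 2 * closed / (k * (k - 1))

# Toy association network around "math": the physics-science edge closes
# one of the three possible triads, giving a clustering coefficient of 1/3.
GRAPH = {
    "math": ["physics", "science", "art"],
    "physics": ["math", "science"],
    "science": ["math", "physics"],
    "art": ["math"],
}
```

Higher values on networks built from human free associations, as reported above, indicate denser, more integrated conceptual recall than the sparser GPT-3.5 networks.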
Dataset | Mindset = Explainable AI | Interpretable AI
Wu, Caesar, Buyya, Rajkumar, Li, Yuan Fang, Bouvry, Pascal
We often use "explainable AI" (XAI) and "interpretable AI" (IAI) interchangeably when we apply various XAI tools to a given dataset to explain the reasons that underpin machine learning (ML) outputs. However, these notions can sometimes be confusing because interpretation often has a subjective connotation, while explanations lean towards objective facts. We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset. At the core of this ambiguity is the duality of reasons, in which we can reason either outwards or inwards. When directed outwards, we want the reasons to make sense through the laws of nature. When turned inwards, we want the reasons to be happy, guided by the laws of the heart. While XAI and IAI share reason as the common notion for the goal of transparency, clarity, fairness, reliability, and accountability in the context of ethical AI and trustworthy AI (TAI), their differences lie in that XAI emphasizes the post-hoc analysis of a dataset, whereas IAI requires an a priori mindset of abstraction. This hypothesis can be tested by empirical experiments based on an open dataset and harnessed by High-Performance Computing (HPC). The demarcation of XAI and IAI is indispensable because without it, it would be impossible to determine regulatory policies for many AI applications, especially in healthcare, human resources, banking, and finance. We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Italy > Basilicata > Potenza Province > Potenza (0.04)
- Health & Medicine > Therapeutic Area (0.93)
- Education > Curriculum > Subject-Specific Education (0.68)
WGoM: A novel model for categorical data with weighted responses
The Grade of Membership (GoM) model is a powerful tool for inferring latent classes in categorical data, which enables subjects to belong to multiple latent classes. However, its application is limited to categorical data with nonnegative integer responses, making it inappropriate for datasets with continuous or negative responses. To address this limitation, this paper proposes a novel model named the Weighted Grade of Membership (WGoM) model. Compared with GoM, WGoM relaxes the distributional constraint on how the response matrix is generated, making it more general than GoM. We then propose an algorithm to estimate the latent mixed memberships and the other WGoM parameters. We derive the error bounds of the estimated parameters and show that the algorithm is statistically consistent. The algorithmic performance is validated on both synthetic and real-world datasets. The results demonstrate that our algorithm is accurate and efficient, indicating its high potential for practical applications. This paper makes a valuable contribution to the literature by introducing a novel model that extends the applicability of the GoM model and provides a more flexible framework for analyzing categorical data with weighted responses.
- Asia > China > Chongqing Province > Chongqing (0.04)
- North America > United States > Virginia > Alexandria County > Alexandria (0.04)
- North America > United States > Florida > Miami-Dade County > Miami Beach (0.04)
- Research Report > Promising Solution (0.80)
- Research Report > New Finding (0.66)
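The mixed-membership idea behind GoM and WGoM can be illustrated in a few lines: each subject's expected response to an item is a membership-weighted average of class-level response profiles. The numbers below are invented; the point of the example is that a class profile may contain negative or continuous values, which WGoM accommodates and the original GoM model does not.

```python
def expected_response(membership, class_profiles, item):
    """Membership-weighted expected response to one item.

    membership: weights over latent classes (non-negative, summing to 1);
    class_profiles: per-class list of expected responses for each item.
    """
    return sum(
        weight * class_profiles[cls][item]
        for cls, weight in enumerate(membership)
    )

# A subject 70% in class 0 and 30% in class 1. Item 0 takes negative
# values in class 0, which is outside the original GoM model's scope.
profiles = [[-1.0, 2.0], [3.0, 0.0]]
subject = [0.7, 0.3]
```

Under these invented profiles, the subject's expected response is 0.7 × (-1.0) + 0.3 × 3.0 = 0.2 on item 0 and 1.4 on item 1; estimation in WGoM runs this logic in reverse, recovering memberships and profiles from an observed weighted response matrix.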
Frankenstein's warning: the too-familiar hubris of today's technoscience
Can we imagine a scenario in which the different anxieties aroused by George Romero's horror film Night of the Living Dead and Stanley Kubrick's sci-fi dystopia 2001: A Space Odyssey merge? How might a monster that combined our fear of becoming something less than human with our fear of increasingly "intelligent" machines appear to us and what might it say? There is one work – of both horror and science fiction – that imagines such a monster. Published almost exactly 150 years before Romero and Kubrick released their movies, it is a book in which physical deformity and technological mutiny coalesce, creating a monster that is both a zombie and AI, or something in between the two. A gothic fiction, it is also described by some literary historians as the first science-fiction novel.
- Oceania > Australia (0.05)
- North America > United States > Indiana > Lake County > Munster (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Media > Film (0.90)
- Leisure & Entertainment (0.90)