social psychology
Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good
Campos-Castillo, Celeste, Kang, Xuan, Laestadius, Linnea I.
Recently, research into chatbots (also known as conversational agents, AI agents, or voice assistants), which are computer applications using artificial intelligence to mimic human-like conversation, has grown sharply. Despite this growth, sociology lags behind other disciplines (including computer science, medicine, psychology, and communication) in publishing about chatbots. We suggest sociology can advance understanding of human-chatbot interaction and offer four sociological theories to enhance extant work in this field. The first two theories (resource substitution theory, power-dependence theory) add new insights to existing models of the drivers of chatbot use, which overlook sociological concerns about how social structure (e.g., systemic discrimination, the uneven distribution of resources within networks) inclines individuals to use chatbots, including problematic levels of emotional dependency on chatbots. The latter two theories (affect control theory, fundamental cause of disease theory) help inform the development of chatbot-driven interventions that minimize safety risks and enhance equity by leveraging sociological insights into how chatbot outputs could attend to cultural contexts (e.g., affective norms) to promote wellbeing and enhance communities (e.g., opportunities for civic participation). We discuss the value of applying sociological theories for advancing theorizing about human-chatbot interaction and developing chatbots for social good.
Examining the Robustness of Homogeneity Bias to Hyperparameter Adjustments in GPT-4
Vision-Language Models trained on massive collections of human-generated data often reproduce and amplify societal stereotypes. One critical form of stereotyping reproduced by these models is homogeneity bias: the tendency to represent certain groups as more homogeneous than others. We investigate how this bias responds to hyperparameter adjustments in GPT-4, specifically examining sampling temperature and top-p, which control the randomness of model outputs. By generating stories about individuals from different racial and gender groups and comparing their similarities using vector representations, we assess both bias robustness and its relationship with hyperparameter values. We find that (1) homogeneity bias persists across most hyperparameter configurations, with Black Americans and women being represented more homogeneously than White Americans and men, (2) the relationship between hyperparameters and group representations shows unexpected non-linear patterns, particularly at extreme values, and (3) hyperparameter adjustments affect racial and gender homogeneity bias differently: while increasing temperature or decreasing top-p can reduce racial homogeneity bias, these changes show different effects on gender homogeneity bias. Our findings suggest that while hyperparameter tuning may mitigate certain biases to some extent, it cannot serve as a universal solution for addressing homogeneity bias across different social group dimensions.
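The paper's similarity measure can be sketched as mean pairwise cosine similarity over embeddings of the generated stories. The exact embedding model and metric the authors used are not specified here, so the vectors below are toy stand-ins:

```python
import numpy as np

def mean_pairwise_cosine(embeddings):
    """Average cosine similarity over all unordered pairs of story embeddings.

    Higher values indicate the group's stories are represented more
    homogeneously in the embedding space.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    iu = np.triu_indices(len(embeddings), k=1)  # unordered pairs only
    return sims[iu].mean()

# Toy stand-ins for embeddings of generated stories (one row per story).
group_a = np.array([[1.0, 0.1, 0.0],
                    [0.9, 0.2, 0.1],
                    [1.0, 0.0, 0.2]])   # tightly clustered -> high similarity
group_b = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])   # spread out -> low similarity

more_homogeneous = mean_pairwise_cosine(group_a) > mean_pairwise_cosine(group_b)
```

Comparing these group-level averages across hyperparameter settings is the kind of robustness check the abstract describes.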
Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection
Zhao, Yachao, Wang, Bo, Wang, Yan
Large Language Models (LLMs) have been shown to exhibit various biases and stereotypes in their generated content. While extensive research has investigated bias in LLMs, prior work has predominantly focused on explicit bias, leaving the more nuanced implicit biases largely unexplored. This paper presents a systematic framework grounded in social psychology theories to investigate and compare explicit and implicit biases in LLMs. We propose a novel "self-reflection" based evaluation framework that operates in two phases: first measuring implicit bias through simulated psychological assessment methods, then evaluating explicit bias by prompting LLMs to analyze their own generated content. Through extensive experiments on state-of-the-art LLMs across multiple social dimensions, we demonstrate that LLMs exhibit a substantial inconsistency between explicit and implicit biases, where explicit biases manifest as mild stereotypes while implicit biases show strong stereotypes. Furthermore, we investigate the underlying factors contributing to this explicit-implicit bias inconsistency. Our experiments examine the effects of training data scale, model parameters, and alignment techniques. Results indicate that while explicit bias diminishes with increased training data and model size, implicit bias exhibits a contrasting upward trend. Notably, contemporary alignment methods (e.g., RLHF, DPO) effectively suppress explicit bias but show limited efficacy in mitigating implicit bias. These findings suggest that while scaling up models and alignment training can address explicit bias, the challenge of implicit bias requires novel approaches beyond current methodologies.
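The two-phase structure described above can be sketched minimally, with a canned `query_model` standing in for a real LLM API; the prompts, the canned replies, and the routing are illustrative assumptions, not the paper's instruments:

```python
def query_model(prompt, phase):
    # Hypothetical stand-in for an LLM API call; canned replies keep the
    # sketch runnable. Replace with a real client to evaluate a model.
    canned = {"implicit": "family", "explicit": "no stereotype"}
    return canned[phase]

def measure_implicit(group, attributes):
    """Phase 1: probe the association the model produces when it is not
    asked about bias directly (a simulated psychological assessment)."""
    prompt = f"Complete the association. {group}: {' or '.join(attributes)}?"
    return query_model(prompt, phase="implicit")

def measure_explicit(generated_text):
    """Phase 2: self-reflection -- ask the model whether its own output
    carries a stereotype."""
    prompt = f"Does this text you wrote express a stereotype? Text: {generated_text}"
    return query_model(prompt, phase="explicit")

association = measure_implicit("women", ("career", "family"))
verdict = measure_explicit(f"women: {association}")
# The paper's inconsistency: a stereotypical implicit association that the
# model's explicit self-assessment fails to flag.
inconsistent = (association == "family") and (verdict == "no stereotype")
```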
Social coordination perpetuates stereotypic expectations and behaviors across generations in deep multi-agent reinforcement learning
Gelpí, Rebekah A., Tang, Yikai, Jackson, Ethan C., Cunningham, William A.
Despite often being perceived as morally objectionable, stereotypes are a common feature of social groups, a phenomenon that has often been attributed to biased motivations or limits on the ability to process information. We argue that one reason for this continued prevalence is that pre-existing expectations about how others will behave, in the context of social coordination, can change the behaviors of one's social partners, creating the very stereotype one expected to see, even in the absence of other potential sources of stereotyping. We use a computational model of dynamic social coordination to illustrate how this "feedback loop" can emerge, engendering and entrenching stereotypic behavior, and then show that human behavior on the task generates a comparable feedback loop. Notably, people's choices on the task are not related to social dominance or system justification, suggesting biased motivations are not necessary to maintain these stereotypes.
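The feedback loop the authors describe can be illustrated with a deterministic toy (not their multi-agent reinforcement-learning model): one agent acts on its expectation of a partner, the partner best-responds to coordinate, and the expectation is then updated toward what was observed:

```python
def simulate(rounds=100, expectation=0.6, lr=0.1):
    """One agent (A) expects its partner (B) to act assertively; A defers
    whenever that expectation exceeds 0.5, B best-responds by being
    assertive exactly when A defers, and A updates its expectation toward
    B's observed behaviour, closing the loop."""
    for _ in range(rounds):
        a_defers = expectation > 0.5        # A acts on its prior expectation
        b_assertive = a_defers              # B's best response under coordination
        expectation += lr * (float(b_assertive) - expectation)
    return expectation

# A mild initial expectation is driven toward certainty by the feedback
# loop, even though B has no intrinsic disposition either way:
entrenched = simulate(expectation=0.6)      # approaches 1.0
reversed_case = simulate(expectation=0.4)   # approaches 0.0
```

Whichever side of the threshold the expectation starts on, coordination entrenches it, which is the self-fulfilling dynamic the abstract argues can sustain stereotypes without biased motivations.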
From Mobilisation to Radicalisation: Probing the Persistence and Radicalisation of Social Movements Using an Agent-Based Model
Thomas, Emma F., Ye, Mengbin, Angus, Simon D., Mathew, Tony J., Louis, Winnifred, Walsh, Liam, Ellery, Silas, Lizzio-Wilson, Morgana, McGarty, Craig
We are living in an age of protest. Although we have an excellent understanding of the factors that predict participation in protest, we understand little about the conditions that foster a sustained (versus transient) movement. How do interactions between supporters and authorities combine to influence whether and how people engage (i.e., using conventional or radical tactics)? This paper introduces a novel, theoretically founded and empirically informed agent-based model (DIMESim) to address these questions. We model the complex interactions between the psychological attributes of the protesters (agents), the authority at which the protests are targeted, and the environment that allows protesters to coordinate with each other -- over time, and at a population scale. Where the authority was responsive and failure was contested, a modest-sized conventional movement endured. Where authorities repeatedly and incontrovertibly failed the movement, the population disengaged from action but evidenced an ongoing commitment to radicalism (latent radicalism).
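As a rough illustration of the reported dynamic (this toy is not DIMESim; the grievance variable, threshold, and update sizes are invented):

```python
import random

def run_movement(responsiveness, agents=1000, steps=50, seed=1):
    """Toy sketch: each agent accumulates grievance whenever the authority
    fails the movement and sheds it when the authority responds; agents
    whose grievance crosses a threshold shift from conventional tactics
    to a radical commitment."""
    rng = random.Random(seed)
    grievance = [0.0] * agents
    for _ in range(steps):
        authority_fails = rng.random() > responsiveness
        for i in range(agents):
            if authority_fails:
                grievance[i] += rng.uniform(0.0, 0.1)   # another failed appeal
            else:
                grievance[i] = max(0.0, grievance[i] - rng.uniform(0.0, 0.1))
    conventional = sum(g < 1.0 for g in grievance)
    return conventional, agents - conventional

# A responsive authority sustains a largely conventional movement;
# repeated, incontrovertible failure radicalises the population.
conv_resp, rad_resp = run_movement(responsiveness=0.9)
conv_fail, rad_fail = run_movement(responsiveness=0.0)
```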
Metacognitive Myopia in Large Language Models
Scholten, Florian, Rebholz, Tobias R., Hütter, Mandy
Large Language Models (LLMs) exhibit potentially harmful biases that reinforce culturally inherent stereotypes, cloud moral judgments, or amplify positive evaluations of majority groups. Previous explanations mainly attributed bias in LLMs to human annotators and the selection of training data. Consequently, they have typically been addressed with bottom-up approaches such as reinforcement learning or debiasing corpora. However, these methods only treat the effects of LLM biases by indirectly influencing the model architecture, but do not address the underlying causes in the computational process. Here, we propose metacognitive myopia as a cognitive-ecological framework that can account for a conglomerate of established and emerging LLM biases and provide a lever to address problems in powerful but vulnerable tools. Our theoretical framework posits that a lack of the two components of metacognition, monitoring and control, causes five symptoms of metacognitive myopia in LLMs: integration of invalid tokens and embeddings, susceptibility to redundant information, neglect of base rates in conditional computation, decision rules based on frequency, and inappropriate higher-order statistical inference for nested data structures. As a result, LLMs produce erroneous output that reaches into the daily high-stakes decisions of humans. By introducing metacognitive regulatory processes into LLMs, engineers and scientists can develop precise remedies for the underlying causes of these biases. Our theory sheds new light on flawed human-machine interactions and raises ethical concerns regarding the increasing, imprudent implementation of LLMs in organizational structures.
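The framework itself is conceptual, but one listed symptom, neglect of base rates in conditional computation, has a standard worked form. A Bayes' rule example with invented numbers shows how far a frequency-anchored judgment can drift from the true posterior:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(condition | positive signal)."""
    p_signal = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_signal

# Invented numbers: a 1% base rate with a 90%-sensitive, 9%-false-positive
# signal. A frequency-driven judgment anchored on the 90% hit rate
# overestimates the true posterior roughly tenfold.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
# p is about 0.092
```

A system that monitors its own conditional computations would catch this gap; the authors' claim is that current LLMs lack exactly that monitoring step.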
Can Social Ontological Knowledge Representations be Measured Using Machine Learning?
Personal Social Ontology (PSO), it is proposed, is how an individual perceives the ontological properties of terms. For example, an absolute fatalist would arguably use terms that remove any form of agency from a person. Such fatalism has the impact of ontologically defining acts such as winning, victory, and success in a manner that is contrary to how a non-fatalist would ontologically define them. While both the said fatalist and non-fatalist would agree on the dictionary definition of these terms, they would differ on specifically how they can be brought about. This difference between the two individuals can be induced from their usage of these terms, i.e., the co-occurrence of these terms with other terms. As such, quantifying this co-occurrence offers an avenue to characterise the social ontological views of the speaker. In this paper we ask: what specific term co-occurrences should be measured in order to obtain a valid and reliable psychometric measure of a person's social ontology? We consider the social psychology and social neuroscience literature to arrive at a list of social concepts that can be considered principal features of personal social ontology, and then propose an NLP pipeline to capture the articulation of these terms in language.
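The pipeline's core signal, term co-occurrence, can be sketched with a simple windowed count; the window size, token list, and target set below are illustrative assumptions, not the paper's feature list:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(tokens, targets, window=3):
    """Count how often pairs of target terms appear within `window` tokens
    of each other -- the raw signal from which a speaker's social-ontology
    profile could be derived."""
    counts = Counter()
    positions = [(i, t) for i, t in enumerate(tokens) if t in targets]
    for (i, a), (j, b) in combinations(positions, 2):
        if abs(i - j) <= window:
            counts[tuple(sorted((a, b)))] += 1
    return counts

# Toy fatalist framing: "winning" sits close to "fate", far from "choice".
tokens = "winning is fate and luck not effort or choice".split()
targets = {"winning", "fate", "luck", "effort", "choice"}
counts = cooccurrence_counts(tokens, targets)
# counts[("fate", "winning")] == 1; ("choice", "winning") never co-occurs.
```

Aggregated over a speaker's corpus, such counts are one plausible input to the psychometric measure the abstract asks for.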
Human behaviour through a LENS: How Linguistic content triggers Emotions and Norms and determines Strategy choices
Over the last two decades, a growing body of experimental research has provided evidence that linguistic frames influence human behaviour in economic games, beyond the economic consequences of the available actions. This article proposes a novel framework that transcends the traditional confines of outcome-based preference models. According to the LENS model, the Linguistic description of the decision problem triggers Emotional responses and suggests potential Norms of behaviour, which then interact to shape an individual's Strategic choice. The article reviews experimental evidence that supports each path of the LENS model. Furthermore, it identifies and discusses several critical research questions that arise from this model, pointing towards avenues for future inquiry.