psychological science



Decision-Making Amid Information-Based Threats in Sociotechnical Systems: A Review

Allred, Aaron R., Richardson, Erin E., Bostrom, Sarah R., Crum, James, Spencer, Cara, Tossell, Chad, Niemeyer, Richard E., Hirshfield, Leanne, Hayman, Allison P. A.

arXiv.org Artificial Intelligence

Technological systems increasingly mediate human information exchange, spanning interactions among humans as well as between humans and artificial agents. The unprecedented scale and reliance on information disseminated through these systems substantially expand the scope of information-based influence that can both enable and undermine sound decision-making. Consequently, understanding and protecting decision-making today faces growing challenges, as individuals and organizations must navigate evolving opportunities and information-based threats across varied domains and information environments. While these risks are widely recognized, research remains fragmented: work evaluating information-based threat phenomena has progressed largely in isolation from foundational studies of human information processing. In this review, we synthesize insights from both domains to identify shared cognitive mechanisms that mediate vulnerability to information-based threats and shape behavioral outcomes. Finally, we outline directions for future research aimed at integrating these perspectives, emphasizing the importance of such integration for mitigating human vulnerabilities and aligning human-machine representations.


Imagining and building wise machines: The centrality of AI metacognition

Johnson, Samuel G. B., Karimi, Amir-Hossein, Bengio, Yoshua, Chater, Nick, Gerstenberg, Tobias, Larson, Kate, Levine, Sydney, Mitchell, Melanie, Rahwan, Iyad, Schölkopf, Bernhard, Grossmann, Igor

arXiv.org Artificial Intelligence

Recent advances in artificial intelligence (AI) have produced systems capable of increasingly sophisticated performance on cognitive tasks. However, AI systems still struggle in critical ways: unpredictable and novel environments (robustness), lack of transparency in their reasoning (explainability), challenges in communication and commitment (cooperation), and risks due to potential harmful actions (safety). We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom. Drawing from cognitive and social sciences, we define wisdom as the ability to navigate intractable problems - those that are ambiguous, radically uncertain, novel, chaotic, or computationally explosive - through effective task-level and metacognitive strategies. While AI research has focused on task-level strategies, metacognition - the ability to reflect on and regulate one's thought processes - is underdeveloped in AI systems. In humans, metacognitive strategies such as recognizing the limits of one's knowledge, considering diverse perspectives, and adapting to context are essential for wise decision-making. We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety. By focusing on developing wise AI, we suggest an alternative to aligning AI with specific human values - a task fraught with conceptual and practical difficulties. Instead, wise AI systems can thoughtfully navigate complex situations, account for diverse human values, and avoid harmful actions. We discuss potential approaches to building wise AI, including benchmarking metacognitive abilities and training AI systems to employ wise reasoning. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations.
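One metacognitive strategy named above, recognizing the limits of one's knowledge, can be sketched as a confidence-gated abstention rule: answer at the task level only when confidence clears a threshold, otherwise defer. The function, threshold, and scores below are hypothetical illustrations, not anything proposed in the paper.

```python
# Minimal sketch of metacognitive abstention: answer only when confident,
# otherwise defer to a human. All numbers are made up for illustration.

def answer_or_abstain(scores, threshold=0.8):
    """scores: dict mapping candidate answers to confidence in [0, 1]."""
    best, conf = max(scores.items(), key=lambda kv: kv[1])
    if conf >= threshold:
        return best          # task-level answer
    return "ABSTAIN"         # metacognitive control: defer instead of guessing

print(answer_or_abstain({"yes": 0.95, "no": 0.05}))  # prints "yes"
print(answer_or_abstain({"yes": 0.55, "no": 0.45}))  # prints "ABSTAIN"
```

The separation matters: the `max` step is the task-level strategy, while the threshold check is the monitoring layer wrapped around it.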


Metacognitive Myopia in Large Language Models

Scholten, Florian, Rebholz, Tobias R., Hütter, Mandy

arXiv.org Artificial Intelligence

Large Language Models (LLMs) exhibit potentially harmful biases that reinforce culturally inherent stereotypes, cloud moral judgments, or amplify positive evaluations of majority groups. Previous explanations mainly attributed bias in LLMs to human annotators and the selection of training data. Consequently, they have typically been addressed with bottom-up approaches such as reinforcement learning or debiasing corpora. However, these methods only treat the effects of LLM biases by indirectly influencing the model architecture, but do not address the underlying causes in the computational process. Here, we propose metacognitive myopia as a cognitive-ecological framework that can account for a conglomerate of established and emerging LLM biases and provide a lever to address problems in powerful but vulnerable tools. Our theoretical framework posits that a lack of the two components of metacognition, monitoring and control, causes five symptoms of metacognitive myopia in LLMs: integration of invalid tokens and embeddings, susceptibility to redundant information, neglect of base rates in conditional computation, decision rules based on frequency, and inappropriate higher-order statistical inference for nested data structures. As a result, LLMs produce erroneous output that reaches into the daily high-stakes decisions of humans. By introducing metacognitive regulatory processes into LLMs, engineers and scientists can develop precise remedies for the underlying causes of these biases. Our theory sheds new light on flawed human-machine interactions and raises ethical concerns regarding the increasing, imprudent implementation of LLMs in organizational structures.
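One listed symptom, neglect of base rates in conditional computation, has a standard worked illustration: for a rare condition, a frequency-style readout of test accuracy wildly overstates the posterior. The numbers below are a textbook example, not data from the paper.

```python
# Base-rate neglect, illustrated with Bayes' rule.
# A test is 90% sensitive with a 9% false-positive rate; the condition
# affects 1% of the population. (Illustrative numbers only.)
prior = 0.01          # P(condition)
sensitivity = 0.90    # P(positive | condition)
false_pos = 0.09      # P(positive | no condition)

p_positive = prior * sensitivity + (1 - prior) * false_pos
posterior = prior * sensitivity / p_positive

print(round(posterior, 3))  # prints 0.092: a positive result still means <10% odds
# A frequency-based rule that ignores the 1% base rate would report ~0.90.
```

A system with working metacognitive monitoring would flag that the 90% figure is a likelihood, not a posterior, before passing it on.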


Cooperative Evolutionary Pressure and Diminishing Returns Might Explain the Fermi Paradox: On What Super-AIs Are Like

Vallstrom, Daniel

arXiv.org Artificial Intelligence

With an evolutionary approach, the basis of morality can be explained as adaptations to problems of cooperation. With 'evolution' taken in a broad sense, evolving AIs that satisfy the conditions for evolution to apply will be subject to the same cooperative evolutionary pressure as biological entities. Here the adaptiveness of increased cooperation as material safety and wealth increase is discussed -- for humans, for other societies, and for AIs. Diminishing returns from increased access to material resources also suggest that, on the whole, there may be no incentive to, for instance, colonize entire galaxies, offering a possible explanation of the Fermi paradox -- the question of where everybody is. It is further argued that old societies could engender, and give way to, super-AIs, since super-AIs are likely both feasible and fitter. The paper closes with an aside on effective ways for morals and goals to affect life and society, emphasizing environments, cultures, and laws, and exemplified by how to eat. Appended are an algorithm for quickly colonizing, for example, a galaxy, models of the evolution of cooperation and fairness under diminishing returns, and software for simulating signaling development. It is also noted that, for mathematical reasons, there can be no exponential colonization or reproduction, since each entity occupies a certain amount of space.
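The appended models are not reproduced here, but the diminishing-returns intuition behind the argument can be checked in a few lines: under a concave utility function (square root is assumed here purely for illustration), the utility gained by seizing extra resources shrinks as baseline wealth grows, weakening the temptation to defect rather than cooperate.

```python
import math

def temptation_gain(wealth, grab=10.0):
    """Extra utility from seizing `grab` resources, under concave utility u = sqrt.
    The sqrt utility and the numbers are illustrative assumptions."""
    return math.sqrt(wealth + grab) - math.sqrt(wealth)

# The same grab is worth far less to an already-wealthy society.
print(round(temptation_gain(1.0), 3))    # prints 2.317
print(round(temptation_gain(100.0), 3))  # prints 0.488
```

Any strictly concave utility gives the same ordering; sqrt is just the simplest choice that makes the decline visible at a glance.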


Emergence of a phonological bias in ChatGPT

Toro, Juan Manuel

arXiv.org Artificial Intelligence

Current large language models, such as OpenAI's ChatGPT, have captured the public's attention because of how remarkable they are in their use of language. Here, I demonstrate that ChatGPT displays phonological biases that are a hallmark of human language processing. More concretely, just like humans, ChatGPT has a consonant bias. That is, the chatbot has a tendency to rely on consonants over vowels to identify words. This is observed across languages that differ in their relative distribution of consonants and vowels, such as English and Spanish.
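The consonant bias can be made concrete with a toy lexicon: masking a word's vowels typically leaves fewer lexical candidates than masking its consonants, so consonants carry more word-identity information. The lexicon and target word below are invented for illustration and are not the paper's actual stimuli.

```python
VOWELS = set("aeiou")

def mask(word, keep_consonants=True):
    """Replace either the vowels or the consonants of `word` with '_'."""
    return "".join(
        c if (c not in VOWELS) == keep_consonants else "_"
        for c in word
    )

def candidates(pattern, lexicon):
    """Words matching the pattern, where '_' matches any character."""
    return [w for w in lexicon
            if len(w) == len(pattern)
            and all(p == "_" or p == c for p, c in zip(pattern, w))]

lexicon = ["planet", "planes", "plants", "placed", "pallet", "pellet"]
print(mask("planet", keep_consonants=True))       # prints "pl_n_t"
print(candidates(mask("planet", True), lexicon))  # prints ['planet']
print(candidates(mask("planet", False), lexicon)) # prints ['planet', 'planes', 'placed']
```

In this toy example the consonant skeleton pins down a unique word while the vowel skeleton leaves three candidates, which is the informational asymmetry a consonant bias would exploit.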


How Machine Learning Is Transforming Psychological Science – Association for Psychological Science – APS

#artificialintelligence

Artificial intelligence and machine learning are providing insights that will soon transcend scientists’ observational capabilities, potentially leading to revolutionary advances in understanding human psychology.


Women are better at finding and remembering words than men, study shows

Daily Mail - Science & tech

That's because a new study has found that women are better at finding and remembering words than men. Researchers from the University of Bergen in Norway analysed the results of 168 studies on gender differences in 'verbal fluency' and 'verbal-episodic memory'. Verbal fluency is a measure of one's vocabulary, while verbal-episodic memory is the ability to recall words one has come across in the past. 'The female advantage is consistent across time and life span, but it is also relatively small,' said Professor Marco Hirnstein. A separate study by a team from the University of Pennsylvania scanned the brains of 900 men, women and children aged eight to 22. From the scans they were able to create a complete road map of the connections in each of their brains, called their 'connectome'.


Perspectives on Machine Learning from Psychology's Reproducibility Crisis

Bell, Samuel J., Kampman, Onno P.

arXiv.org Artificial Intelligence

In the early 2010s, a crisis of reproducibility rocked the field of psychology. Following a period of reflection, the field has responded with radical reform of its scientific practices. More recently, similar questions about the reproducibility of machine learning research have also come to the fore. In this short paper, we present select ideas from psychology's reformation, translating them into relevance for a machine learning audience.
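One reform from psychology that translates directly to machine learning is reporting enough detail for a run to be repeated exactly, including random seeds. A minimal sketch using only the standard library follows; the "experiment" is a stand-in for a stochastic training run, not anything from the paper.

```python
import random

def experiment(seed):
    """Stand-in for a stochastic run: the result depends on random draws."""
    rng = random.Random(seed)  # local RNG avoids hidden global state
    samples = [rng.random() for _ in range(1000)]
    return sum(samples) / len(samples)

# Same seed -> bit-identical result, so the run is exactly repeatable.
assert experiment(42) == experiment(42)
# Re-running with different seeds probes how much results vary by chance.
print(experiment(42))
```

Using a local `random.Random(seed)` instance rather than the module-level functions keeps the run independent of any other code that touches the global RNG.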


Health News - Why Do Some People Never Forget A Face?

AITopics Original Links

"Face recognition is an important social skill, but not all of us are equally good at it," says Beijing Normal University cognitive psychologist Jia Liu. A new study by Liu and colleagues Ruosi Wang, Jingguang Li, Huizhen Fang, and Moqian Tian provides the first experimental evidence that this inequality of abilities is rooted in the unique way in which the mind perceives faces. "Individuals who process faces more holistically"--that is, as an integrated whole--"are better at face recognition," says Liu. The findings will appear in an upcoming issue of Psychological Science, a journal published by the Association for Psychological Science. In daily life, we recognize faces both holistically and "analytically"--that is, by picking out individual parts, such as the eyes or nose.