A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts

Guingrich, Rose E., Graziano, Michael S. A.

arXiv.org Artificial Intelligence

Many Large Language Model (LLM) chatbots are designed and used for companionship, and people have reported forming friendships, mentorships, and romantic partnerships with them. Concerns that companion chatbots may harm or replace real human relationships have been raised, but whether and how these social consequences occur remains unclear. In the present longitudinal study (N = 183), participants were randomly assigned to a chatbot condition (text chat with a companion chatbot) or to a control condition (text-based word games) for 10 minutes a day for 21 days. Participants also completed four surveys during the 21 days and engaged in audio-recorded interviews on days 1 and 21. Overall, social health and relationships were not significantly impacted by companion chatbot interactions across 21 days of use. However, a detailed analysis told a different story. People who had a higher desire to socially connect also tended to anthropomorphize the chatbot more, attributing humanlike properties to it; and those who anthropomorphized the chatbot more also reported that talking to the chatbot had a greater impact on their social interactions and relationships with family and friends. Via a mediation analysis, our results suggest a key mechanism at work: the impact of human-AI interaction on human-human social outcomes is mediated by the extent to which people anthropomorphize the AI agent, which is in turn motivated by a desire to socially connect. In a world where the desire to socially connect is on the rise, this finding may be cause for concern.
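The mediation analysis the abstract mentions can be illustrated with a minimal sketch. This is not the authors' code or data: the synthetic variables (X = desire to connect, M = anthropomorphism, Y = reported social impact), effect sizes, and ordinary-least-squares approach are all illustrative assumptions showing how an indirect effect a*b is estimated.

```python
import numpy as np

# Toy mediation analysis (X -> M -> Y) on synthetic data.
# X = desire to socially connect, M = anthropomorphism,
# Y = reported social impact. All values are simulated.
rng = np.random.default_rng(42)
n = 183
x = rng.normal(size=n)                       # predictor
m = 0.6 * x + rng.normal(scale=0.5, size=n)  # mediator driven by x
y = 0.7 * m + rng.normal(scale=0.5, size=n)  # outcome driven by mediator

def slope(pred, resp):
    # OLS slope of resp on pred (with intercept term)
    design = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(design, resp, rcond=None)[0][1]

a = slope(x, m)                              # path X -> M
# path M -> Y, controlling for X:
design2 = np.column_stack([np.ones(n), x, m])
b = np.linalg.lstsq(design2, y, rcond=None)[0][2]

indirect = a * b                             # mediated effect of X on Y
total = slope(x, y)                          # total effect of X on Y
print(indirect, total)
```

In a real analysis the coefficients would come from survey measures, and the indirect effect would typically be tested with bootstrapped confidence intervals rather than read off directly.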


Social Robots As Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearance

Jung, Yoonwon, Hahn, Sowon

arXiv.org Artificial Intelligence

Loneliness is a distressing personal experience and a growing social issue. Social robots could alleviate the pain of loneliness, particularly for those who lack in-person interaction. This paper investigated how the effect of loneliness on the anthropomorphism of social robots differs by robot appearance, and how it influences purchase intention. Participants viewed a video of one of three robots (machine-like, animal-like, or human-like) moving and interacting with a human counterpart. Bootstrapped multiple regression results revealed that although the unique effect of animal-likeness on anthropomorphism was higher than that of human-likeness, lonely individuals' tendency to anthropomorphize the animal-like robot was lower than their tendency to anthropomorphize the human-like robot. This moderating effect remained significant after covariates were included. Bootstrapped mediation analysis showed that anthropomorphism had both a positive direct effect on purchase intent and a positive indirect effect mediated by likability. Our results suggest that lonely individuals' tendency to anthropomorphize social robots should not be summarized into one unified inclination. Moreover, by extending the effect of loneliness on anthropomorphism to likability and purchase intent, the current study explored the potential of social robots to be adopted as companions for lonely individuals in real life. Lastly, we discuss the practical implications of the current study for designing social robots.


Anti-'Terminator': AI not a 'creature' working toward self-awareness, OpenAI CEO Altman says

FOX News

OpenAI CEO Sam Altman took questions from reporters following his congressional hearing and defined "scary AI." OpenAI CEO Sam Altman said people should not try to "anthropomorphize" artificial intelligence and should discuss the powerful tech systems in the context of it being a "tool" and not a "creature." "I think there's a huge amount of speculation on that question," Altman told reporters Tuesday on Capitol Hill when asked how quickly AI could become "self-aware" if Congress does not regulate the technology. The line of questioning had echoes of the "Terminator" film series, in which AI brings about the apocalypse on the day it becomes "self-aware." "I think it's very important that we keep talking about this as a tool, not a creature, because it's so tempting to anthropomorphize it," he added. "I totally understand where the anxiety comes from. I think it's the wrong frame … the wrong way to think about it."


'Is This AI Sapient?' Is The Wrong Question To Ask About LaMDA - AI Summary

#artificialintelligence

And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them--and that large tech companies can exploit this in deeply unethical ways. As should be clear from the way we treat our pets, or how we've interacted with Tamagotchi, or how, as video gamers, we reload a save if we accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata--the metadata you leave behind online that illustrates how you think--is vulnerable to exploitation in the near future. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a "being" worthy of respect. This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism.


'Is This AI Sapient?' Is the Wrong Question to Ask About LaMDA

#artificialintelligence

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company's most sophisticated chat programs, LaMDA (Language Model for Dialogue Applications), is sapient, has had a curious element: actual AI ethics experts all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They're right to do so. In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could've come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as "wearing human skin" was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it's easy to see how someone might be fooled, looking at social media responses to the transcript--with even some educated people expressing amazement and a willingness to believe.


Is AI an Existential Threat?

#artificialintelligence

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI. To understand what ANI is, you simply need to understand that every single AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty; for example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess, and even if the chess program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.


Talking to your pets and car is a sign of intelligence

Daily Mail - Science & tech

While it's common for children to talk to their stuffed toys or animals, adults tend to outgrow this and are seen as odd if they do. But there's a scientific reason why humans tend to talk to animals or objects, and it's linked to social intelligence. One of the reasons we might anthropomorphize--give human form or attributes to an animal, plant, material, or object--is our unique ability to recognize and find faces everywhere. Dr Nicholas Epley, a professor of behavioral science at the University of Chicago and an anthropomorphism expert, told Quartz: 'Historically, anthropomorphizing has been treated as a sign of childishness or stupidity, but it's actually a natural byproduct of the tendency that makes humans uniquely smart on this planet'. He said whether or not we realize it, humans anthropomorphize objects and events all the time.


Digital Analogues (Intro): Artificial Intelligence Systems Should Be Treated Like... - FLI - Future of Life Institute

#artificialintelligence

This piece was originally published on Medium in Imaginary Papers, an online publication of Arizona State University's Center for Science and the Imagination. Matt Scherer runs the Law and AI blog. Artificial intelligence (A.I.) systems are becoming increasingly ubiquitous in our economy and society, and are being designed with an ever-increasing ability to operate free of direct human supervision. Algorithmic trading systems account for a huge and still-growing share of stock market transactions, and autonomous vehicles with A.I. "drivers" are already being tested on the roads. Because they operate with less human supervision and control than earlier technologies, the rising prevalence of autonomous A.I. raises the question of how legal systems can ensure that victims receive compensation if (read: when) an A.I. system causes physical or economic harm during the course of its operations.


Should we anthropomorphize an AI who wants to kill us all?

#artificialintelligence

Musk, Gates, and Hawking are worrying about AI. But would a sentient AI really want to kill us all? And if it does, should we anthropomorphize the AI to give us humans some measure of advantage? One way to consider these questions is to peer into our human nature or even science fiction for clues. The answers may be a matter of perspective: Are we looking into the window or out?


Are Deep Neural Networks Creative?

#artificialintelligence

Are deep neural networks creative? It seems like a reasonable question. Google's "Inceptionism" technique transforms images, iteratively modifying them to enhance the activation of specific neurons in a deep net. The images appear trippy, transforming rocks into buildings or leaves into insects. Another neural generative model, introduced by Leon Gatys of the University of Tübingen in Germany, can extract the style from one image (say, a painting by Van Gogh) and apply it to the content of another image (say, a photograph). Generative adversarial networks (GANs), introduced by Ian Goodfellow, are capable of synthesizing novel images by modeling the distribution of seen images.
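The core idea behind "Inceptionism"--iteratively modifying an input so a chosen unit's activation grows--can be sketched in a few lines. This is a deliberately tiny stand-in, not Google's implementation: the "network" here is a single linear unit with fixed random weights, and the gradient-ascent step size is an arbitrary assumption.

```python
import numpy as np

# Toy activation maximization: adjust an input by gradient ascent
# so that one unit's activation increases. A real Inceptionism run
# does the same thing against a neuron deep inside a trained net.
rng = np.random.default_rng(0)
w = rng.normal(size=64)           # fixed weights of the unit to maximize

def activation(x):
    return float(w @ x)           # the unit's response to input x

x = rng.normal(size=64)           # start from a random "image"
before = activation(x)

step = 0.1
for _ in range(50):
    x += step * w                 # gradient of (w @ x) w.r.t. x is w

after = activation(x)
print(after > before)             # activation rises under gradient ascent
```

With a real convolutional network the update uses backpropagated gradients of the target neuron with respect to the image pixels, usually with smoothness regularization so the result stays image-like.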