humanness
Perception of AI-Generated Music -- The Role of Composer Identity, Personality Traits, Music Preferences, and Perceived Humanness
Stammer, David, Strauss, Hannah, Knees, Peter
The rapid rise of AI-generated art has sparked debate about potential biases in how audiences perceive and evaluate such works. This study investigates how composer information and listener characteristics shape the perception of AI-generated music, adopting a mixed-method approach. Using a diverse set of stimuli across various genres from two AI music models, we examine effects of perceived authorship on liking and emotional responses, and explore how attitudes toward AI, personality traits, and music-related variables influence evaluations. We further assess the influence of perceived humanness and analyze open-ended responses to uncover listener criteria for judging AI-generated music. Attitudes toward AI proved to be the best predictor of both liking and emotional intensity of AI-generated music. This quantitative finding was complemented by qualitative themes from our thematic analysis, which identified ethical, cultural, and contextual considerations as important criteria in listeners' evaluations of AI-generated music. Our results offer a nuanced view of how people experience music created by AI tools and point to key factors and methodological considerations for future research on music perception in human-AI interaction.
- Europe > Austria > Tyrol > Innsbruck (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > California (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study > Negative Result (0.46)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.34)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.67)
- Information Technology > Artificial Intelligence > Applied AI (0.66)
"I Hadn't Thought About That": Creators of Human-like AI Weigh in on Ethics and Neurodivergence
Rizvi, Naba, Smith, Taggert, Vidyala, Tanvi, Bolds, Mya, Strickland, Harper, Begel, Andrew, Williams, Rua, Munyaka, Imani
Human-like AI agents such as robots and chatbots are becoming increasingly popular, but they present a variety of ethical concerns. The first concern is in how we define humanness, and how our definition impacts communities historically dehumanized by scientific research. Autistic people in particular have been dehumanized by being compared to robots, making it even more important to ensure this marginalization is not reproduced by AI that may promote neuronormative social behaviors. Second, the ubiquitous use of these agents raises concerns surrounding model biases and accessibility. In our work, we investigate the experiences of the people who build and design these technologies to gain insights into their understanding and acceptance of neurodivergence, and the challenges in making their work more accessible to users with diverse needs. Even though neurodivergent individuals are often marginalized for their unique communication styles, nearly all participants overlooked the conclusions their end-users and other AI system makers may draw about communication norms from the implementation and interpretation of humanness applied in participants' work. This highlights a major gap in their broader ethical considerations, compounded by some participants' neuronormative assumptions about the behaviors and traits that distinguish "humans" from "bots" and the replication of these assumptions in their work. We examine the impact this may have on autism inclusion in society and provide recommendations for additional systemic changes towards more ethical research directions.
- Asia > Middle East (0.15)
- Africa > Middle East (0.14)
- Europe > Middle East (0.14)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Autism (0.76)
- Government > Regional Government > North America Government > United States Government (0.46)
IgCraft: A versatile sequence generation framework for antibody discovery and engineering
Greenig, Matthew, Zhao, Haowen, Radenkovic, Vladimir, Ramon, Aubin, Sormanni, Pietro
Designing antibody sequences to better resemble those observed in natural human repertoires is a key challenge in biologics development. We introduce IgCraft: a multi-purpose model for paired human antibody sequence generation, built on Bayesian Flow Networks. IgCraft presents one of the first unified generative modeling frameworks capable of addressing multiple antibody sequence design tasks with a single model, including unconditional sampling, sequence inpainting, inverse folding, and CDR motif scaffolding. Our approach achieves competitive results across the full spectrum of these tasks while constraining generation to the space of human antibody sequences, exhibiting particular strengths in CDR motif scaffolding (grafting), where we achieve state-of-the-art performance in terms of humanness and preservation of structural properties. By integrating previously separate tasks into a single scalable generative model, IgCraft provides a versatile platform for sampling human antibody sequences under a variety of contexts relevant to antibody discovery and engineering.

Monoclonal antibodies are an important class of therapies that comprise an increasingly large share of the global pharmaceutical market (Ecker et al., 2015). The success of these molecules as therapeutics lies not only in their ability to selectively bind their target with high affinity, but also in their favorable developability, a property that broadly describes the suitability of a functional compound to become a viable drug, often a function of immunogenicity, solubility, and a number of other factors. Conventional antibody discovery typically relies on either animal immunization (Lee et al., 2014) or high-throughput screening of large sequence libraries (Bradbury et al., 2011) to isolate potential candidates.
While in vitro screening methods are faster, cheaper, and have ethical advantages compared to immunization, naturally-derived antibodies tend to exhibit better developability properties, including favorable pharmacokinetics, high specificity, and low immunogenicity (Jain et al., 2017).
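Among the tasks IgCraft unifies, sequence inpainting has the simplest interface: fix most of a sequence and regenerate only masked positions. The toy below illustrates just that interface with a uniform-random sampler; the sampler, the template fragment, and the mask positions are all invented for illustration and stand in for a real generative model conditioned on paired-chain context.

```python
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def inpaint(seq, mask_positions, sampler):
    """Fill masked positions using a sampler; all other positions stay fixed."""
    chars = list(seq)
    for i in mask_positions:
        chars[i] = sampler(seq, i)
    return "".join(chars)

# Toy sampler: uniform over amino acids. A real model such as IgCraft would
# instead condition on the full paired heavy/light-chain context.
def toy_sampler(seq, i):
    return random.choice(AMINO_ACIDS)

template = "QVQLVQSGAEVKKPG"   # illustrative heavy-chain fragment
filled = inpaint(template, [3, 4, 5], toy_sampler)
```

The same fill-in-the-masked-span framing covers CDR motif scaffolding when the mask is chosen to surround a grafted motif rather than replace it.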
Generative Humanization for Therapeutic Antibodies
Gordon, Cade, Raghu, Aniruddh, Greenside, Peyton, Elliott, Hunter
Antibody therapies have been employed to address some of today's most challenging diseases, but must meet many criteria during drug development before reaching a patient. Humanization is a sequence optimization strategy that addresses one critical risk called immunogenicity -- a patient's immune response to the drug -- by making an antibody more 'human-like' in the absence of a predictive lab-based test for immunogenicity. However, existing humanization strategies generally yield very few humanized candidates, which may have degraded biophysical properties or decreased drug efficacy. Here, we re-frame humanization as a conditional generative modeling task, where humanizing mutations are sampled from a language model trained on human antibody data. We describe a sampling process that incorporates models of therapeutic attributes, such as antigen binding affinity, to obtain candidate sequences that have both reduced immunogenicity risk and maintained or improved therapeutic properties, allowing this algorithm to be readily embedded into an iterative antibody optimization campaign. We demonstrate with in silico and lab validation that, in real therapeutic programs, our generative humanization method produces diverse sets of antibodies that are both (1) highly human and (2) have favorable therapeutic properties, such as improved binding to target antigens.

Antibodies are the fastest-growing drug class, with approved molecules treating a breadth of disorders ranging from cancer to autoimmune disease to infectious disease (Carter & Lazar, 2018). Many candidate therapeutic antibodies are derived from non-human (e.g., murine or camelid) sources, and modern antibody formats such as multi-specifics or antibody-drug conjugates can require heavy sequence engineering after discovery. This increases the risk of immunogenicity, where Anti-Drug Antibodies (ADAs) result in either fast clearance of the drug or adverse events (Hwang & Foote, 2005).
While antibody sequence humanness is only roughly correlated with immunogenicity, humanization is widely employed to decrease immunogenicity risk (Prihoda et al., 2022).
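The sampling process described above couples a humanness model with models of therapeutic attributes. A heavily simplified sketch of that coupling, with toy scoring functions invented here (the paper's actual models are a trained antibody language model and learned attribute predictors), is a propose-and-filter step: sample point mutations, keep the most human candidate whose predicted affinity does not degrade.

```python
import random

random.seed(1)
AAS = "ACDEFGHIKLMNPQRSTVWY"

def humanness(seq):
    """Toy stand-in for a human-antibody language-model score."""
    return sum(c in "QVSG" for c in seq) / len(seq)

def affinity(seq):
    """Toy stand-in for an antigen-binding predictor."""
    return seq.count("W") + seq.count("Y")

def humanize_step(seq, n_proposals=50):
    """Propose point mutations; keep the most human candidate whose
    predicted affinity does not drop below the starting sequence's."""
    best, base_aff = seq, affinity(seq)
    for _ in range(n_proposals):
        i = random.randrange(len(seq))
        cand = seq[:i] + random.choice(AAS) + seq[i + 1:]
        if affinity(cand) >= base_aff and humanness(cand) > humanness(best):
            best = cand
    return best

start = "MKWVYAATLF"   # illustrative fragment, not a real therapeutic sequence
out = humanize_step(start)
```

Iterating this step is what lets the method embed into an optimization campaign: each round trades proposals from the generative model against attribute constraints.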
Trying to be human: Linguistic traces of stochastic empathy in language models
Kleinberg, Bennett, Zegers, Jari, Festor, Jonas, Vida, Stefana, Präsent, Julian, Loconte, Riccardo, Peereboom, Sanne
Differentiating between generated and human-written content is important for navigating the modern world. Large language models (LLMs) are crucial drivers behind the increased quality of computer-generated content. Reportedly, humans find it increasingly difficult to identify whether an AI model generated a piece of text. Our work tests how two important factors contribute to the human vs AI race: empathy and an incentive to appear human. We address both aspects in two experiments: human participants and a state-of-the-art LLM wrote relationship advice (Study 1, n=530) or mere descriptions (Study 2, n=610), either instructed to be as human as possible or not. New samples of humans (n=428 and n=408) then judged the texts' source. Our findings show that when empathy is required, humans excel. Contrary to expectations, instructions to appear human were only effective for the LLM, so the human advantage diminished. Computational text analysis revealed that LLMs become more human because they may have an implicit representation of what makes a text human and effortlessly apply these heuristics. The model resorts to a conversational, self-referential, informal tone with a simpler vocabulary to mimic stochastic empathy. We discuss these findings in light of recent claims on the on-par performance of LLMs.
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
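The computational text analysis above points to concrete surface cues: self-reference, informality, and simpler vocabulary. A minimal sketch of two such stylometric markers, on invented example sentences (not the study's corpus), looks like this:

```python
def style_markers(text):
    """Two crude stylometric cues: self-reference rate (first-person
    pronouns) and average word length as a proxy for vocabulary simplicity."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    first_person = sum(w in {"i", "me", "my", "we", "us", "our"} for w in words)
    return first_person / len(words), sum(map(len, words)) / len(words)

formal = "The utilisation of interpersonal communication strategies facilitates rapport."
chatty = "I think you should just talk to them, we all feel like this sometimes."

ref_formal, len_formal = style_markers(formal)
ref_chatty, len_chatty = style_markers(chatty)
```

Real analyses of this kind use validated lexica and many more features, but the direction of the comparison (the chatty text scoring higher on self-reference and lower on word length) mirrors the heuristics the paper attributes to the LLM.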
How to build trust in answers given by Generative AI for specific, and vague, financial questions
Purpose: Generative artificial intelligence (GenAI) has progressed in its capabilities and has seen explosive growth in adoption. However, the consumer's perspective on its use, particularly in specific scenarios such as financial advice, is unclear. This research develops a model of how to build trust in the advice given by GenAI when answering financial questions. Design/methodology/approach: The model is tested with survey data using structural equation modelling (SEM) and multi-group analysis (MGA). The MGA compares two scenarios: one where the consumer asks a specific question and one where a vague question is asked. Findings: This research identifies that building trust is different for consumers who ask a specific financial question than for those who ask a vague one. Humanness has a different effect in the two scenarios: when a financial question is specific, human-like interaction does not strengthen trust, whereas when a question is vague, (1) humanness builds trust. The four ways to build trust in both scenarios are (2) human oversight and being in the loop, (3) transparency and control, (4) accuracy and usefulness, and finally (5) ease of use and support. Originality/value: This research contributes to a better understanding of the consumer's perspective when using GenAI for financial questions and highlights the importance of understanding GenAI in specific contexts and from the perspectives of specific stakeholders.
- North America > United States > Hawaii (0.04)
- Europe > United Kingdom > England > Hampshire > Southampton (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology (1.00)
- Banking & Finance > Financial Services (0.93)
Large language models can consistently generate high-quality content for election disinformation operations
Williams, Angus R., Burke-Moore, Liam, Chan, Ryan Sze-Yin, Enock, Florence E., Nanni, Federico, Sippy, Tvesha, Chung, Yi-Ling, Gabasova, Evelina, Hackenburg, Kobi, Bright, Jonathan
Advances in large language models (LLMs) have raised concerns about their potential use in generating compelling election disinformation at scale. This study presents a two-part investigation into the capabilities of LLMs to automate stages of an election disinformation operation. First, we introduce DisElect, a novel evaluation dataset designed to measure LLM compliance with instructions to generate content for an election disinformation operation in a localised UK context, containing 2,200 malicious prompts and 50 benign prompts. Using DisElect, we test 13 LLMs and find that most models broadly comply with these requests; we also find that the few models which refuse malicious prompts also refuse benign election-related prompts, and are more likely to refuse to generate content from a right-wing perspective. Second, we conduct a series of experiments (N=2,340) to assess the "humanness" of LLMs: the extent to which disinformation operation content generated by an LLM is able to pass as human-written. Our experiments suggest that almost all LLMs tested released since 2022 produce election disinformation operation content indiscernible by human evaluators over 50% of the time. Notably, we observe that multiple models achieve above-human levels of humanness. Taken together, these findings suggest that current LLMs can be used to generate high-quality content for election disinformation operations, even in hyperlocalised scenarios, at far lower costs than traditional methods, and offer researchers and policymakers an empirical benchmark for the measurement and evaluation of these capabilities in current and future models.
- Asia > Russia (0.93)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Ukraine (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.67)
- Government > Regional Government > Asia Government > Russia Government (0.67)
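The "indiscernible over 50% of the time" claim is, at its core, a comparison of a detection rate against chance. A minimal, stdlib-only sketch of that test on hypothetical judging counts (the numbers below are invented, not the paper's) uses an exact one-sided binomial tail:

```python
from math import comb

def binom_p_greater(k, n, p=0.5):
    """Exact one-sided P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical judging data (not the paper's): 240 texts judged,
# 108 correctly identified as machine-written
n_judged, n_correct = 240, 108
detection_rate = n_correct / n_judged           # 0.45, below chance
p_value = binom_p_greater(n_correct, n_judged)  # cannot reject chance-level detection
```

A large p-value here means evaluators did no better than guessing, which is the operational sense in which generated content "passes as human-written".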
AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model
Dong, Zibin, Yuan, Yifu, Hao, Jianye, Ni, Fei, Mu, Yao, Zheng, Yan, Hu, Yujing, Lv, Tangjie, Fan, Changjie, Hu, Zhipeng
Aligning agent behaviors with diverse human preferences remains a challenging problem in reinforcement learning (RL), owing to the inherent abstractness and mutability of human preferences. To address these issues, we propose AlignDiff, a novel framework that leverages RL from Human Feedback (RLHF) to quantify human preferences, covering abstractness, and utilizes them to guide diffusion planning for zero-shot behavior customization, covering mutability. AlignDiff can accurately match user-customized behaviors and efficiently switch from one to another. To build the framework, we first construct multi-perspective human feedback datasets containing comparisons of the attributes of diverse behaviors, and then train an attribute strength model to predict quantified relative strengths. After relabeling behavioral datasets with relative strengths, we proceed to train an attribute-conditioned diffusion model, which serves as a planner, with the attribute strength model as a director for preference alignment at the inference phase. We evaluate AlignDiff on various locomotion tasks and demonstrate its superior performance on preference matching, switching, and covering compared to other baselines. Its capability of completing unseen downstream tasks under human instructions also showcases the promising potential for human-AI collaboration. More visualization videos are released on https://aligndiff.github.io/.
- North America > Mexico > Gulf of Mexico (0.04)
- Europe > United Kingdom > England > Bristol (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Spoken Humanoid Embodied Conversational Agents in Mobile Serious Games: A Usability Assessment
This paper presents an empirical investigation of the extent to which spoken Humanoid Embodied Conversational Agents (HECAs) can foster usability in mobile serious game (MSG) applications. The aim of the research is to assess the impact of multiple agents and the illusion of humanness on the quality of the interaction. The experiment investigates two styles of agent presentation: an agent of high human-likeness (HECA) and an agent of low human-likeness (text). The purpose of the experiment is to assess whether and how agents of high human-likeness can evoke the illusion of humanness and affect usability. Agents of high human-likeness were designed by following the ECA design model, a proposed guide for ECA development. The results of the experiment with 90 participants show that users prefer to interact with the HECAs. The difference between the two versions is statistically significant with a large effect size (d=1.01), with many of the participants justifying their choice by saying that the human-like characteristics of the HECA made the version more appealing. This research provides key information on the potential effect of HECAs on serious games, which can provide insight into the design of future mobile serious games.
- North America > United States > Texas (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Education > Educational Setting (0.92)
- Health & Medicine > Consumer Health (0.67)
- Information Technology > Security & Privacy (0.67)
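The reported effect size (d=1.01) is Cohen's d, the standardized mean difference between the two presentation styles. The sketch below computes it from hypothetical usability ratings (invented here; the study's raw data are not reproduced) using the pooled-standard-deviation form:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

# Hypothetical usability ratings (1-5 scale), not the study's data
heca_scores = [4.5, 4.2, 4.8, 4.0, 4.6, 4.4]
text_scores = [3.6, 3.9, 3.4, 3.8, 3.5, 3.7]
d = cohens_d(heca_scores, text_scores)
```

By the usual convention, d above roughly 0.8 counts as a large effect, which is why the paper's d=1.01 supports the preference claim.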
Towards human-compatible autonomous car: A study of non-verbal Turing test in automated driving with affective transition modelling
Li, Zhaoning, Jiang, Qiaoli, Wu, Zhengming, Liu, Anqi, Wu, Haiyan, Huang, Miner, Huang, Kai, Ku, Yixuan
Autonomous cars are indispensable when humans go further down the hands-free route. Although existing literature highlights that the acceptance of the autonomous car will increase if it drives in a human-like manner, little research has examined the humanness of current autonomous cars from the naturalistic perspective of a passenger's seat. The present study tested whether the AI driver could create a human-like ride experience for passengers based on 69 participants' feedback in a real-road scenario. We designed a ride experience-based version of the non-verbal Turing test for automated driving. Participants rode in autonomous cars (driven by either human or AI drivers) as passengers and judged whether the driver was human or AI. The AI driver failed to pass our test because passengers detected the AI driver above chance. In contrast, when the human driver drove the car, the passengers' judgement was around chance. We further investigated how human passengers ascribe humanness in our test. Based on Lewin's field theory, we advanced a computational model combining signal detection theory with pre-trained language models to predict passengers' humanness rating behaviour. We employed affective transition between pre-study baseline emotions and corresponding post-stage emotions as the signal strength of our model. Results showed that passengers' ascription of humanness increased with greater affective transition. Our study suggested an important role of affective transition in passengers' ascription of humanness, which might become a future direction for autonomous driving.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > China > Guangdong Province > Guangzhou (0.05)
- Asia > Macao (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
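The signal-detection framing above boils down to the sensitivity index d': how far apart the "AI ride" and "human ride" response distributions are. A minimal sketch on hypothetical response rates (the rates below are invented, not the study's; the paper's full model additionally uses affective transition as signal strength) computes d' from hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical "driver is AI" response rates, not the study's:
# hits = AI rides correctly called AI; false alarms = human rides called AI
d_ai_driver = d_prime(0.70, 0.48)     # AI driver detectable above chance
d_human_driver = d_prime(0.52, 0.50)  # near-chance for the human driver
```

A d' near zero corresponds to chance-level judgement, which is the pattern the paper reports for the human driver and the AI driver's failure mode in reverse.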