
In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions

Pessianzadeh, Aria, Sultana, Naima, Bulck, Hildegarde Van den, Gefen, David, Jabari, Shahin, Rezapour, Rezvaneh

arXiv.org Artificial Intelligence

The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them also becomes essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but there is a lack of computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs). This paper presents the first computational study of Trust and Distrust in GenAI, using a multi-year Reddit dataset (2022--2025) spanning 39 subreddits and 197,618 posts. Crowd-sourced annotations of a representative sample were combined with classification models to scale analysis. We find that Trust and Distrust are nearly balanced over time, with shifts around major model releases. Technical performance and usability dominate as dimensions, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustors (e.g., experts, ethicists, general users). Our results provide a methodological framework for large-scale Trust analysis and insights into evolving public perceptions of GenAI.


Young men shifting to political right is causing women to distrust dating apps, says Atlantic writer

FOX News

Atlantic writer Faith Hill, appearing on CNN's "The Assignment with Audie Cornish" on Thursday, claimed that young men's shift to the political right has complicated the dating world and led women to distrust dating apps. Hill argued that women's growing distrust of dating apps stems from men -- young men in particular -- becoming more conservative while young women are becoming more progressive, leading to the sexes "growing further apart in a lot of ways." "You see that young men are moving further to the right. And I think for a lot of women in particular, it can just sort of feel like, 'This is not a time where I trust men -- I feel respected by men. I don't necessarily want to go out and meet strangers who are men,'" Hill said.


Enriching Moral Perspectives on AI: Concepts of Trust amongst Africans

Amugongo, Lameck Mbangula, Bidwell, Nicola J, Mwatukange, Joseph

arXiv.org Artificial Intelligence

The trustworthiness of AI is considered essential to the adoption and application of AI systems. However, the meaning of trust varies across industry, research and policy spaces. Studies suggest that professionals who develop and use AI regard an AI system as trustworthy based on their personal experiences and social relations at work. Existing studies about trust in AI examine the constructs that aim to operationalise it (e.g., consistency, reliability, explainability and accountability). However, the majority of these studies are situated in Western, Educated, Industrialised, Rich and Democratic (WEIRD) societies. The few studies about trust and AI in Africa do not include the views of people who develop, study or use AI in their work. In this study, we surveyed 157 people with professional and/or educational interests in AI from 25 African countries, to explore how they conceptualised trust in AI. Most respondents had links with workshops about trust and AI in Africa in Namibia and Ghana. Respondents' educational background, transnational mobility, and country of origin influenced their concerns about AI systems. These factors also affected their levels of distrust in certain AI applications and their emphasis on specific principles designed to foster trust. Respondents often expressed that their values are guided by the communities in which they grew up and emphasised communal relations over individual freedoms. They described trust in many ways, including applying nuances of Afro-relationalism to constructs in international discourse, such as reliability and reliance. Thus, our exploratory study motivates more empirical research about the ways trust is practically enacted and experienced in African social realities of AI design, use and governance.


Healthy Distrust in AI systems

Paaßen, Benjamin, Alpsancar, Suzana, Matzner, Tobias, Scharlau, Ingrid

arXiv.org Artificial Intelligence

Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone -- nor should they be, if the AI system is embedded in a social context that gives good reason to believe it is used in tension with that person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy.


UKElectionNarratives: A Dataset of Misleading Narratives Surrounding Recent UK General Elections

Haouari, Fatima, Scarton, Carolina, Faggiani, Nicolò, Nikolaidis, Nikolaos, Kotseva, Bonka, Farha, Ibrahim Abu, Linge, Jens, Bontcheva, Kalina

arXiv.org Artificial Intelligence

Misleading narratives play a crucial role in shaping public opinion during elections, as they can influence how voters perceive candidates and political parties. This entails the need to detect these narratives accurately. To address this, we introduce the first taxonomy of common misleading narratives that circulated during recent elections in Europe. Based on this taxonomy, we construct and analyse UKElectionNarratives: the first dataset of human-annotated misleading narratives which circulated during the UK General Elections in 2019 and 2024. We also benchmark Pre-trained and Large Language Models (focusing on GPT-4o), studying their effectiveness in detecting election-related misleading narratives. Finally, we discuss potential use cases and make recommendations for future research directions using the proposed codebook and dataset.


The Importance of Distrust in Trusting Digital Worker Chatbots

Communications of the ACM

Adopting and implementing digital automation technologies, including artificial intelligence (AI) models such as ChatGPT, robotic process automation (RPA), and other emerging AI technologies, will revolutionize many industries and business models. It is forecast that the rise of AI will impact a wide range of job functions and roles. White-collar positions such as administrative, customer service, and back-office roles will all be impacted by AI-fueled digital automation. The adoption of digital workers is currently positioned in the early adopter phase of the product lifecycle. AI-driven digital workers are expected to substantially alter many white-collar tasks, including finance, customer support, human resources, sales, and marketing. A study from Oxford University and Deloitte predicts AI is a significant risk to the white-collar workforce.


Trust in AI: Progress, Challenges, and Future Directions

Afroogh, Saleh, Akbari, Ali, Malone, Evan, Kargar, Mohammadali, Alambeigi, Hananeh

arXiv.org Artificial Intelligence

The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products underscores the significance of trust/distrust in AI from a user perspective. AI-driven systems (as opposed to other technologies) have diffused ubiquitously into our lives, not only as beneficial tools used by human agents but also as substitutive agents acting on our behalf, or as manipulative minds that can influence human thought, decision, and agency. Trust/distrust in AI plays the role of a regulator and can significantly control the level of this diffusion, as trust can increase, and distrust may reduce, the rate of adoption of AI. Recently, a variety of studies have examined the different dimensions of trust/distrust in AI and their relevant considerations. In this systematic literature review, after conceptualizing trust in the current AI literature, we investigate trust in different types of human-machine interaction and its impact on technology acceptance in different domains. In addition, we propose a taxonomy of technical (i.e., safety, accuracy, robustness) and non-technical axiological (i.e., ethical, legal, and mixed) trustworthiness metrics, along with some trustworthiness measurements. Moreover, we examine some major trust-breakers in AI (e.g., threats to autonomy and dignity) and trust-makers, and propose future directions and probable solutions for the transition to trustworthy AI.


Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust

Duenser, Andreas, Douglas, David M.

arXiv.org Artificial Intelligence

We present an overview of the literature on trust in AI and AI trustworthiness and argue for the need to distinguish these concepts more clearly and to gather more empirical evidence on what contributes to people's trusting behaviours. We discuss how trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system. AI ethics principles such as explainability and transparency are often assumed to promote user trust, but empirical evidence of how such features actually affect how users perceive a system's trustworthiness is neither abundant nor clear. AI systems should be recognised as socio-technical systems, where the people involved in designing, developing, deploying, and using the system are as important as the system itself in determining whether it is trustworthy. Without recognising these nuances, trust in AI and trustworthy AI risk becoming nebulous terms applied to any desirable feature of AI systems.


Can distrust in AI disrupt your business? - Raconteur

#artificialintelligence

Are you scared yet, human? Artificial intelligence (AI) has proliferated with transformative effects in recent years, in sectors from autonomous vehicles to personalised shopping. But the latest deployment of AI to generate content such as text, images or audio has caused quite a stir. ChatGPT, a particularly capable language model, even passed a US medical specialty exam. That's not to say there haven't been some bloopers.


Distrust in (X)AI -- Measurement Artifact or Distinct Construct?

Scharowski, Nicolas, Perrig, Sebastian A. C.

arXiv.org Artificial Intelligence

Trust is a key motivation in developing explainable artificial intelligence (XAI). However, researchers attempting to measure trust in AI face numerous challenges, such as different trust conceptualizations, simplified experimental tasks that may not induce uncertainty as a prerequisite for trust, and the lack of validated trust questionnaires in the context of AI. While acknowledging these issues, we have identified a further challenge that currently seems underappreciated: the potential distinction between trust as one construct and distrust as a second construct independent of trust. While there has been long-standing academic discourse about this distinction, with arguments for both the one-dimensional and two-dimensional conceptualization of trust, distrust seems relatively understudied in XAI. In this position paper, we not only highlight the theoretical arguments for distrust as a construct distinct from trust but also contextualize psychometric evidence that likewise favors a distinction between trust and distrust. It remains to be investigated whether the available psychometric evidence is sufficient to establish the existence of distrust or whether distrust is merely a measurement artifact. Nevertheless, the XAI community should remain receptive to considering trust and distrust for a more comprehensive understanding of these two relevant constructs in XAI.