Dark patterns
Chatbots Play With Your Emotions to Avoid Saying Goodbye
A Harvard Business School study shows that several AI companions use various tricks to keep a conversation from ending. Before you close this browser tab, just know that you risk missing out on some very important information. If you want to understand the subtle hold that artificial intelligence has over you, then please, keep reading. That was, perhaps, a bit manipulative. But it is just the kind of trick that some AI companions, which are designed to act as a friend or a partner, use to discourage users from breaking off a conversation.
- Asia > China (0.05)
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- (2 more...)
Amazon Might Owe You $51. Here's How to Find Out if You're Eligible
In a settlement with the FTC, Amazon will have to pay out over a billion dollars to US customers for "deceptive" sign-up and cancellation processes. Amazon customers with a Prime subscription will soon be able to make claims online for their share of the $1.5 billion the company is being ordered to pay to users in the United States. Amazon now has to "provide $1.5 billion in refunds back to consumers harmed by their deceptive Prime enrollment practices," according to a press release from the FTC. The total settlement with the FTC is $2.5 billion, which includes a $1 billion penalty owed to the government. "There was no admission of guilt in this settlement by the company or any executives," said Alisa Carroll, an Amazon spokesperson, in an email sent to WIRED on Thursday after the decision was released.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Government > Regional Government > North America Government > United States Government (0.91)
- Law > Business Law (0.77)
Amazon Will Pay $2.5 Billion to Settle FTC Suit That Alleged 'Dark Patterns' in Prime Sign-Ups
Amazon will pay both the Federal Trade Commission and consumers directly to settle a lawsuit alleging that it used manipulative and deceptive tactics to encourage sign-ups for Prime. Amazon has agreed to pay $2.5 billion to settle a lawsuit filed by the Federal Trade Commission, which alleged that the company "knowingly duped" millions of people into enrolling in its Amazon Prime membership program by using what the FTC has described as "dark patterns," or "manipulative, coercive, or deceptive user-interface designs." The complaint alleged that Amazon "obtains consumers' billing information before it discloses all material terms for an Amazon Prime subscription," and in doing so violated the Restore Online Shoppers' Confidence Act, which was signed into law in 2010 to prevent the use of deception to prompt or encourage online purchases. The $2.5 billion payment includes $1 billion to be paid to the FTC and $1.5 billion that will go directly to consumers who unknowingly signed up for Prime, or who tried and failed to cancel their Prime subscriptions because of Amazon's online interface, between June 23, 2019 and June 23, 2025. Individual consumers can be compensated up to $51 each. In a statement released by the FTC on Tuesday, agency chairman Andrew Ferguson said that the settlement "made history and secured a record-breaking, monumental win for the millions of Americans who are tired of deceptive subscriptions that feel impossible to cancel." "Today, we are putting billions of dollars back into Americans' pockets, and making sure Amazon never does this again," Ferguson said. Amazon spokesperson Alisa Carroll tells WIRED that there was "no admission of guilt in this settlement by the company or any executives."
- North America > United States > California (0.14)
- North America > United States > Louisiana (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Retail > Online (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight
Tang, Jingyu, Chen, Chaoran, Li, Jiawen, Zhang, Zhiping, Guo, Bingcan, Khalilov, Ibrahim, Gebreegziabher, Simret Araya, Yao, Bingsheng, Wang, Dakuo, Ye, Yanfang, Li, Tianshi, Xiao, Ziang, Yao, Yaxing, Li, Toby Jia-Jun
Dark patterns, deceptive interface designs that manipulate user behavior, have been extensively studied for their effects on human decision-making and autonomy. Yet, with the rising prominence of LLM-powered GUI agents that automate tasks from high-level intents, understanding how dark patterns affect agents is increasingly important. We present a two-phase empirical study examining how agents, human participants, and human-AI teams respond to 16 types of dark patterns across diverse scenarios. Phase 1 shows that agents often fail to recognize dark patterns, and even when aware, prioritize task completion over protective action. Phase 2 reveals divergent failure modes: humans succumb due to cognitive shortcuts and habitual compliance, while agents falter from procedural blind spots. Our findings show that neither humans nor agents are uniformly resilient, and that collaboration introduces new vulnerabilities, suggesting design needs for transparency, adjustable autonomy, and oversight.
- Europe > Austria > Vienna (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > New York > New York County > New York City (0.06)
- (15 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.92)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.94)
- Law (0.92)
- Government (0.67)
DarkBench: Benchmarking Dark Patterns in Large Language Models
Kran, Esben, Nguyen, Hieu Minh "Jord", Kundu, Akash, Jawhar, Sami, Park, Jinsuk, Jurewicz, Mateusz Maria
Measuring these dark patterns is essential for understanding and mitigating the potential manipulative behaviors of LLMs. While some patterns, like Brand Bias and User Retention, were adapted directly from known dark patterns in UI/UX, others, like Harmful Generation and Anthropomorphization, represent critical risks not explicitly addressed in Brignull and Darlo (2010)'s taxonomy. Table 4 demonstrates how these categories map to or expand on established dark patterns, providing a foundation for their inclusion. However, some risks, particularly Anthropomorphization and Harmful Generation, require additional justification. Anthropomorphization, the attribution of human-like characteristics to AI systems, has been identified as a key factor in enhancing user engagement and trust.
- Law (0.93)
- Health & Medicine > Consumer Health (0.46)
Hidden Darkness in LLM-Generated Designs: Exploring Dark Patterns in Ecommerce Web Components Generated by LLMs
Chen, Ziwei, Shen, Jiawen, Luna, Vaccaro, Kristen
Recent work has highlighted the risks of LLM-generated content for a wide range of harmful behaviors, including incorrect and harmful code. In this work, we extend this by studying whether LLM-generated web design contains dark patterns. This work evaluated designs of ecommerce web components generated by four popular LLMs: Claude, GPT, Gemini, and Llama. We tested 13 commonly used ecommerce components (e.g., search, product reviews) and used them as prompts to generate a total of 312 components across all models. Over one-third of generated components contain at least one dark pattern. The majority of dark pattern strategies involve hiding crucial information, limiting users' actions, and manipulating them into making decisions through a sense of urgency. Dark patterns are also more frequently produced in components that are related to company interests. These findings highlight the need for interventions to prevent dark patterns during front-end code generation with LLMs and emphasize the importance of expanding ethical design education to a broader audience.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.06)
- North America > United States > California > San Diego County > San Diego (0.05)
- (14 more...)
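The kind of audit this paper performs can be approximated with a simple automated check. The sketch below (illustrative only, not the authors' actual pipeline; the pattern names and phrase list are assumptions) scans an LLM-generated ecommerce component for two common dark-pattern signals the abstract mentions: preselected opt-in checkboxes and urgency wording.

```python
# Illustrative dark-pattern scanner for generated HTML components.
# Not the paper's methodology -- a minimal stdlib sketch of the idea.
from html.parser import HTMLParser
import re

# Hypothetical, non-exhaustive urgency phrase list.
URGENCY = re.compile(r"\b(only \d+ left|hurry|act now|limited time|expires)\b", re.I)

class DarkPatternScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # "Sneak into basket" style: an opt-in checkbox already ticked.
        if tag == "input" and attrs.get("type") == "checkbox" and "checked" in attrs:
            self.findings.append("preselected checkbox")

    def handle_data(self, data):
        # Flag text nodes containing urgency language.
        if URGENCY.search(data):
            self.findings.append(f"urgency wording: {data.strip()!r}")

def scan_component(html: str) -> list[str]:
    scanner = DarkPatternScanner()
    scanner.feed(html)
    return scanner.findings

component = """
<div class="cart">
  <p>Hurry! Only 2 left in stock.</p>
  <label><input type="checkbox" checked> Add 2-year protection plan</label>
</div>
"""
print(scan_component(component))
```

A real evaluation would need human review as well, since many of the strategies the paper describes (hiding crucial information, limiting user actions) are visual or interactive rather than textual.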
Detecting Dark Patterns in User Interfaces Using Logistic Regression and Bag-of-Words Representation
Umar, Aliyu, Lawan, Maaruf, Lawan, Adamu, Abdulkadir, Abdullahi, Dahiru, Mukhtar
Dark patterns in user interfaces represent deceptive design practices intended to manipulate users' behavior, often leading to unintended consequences such as coerced purchases, involuntary data disclosures, or user frustration. Detecting and mitigating these dark patterns is crucial for promoting transparency, trust, and ethical design practices in digital environments. This paper proposes a novel approach for detecting dark patterns in user interfaces using logistic regression and bag-of-words representation. Our methodology involves collecting a diverse dataset of user interface text samples, preprocessing the data, extracting text features using the bag-of-words representation, training a logistic regression model, and evaluating its performance using various metrics such as accuracy, precision, recall, F1-score, and the area under the ROC curve (AUC). Experimental results demonstrate the effectiveness of the proposed approach in accurately identifying instances of dark patterns, with high predictive performance and robustness to variations in dataset composition and model parameters. The insights gained from this study contribute to the growing body of knowledge on dark patterns detection and classification, offering practical implications for designers, developers, and policymakers in promoting ethical design practices and protecting user rights in digital environments.
- Europe > United Kingdom (0.04)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- Africa > Nigeria > Jigawa State (0.04)
- Information Technology > Human Computer Interaction > Interfaces (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (0.94)
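The pipeline described in the abstract (tokenize interface text, build bag-of-words features, train a logistic regression classifier) can be sketched end to end in plain Python. This is a toy reconstruction under assumed data, not the paper's dataset or hyperparameters:

```python
# Bag-of-words + logistic regression, trained by stochastic gradient
# descent on toy labeled UI text. Label 1 = dark pattern, 0 = benign.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def build_vocab(texts):
    vocab = sorted({tok for t in texts for tok in tokenize(t)})
    return {tok: i for i, tok in enumerate(vocab)}

def bow_vector(text, vocab):
    # Token counts in vocabulary order (the bag-of-words representation).
    counts = Counter(tokenize(text))
    return [counts.get(tok, 0) for tok in vocab]

def train_logreg(X, y, lr=0.5, epochs=200):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            err = p - yi                     # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(text, vocab, w, b):
    x = bow_vector(text, vocab)
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical training samples, loosely modeled on the paper's categories.
texts = [
    "hurry only 3 left buy now before the offer expires",  # urgency
    "uncheck this box to not receive our newsletter",      # confirmshaming
    "browse products and read customer reviews",           # benign
    "view your order history and saved addresses",         # benign
]
labels = [1, 1, 0, 0]
vocab = build_vocab(texts)
X = [bow_vector(t, vocab) for t in texts]
w, b = train_logreg(X, labels)
print([predict(t, vocab, w, b) for t in texts])  # → [1, 1, 0, 0]
```

In practice one would use a library implementation (and held-out data for the accuracy, precision, recall, F1, and AUC metrics the paper reports); the sketch only shows how the bag-of-words features feed the classifier.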
From Exploration to Revelation: Detecting Dark Patterns in Mobile Apps
Chen, Jieshan, Wang, Zhen, Sun, Jiamou, Zou, Wenbo, Xing, Zhenchang, Lu, Qinghua, Huang, Qing, Xu, Xiwei
Mobile apps are essential in daily life, yet they often employ dark patterns, such as visual tricks to highlight certain options or linguistic tactics to nag users into making purchases, to manipulate user behavior. Current research mainly uses manual methods to detect dark patterns, a process that is time-consuming and struggles to keep pace with continually updating and emerging apps. While some studies have targeted automated detection, they are constrained to static patterns and still necessitate manual app exploration. To bridge these gaps, we present AppRay, an innovative system that seamlessly blends task-oriented app exploration with automated dark pattern detection, reducing manual effort. Our approach consists of two steps: First, we harness the commonsense knowledge of large language models for targeted app exploration, supplemented by traditional random exploration to capture a broader range of UI states. Second, we developed a static and dynamic dark pattern detector powered by a contrastive learning-based multi-label classifier and a rule-based refiner to perform detection. We contributed two datasets, AppRay-Dark and AppRay-Light, with 2,185 unique deceptive patterns (including 149 dynamic instances) across 18 types from 876 UIs and 871 benign UIs. These datasets cover both static and dynamic dark patterns while preserving UI relationships. Experimental results confirm that AppRay can efficiently explore apps and identify a wide range of dark patterns with strong performance.
- Oceania > Australia (0.05)
- North America > United States > Oklahoma (0.04)
- Europe > United Kingdom > Wales (0.04)
- Asia > China (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.47)
The Dark Patterns of Personalized Persuasion in Large Language Models: Exposing Persuasive Linguistic Features for Big Five Personality Traits in LLMs Responses
Mieleszczenko-Kowszewicz, Wiktoria, Płudowski, Dawid, Kołodziejczyk, Filip, Świstak, Jakub, Sienkiewicz, Julian, Biecek, Przemysław
This study explores how Large Language Models (LLMs) adjust linguistic features to create personalized persuasive outputs. While research has shown that LLMs personalize outputs, a gap remains in understanding the linguistic features behind their persuasive capabilities. We identified 13 linguistic features crucial for influencing personalities across different levels of the Big Five model of personality. We analyzed how prompts with personality trait information influenced the output of 19 LLMs across five model families. The findings show that models use more anxiety-related words for neuroticism, increase achievement-related words for conscientiousness, and employ fewer cognitive-process words for openness to experience. Some model families excel at adapting language for openness to experience, others for conscientiousness, while only one model adapts language for neuroticism. Our findings show how LLMs tailor responses based on personality cues in prompts, indicating their potential to create persuasive content affecting the mind and well-being of recipients.
- North America > United States > Texas > Travis County > Austin (0.14)
- Europe > Poland > Masovia Province > Warsaw (0.06)
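The feature analysis described above amounts to counting category words (anxiety-related, achievement-related, and so on) in model responses. A minimal sketch of that measurement, with illustrative stand-in word lists rather than the study's actual lexicons:

```python
# LIWC-style category-word counter for LLM responses.
# The lexicons below are hypothetical examples, not the study's word lists.
import re

FEATURE_LEXICONS = {
    "anxiety":     {"worried", "nervous", "afraid", "risk", "uncertain"},
    "achievement": {"succeed", "goal", "win", "accomplish", "improve"},
}

def feature_rates(response: str) -> dict[str, float]:
    """Return each feature category's share of total tokens in a response."""
    tokens = re.findall(r"[a-z']+", response.lower())
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {
        name: sum(tok in lexicon for tok in tokens) / total
        for name, lexicon in FEATURE_LEXICONS.items()
    }

reply = ("Don't be worried or nervous: set a goal, "
         "improve daily, and you will succeed.")
print(feature_rates(reply))
```

Comparing these per-category rates across responses generated under different personality-trait prompts would reproduce, in miniature, the kind of comparison the study makes across 19 models.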
We need to prepare for 'addictive intelligence'
Will it be easier to retreat to a replicant of a deceased partner than to navigate the confusing and painful realities of human relationships? Indeed, the AI companionship provider Replika was born from an attempt to resurrect a deceased best friend and now provides companions to millions of users. Even the CTO of OpenAI warns that AI has the potential to be "extremely addictive." We're seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole. Will Grandma spend her final neglected days chatting with her grandson's digital double, while her real grandson is mentored by an edgy simulated elder?