filter bubble
- Government (0.93)
- Information Technology > Services (0.51)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.95)
- Information Technology > Data Science > Data Mining (0.93)
Quantifying the Potential to Escape Filter Bubbles: A Behavior-Aware Measure via Contrastive Simulation
Feng, Difu, Xu, Qianqian, Wang, Zitai, Hua, Cong, Yang, Zhiyong, Huang, Qingming
Nowadays, recommendation systems have become crucial to online platforms, shaping user exposure through accurate preference modeling. However, such an exposure strategy can also reinforce users' existing preferences, leading to the notorious phenomenon known as filter bubbles. Given its negative effects, such as group polarization, increasing attention has been paid to exploring reasonable measures of filter bubbles. However, most existing evaluation metrics simply measure the diversity of user exposure, failing to distinguish between algorithmic preference modeling and actual information confinement. In view of this, we introduce Bubble Escape Potential (BEP), a behavior-aware measure that quantifies how easily users can escape from filter bubbles. Specifically, BEP leverages a contrastive simulation framework that assigns different behavioral tendencies (e.g., positive vs. negative) to synthetic users and compares the induced exposure patterns. This design decouples the effect of filter bubbles from that of preference modeling, allowing for a more precise diagnosis of bubble severity. We conduct extensive experiments across multiple recommendation models to examine the relationship between predictive accuracy and bubble escape potential across different groups. To the best of our knowledge, our empirical results are the first to quantitatively validate the dilemma between preference modeling and filter bubbles. Moreover, we observe a counter-intuitive phenomenon: mild random recommendations are ineffective in alleviating filter bubbles, which offers a principled foundation for further work in this direction.
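The contrastive-simulation idea behind BEP can be illustrated with a toy loop: two synthetic users with opposite behavioral tendencies interact with the same simple recommender, and the gap in the diversity of what they end up seeing serves as an escape-potential signal. This is a minimal sketch of the concept only; the function names, the toy recommender dynamics, and the entropy-gap scoring are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter
from math import log2

def exposure_entropy(categories):
    """Shannon entropy (bits) of the categories a simulated user was shown."""
    counts = Counter(categories)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def simulate(tendency, n_categories=5, steps=200, seed=0):
    """Toy recommendation loop: the recommender samples a category in
    proportion to the user's current profile; a 'positive' agent reinforces
    what it is shown, a 'negative' agent resists it."""
    rng = random.Random(seed)
    prefs = [1.0] * n_categories  # uniform starting profile
    exposed = []
    for _ in range(steps):
        cat = rng.choices(range(n_categories), weights=prefs)[0]
        exposed.append(cat)
        if tendency == "positive":
            prefs[cat] += 1.0                        # accept -> narrows exposure
        else:
            prefs[cat] = max(prefs[cat] - 0.5, 0.1)  # resist -> keeps exposure broad
    return exposure_entropy(exposed)

def bubble_escape_potential(**kwargs):
    """Contrast the exposure diversity induced by opposite tendencies:
    a small gap means even a resisting user cannot diversify their feed."""
    return simulate("negative", **kwargs) - simulate("positive", **kwargs)

print(round(bubble_escape_potential(), 3))
```

Under this toy dynamic the positive agent's feed collapses toward one category while the negative agent's stays near-uniform, so the gap is positive; an actual BEP computation would run real recommendation models over synthetic user trajectories.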
- Government (0.93)
- Information Technology > Services (0.71)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.95)
- Information Technology > Data Science > Data Mining (0.93)
Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations
Spadea, Fernando, Seneviratne, Oshani
We present a lightweight neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems by adapting user-side Knowledge Graphs (KGs) at inference time. Instead of retraining models or relying on opaque heuristics, our method restructures a user's Personalized Knowledge Graph (PKG) to suppress feature co-occurrence patterns that reinforce Personalized Information Environments (PIEs), i.e., algorithmically induced filter bubbles that constrain content diversity. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations while preserving topical relevance. We introduce a family of symbolic adaptation strategies, including soft reweighting, hard inversion, and targeted removal of biased triples, and a client-side learning algorithm that optimizes their application per user. Experiments on a recipe recommendation benchmark show that personalized PKG adaptations significantly increase content novelty while maintaining recommendation quality, outperforming global adaptation and naive prompt-based methods.
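The three symbolic adaptation strategies named in the abstract (soft reweighting, hard inversion, targeted removal) can be sketched as operations on a weighted-triple view of the PKG. The representation below, a dict mapping (subject, predicate, object) triples to weights, and the function names are hypothetical simplifications for illustration; the paper's framework selects and applies such operations per user and feeds the adapted PKG into structured prompts.

```python
def soft_reweight(triples, feature, factor=0.5):
    """Soft reweighting: scale down triples mentioning an over-represented feature."""
    return {t: w * factor if feature in t else w for t, w in triples.items()}

def hard_invert(triples, feature):
    """Hard inversion: flip the weight of biased triples (w -> 1 - w)."""
    return {t: 1.0 - w if feature in t else w for t, w in triples.items()}

def remove_targeted(triples, feature):
    """Targeted removal: drop biased triples from the PKG entirely."""
    return {t: w for t, w in triples.items() if feature not in t}

# Hypothetical user-side PKG fragment for a recipe recommender.
pkg = {("user", "likes", "spicy"): 0.9,
       ("user", "likes", "vegan"): 0.4}
print(soft_reweight(pkg, "spicy"))
```

Each strategy trades off how aggressively it suppresses the reinforcing feature, which is what the client-side learning algorithm would tune per user.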
- North America > United States > New York > Rensselaer County > Troy (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Japan (0.04)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.88)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Semantic Networks (0.84)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.70)
Looking for Fairness in Recommender Systems
Recommender systems can be found everywhere today, shaping our everyday experience whenever we consume content, order food, buy groceries online, or even just read the news. Let's imagine we're building a recommender system to make content suggestions to users on social media. When thinking about fairness, it becomes clear that there are several perspectives to consider: the users asking for tailored suggestions, the content creators hoping for some limelight, and society at large, navigating the repercussions of algorithmic recommendations. A shared fairness concern across all three is the emergence of filter bubbles, a side-effect that takes place when recommender systems are almost "too good", making recommendations so tailored that users become inadvertently confined to a narrow set of opinions and themes and isolated from alternative ideas. From the user's perspective, this is akin to manipulation. From the small content creator's perspective, it is an obstacle denying them access to a whole range of potential fans. From society's perspective, the potential consequences are far-reaching, influencing collective opinions, social behavior, and political decisions. How can our recommender system be fine-tuned to avoid the creation of filter bubbles and ensure a more inclusive and diverse content landscape? Approaching this problem involves defining one or more performance metrics to represent diversity, and evaluating our recommender system's performance through the lens of fairness. By incorporating such a metric into our evaluation framework, we aim to strike a balance between personalized recommendations and the broader societal goal of fostering rich and varied cultures and points of view.
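One common way to make the "diversity metric" step concrete is intra-list diversity: the average pairwise dissimilarity of the items in a recommendation slate. The sketch below uses exact topic labels as the (assumed) dissimilarity signal; real systems would typically use embedding distances instead of label mismatch.

```python
from itertools import combinations

def intra_list_diversity(items, topic):
    """Average pairwise topic dissimilarity of a recommendation slate:
    1.0 = every pair covers different topics, 0.0 = a perfect bubble."""
    pairs = list(combinations(items, 2))
    return sum(topic[a] != topic[b] for a, b in pairs) / len(pairs)

# Hypothetical slate of four content items with topic labels.
topic = {"a": "politics", "b": "politics", "c": "sports", "d": "cooking"}
print(intra_list_diversity(["a", "b", "c", "d"], topic))  # 5 of 6 pairs differ
```

Optimizing relevance alone tends to push this score toward zero; tracking it alongside accuracy is the kind of balance the paragraph above describes.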
Mitigating Societal Cognitive Overload in the Age of AI: Challenges and Directions
Societal cognitive overload, driven by the deluge of information and complexity in the AI age, poses a critical challenge to human well-being and societal resilience. This paper argues that mitigating cognitive overload is not only essential for improving present-day life but also a crucial prerequisite for navigating the potential risks of advanced AI, including existential threats. We examine how AI exacerbates cognitive overload through various mechanisms, including information proliferation, algorithmic manipulation, automation anxieties, deregulation, and the erosion of meaning. The paper reframes the AI safety debate to center on cognitive overload, highlighting its role as a bridge between near-term harms and long-term risks. It concludes by discussing potential institutional adaptations, research directions, and policy considerations that arise from adopting an overload-resilient perspective on human-AI alignment, suggesting pathways for future exploration rather than prescribing definitive solutions.

We stand at a precipice. Human societies are increasingly struggling to process the sheer volume and complexity of information in the digital age, a condition dramatically amplified by the rapid proliferation of artificial intelligence (AI). While Toffler (1970) foresaw "future shock" from accelerating change and Eppler & Mengis (2004) and Bawden & Robinson (2009) analyzed individual information overload, Byung-Chul Han, in his critique of neoliberalism and technological domination (Han, 2017), argues that contemporary society faces a regime of technological domination that exploits and overwhelms the psyche. This exploitation and overwhelming of the psyche, now dramatically amplified by AI-driven information and complexity, elevates information overload to a systemic crisis: societal cognitive overload.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Singapore (0.04)
- Law (1.00)
- Government (1.00)
- Health & Medicine (0.68)
- Media > News (0.47)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Simulating Filter Bubble on Short-video Recommender System with Large Language Model Agents
Sukiennik, Nicholas, Wang, Haoyu, Zeng, Zailin, Gao, Chen, Li, Yong
An increasing reliance on recommender systems has led to concerns about the creation of filter bubbles on social media, especially on short-video platforms like TikTok. However, their formation is still not entirely understood due to the complex dynamics between recommendation algorithms and user feedback. In this paper, we aim to shed light on these dynamics using a large language model-based simulation framework. Our work employs real-world short-video data containing rich video content information and detailed user-agents to realistically simulate the recommendation-feedback cycle. Through large-scale simulations, we demonstrate that LLMs can replicate real-world user-recommender interactions, uncovering key mechanisms driving filter bubble formation. We identify critical factors, such as demographic features and category attraction, that exacerbate content homogenization. To mitigate this, we design and test interventions including various cold-start and feedback weighting strategies, showing measurable reductions in filter bubble effects. Our framework enables rapid prototyping of recommendation strategies, offering actionable solutions to enhance content diversity in real-world systems. Furthermore, we analyze how LLM-inherent biases may propagate through recommendations, proposing safeguards to promote equity for vulnerable groups, such as women and low-income populations. By examining the interplay between recommendation and LLM agents, this work advances a deeper understanding of algorithmic bias and provides practical tools to promote inclusive digital spaces.
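A feedback-weighting intervention of the kind tested in this line of work can be sketched as down-weighting candidate items from categories the user has already been exposed to repeatedly. The function name, the (1 - penalty)^n decay, and the data shapes below are illustrative assumptions, not the paper's exact strategy.

```python
from collections import Counter

def feedback_weighted_scores(candidates, history, penalty=0.3):
    """candidates: {item: (relevance_score, category)};
    history: list of categories the user has already been shown.
    Each prior exposure to a category multiplies its items' scores
    by (1 - penalty), nudging the slate toward unseen categories."""
    seen = Counter(history)
    return {item: score * (1 - penalty) ** seen[cat]
            for item, (score, cat) in candidates.items()}

# Hypothetical candidates: a dance clip the user's history already favours
# and a news clip from an unseen category.
cands = {"v1": (0.9, "dance"), "v2": (0.8, "news")}
print(feedback_weighted_scores(cands, ["dance", "dance"]))
```

After two "dance" exposures the dance clip's score drops below the news clip's, so a top-1 ranker would switch categories; tuning `penalty` trades homogenization against relevance.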
- Europe > Norway > Norwegian Sea (0.24)
- North America > United States > Texas > Travis County > Austin (0.04)
- South America > Brazil (0.04)
- (6 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.99)
Comparable Corpora: Opportunities for New Research Directions
Most conference papers present new results, but this paper will focus more on opportunities for the audience to make their own contributions. This paper is intended to challenge the community to think more broadly about what we can do with comparable corpora. We will start with a review of the history, and then suggest new directions for future research. This was a keynote at BUCC-2025, a workshop associated with Coling-2025.
- Asia > China > Hong Kong (0.05)
- North America > United States > New York (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- (12 more...)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Machine Translation (0.96)
- Information Technology > Communications (0.94)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems
Ahn, Yongsu, Wolter, Quinn K, Dick, Jonilyn, Dick, Janet, Lin, Yu-Ru
Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
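Of the harms such a tool visualizes, miscalibration has a standard quantitative reading: the divergence between the topic mix of a user's history and the topic mix of their recommendations (cf. Steck's calibrated-recommendations formulation). The sketch below uses plain KL divergence in nats over aligned topic distributions; the tool's own metric and visual encoding may differ.

```python
from math import log

def miscalibration(p_hist, p_rec, eps=1e-9):
    """KL divergence between a user's historical topic distribution and
    the topic distribution of their recommendations; 0 = perfectly
    calibrated, larger = the feed drifts from the user's actual mix."""
    return sum(p * log(p / max(q, eps)) for p, q in zip(p_hist, p_rec) if p > 0)

# Hypothetical two-topic user: 60/40 history vs. a 90/10 recommendation feed.
print(round(miscalibration([0.6, 0.4], [0.9, 0.1]), 3))
```

A counterfactual explanation in this setting would show how the score changes when individual interactions are removed from the history.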
- North America > Mexico (0.04)
- Europe > Portugal (0.04)
- Asia > Taiwan (0.04)
- Asia > South Korea (0.04)
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.46)
A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
Pappalardo, Luca, Ferragina, Emanuele, Citraro, Salvatore, Cornacchia, Giuliano, Nanni, Mirco, Rossetti, Giulio, Gezici, Gizem, Giannotti, Fosca, Lalli, Margherita, Gambetta, Daniele, Mauro, Giovanni, Morini, Virginia, Pansanella, Valentina, Pedreschi, Dino
Recommendation systems and assistants (from now on, recommenders) - algorithms suggesting items or providing solutions based on users' preferences or requests [99, 105, 141, 166] - influence, through online platforms, most actions of our day-to-day life. For example, recommendations on social media suggest new social connections, those on online retail platforms guide users' product choices, navigation services offer routes to desired destinations, and generative AI platforms produce content based on users' requests. Unlike other AI tools, such as medical diagnostic support systems, robotic vision systems, or autonomous driving, which assist in specific tasks or functions, recommenders are ubiquitous in online platforms, shaping our decisions and interactions instantly and profoundly. The influence recommenders exert on users' behaviour may generate long-lasting and often unintended effects on human-AI ecosystems [131], such as amplifying political radicalisation processes [82], increasing CO2 emissions in the environment [36], and amplifying inequality, biases and discriminations [120]. The interaction between humans and recommenders has been examined in various fields using different nomenclatures, research methods and datasets, often producing incongruent findings.
- Europe > Portugal > Lisbon > Lisbon (0.14)
- Europe > Italy > Tuscany > Pisa Province > Pisa (0.04)
- North America > United States > Virginia (0.04)
- (17 more...)
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Transportation > Passenger (1.00)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- (9 more...)
- Information Technology > Information Management > Search (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Communications > Networks (1.00)
- (4 more...)