A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
Pappalardo, Luca, Ferragina, Emanuele, Citraro, Salvatore, Cornacchia, Giuliano, Nanni, Mirco, Rossetti, Giulio, Gezici, Gizem, Giannotti, Fosca, Lalli, Margherita, Gambetta, Daniele, Mauro, Giovanni, Morini, Virginia, Pansanella, Valentina, Pedreschi, Dino
Recommendation systems and assistants (from now on, recommenders) - algorithms that suggest items or provide solutions based on users' preferences or requests [99, 105, 141, 166] - influence most actions of our day-to-day lives through online platforms. For example, recommendations on social media suggest new social connections, those on online retail platforms guide users' product choices, navigation services offer routes to desired destinations, and generative AI platforms produce content based on users' requests. Unlike other AI tools that assist in specific tasks or functions, such as medical diagnostic support systems, robotic vision systems, or autonomous driving systems, recommenders are ubiquitous in online platforms, shaping our decisions and interactions instantly and profoundly. The influence recommenders exert on users' behaviour may generate long-lasting and often unintended effects on human-AI ecosystems [131], such as amplifying political radicalisation processes [82], increasing CO2 emissions [36], and exacerbating inequality, biases and discrimination [120]. The interaction between humans and recommenders has been examined in various fields using different nomenclatures, research methods and datasets, often producing incongruent findings.
From Perils to Possibilities: Understanding how Human (and AI) Biases affect Online Fora
Morini, Virginia, Pansanella, Valentina, Abramski, Katherine, Cau, Erica, Failla, Andrea, Citraro, Salvatore, Rossetti, Giulio
Social media platforms are online fora where users engage in discussions, share content, and build connections. This review explores the dynamics of social interactions, user-generated content, and biases within the context of social media analysis (examining works that use tools from complex network analysis and natural language processing) through the lens of three key points of view: online debates, online support, and human-AI interactions. On the one hand, we delineate the phenomenon of online debates, where polarization, misinformation, and echo chamber formation often proliferate, driven by algorithmic biases and extreme homophily. On the other hand, we explore the emergence of online support groups through users' self-disclosure and social support mechanisms. Online debates and support mechanisms present both perils and possibilities within social media: the perils of segregated communities and polarized debates, and the possibilities of empathy narratives and self-help groups. This dichotomy also extends to a third perspective: users' reliance on AI-generated content, such as that produced by Large Language Models, which can manifest both human biases hidden in training sets and non-human biases that emerge from their artificial neural architectures. Analyzing interdisciplinary approaches, we aim to deepen the understanding of the complex interplay between social interactions, user-generated content, and biases within the realm of social media ecosystems.
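To make the homophily-driven segregation mechanism mentioned above concrete, the following is a minimal illustrative sketch, not taken from the surveyed papers: a toy model in which nodes hold fixed opinions and repeatedly rewire ties away from dissimilar neighbours toward like-minded ones, a simplified proxy for echo chamber formation. All function names, parameters (e.g. the tolerance threshold), and numeric values are hypothetical choices for illustration.

```python
# Illustrative sketch (assumption, not the surveyed methodology): homophilic
# rewiring on a random graph as a toy proxy for echo chamber formation.
import random
import networkx as nx


def simulate_homophilic_rewiring(n=200, p=0.05, steps=5000, tolerance=0.3, seed=42):
    """Rewire edges toward opinion-similar nodes and report a segregation proxy."""
    rng = random.Random(seed)
    g = nx.gnp_random_graph(n, p, seed=seed)
    # Each node holds a static opinion in [0, 1].
    opinions = {v: rng.uniform(0.0, 1.0) for v in g.nodes}

    for _ in range(steps):
        u = rng.choice(list(g.nodes))
        if g.degree(u) == 0:
            continue
        v = rng.choice(list(g.neighbors(u)))
        # If the neighbour's opinion is too distant, drop the tie and
        # reconnect to a randomly chosen like-minded non-neighbour.
        if abs(opinions[u] - opinions[v]) > tolerance:
            candidates = [w for w in g.nodes
                          if w != u and not g.has_edge(u, w)
                          and abs(opinions[u] - opinions[w]) <= tolerance]
            if candidates:
                g.remove_edge(u, v)
                g.add_edge(u, rng.choice(candidates))

    # Segregation proxy: mean opinion distance across surviving edges
    # (lower values indicate more opinion-homogeneous neighbourhoods).
    edge_gap = sum(abs(opinions[a] - opinions[b]) for a, b in g.edges) / g.number_of_edges()
    return g, opinions, edge_gap


if __name__ == "__main__":
    _, _, gap = simulate_homophilic_rewiring()
    print(f"Mean opinion gap across edges after rewiring: {gap:.3f}")
```

Under these assumptions, the mean opinion gap across edges shrinks as rewiring proceeds, mimicking how homophily alone can segregate a network into like-minded clusters even without algorithmic curation.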