Personalisation



A Multilingual, Large-Scale Study of the Interplay between LLM Safeguards, Personalisation, and Disinformation

Leite, João A., Arora, Arnav, Gargova, Silvia, Luz, João, Sampaio, Gustavo, Roberts, Ian, Scarton, Carolina, Bontcheva, Kalina

arXiv.org Artificial Intelligence

While Large Language Models (LLMs) have made agentic AI, chatbots, and other intelligent applications possible, they have also enabled the affordable creation of highly convincing AI-generated disinformation (Bontcheva et al., 2024), which poses a systemic risk to democratic stability and global security (VIGINUM, 2025; Bengio, 2025). Initially, AI-generated texts suffered from linguistic mistakes and were thus more easily detectable by humans. However, modern LLMs, particularly instruction-tuned models, have significantly improved at producing outputs that are indistinguishable from human-written text (Spitale et al., 2023; Heppell et al., 2024). These advances have resulted in their misuse for generating persuasive disinformation narratives, including political manipulation, health disinformation, conspiracy propagation, and Foreign Information Manipulation and Interference (FIMI) (Vykopal et al., 2024; Chen and Shu, 2024a; Barman et al., 2024; Chen and Shu, 2024b; Heppell et al., 2024; VIGINUM, 2025). While there is a growing body of research on the generation and detection of LLM-produced disinformation (Chen and Shu, 2024a; Lucas et al., 2023; Vykopal et al., 2024; Heppell et al., 2024), a critical aspect remains largely unstudied: whether LLMs are capable of generating fluent and convincing personalised disinformation (i.e., disinformation narratives tailored to specific audiences) in multiple languages and at scale. The few prior studies on AI-generated personalised disinformation are limited to English and address a very narrow set of personas (e.g., students, parents) (Zugecova et al., 2024). Crucially, prior work has not yet examined whether LLMs can adapt disinformation to country-specific linguistic and cultural contexts in multiple languages.


Human-in-the-loop Optimisation in Robot-assisted Gait Training

Christou, Andreas, Sochopoulos, Andreas, Lister, Elliot, Vijayakumar, Sethu

arXiv.org Artificial Intelligence

Wearable robots offer a promising solution for quantitatively monitoring gait and providing systematic, adaptive assistance to promote patient independence and improve gait. However, due to significant interpersonal and intrapersonal variability in walking patterns, it is important to design robot controllers that can adapt to the unique characteristics of each individual. This paper investigates the potential of human-in-the-loop optimisation (HILO) to deliver personalised assistance in gait training. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) was employed to continuously optimise an assist-as-needed controller of a lower-limb exoskeleton. Six healthy individuals participated in a two-day experiment. Our results suggest that while the CMA-ES appears to converge to a unique set of stiffnesses for each individual, no measurable impact on the subjects' performance was observed during the validation trials. These findings highlight the impact of human-robot co-adaptation and human behavioural variability, whose effect may be greater than the potential benefits of personalising rule-based assistive controllers. Our work contributes to understanding the limitations of current personalisation approaches in exoskeleton-assisted gait rehabilitation and identifies key challenges for effective implementation of human-in-the-loop optimisation in this domain.
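To make the optimisation loop concrete, here is a minimal sketch of evolutionary stiffness tuning. It uses a simplified (mu, lambda) evolution strategy standing in for full CMA-ES (no covariance adaptation), and the cost function, joint names, and target stiffness values are entirely hypothetical stand-ins for the human-in-the-loop performance measure used in the paper:

```python
import numpy as np

def gait_cost(stiffness, target=np.array([40.0, 25.0, 15.0])):
    """Hypothetical cost: squared deviation from a made-up optimal
    stiffness profile [hip, knee, ankle] in Nm/rad. In HILO this would
    be replaced by a performance metric measured on the human subject."""
    return float(np.sum((np.asarray(stiffness) - target) ** 2))

def optimise_stiffness(cost, x0, sigma=5.0, popsize=12, generations=40, seed=0):
    """Simplified (mu, lambda) evolution strategy: sample candidates around
    the current mean, keep the best half, recombine, shrink the step size."""
    rng = np.random.default_rng(seed)
    mean = np.array(x0, dtype=float)
    mu = popsize // 2
    for _ in range(generations):
        candidates = mean + sigma * rng.standard_normal((popsize, mean.size))
        candidates = np.clip(candidates, 0.0, None)  # stiffness is non-negative
        ranked = sorted(candidates, key=cost)
        mean = np.mean(ranked[:mu], axis=0)          # recombine the elite
        sigma *= 0.95                                # simple step-size decay
    return mean

best = optimise_stiffness(gait_cost, x0=[20.0, 20.0, 20.0])
```

In the actual experiment each cost evaluation is a walking trial, which is why sample efficiency and human co-adaptation matter so much more than in this synthetic setting.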


FedFusion: Federated Learning with Diversity- and Cluster-Aware Encoders for Robust Adaptation under Label Scarcity

Kahenga, Ferdinand, Bagula, Antoine, Sello, Patrick, Das, Sajal K.

arXiv.org Artificial Intelligence

Federated learning in practice must contend with heterogeneous feature spaces, severe non-IID data, and scarce labels across clients. We present FedFusion, a federated transfer-learning framework that unifies domain adaptation and frugal labelling with diversity-/cluster-aware encoders (DivEn, DivEn-mix, DivEn-c). Labelled teacher clients guide learner clients via confidence-filtered pseudo-labels and domain-adaptive transfer, while clients maintain personalised encoders tailored to local data. To preserve global coherence under heterogeneity, FedFusion employs similarity-weighted classifier coupling (with optional cluster-wise averaging), mitigating dominance by data-rich sites and improving minority-client performance. The frugal-labelling pipeline combines self-/semi-supervised pretext training with selective fine-tuning, reducing annotation demands without sharing raw data. Across tabular and imaging benchmarks under IID, non-IID, and label-scarce regimes, FedFusion consistently outperforms state-of-the-art baselines in accuracy, robustness, and fairness while maintaining comparable communication and computation budgets. These results show that harmonising personalisation, domain adaptation, and label efficiency is an effective recipe for robust federated learning under real-world constraints.
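As a rough illustration of the similarity-weighted classifier coupling idea, the sketch below averages each client's classifier head with its peers in proportion to pairwise cosine similarity, so no single data-rich site dominates. This is one plausible reading of the mechanism under stated assumptions, not FedFusion's actual implementation; the function name and toy weights are hypothetical:

```python
import numpy as np

def similarity_weighted_coupling(client_heads):
    """Couple per-client classifier heads: each client's head is pulled
    toward peers in proportion to cosine similarity, keeping personalised
    models coherent without letting any one client dominate."""
    W = np.array(client_heads, dtype=float)         # (n_clients, n_params)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    unit = W / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T                             # pairwise cosine similarity
    sim = np.clip(sim, 0.0, None)                   # ignore dissimilar (negative) peers
    weights = sim / sim.sum(axis=1, keepdims=True)  # row-normalise per client
    return weights @ W                              # personalised coupled heads

# Two similar clients and one outlier: the outlier stays mostly unchanged.
heads = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
coupled = similarity_weighted_coupling(heads)
```

The cluster-wise variant mentioned in the abstract would, presumably, first group clients and then average within each cluster rather than over all pairs.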


PREF: Reference-Free Evaluation of Personalised Text Generation in LLMs

Fu, Xiao, Rahmani, Hossein A., Wu, Bin, Ramos, Jerome, Yilmaz, Emine, Lipani, Aldo

arXiv.org Artificial Intelligence

Personalised text generation is essential for user-centric information systems, yet most evaluation methods overlook the individuality of users. We introduce PREF, a Personalised Reference-free Evaluation Framework that jointly measures general output quality and user-specific alignment without requiring gold personalised references. PREF operates in a three-step pipeline: (1) a coverage stage uses a large language model (LLM) to generate a comprehensive, query-specific guideline covering universal criteria such as factuality, coherence, and completeness; (2) a preference stage re-ranks and selectively augments these factors using the target user's profile, stated or inferred preferences, and context, producing a personalised evaluation rubric; and (3) a scoring stage applies an LLM judge to rate candidate answers against this rubric, ensuring baseline adequacy while capturing subjective priorities. This separation of coverage from preference improves robustness, transparency, and reusability, and allows smaller models to approximate the personalised quality of larger ones. Experiments on the PrefEval benchmark, including implicit preference-following tasks, show that PREF achieves higher accuracy, better calibration, and closer alignment with human judgments than strong baselines. By enabling scalable, interpretable, and user-aligned evaluation, PREF lays the groundwork for more reliable assessment and development of personalised language generation systems.
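The three-stage pipeline can be sketched end to end with deterministic stand-ins for the LLM calls. Everything here is a toy: the fixed criteria list, the re-ranking rule, and the `toy_judge` scorer are hypothetical placeholders for what PREF delegates to LLMs:

```python
def coverage_stage(query):
    """Stage 1 (sketch): in PREF an LLM generates a query-specific
    guideline; here, a fixed list of universal criteria."""
    return ["factuality", "coherence", "completeness"]

def preference_stage(criteria, user_profile):
    """Stage 2: augment with the user's stated priorities, then re-rank
    the universal criteria so prioritised ones come first (stable sort)."""
    preferred = [c for c in user_profile["priorities"] if c not in criteria]
    ranked = sorted(criteria, key=lambda c: c not in user_profile["priorities"])
    return preferred + ranked   # personalised rubric

def scoring_stage(answer, rubric, judge):
    """Stage 3: a judge rates the answer on each rubric item; `judge` is
    a stand-in callable returning a 0-1 score per criterion."""
    scores = {c: judge(answer, c) for c in rubric}
    return sum(scores.values()) / len(scores), scores

def toy_judge(answer, criterion):
    # Toy scorer: longer answers rate higher on completeness; flat otherwise.
    return min(1.0, len(answer) / 100) if criterion == "completeness" else 0.8

rubric = preference_stage(coverage_stage("q"), {"priorities": ["brevity", "coherence"]})
overall, per_criterion = scoring_stage("a short answer", rubric, toy_judge)
```

The point of the separation is visible even in the toy: the coverage output is user-independent and reusable, while only the cheap preference and scoring stages vary per user.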



Evaluating User Experience in Conversational Recommender Systems: A Systematic Review Across Classical and LLM-Powered Approaches

Mahmud, Raj, Wu, Yufeng, Sawad, Abdullah Bin, Berkovsky, Shlomo, Prasad, Mukesh, Kocaballi, A. Baki

arXiv.org Artificial Intelligence

Conversational Recommender Systems (CRSs) are receiving growing research attention across domains, yet their user experience (UX) evaluation remains limited. Existing reviews largely overlook empirical UX studies, particularly in adaptive and large language model (LLM)-based CRSs. To address this gap, we conducted a systematic review following PRISMA guidelines, synthesising 23 empirical studies published between 2017 and 2025. We analysed how UX has been conceptualised, measured, and shaped by domain, adaptivity, and LLM use. Our findings reveal persistent limitations: post hoc surveys dominate, turn-level affective UX constructs are rarely assessed, and adaptive behaviours are seldom linked to UX outcomes. LLM-based CRSs introduce further challenges, including epistemic opacity and verbosity, yet evaluations infrequently address these issues. We contribute a structured synthesis of UX metrics, a comparative analysis of adaptive and non-adaptive systems, and a forward-looking agenda for LLM-aware UX evaluation. These findings support the development of more transparent, engaging, and user-centred CRS evaluation practices.


Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness

Dohnány, Sebastian, Kurth-Nelson, Zeb, Spens, Eleanor, Luettgau, Lennart, Reid, Alastair, Gabriel, Iason, Summerfield, Christopher, Shanahan, Murray, Nour, Matthew M

arXiv.org Artificial Intelligence

Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship in contexts of widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile we need to consider the interaction between human cognitive and emotional biases, and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.


Personalised Explanations in Long-term Human-Robot Interactions

Gebellí, Ferran, Garrell, Anaís, Habekost, Jan-Gerrit, Lemaignan, Séverin, Wermter, Stefan, Ros, Raquel

arXiv.org Artificial Intelligence

In the field of Human-Robot Interaction (HRI), a fundamental challenge is to facilitate human understanding of robots. The emerging domain of eXplainable HRI (XHRI) investigates methods to generate explanations and evaluate their impact on human-robot interactions. Previous works have highlighted the need to personalise the level of detail of these explanations to enhance usability and comprehension. Our paper presents a framework designed to update and retrieve user knowledge-memory models, allowing for adapting the explanations' level of detail while referencing previously acquired concepts. Three architectures based on our proposed framework that use Large Language Models (LLMs) are evaluated in two distinct scenarios: a hospital patrolling robot and a kitchen assistant robot. Experimental results demonstrate that a two-stage architecture, which first generates an explanation and then personalises it, is the framework architecture that effectively reduces the level of detail only when there is related user knowledge.
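The two-stage generate-then-personalise architecture can be sketched with a simple knowledge-memory model: generate a full-detail explanation, drop detail for concepts the user already knows, then record what was just explained. The example event, concepts, and sentences are hypothetical, and the generation stage stands in for what the paper implements with an LLM:

```python
def generate_explanation(event):
    """Stage 1 (sketch): produce a full-detail explanation as
    (concept, sentence) pairs; in the paper this is an LLM call."""
    return [
        ("battery", "I am returning to the dock because my battery is low."),
        ("dock", "The dock is the charging station by the entrance."),
        ("patrol", "I will resume the patrol route once charged."),
    ]

def personalise(explanation, user_memory):
    """Stage 2: keep only sentences for concepts the user does not yet
    know, then update the user's knowledge-memory model."""
    kept = [sentence for concept, sentence in explanation
            if concept not in user_memory]
    user_memory.update(concept for concept, _ in explanation)
    return kept

memory = {"dock"}   # this user already knows what the dock is
first = personalise(generate_explanation("low_battery"), memory)
second = personalise(generate_explanation("low_battery"), memory)
```

Over repeated interactions the memory fills in, so explanations naturally shorten, which mirrors the long-term adaptation the paper evaluates.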


Agentic Personalisation of Cross-Channel Marketing Experiences

Abboud, Sami, Hanna, Eleanor, Jeunen, Olivier, Raheja, Vineesha, Wheeler, Schaun

arXiv.org Artificial Intelligence

Consumer applications provide ample opportunities to surface and communicate various forms of content to users, from promotional campaigns for new features or subscriptions, to evergreen nudges for engagement, to personalised recommendations, delivered across e-mails, push notifications, and in-app surfaces. The conventional approach to orchestrating such communication relies heavily on labour-intensive manual marketer work and inhibits effective personalisation of content, timing, frequency, and copywriting. We formulate this task under a sequential decision-making framework, in which we aim to optimise a modular decision-making policy that maximises incremental engagement for any funnel event. Our approach leverages a Difference-in-Differences design for Individual Treatment Effect estimation, and Thompson sampling to balance the explore-exploit trade-off. We present results from a multi-service application, where our methodology has produced significant increases in a variety of goal events across several product features and is currently deployed across 150 million users.
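The explore-exploit component can be illustrated with Beta-Bernoulli Thompson sampling over candidate actions. The action names, engagement rates, and simulation below are invented for illustration; in the paper the reward would be the estimated incremental engagement from the Difference-in-Differences design, not a raw click:

```python
import random

class ThompsonChannelPolicy:
    """Beta-Bernoulli Thompson sampling over candidate actions
    (e.g. channel x timing): sample a plausible engagement rate for each
    action from its posterior and act greedily on the samples."""
    def __init__(self, actions, seed=7):
        self.rng = random.Random(seed)
        self.params = {a: [1, 1] for a in actions}   # Beta(alpha, beta) priors

    def choose(self):
        samples = {a: self.rng.betavariate(alpha, beta)
                   for a, (alpha, beta) in self.params.items()}
        return max(samples, key=samples.get)

    def update(self, action, engaged):
        self.params[action][0 if engaged else 1] += 1

policy = ThompsonChannelPolicy(["email_am", "push_pm", "inapp_banner"])
true_rates = {"email_am": 0.05, "push_pm": 0.15, "inapp_banner": 0.02}
sim = random.Random(0)
for _ in range(3000):
    action = policy.choose()
    policy.update(action, sim.random() < true_rates[action])
```

After a few thousand interactions the posterior concentrates on the best-performing action while still occasionally probing the others, which is the trade-off the abstract refers to.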