Retrieval-Augmented Review Generation for Poisoning Recommender Systems

Shiyi Yang, Xinshu Li, Guanglin Zhou, Chen Wang, Xiwei Xu, Liming Zhu, Lina Yao

arXiv.org Artificial Intelligence 

Abstract--Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks, in which malicious actors inject fake user profiles, each containing a set of carefully designed fake ratings, to manipulate recommendations. Due to security and privacy constraints in practice, attackers typically possess limited knowledge of the victim system and thus must craft profiles that transfer across black-box RSs. To maximize the attack impact, the profiles must also remain imperceptible. However, generating such high-quality profiles with restricted resources is challenging. Some works suggest incorporating fake textual reviews to strengthen the profiles; yet the poor quality of these reviews largely undermines the attack effectiveness and imperceptibility under practical settings. To tackle the above challenges, in this paper we propose to enhance the quality of the review text by harnessing the in-context learning (ICL) capabilities of multimodal foundation models. To this end, we introduce a demonstration retrieval algorithm and a text style transfer strategy to augment naive ICL. Specifically, we propose a novel practical attack framework named RAGAN to generate high-quality fake user profiles, which can provide insights into the robustness of RSs. The profiles are generated by a jailbreaker and collaboratively optimized by an instructional agent and a guardian to improve attack transferability and imperceptibility. Comprehensive experiments on various real-world datasets demonstrate that RAGAN achieves state-of-the-art poisoning attack performance.

Impact Statement--Recommender systems play a vital role across e-commerce, online content, and social media platforms, benefiting both users and businesses through personalized suggestions and improved engagement. These advantages also create incentives for malicious actors to exploit them.
Recent studies reveal that modern recommender systems are vulnerable to data poisoning attacks, leading to unfair competition and loss of user trust. However, existing attack methods often have limited practicality, overestimating system robustness under real-world constraints.
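The abstract names a demonstration retrieval algorithm for augmenting naive ICL but does not specify it here. As a rough illustration only, a common form of demonstration retrieval selects the top-k candidate demonstrations whose embeddings are most similar to the query under cosine similarity; the function name, toy vectors, and use of NumPy below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def retrieve_demonstrations(query_vec, demo_vecs, k=2):
    """Return indices of the k demonstrations most similar to the query.

    Similarity is cosine similarity between embedding vectors; in a real
    pipeline the embeddings would come from a text encoder applied to the
    query and to candidate review demonstrations.
    """
    query = np.asarray(query_vec, dtype=float)
    demos = np.asarray(demo_vecs, dtype=float)
    # Normalize to unit length so dot products equal cosine similarities.
    query = query / np.linalg.norm(query)
    demos = demos / np.linalg.norm(demos, axis=1, keepdims=True)
    sims = demos @ query
    # Highest-similarity demonstrations first.
    return np.argsort(-sims)[:k].tolist()

# Toy example: demo 0 is aligned with the query; demo 2 is the next closest.
demos = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(retrieve_demonstrations([1.0, 0.0], demos, k=2))  # → [0, 2]
```

The retrieved demonstrations would then be placed in the foundation model's prompt as in-context examples before generating a fake review.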