Reinforced Strategy Optimization for Conversational Recommender Systems via Network-of-Experts

Xiaoyan Zhao, Ming Yan, Yang Zhang, Yang Deng, Jian Wang, Fengbin Zhu, Yilun Qiu, Hong Cheng, Tat-Seng Chua

arXiv.org Artificial Intelligence 

Abstract--Conversational Recommender Systems (CRSs) aim to provide personalized recommendations through multi-turn natural language interactions with users. Given the strong interaction and reasoning skills of Large Language Models (LLMs), leveraging LLMs for CRSs has recently emerged as a promising direction. However, existing LLM-based methods often lack explicit optimization of interaction strategies, instead relying on unified prompts and the LLM's internal knowledge to decide how to interact, which can lead to suboptimal outcomes. In this paper, we propose a novel Reinforced Strategy Optimization (RSO) method for CRS, which decomposes the process of generating strategy-driven response decisions into macro-level strategy planning and micro-level strategy adaptation through a network-of-experts architecture. At the macro level, a Planner expert selects macro-level interaction strategies (e.g., recommend, explain, encourage). At the micro level, an Actor expert generates detailed responses conditioned on the selected macro-level strategy, guided by auxiliary experts that provide complementary information such as user preferences and factual grounding. This hierarchical decomposition disentangles the optimization of the different sub-tasks involved in CRS response generation, enabling more tractable learning at each level. To address the scarcity of high-quality multi-turn training data, we formulate strategy learning as a reinforcement learning problem, guided by an LLM-based reward model to achieve automatic strategy exploration. Extensive experiments show that RSO significantly improves interaction performance compared to state-of-the-art baselines, demonstrating the effectiveness of explicit hierarchical strategy optimization for CRS.
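The macro/micro decomposition in the abstract can be sketched as a minimal pipeline. This is an illustrative sketch only: the strategy set follows the abstract's examples, but the rule-based `plan_strategy` and template-based `act` stubs are hypothetical placeholders standing in for the RL-trained Planner and the LLM-based Actor and auxiliary experts of RSO.

```python
from dataclasses import dataclass

# Macro-level strategy set (examples taken from the abstract).
STRATEGIES = ("recommend", "explain", "encourage")

@dataclass
class Turn:
    user_utterance: str
    preference_summary: str  # would come from an auxiliary preference expert

def plan_strategy(turn: Turn) -> str:
    """Macro-level Planner: select an interaction strategy for this turn.
    Stub heuristic; in RSO this is an LLM expert optimized with RL."""
    text = turn.user_utterance.lower()
    if "why" in text or "?" in text:
        return "explain"
    if turn.preference_summary:
        return "recommend"
    return "encourage"

def act(turn: Turn, strategy: str) -> str:
    """Micro-level Actor: generate a response conditioned on the chosen
    strategy. Templates stand in for LLM generation with expert guidance."""
    templates = {
        "recommend": f"Given your interest in {turn.preference_summary}, you might like ...",
        "explain": "I suggested that because ...",
        "encourage": "Tell me more about what you enjoy!",
    }
    return templates[strategy]

def respond(turn: Turn) -> tuple[str, str]:
    """One conversational turn: plan at the macro level, act at the micro level."""
    strategy = plan_strategy(turn)
    return strategy, act(turn, strategy)
```

Decoupling the two calls is the point of the hierarchy: the Planner can be rewarded (e.g., by an LLM-based reward model) for choosing good strategies independently of how the Actor realizes them in text.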
Conversational Recommender Systems (CRSs) [3]-[9] aim to interact with users through natural language conversation, elicit their preferences, and refine recommendations to maximize user satisfaction and acceptance.

X. Zhao and H. Cheng are with The Chinese University of Hong Kong, Hong Kong, China. M. Yan is with the University of Science and Technology of China, Hefei, China. Y. Qiu and T.-S. Chua are with the National University of Singapore, Singapore.