Customer-R1: Personalized Simulation of Human Behaviors via RL-based LLM Agent in Online Shopping
Ziyi Wang, Yuxuan Lu, Yimeng Zhang, Jing Huang, Dakuo Wang
–arXiv.org Artificial Intelligence
Simulating step-wise human behavior with Large Language Models (LLMs) has become an emerging research direction, enabling applications in various practical domains. While prior methods, including prompting, supervised fine-tuning (SFT), and reinforcement learning (RL), have shown promise in modeling step-wise behavior, they primarily learn a population-level policy without conditioning on a user's persona, yielding generic rather than personalized simulations. In this work, we pose a critical question: how can LLM agents better simulate personalized user behavior? We introduce Customer-R1, an RL-based method for personalized, step-wise user behavior simulation in online shopping environments. Our policy is conditioned on an explicit persona, and we optimize next-step rationale and action generation via action correctness reward signals. Experiments on the OPeRA dataset demonstrate that Customer-R1 not only significantly outperforms prompting and SFT-based baselines on next-action prediction tasks, but also better matches users' action distribution, indicating higher fidelity in personalized behavior simulation.
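The abstract's "action correctness reward" can be pictured as comparing the agent's predicted next action against the logged user action. The sketch below is a minimal illustration of that idea; the function and field names (`type`, `target`) are assumptions for exposition, not the paper's actual API or reward definition.

```python
# Hypothetical sketch of an action-correctness reward: the persona-conditioned
# policy emits a rationale plus a next action, and the reward checks whether
# the predicted action matches the ground-truth action from the user log.
def action_correctness_reward(predicted: dict, ground_truth: dict) -> float:
    """Return 1.0 only when the predicted action matches the logged action."""
    # Match the action type first (e.g. "click", "search", "add_to_cart") ...
    if predicted.get("type") != ground_truth.get("type"):
        return 0.0
    # ... then the action's argument (e.g. the clicked element or search query).
    return 1.0 if predicted.get("target") == ground_truth.get("target") else 0.0
```

In an RL loop, this scalar would score each generated step so that policy optimization favors trajectories whose actions reproduce the individual user's behavior.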
Oct-21-2025
- Country:
- North America > United States > Michigan (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Information Technology > Services
- e-Commerce Services (0.72)
- Retail > Online (0.72)