CogDual: Enhancing Dual Cognition of LLMs via Reinforcement Learning with Implicit Rule-Based Rewards
Cheng Liu, Yifei Lu, Fanghua Ye, Jian Li, Xingyu Chen, Feiliang Ren, Zhaopeng Tu, Xiaolong Li
arXiv.org Artificial Intelligence
Role-Playing Language Agents (RPLAs) have emerged as a significant application direction for Large Language Models (LLMs). Existing approaches typically rely on prompt engineering or supervised fine-tuning to make models imitate character behaviors in specific scenarios, but they often neglect the underlying cognitive mechanisms driving these behaviors. Inspired by cognitive psychology, we introduce CogDual, a novel RPLA adopting a "cognize-then-respond" reasoning paradigm. By jointly modeling external situational awareness and internal self-awareness, CogDual generates responses with improved character consistency and contextual alignment. To further optimize performance, we employ reinforcement learning with two general-purpose reward schemes designed for open-domain text generation. Extensive experiments on the CoSER benchmark, as well as Cross-MR and LifeChoice, demonstrate that CogDual consistently outperforms existing baselines and generalizes effectively across diverse role-playing tasks.
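The abstract's "cognize-then-respond" paradigm can be illustrated as a two-stage prompting flow: the model first articulates the character's external situational awareness and internal self-awareness, and only then produces the in-character reply conditioned on that cognition. The sketch below is a minimal illustration of that idea; all function names, prompt wording, and the `generate` callable are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of a cognize-then-respond flow. The prompt text and
# function names are hypothetical, not from the CogDual paper.

def build_cognition_prompt(character: str, scene: str) -> str:
    """Stage 1: elicit the character's cognition before any reply."""
    return (
        f"You are role-playing {character}.\n"
        f"Scene: {scene}\n"
        "Before replying, describe (a) your external situational awareness "
        "and (b) your internal self-awareness (goals, emotions, persona)."
    )

def build_response_prompt(character: str, scene: str, cognition: str) -> str:
    """Stage 2: condition the in-character reply on the elicited cognition."""
    return (
        f"You are role-playing {character}.\n"
        f"Scene: {scene}\n"
        f"Your cognition: {cognition}\n"
        "Now reply in character, consistent with the cognition above."
    )

def cognize_then_respond(generate, character: str, scene: str) -> str:
    """Run both stages with any text-generation callable `generate`."""
    cognition = generate(build_cognition_prompt(character, scene))
    return generate(build_response_prompt(character, scene, cognition))
```

In this reading, the reinforcement-learning stage described in the abstract would then score the final reply (e.g., for character consistency), with the reward signal shaping both the cognition and the response.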
Jul-24-2025