Diverse Behavior Is What Game AI Needs: Generating Varied Human-Like Playing Styles Using Evolutionary Multi-Objective Deep Reinforcement Learning
Zheng, Yan; Shen, Ruimin; Hao, Jianye; Chen, Yinfeng; Fan, Changjie
Designing artificial intelligence for games (Game AI) has long been recognized as a notoriously challenging task in the game industry, as it mainly relies on manual design and requires extensive domain knowledge. More frustratingly, even after considerable effort, a satisfying Game AI is still hard to achieve by manual design due to the almost infinite search space. The recent success of deep reinforcement learning (DRL) sheds light on advancing automated game design, significantly reducing the need for human expert support. However, existing DRL algorithms mostly focus on training a Game AI to win the game rather than on the way it wins (its style). To bridge the gap, we introduce EMO-DRL, an end-to-end game design framework that leverages evolutionary algorithms, DRL, and multi-objective optimization (MOO) to perform intelligent and automatic game design. Firstly, EMO-DRL proposes style-oriented learning to bypass manual reward shaping in DRL and directly learn a Game AI with an expected style in an end-to-end fashion. On this basis, prioritized multi-objective optimization is introduced to achieve more diverse, natural, and human-like Game AIs. Large-scale evaluations on an Atari game and a commercial massively multiplayer online game are conducted. The results demonstrate that EMO-DRL, compared to existing algorithms, achieves better game designs in an intelligent and automatic way.
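To make the multi-objective optimization idea concrete, here is a minimal sketch (not the authors' implementation) of the basic selection primitive behind evolutionary MOO: a Pareto-dominance check and a non-dominated filter. The example objective vectors, e.g. a hypothetical (win rate, style score) pair per policy, are illustrative assumptions.

```python
# Illustrative sketch, assuming each candidate policy is scored by a vector
# of objectives to be maximized, e.g. (win rate, style score).

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximization):
    `a` is at least as good on every objective and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical scores for four policies: (win rate, style score).
policies = [(0.9, 0.2), (0.6, 0.8), (0.5, 0.5), (0.8, 0.7)]
print(pareto_front(policies))  # (0.5, 0.5) is dominated by (0.8, 0.7)
```

An evolutionary loop would repeatedly mutate policies, evaluate their objective vectors, and keep the Pareto front, yielding a spread of playing styles rather than a single win-maximizing agent.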
Oct-20-2019
- Country:
- Asia > China
- Tianjin Province > Tianjin (0.04)
- Zhejiang Province > Hangzhou (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Leisure & Entertainment > Games > Computer Games (1.00)