EXPLORER: Exploration-guided Reasoning for Textual Reinforcement Learning
Kinjal Basu, Keerthiram Murugesan, Subhajit Chaudhury, Murray Campbell, Kartik Talamadupula, Tim Klinger
Text-based games (TBGs) have emerged as an important collection of NLP tasks, requiring reinforcement learning (RL) agents to combine natural language understanding with reasoning. A key challenge for agents solving such tasks is to generalize across games and perform well on both seen and unseen objects. Purely deep-RL-based approaches may perform well on seen objects, but they fail to achieve the same performance on unseen objects. Commonsense-infused deep-RL agents may work better on unseen data; unfortunately, their policies are often neither interpretable nor easily transferable. To tackle these issues, we present EXPLORER, an exploration-guided reasoning agent for textual reinforcement learning. EXPLORER is neurosymbolic in nature: it relies on a neural module for exploration and a symbolic module for exploitation. It can also learn generalized symbolic policies and perform well on unseen data. Our experiments show that EXPLORER outperforms the baseline agents on Text-World cooking (TW-Cooking) and Text-World Commonsense (TWC) games.
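The neural-exploration / symbolic-exploitation split the abstract describes can be illustrated with a short sketch. The Python below is a minimal, hypothetical rendering of that control flow, not EXPLORER's actual code: the class names `NeuralExplorer` and `SymbolicPolicy`, the example rule, and the fallback logic are all assumptions made for illustration.

```python
import random

# Hypothetical sketch of a neurosymbolic action selector: exploit learned
# symbolic rules when one applies, otherwise fall back to neural exploration.

class NeuralExplorer:
    """Stand-in for a learned policy that scores admissible actions."""
    def act(self, observation, admissible_actions):
        # A real agent would score actions with a neural network; a random
        # choice keeps this sketch self-contained.
        return random.choice(admissible_actions)

class SymbolicPolicy:
    """Stand-in for learned symbolic rules mapping observations to actions."""
    def __init__(self):
        # Illustrative rule: if an apple is observed, put it in the fridge.
        # Generalized rules of this form are what let a symbolic policy
        # transfer to unseen objects of the same type.
        self.rules = [(lambda obs: "apple" in obs, "put apple in fridge")]

    def act(self, observation, admissible_actions):
        for condition, action in self.rules:
            if condition(observation) and action in admissible_actions:
                return action
        return None  # no rule fires; defer to exploration

def select_action(observation, admissible_actions, neural, symbolic, epsilon=0.2):
    """Exploit a symbolic rule when one applies; otherwise explore neurally."""
    if random.random() > epsilon:
        action = symbolic.act(observation, admissible_actions)
        if action is not None:
            return action
    return neural.act(observation, admissible_actions)

if __name__ == "__main__":
    obs = "You see an apple on the counter."
    actions = ["take apple", "put apple in fridge", "go north"]
    print(select_action(obs, actions, NeuralExplorer(), SymbolicPolicy()))
```

In this reading, interpretability and transfer come from the symbolic rules being human-readable and stated over object types rather than specific instances, while the neural module supplies coverage when no rule matches.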
arXiv.org Artificial Intelligence
Mar-15-2024
- Genre:
- Research Report > New Finding (0.67)
- Industry:
- Leisure & Entertainment > Games (0.93)
- Technology: