Neuro-Symbolic Reinforcement Learning with First-Order Logic
Kimura, Daiki, Ono, Masaki, Chaudhury, Subhajit, Kohita, Ryosuke, Wachi, Akifumi, Agravante, Don Joven, Tatsubori, Michiaki, Munawar, Asim, Gray, Alexander
Deep reinforcement learning (RL) methods often require many trials before convergence, and the trained policies offer no direct interpretability. To achieve fast convergence and an interpretable policy in RL, we propose a novel RL method for text-based games built on a recent neuro-symbolic framework called the Logical Neural Network, which can learn symbolic and interpretable rules in its differentiable network. The method first extracts first-order logical facts from the text observation and an external word-meaning network (ConceptNet), then trains a policy in the network with directly interpretable logical operators. Our experimental results show that RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods on a TextWorld benchmark.
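As a rough illustration of this pipeline (a sketch, not the authors' implementation), the snippet below maps a text observation to truth values of first-order predicates and scores one candidate action with a Łukasiewicz-style real-valued conjunction of the kind used in LNN operators. The extraction patterns, predicate names, and the rule are illustrative assumptions; a real system would also query ConceptNet for word-meaning facts.

# Sketch only: fact extraction from text plus a real-valued AND over a rule.
# Predicates and patterns below are hypothetical, not the paper's pipeline.
import re

def extract_facts(observation: str) -> dict:
    """Map a text observation to truth values of first-order predicates."""
    facts = {}
    if re.search(r"you are in the kitchen", observation, re.I):
        facts["at(player, kitchen)"] = 1.0
    if re.search(r"you see an apple", observation, re.I):
        facts["visible(apple)"] = 1.0
    return facts

def logical_and(a: float, b: float) -> float:
    """Łukasiewicz conjunction, a real-valued AND used in LNN-style logic."""
    return max(0.0, a + b - 1.0)

def score_take_apple(facts: dict) -> float:
    """Rule: take(apple) <- at(player, kitchen) AND visible(apple)."""
    return logical_and(facts.get("at(player, kitchen)", 0.0),
                       facts.get("visible(apple)", 0.0))

obs = "You are in the kitchen. You see an apple on the table."
print(score_take_apple(extract_facts(obs)))  # 1.0, so 'take apple' is preferred

Because the rule is an explicit logical expression over named predicates, the reason an action is chosen can be read directly from the satisfied antecedents, which is the interpretability property the abstract refers to.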
Reinforcement Learning with External Knowledge by using Logical Neural Networks
Kimura, Daiki, Chaudhury, Subhajit, Wachi, Akifumi, Kohita, Ryosuke, Munawar, Asim, Tatsubori, Michiaki, Gray, Alexander
Conventional deep reinforcement learning methods are sample-inefficient and usually require a large number of training trials before convergence. Since such methods operate on an unconstrained action set, they can waste trials on useless actions. A recent neuro-symbolic framework called the Logical Neural Network (LNN) can simultaneously provide key properties of both neural networks and symbolic logic: it functions as an end-to-end differentiable network that minimizes a novel contradiction loss to learn interpretable rules. In this paper, we utilize LNNs to define an inference graph using basic logical operations, such as AND and NOT, for faster convergence in reinforcement learning. Specifically, we propose an integrated method that enables model-free reinforcement learning to exploit external knowledge sources within an LNN-based logically constrained framework, for example as an action shield or a guide. Our results empirically demonstrate that our method converges faster than a model-free reinforcement learning method without such logical constraints.
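The following is a minimal sketch of the action-shielding idea under assumed predicate names and a toy knowledge base (not the paper's code): a small real-valued logic graph built from NOT and AND marks each action as allowed or not, and the resulting mask is applied to a model-free policy's action scores.

# Sketch only: a logic-based action shield over policy logits.
# The predicates, knowledge base, and logits are illustrative assumptions.
import numpy as np

def lnn_not(a):
    """Real-valued negation."""
    return 1.0 - a

def lnn_and(a, b):
    """Łukasiewicz conjunction, as used in LNN-style AND operators."""
    return np.maximum(0.0, a + b - 1.0)

# External knowledge: truth values of grounded predicates per action,
# e.g. "open door" is blocked and "go east" leads to a known hazard.
actions   = ["go east", "go west", "open door"]
blocked   = np.array([0.0, 0.0, 1.0])   # blocked(a)
hazardous = np.array([1.0, 0.0, 0.0])   # hazardous(a)

# Shield rule: allowed(a) <- NOT blocked(a) AND NOT hazardous(a)
allowed = lnn_and(lnn_not(blocked), lnn_not(hazardous))

# Action scores from a model-free policy (placeholder values here).
logits = np.array([2.0, 0.5, 1.5])
masked = np.where(allowed > 0.5, logits, -np.inf)   # action shielding
probs = np.exp(masked - masked.max())
probs /= probs.sum()
print(dict(zip(actions, probs.round(3))))  # only "go west" survives the shield

Used as a guide rather than a hard shield, the same allowed(a) values could instead be added to the logits as a soft bonus, steering exploration toward actions consistent with the external knowledge without forbidding the rest.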