Instance-Based State Identification for Reinforcement Learning
Neural Information Processing Systems
This paper presents instance-based state identification, an approach to reinforcement learning with hidden state that builds disambiguating amounts of short-term memory online, and that learns with an order of magnitude fewer training steps than several previous approaches. Inspired by a key similarity between learning with hidden state and learning in continuous geometrical spaces, this approach uses instance-based (or "memory-based") learning, a method that has worked well in continuous spaces.

1 BACKGROUND AND RELATED WORK

When a robot's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view, and limited attention, the robot suffers from hidden state. More formally, we say a reinforcement learning agent suffers from the hidden state problem if the agent's state representation is non-Markovian with respect to actions and utility. The hidden state problem arises as a case of perceptual aliasing: the mapping between states of the world and sensations of the agent is not one-to-one [Whitehead, 1992]. If the agent's perceptual system produces the same outputs for two world states in which different actions are required, and if the agent's state representation consists only of its percepts, then the agent will fail to choose correct actions.
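As a toy illustration of perceptual aliasing and how short-term memory can resolve it (this is a hypothetical example, not the paper's algorithm; the corridor, percept names, and actions are invented for illustration), consider a one-dimensional corridor whose two interior states emit identical percepts but require opposite actions:

```python
# Hypothetical 1-D corridor: states 0..4, goal at state 2.
# The two ends look distinct; the interior "wall" states look alike.
PERCEPT = {0: "end", 1: "wall", 2: "goal", 3: "wall", 4: "end"}
OPTIMAL = {0: "right", 1: "right", 3: "left", 4: "left"}  # action toward goal

# States 1 and 3 are perceptually aliased: same percept, different
# required actions, so no memoryless policy over percepts is correct.
assert PERCEPT[1] == PERCEPT[3]
assert OPTIMAL[1] != OPTIMAL[3]

# One step of short-term memory disambiguates them: an agent arriving
# at state 1 from state 0 has just seen "end" on its left end, while an
# agent arriving at state 3 from state 4 has just seen "end" on its
# right end.  A policy keyed on the last two percepts can act correctly.
history_at_1 = (PERCEPT[0], PERCEPT[1])   # came from state 0
history_at_3 = (PERCEPT[4], PERCEPT[3])   # came from state 4
assert history_at_1 == history_at_3       # still aliased by percept alone?
```

The final assertion fails: the two histories coincide on the current percept but are distinguished by the preceding one, which is exactly the kind of disambiguating short-term memory the approach described here builds online. (Here the histories happen to be equal as tuples of percept names only if the preceding percepts match; since both ends emit the same "end" percept in this toy encoding, a richer percept or a longer history would be needed to fully separate them.)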
Dec-31-1995