Affordances as Transferable Knowledge for Planning Agents
Barth-Maron, Gabriel (Brown University) | Abel, David (Brown University) | MacGlashan, James (Brown University) | Tellex, Stefanie (Brown University)
Robotic agents often map perceptual input to simplified representations that do not reflect the complexity and richness of the world. This simplification is due in large part to the limitations of planning algorithms, which fail in large stochastic state spaces on account of the well-known "curse of dimensionality." Existing approaches to this problem fail to prevent autonomous agents from considering many actions that would be obviously irrelevant to a human solving the same task. We formalize the notion of affordances as knowledge added to a Markov Decision Process (MDP) that prunes actions in a state- and reward-general way. This pruning significantly reduces the number of state-action pairs the agent needs to evaluate in order to act near-optimally. We demonstrate our approach in the Minecraft domain as a model for robotic tasks, showing a significant increase in speed and reduction in state-space exploration during planning. Further, we provide a learning framework that enables an agent to learn affordances through experience, opening the door for agents to adapt to and plan in new situations. We provide preliminary results indicating that the learning process effectively produces affordances that help solve an MDP faster, suggesting that affordances serve as an effective, transferable piece of knowledge for planning agents in large state spaces.
Nov-1-2014
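
The action-pruning idea the abstract describes can be illustrated with a short sketch. The following Python is a minimal, hypothetical illustration, not the authors' implementation: here an affordance pairs a state-goal predicate with the subset of actions it deems relevant, and the planner only evaluates actions licensed by some active affordance. All names (`Affordance`, `prune_actions`, the Minecraft-flavored states and actions) are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Hashable, List

State = Hashable
Action = str


@dataclass(frozen=True)
class Affordance:
    """Pairs a (state, goal) predicate with the actions it deems relevant."""
    applies: Callable[[State, str], bool]  # does this affordance hold here?
    actions: FrozenSet[Action]             # actions it licenses when it holds


def prune_actions(state: State, goal: str,
                  affordances: List[Affordance],
                  all_actions: List[Action]) -> List[Action]:
    """Return only the actions licensed by at least one active affordance.

    If no affordance applies, fall back to the full action set so the
    planner never loses access to a needed action.
    """
    relevant = set()
    for aff in affordances:
        if aff.applies(state, goal):
            relevant |= aff.actions
    return [a for a in all_actions if a in relevant] or list(all_actions)


# Example: a Minecraft-style domain where destroying a block is only
# considered when a block is actually in front of the agent.
facing_block = Affordance(
    applies=lambda s, g: s == "wall_ahead",
    actions=frozenset({"destroy_block"}),
)
open_space = Affordance(
    applies=lambda s, g: s == "clear_ahead",
    actions=frozenset({"move_forward", "turn_left", "turn_right"}),
)

acts = prune_actions(
    "clear_ahead", "reach_goal",
    [facing_block, open_space],
    ["move_forward", "turn_left", "turn_right", "destroy_block"],
)
# acts == ["move_forward", "turn_left", "turn_right"]:
# destroy_block is pruned, so the planner evaluates fewer state-action pairs.
```

Because each affordance is expressed over state-goal predicates rather than over specific states or a specific reward function, the same pruning knowledge can, in principle, transfer across tasks in the domain, which is the state- and reward-generality the abstract claims.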