Concept-Aware Feature Extraction for Knowledge Transfer in Reinforcement Learning
Winder, John (University of Maryland, Baltimore County) | desJardins, Marie (University of Maryland, Baltimore County)
We introduce a novel mechanism for knowledge transfer via concept formation to augment reinforcement learning agents operating in complex, uncertain domains. Based on their observations, agents form concepts and associate them with actions, generalizing their decisions at higher levels of abstraction. Concepts serve as simple, portable, efficient packets of hierarchical information that can be learned in parallel. The use of conceptual knowledge simultaneously provides an interpretable, semantic explanation of an agent's decisions, making these techniques promising for human-interaction domains such as games, where human observers wish to inspect an agent's rationale. This technique extends previous work on probabilistic learning with Markov decision processes (MDPs) by introducing rich hierarchical feature structures that can be learned from experience, enabling more effective transfer of learned knowledge to new, related tasks.
Apr-6-2018
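The abstract describes concepts as portable packets that associate abstracted observations with actions so that a policy learned in one task can transfer to a related one. As an illustration only, and not the authors' actual algorithm, the sketch below runs plain tabular Q-learning over hand-coded concept labels instead of raw states in a toy 1-D corridor domain, then reuses the learned concept-action table zero-shot in a task with a different goal position. The domain, the `concept_of` abstraction, and all names are invented for this example.

```python
import random

ACTIONS = ["left", "right"]

def concept_of(state, goal):
    """Abstract a raw corridor position into a portable concept label.

    Hypothetical abstraction: only the direction of the goal matters,
    not the specific coordinates, so the label transfers across tasks.
    """
    if state < goal:
        return "goal_right"
    if state > goal:
        return "goal_left"
    return "at_goal"

def q_learn(goal, episodes=300, q=None, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning keyed by concept label on a 10-cell corridor."""
    q = {c: dict(av) for c, av in q.items()} if q else {}
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randrange(10)
        for _ in range(30):
            c = concept_of(s, goal)
            if c == "at_goal":
                break
            qs = q.setdefault(c, {a: 0.0 for a in ACTIONS})
            a = rng.choice(ACTIONS) if rng.random() < eps else max(qs, key=qs.get)
            s2 = min(9, s + 1) if a == "right" else max(0, s - 1)
            c2 = concept_of(s2, goal)
            reward = 1.0 if c2 == "at_goal" else -0.1
            future = 0.0 if c2 == "at_goal" else max(
                q.setdefault(c2, {b: 0.0 for b in ACTIONS}).values())
            qs[a] += alpha * (reward + gamma * future - qs[a])
            s = s2
    return q

# Learn on a source task with the goal at cell 7, then transfer the
# concept-level table unchanged to a target task with the goal at cell 2.
q_src = q_learn(goal=7)
q_tgt = q_learn(goal=2, episodes=0, q=q_src)  # zero-shot reuse
```

Because the Q-table is indexed by abstract concepts rather than coordinates, the transferred policy ("move toward the goal") is already correct in the target task; this is the sense in which concept-level knowledge is portable.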