Daniels, Zachary
A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems
Baker, Megan M., New, Alexander, Aguilar-Simon, Mario, Al-Halah, Ziad, Arnold, Sébastien M. R., Ben-Iwhiwhu, Ese, Brna, Andrew P., Brooks, Ethan, Brown, Ryan C., Daniels, Zachary, Daram, Anurag, Delattre, Fabien, Dellana, Ryan, Eaton, Eric, Fu, Haotian, Grauman, Kristen, Hostetler, Jesse, Iqbal, Shariq, Kent, Cassandra, Ketz, Nicholas, Kolouri, Soheil, Konidaris, George, Kudithipudi, Dhireesha, Learned-Miller, Erik, Lee, Seungwon, Littman, Michael L., Madireddy, Sandeep, Mendez, Jorge A., Nguyen, Eric Q., Piatko, Christine D., Pilly, Praveen K., Raghavan, Aswin, Rahman, Abrar, Ramakrishnan, Santhosh Kumar, Ratzlaff, Neale, Soltoggio, Andrea, Stone, Peter, Sur, Indranil, Tang, Zhipeng, Tiwari, Saket, Vedder, Kyle, Wang, Felix, Xu, Zifan, Yanguas-Gil, Angel, Yedidsion, Harel, Yu, Shangqun, Vallabha, Gautam K.
Despite the advancement of machine learning techniques in recent years, state-of-the-art systems lack robustness to "real-world" events, in which the input distributions and tasks encountered by deployed systems are not limited to the original training context, and systems must instead adapt to novel distributions and tasks while deployed. This critical gap may be addressed through the development of "Lifelong Learning" systems that are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability. Unfortunately, efforts to improve these capabilities are typically treated as distinct areas of research that are assessed independently, without regard to the impact of each capability on other aspects of the system. We instead propose a holistic approach, using a suite of metrics and an evaluation framework to assess Lifelong Learning in a principled way that is agnostic to specific domains and system techniques. Through five case studies, we show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems. We highlight how the proposed suite quantifies performance trade-offs present during Lifelong Learning system development: both the widely discussed Stability-Plasticity dilemma and the newly proposed relationship between Sample-Efficient and Robust Learning. Further, we make recommendations for the formulation and use of metrics to guide the continued development of Lifelong Learning systems and to assess their future progress.
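To make the kind of evaluation described above concrete, here is a minimal sketch of how two illustrative lifelong-learning metrics might be computed from a matrix of per-task scores. The function names and formulas below are simplifying assumptions for illustration only; they do not reproduce the paper's actual metric definitions.

# Illustrative sketch (not the paper's exact formulation): two common
# lifelong-learning quantities computed from a matrix R, where R[i, j]
# is the evaluation score on task j after training on tasks 0..i.
import numpy as np

def performance_maintenance(R):
    # Mean change on earlier tasks after all training is complete;
    # negative values indicate forgetting (the Stability side of the
    # Stability-Plasticity dilemma).
    T = R.shape[0]
    drops = [R[-1, j] - R[j, j] for j in range(T - 1)]
    return float(np.mean(drops))

def forward_transfer(R, baseline):
    # Mean gain on each new task relative to a single-task baseline
    # score (a rough proxy for Transfer and Adaptation).
    T = R.shape[0]
    gains = [R[j, j] - baseline[j] for j in range(1, T)]
    return float(np.mean(gains))

# Toy usage: 3 tasks; rows = training stage, cols = evaluated task.
R = np.array([[0.9, 0.1, 0.0],
              [0.7, 0.8, 0.2],
              [0.6, 0.7, 0.9]])
print(performance_maintenance(R))                              # -0.2 => forgetting
print(forward_transfer(R, baseline=np.array([0.9, 0.7, 0.8]))) # 0.1 => positive transfer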
System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games
Sur, Indranil, Daniels, Zachary, Rahman, Abrar, Faber, Kamil, Gallardo, Gianmarco J., Hayes, Tyler L., Taylor, Cameron E., Gurbuz, Mustafa Burak, Smith, James, Joshi, Sahana, Japkowicz, Nathalie, Baron, Michael, Kira, Zsolt, Kanan, Christopher, Corizzo, Roberto, Divakaran, Ajay, Piacentino, Michael, Hostetler, Jesse, Raghavan, Aswin
As artificial and robotic systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state of the art forward in L2RL, and making L2RL useful for practical applications, requires more than developing individual L2RL algorithms; it requires progress at the systems level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. We introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing a different aspect of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study demonstrating how multiple independently developed LL components can be integrated into a single realized system. We also introduce an evaluation environment to measure the effect of combining various system components: it employs different LL scenarios (sequences of tasks) consisting of StarCraft 2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common setting.
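To give a sense of what a standardized component API of this kind could look like, the sketch below (in Python) defines a hypothetical plug-in interface and a unified agent that broadcasts task-lifecycle events to its components. All class and method names here (LLComponent, on_task_start, etc.) are illustrative assumptions and do not reproduce the actual L2RLCF API.

# Hypothetical sketch of a plug-in component interface in the spirit of
# a standardized lifelong-RL framework; names are assumptions, not the
# real L2RLCF API.
from abc import ABC, abstractmethod

class LLComponent(ABC):
    # One lifelong-learning concern (e.g., replay, regularization).
    @abstractmethod
    def on_task_start(self, task_id: str) -> None: ...

    @abstractmethod
    def on_step(self, transition: dict) -> None: ...

    @abstractmethod
    def on_task_end(self, task_id: str) -> None: ...

class ReplayBuffer(LLComponent):
    # Toy replay component: stores transitions for later rehearsal.
    def __init__(self, capacity: int = 10_000):
        self.storage: list[dict] = []
        self.capacity = capacity

    def on_task_start(self, task_id: str) -> None:
        pass

    def on_step(self, transition: dict) -> None:
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # evict oldest transition
        self.storage.append(transition)

    def on_task_end(self, task_id: str) -> None:
        pass

class LifelongAgent:
    # Unified system: broadcasts lifecycle events to all components.
    def __init__(self, components: list[LLComponent]):
        self.components = components

    def run_task(self, task_id: str, env_steps):
        for c in self.components:
            c.on_task_start(task_id)
        for transition in env_steps:
            for c in self.components:
                c.on_step(transition)
        for c in self.components:
            c.on_task_end(task_id)

# Toy usage: wire a single component into the unified agent.
agent = LifelongAgent([ReplayBuffer(capacity=1000)])
agent.run_task("minigame_1", env_steps=[{"obs": 0, "action": 1, "reward": 0.0}])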