Soures, Nicholas
Continual Learning and Catastrophic Forgetting
van de Ven, Gido M., Soures, Nicholas, Kudithipudi, Dhireesha
This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data. Although continual learning is a natural skill for the human brain, it is very challenging for artificial neural networks. An important reason is that, when learning something new, these networks tend to quickly and drastically forget what they had learned before, a phenomenon known as catastrophic forgetting. Especially in the last decade, continual learning has become an extensively studied topic in deep learning. This book chapter reviews the insights that this field has generated.
Design Principles for Lifelong Learning AI Accelerators
Kudithipudi, Dhireesha, Daram, Anurag, Zyarah, Abdullah M., Zohora, Fatima Tuz, Aimone, James B., Yanguas-Gil, Angel, Soures, Nicholas, Neftci, Emre, Mattina, Matthew, Lomonaco, Vincenzo, Thiem, Clare D., Epstein, Benjamin
Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI). The development of lifelong learning algorithms could lead to a range of novel AI applications, but this will also require the development of appropriate hardware accelerators, particularly if the models are to be deployed on edge platforms, which have strict size, weight, and power constraints. Here, we explore the design of lifelong learning AI accelerators that are intended for deployment in untethered environments. We identify key desirable capabilities for lifelong learning accelerators and highlight metrics to evaluate such accelerators. We then discuss current edge AI accelerators and explore the future design of lifelong learning accelerators, considering the role that different emerging technologies could play.