On the Computation of the Fisher Information in Continual Learning
Feb-17-2025
Continual learning is a rapidly growing subfield of deep learning devoted to enabling neural networks to incrementally learn new tasks, domains or classes without forgetting previously learned ones. Such continual learning is crucial for real-world problems where data are constantly changing, such as in healthcare, autonomous driving or robotics. Unfortunately, continual learning is challenging for deep neural networks, mainly because of their tendency to forget previously acquired skills when learning something new, a phenomenon known as catastrophic forgetting. Elastic Weight Consolidation (EWC) [1], developed by Kirkpatrick and colleagues at DeepMind, is one of the most popular methods for continual learning with deep neural networks, and to this day it is featured as a baseline in a large proportion of continual learning studies. However, the original paper did not describe the exact implementation of EWC, and no official code was provided. A previous blog post by Huszár [2] already addressed an issue relating to how EWC should behave when there are more than two tasks.
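To make concrete what such an implementation involves, here is a minimal PyTorch sketch of one common way to compute a diagonal Fisher information estimate and the resulting EWC penalty. The function names (`diagonal_fisher`, `ewc_penalty`) and the classifier interface are hypothetical, and several choices in the sketch, such as sampling labels from the model's own predictive distribution rather than using the data labels, and squaring per-example gradients rather than batch gradients, are exactly the kind of implementation details that the original paper leaves unspecified.

```python
import torch
import torch.nn.functional as F


def diagonal_fisher(model, data_loader, device="cpu"):
    """Estimate the diagonal of the Fisher information, one example at a time.

    Labels are sampled from the model's predictive distribution; plugging in
    the dataset labels instead would yield the so-called "empirical Fisher".
    Assumes `model` is a classifier on `device` returning logits of shape
    (batch, classes).
    """
    model.eval()
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    n_samples = 0
    for inputs, _ in data_loader:
        for x in inputs.to(device):
            log_probs = F.log_softmax(model(x.unsqueeze(0)), dim=1)
            # Sample a label and treat it as a constant target.
            label = torch.multinomial(log_probs.exp(), 1).view(-1)
            model.zero_grad()
            F.nll_loss(log_probs, label).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    # Square per-example gradients *before* accumulating;
                    # squaring an averaged batch gradient is not the same.
                    fisher[n] += p.grad.detach() ** 2
            n_samples += 1
    return {n: f / n_samples for n, f in fisher.items()}


def ewc_penalty(model, fisher, anchor_params, ewc_lambda=1.0):
    """Quadratic EWC penalty pulling parameters toward anchor_params."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty
```

In a training loop for a new task, the penalty would simply be added to the task loss, e.g. `loss = task_loss + ewc_penalty(model, fisher, anchor_params, ewc_lambda)`, with `fisher` and `anchor_params` stored after finishing the previous task.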