A study on the plasticity of neural networks
Tudor Berariu, Wojciech Czarnecki, Soham De, Jörg Bornschein, Samuel Smith, Razvan Pascanu, Claudia Clopath
–arXiv.org Artificial Intelligence
One aim shared by multiple settings, such as continual learning or transfer learning, is to leverage previously acquired knowledge to converge faster on the current task. Usually this is done through fine-tuning, where an implicit assumption is that the network maintains its plasticity, meaning that the performance it can reach on any given task is not affected negatively by previously seen tasks.

For example, PackNet (Mallya & Lazebnik, 2017) eventually gets to a point where all neurons are frozen and learning is not possible anymore. In the same fashion, accumulating constraints in EWC (Kirkpatrick et al., 2017) might lead to a strongly regularised objective that does not allow the new task's loss to be minimised. Alternatively, learning might become less data efficient, referred to as negative forward transfer, an effect often noticed for regularisation-based methods.
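As a point of reference for the accumulation of constraints mentioned above, the EWC objective has the standard quadratic-penalty form below; the notation follows Kirkpatrick et al. (2017) rather than this listing. When learning a new task B after a previous task A, EWC minimises

\[
\mathcal{L}(\theta) \;=\; \mathcal{L}_B(\theta) \;+\; \sum_i \frac{\lambda}{2}\, F_i \,\bigl(\theta_i - \theta^{*}_{A,i}\bigr)^2 ,
\]

where \(\mathcal{L}_B\) is the new task's loss, \(\theta^{*}_{A}\) are the parameters learned on task A, \(F\) is a diagonal Fisher information estimate, and \(\lambda\) sets the regularisation strength. Each additional task contributes another such penalty term, so the effective constraint on the parameters grows with the number of tasks seen.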
Oct-14-2023