A Simple Baseline for Stable and Plastic Neural Networks
Étienne Künzel, Achref Jaziri, Visvanathan Ramesh
arXiv.org Artificial Intelligence
Continual learning in computer vision requires that models adapt to a continuous stream of tasks without forgetting prior knowledge, yet existing approaches often tip the balance heavily toward either plasticity or stability. We introduce RDBP, a simple, low-overhead baseline that unites two complementary mechanisms: ReLUDown, a lightweight activation modification that preserves feature sensitivity while preventing neuron dormancy, and Decreasing Backpropagation, a biologically inspired gradient-scheduling scheme that progressively shields early layers from catastrophic updates. Evaluated on the Continual ImageNet benchmark, RDBP matches or exceeds the plasticity and stability of state-of-the-art methods while reducing computational cost. RDBP thus provides both a practical solution for real-world continual learning and a clear benchmark against which future continual learning strategies can be measured.

Continual learning in computer vision tackles the fundamental challenge of enabling models to adapt to a continuous stream of visual information rather than to a single static dataset. Such systems must continuously integrate new concepts while retaining the features and representations learned from previous tasks.
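The two mechanisms can be pictured concretely. The sketch below is an illustrative approximation, not the paper's implementation: `relu_down` stands in for a dormancy-preventing activation using a small negative slope (the form and the `alpha` parameter are assumptions, akin to Leaky ReLU), and `decreasing_bp_scales` shows one plausible gradient schedule in which earlier layers receive progressively smaller gradient multipliers as more tasks are seen, while the output layer stays fully plastic (the `decay` parameter and exact schedule are likewise assumptions).

```python
import numpy as np

def relu_down(x, alpha=0.1):
    # Illustrative dormancy-preventing activation (assumption, not the
    # paper's ReLUDown definition): keep a small negative slope so units
    # with negative pre-activations still receive gradient signal.
    return np.where(x > 0, x, alpha * x)

def decreasing_bp_scales(num_layers, task_idx, decay=0.5):
    # Illustrative gradient schedule in the spirit of Decreasing
    # Backpropagation: per-layer multipliers applied to gradients.
    # depth_frac is 0 for the earliest layer and 1 for the last, so the
    # earliest layers are shielded most strongly as task_idx grows,
    # while the final layer keeps a multiplier of 1.0 (full plasticity).
    depth_frac = np.arange(num_layers) / max(num_layers - 1, 1)
    return (decay ** task_idx) ** (1.0 - depth_frac)
```

In a training loop these multipliers would scale each layer's gradient before the optimizer step (e.g. via per-parameter gradient hooks), so early-layer representations drift less on later tasks while later layers remain free to adapt.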
Jul-21-2025