On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning
Bonicelli, Lorenzo, Boschini, Matteo, Porrello, Angelo, Spampinato, Concetto, Calderara, Simone
Rehearsal approaches enjoy immense popularity with Continual Learning (CL) practitioners. These methods collect samples from previously encountered data distributions in a small memory buffer; subsequently, they repeatedly optimize on the latter to prevent catastrophic forgetting. This work draws attention to a hidden pitfall of this widespread practice: repeated optimization on a small pool of data inevitably leads to tight and unstable decision boundaries, which are a major hindrance to generalization. To address this issue, we propose Lipschitz-DrivEn Rehearsal (LiDER), a surrogate objective that induces smoothness in the backbone network by constraining its layer-wise Lipschitz constants w.r.t. replay examples. By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training. Through additional ablative experiments, we highlight peculiar aspects of buffer overfitting in CL and better characterize the effect produced by LiDER. Code is available at https://github.com/aimagelab/LiDER
- North America > United States > California (0.04)
- Asia > Middle East > Jordan (0.04)
- Africa > Central African Republic > Ombella-M'Poko > Bimbo (0.04)
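The layer-wise Lipschitz constraint described in the abstract can be illustrated in a few lines. The sketch below is an assumption-laden simplification, not the paper's method: it upper-bounds each linear layer's Lipschitz constant by the spectral norm of its weight matrix (estimated with power iteration) and penalizes deviation from a target value, whereas LiDER's actual estimator is computed w.r.t. replay examples and differs in detail.

```python
import torch
import torch.nn as nn

def spectral_norm_est(weight, n_iter=10):
    """Estimate the top singular value of a weight matrix via power
    iteration; for a linear layer this upper-bounds its Lipschitz
    constant under the L2 norm."""
    w = weight.reshape(weight.shape[0], -1)
    v = torch.randn(w.shape[1])
    for _ in range(n_iter):
        v = w.T @ (w @ v)
        v = v / v.norm()
    return (w @ v).norm()

def lipschitz_penalty(model, target=1.0):
    """Sum of squared deviations of each linear layer's estimated
    Lipschitz constant from a target; added to the rehearsal loss
    as a smoothness-inducing surrogate term."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.Linear):
            penalty = penalty + (spectral_norm_est(m.weight) - target) ** 2
    return penalty

# Usage: combine the penalty with the usual rehearsal objective.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
loss = nn.functional.cross_entropy(model(x), y) + 0.1 * lipschitz_penalty(model)
loss.backward()
```

The weighting factor (0.1 here) is a hypothetical hyperparameter; in practice it would be tuned alongside the buffer size and learning rate.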
Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy
Du, Yunshu, Warnell, Garrett, Gebremedhin, Assefaw, Stone, Peter, Taylor, Matthew E.
Experience replay (ER) improves the data efficiency of off-policy reinforcement learning (RL) algorithms by allowing an agent to store and reuse its past experiences in a replay buffer. While many techniques have been proposed to enhance ER by biasing how experiences are sampled from the buffer, thus far they have not considered strategies for refreshing experiences inside the buffer. In this work, we introduce Lucid Dreaming for Experience Replay (LiDER), a conceptually new framework that allows replay experiences to be refreshed by leveraging the agent's current policy. LiDER 1) moves an agent back to a past state; 2) lets the agent try following its current policy to execute different actions---as if the agent were "dreaming" about the past, but is aware of the situation and can control the dream to encounter new experiences; and 3) stores and reuses the new experience if it turned out better than what the agent previously experienced, i.e., to refresh its memories. LiDER is designed to be easily incorporated into off-policy, multi-worker RL algorithms that use ER; we present in this work a case study of applying LiDER to an actor-critic based algorithm. Results show LiDER consistently improves performance over the baseline in four Atari 2600 games. Our open-source implementation of LiDER and the data used to generate all plots in this paper are available at github.com/duyunshu/lucid-dreaming-for-exp-replay.
- North America > Canada > Alberta (0.14)
- North America > United States > Washington (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (5 more...)
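The three-step refresh loop in the abstract above (move to a past state, roll out the current policy, keep the new experience only if it is better) can be sketched with a toy resettable environment. Everything here is a hypothetical stand-in rather than the paper's API: the `ToyLine` class, the `reset_to` method, and the `(state, actions, return)` buffer entry format are all illustrative assumptions.

```python
class ToyLine:
    """Toy 1-D environment: action 1 moves right (+1 reward),
    anything else moves left (-1 reward); episode ends at |pos| >= 3."""
    def reset_to(self, state):
        self.pos = state  # assumes the environment can restore a past state
    def step(self, action):
        self.pos += 1 if action == 1 else -1
        reward = 1.0 if action == 1 else -1.0
        done = self.pos >= 3 or self.pos <= -3
        return self.pos, reward, done

def refresh_experience(env, buffer, policy, idx):
    """LiDER-style refresh (simplified sketch): re-enter a stored past
    state, follow the *current* policy from it, and overwrite the buffer
    entry only if the new rollout achieved a higher return."""
    state, old_actions, old_return = buffer[idx]
    env.reset_to(state)
    actions, total, s, done = [], 0.0, state, False
    while not done:
        a = policy(s)
        s, r, done = env.step(a)
        actions.append(a)
        total += r
    if total > old_return:  # keep the "dream" only if it beat the memory
        buffer[idx] = (state, actions, total)
    return total

# Usage: a stale entry recorded by an old, bad policy gets refreshed.
buffer = [(0, [0, 0, 0], -3.0)]   # old rollout always went left
ret = refresh_experience(ToyLine(), buffer, lambda s: 1, 0)
```

In the actual framework this refresh runs in dedicated workers alongside ordinary off-policy training; the sketch shows only the replacement criterion.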
Calcalist to Host Tel Aviv Big Data Conference
Speakers at the event include Israeli pop star and entrepreneur Ivri Lider. Lider is co-owner of MyPart Inc., a Tel Aviv-based startup that lets little-known artists offer their original songs, lyrics, music, translations, and visual art to successful musicians of their choice. One of the key speakers will be Avi Korenblum, a 20-year veteran of Israeli intelligence organizations. In 2012, Korenblum founded New York-headquartered online data analysis company Voyager Labs. Incorporated as Voyager Analytics Inc., the company develops artificial intelligence technology that provides enterprises with real-time actionable insights into their users' on-site activity.
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.66)
- North America > United States > New York (0.27)
- Information Technology > Security & Privacy (0.81)
- Media > Music (0.59)
- Information Technology > Data Science > Data Mining > Big Data (0.43)
- Information Technology > Artificial Intelligence > Machine Learning (0.36)