Collaborating Authors

 Dokania, Puneet


Online Continual Learning Without the Storage Constraint

arXiv.org Artificial Intelligence

Traditional online continual learning (OCL) research has primarily focused on mitigating catastrophic forgetting with fixed and limited storage allocation throughout an agent's lifetime. However, a broad range of real-world applications are primarily constrained by computational costs rather than storage limitations. In this paper, we target such applications, investigating the online continual learning problem under relaxed storage constraints and limited computational budgets. We contribute a simple algorithm that continually updates a kNN classifier on top of a fixed, pretrained feature extractor. We selected this algorithm due to its exceptional suitability for online continual learning: it can adapt to rapidly changing streams, has zero stability gap, operates within tiny computational budgets, has low storage requirements by storing only features, and has a consistency property: it never forgets previously seen data. These attributes yield significant improvements, allowing our proposed algorithm to outperform existing methods by over 20% in accuracy on two large-scale OCL datasets: Continual LOCalization (CLOC), with 39M images and 712 classes, and Continual Google Landmarks V2 (CGLM), with 580K images and 10,788 classes, even when existing methods retain all previously seen images. Furthermore, we achieve this superior performance with considerably reduced computational and storage expenses. We provide code to reproduce our results at github.com/drimpossible/ACM.
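
As a concrete illustration of the approach this abstract describes, the sketch below continually updates a kNN classifier over features from a frozen, pretrained backbone, storing only feature vectors. It is a minimal sketch rather than the authors' ACM implementation; the class name ContinualKNN, the majority-vote prediction, and the plain NumPy nearest-neighbour search are illustrative choices of ours.

```python
# Minimal sketch of a continually updated kNN classifier over frozen features.
# Illustrative only: not the authors' ACM code; names and details are assumptions.
import numpy as np

class ContinualKNN:
    def __init__(self, k: int = 5):
        self.k = k
        self.feats = []   # stored feature vectors, one per seen sample
        self.labels = []  # corresponding integer labels

    def update(self, feat: np.ndarray, label: int) -> None:
        # Consistency: stored samples are never overwritten, so nothing is forgotten.
        self.feats.append(feat)
        self.labels.append(label)

    def predict(self, feat: np.ndarray) -> int:
        if not self.feats:
            return -1                                # no data seen yet
        X = np.stack(self.feats)                     # (N, D) matrix of stored features
        dists = np.linalg.norm(X - feat, axis=1)     # distances to all stored features
        nn = np.argsort(dists)[: self.k]             # indices of the k nearest neighbours
        votes = np.array(self.labels)[nn]
        return int(np.bincount(votes).argmax())      # majority vote among neighbours

# Online protocol: predict on each incoming sample first, then store its feature.
# `backbone` stands for any frozen, pretrained feature extractor.
# knn = ContinualKNN(k=5)
# for image, label in stream:
#     z = backbone(image)
#     y_hat = knn.predict(z)
#     knn.update(z, label)
```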


Computationally Budgeted Continual Learning: What Does Matter?

arXiv.org Artificial Intelligence

Continual Learning (CL) aims to sequentially train models on streams of incoming data that vary in distribution by preserving previous knowledge while adapting to new data. Current CL literature focuses on restricted access to previously seen data, while imposing no constraints on the computational budget for training. This is unreasonable for applications in the wild, where systems are primarily constrained by computational and time budgets, not storage. We revisit this problem with a large-scale benchmark and analyze the performance of traditional CL approaches in a compute-constrained setting, where the effective number of memory samples used in training is implicitly restricted as a consequence of limited computation. We conduct experiments evaluating various CL sampling strategies, distillation losses, and partial fine-tuning on two large-scale datasets, namely ImageNet2K and Continual Google Landmarks V2, in data-incremental, class-incremental, and time-incremental settings. Through extensive experiments amounting to over 1500 GPU-hours, we find that, under the compute-constrained setting, traditional CL approaches, without exception, fail to outperform a simple minimal baseline that samples uniformly from memory. Our conclusions are consistent across different numbers of stream time steps (e.g., 20 to 200) and under several computational budgets. This suggests that most existing CL methods are simply too computationally expensive for realistic budgeted deployment. Code for this project is available at: https://github.com/drimpossible/BudgetCL.
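
For reference, the sketch below shows the kind of minimal baseline the abstract refers to: replay batches drawn uniformly at random from memory, with the compute budget enforced as a fixed number of gradient steps per incoming data chunk. This is an assumption-laden illustration, not the BudgetCL code; the function name budgeted_step and the particular budget parameters are invented for the example.

```python
# Sketch of a uniform-sampling replay baseline under a fixed per-step compute budget.
# Illustrative only; names (budgeted_step, budget_iters) and defaults are assumptions.
import random
import torch

def budgeted_step(model, optimizer, loss_fn, memory, new_data,
                  budget_iters: int = 10, batch_size: int = 32):
    memory.extend(new_data)                 # memory is a list of (x, y) pairs seen so far
    for _ in range(budget_iters):           # compute budget: fixed number of updates per chunk
        batch = random.sample(memory, min(batch_size, len(memory)))  # uniform sampling
        x = torch.stack([b[0] for b in batch])
        y = torch.tensor([b[1] for b in batch])
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```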


Simulation-Based Inference for Global Health Decisions

arXiv.org Machine Learning

The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies. Work in this setting involves solving challenging inference and control problems in individual-based models of ever increasing complexity. Here we discuss recent breakthroughs in machine learning, specifically in simulation-based inference, and explore its potential as a novel venue for model calibration to support the design and evaluation of public health interventions. To further stimulate research, we are developing software interfaces that…

This is fomenting the development of comprehensive modelling and simulation to support the design of health interventions and policies, and to guide decision-making in a variety of health system domains [22, 49]. For example, simulations have provided valuable insight to deal with public health problems such as tobacco consumption in New Zealand [50], and diabetes and obesity in the US [58]. They have been used to explore policy options such as those in maternal and antenatal care in Uganda [44], and applied to evaluate health reform scenarios such as predicting changes in access to primary care services in Portugal [21]. Their applicability in informing the design of cancer screening programmes has been also discussed [42, 23]. Recently, simulations have informed the response to the COVID-19 outbreak [19].
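
The paper surveys modern (e.g., neural) simulation-based inference methods for calibrating epidemiological simulators; the toy sketch below illustrates only the simplest variant of that idea, rejection ABC, with an invented Poisson "simulator" and summary statistic standing in for a real individual-based model.

```python
# Toy sketch of simulator calibration via rejection ABC. The simulator, prior range,
# summary statistic, and tolerance here are invented for illustration and are not
# taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta: float, n: int = 100) -> np.ndarray:
    # Stand-in epidemic-like simulator: daily case counts with rate theta.
    return rng.poisson(lam=theta, size=n)

def summary(x: np.ndarray) -> float:
    return x.mean()

def rejection_abc(observed: np.ndarray, n_draws: int = 10000, eps: float = 0.5):
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 20.0)              # draw a parameter from the prior
        if abs(summary(simulator(theta)) - s_obs) < eps:
            accepted.append(theta)                  # keep parameters that reproduce the data
    return np.array(accepted)                       # samples from the approximate posterior

posterior_samples = rejection_abc(observed=simulator(7.0))
```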