Linear Mode Connectivity under Data Shifts for Deep Ensembles of Image Classifiers

Hepburn, C., Zielke, T., Raulf, A. P.

arXiv.org Artificial Intelligence 

Abstract--The phenomenon of linear mode connectivity (LMC) links several aspects of deep learning: training stability under noisy stochastic gradients, the smoothness and generalization of local minima (basins), the similarity and functional diversity of sampled models, and architectural effects on data processing. In this work, we experimentally study LMC under data shifts and identify conditions that mitigate their impact. We interpret data shifts as an additional source of stochastic gradient noise, which can be reduced through small learning rates and large batch sizes. These parameters influence whether models converge to the same local minimum or to regions of the loss landscape with differing smoothness and generalization. Although models sampled via LMC tend to make similar errors more frequently than models that converge to different basins, the value of LMC lies in balancing training efficiency against the gains achieved from larger, more diverse ensembles. Code and supplementary materials will be made publicly available at https://github.com/DLR-KI/LMC in due course.

I. INTRODUCTION

Mode connectivity refers to the phenomenon in which stochastic gradient descent (SGD) solutions, or modes, are connected via a path of low loss in the parameter space of a neural network [1], [2]. Every solution along such a path exhibits performance and generalization similar to the solutions between which the path is constructed. Moreover, such paths were shown to be embedded in a multi-dimensional manifold of low loss [3]. When a connecting path is linear, the phenomenon is referred to as linear mode connectivity (LMC) [4]. LMC has been investigated from several perspectives: (1) conditions affecting LMC [4], [5]; (2) connectivity of layers, features, or different types of solutions [6], [7], [8]; and (3) so-called "re-basin" approaches that "transport" a solution from one local minimum to another. From a practical viewpoint, LMC is expected to improve ensemble methods, in particular in federated learning settings, the robustness of fine-tuned models, distributed optimization, and model pruning [13], [9]. This work focuses on LMC from the perspective of data shifts [14], which are ever-present in real-world applications, in particular when training is performed on multiple training datasets separately and ensembles of models are employed.
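The claim that small learning rates and large batch sizes damp gradient noise can be made concrete with a standard heuristic from the SGD-as-stochastic-process literature; the following is our illustrative sketch, not a derivation from the paper. For a dataset of size $N$, mini-batch size $B$, and learning rate $\eta$, the mini-batch gradient is an unbiased estimate of the full gradient with covariance roughly

\[
\operatorname{Cov}\!\big[\nabla \hat{L}_B(\theta)\big] \approx \frac{1}{B}\Big(1 - \frac{B}{N}\Big)\,\Sigma(\theta),
\]

so the per-step update noise $\eta\,\nabla \hat{L}_B(\theta)$ has covariance proportional to $\eta^2 / B$, and the commonly used noise scale $g \approx \eta N / B$ (for $B \ll N$) shrinks as $\eta$ decreases or $B$ grows. Under the paper's reading, a data shift enlarges the gradient covariance $\Sigma(\theta)$, and the same two hyperparameters control its effect on training.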
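For readers unfamiliar with how LMC is quantified, the standard diagnostic is the loss barrier along the straight line $\theta(\alpha) = (1-\alpha)\,\theta_a + \alpha\,\theta_b$ between two SGD solutions: the barrier is $\max_\alpha \big[ L(\theta(\alpha)) - \big((1-\alpha) L(\theta_a) + \alpha L(\theta_b)\big) \big]$, and values near zero indicate that the two modes are linearly connected. The sketch below is a minimal PyTorch illustration under our own assumptions, not the code from the linked repository; model_a, model_b, loader, and criterion are hypothetical placeholders for two trained classifiers, an evaluation DataLoader, and a mean-reduced loss.

import copy
import torch

def interpolate_state(sd_a, sd_b, alpha):
    # (1 - alpha) * theta_a + alpha * theta_b for floating-point tensors;
    # integer buffers (e.g., BatchNorm's num_batches_tracked) are taken
    # from the first model unchanged.
    return {k: ((1 - alpha) * v + alpha * sd_b[k]) if v.is_floating_point() else v
            for k, v in sd_a.items()}

@torch.no_grad()
def avg_loss(model, loader, criterion):
    # Sample-weighted mean loss over the evaluation loader.
    model.eval()
    total, n = 0.0, 0
    for x, y in loader:
        total += criterion(model(x), y).item() * x.size(0)
        n += x.size(0)
    return total / n

def loss_barrier(model_a, model_b, loader, criterion, steps=11):
    # Height of the loss along the linear path above the chord connecting
    # the endpoint losses; near-zero values indicate LMC.
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)  # scratch model for interpolated weights
    alphas = torch.linspace(0.0, 1.0, steps).tolist()
    losses = []
    for a in alphas:
        probe.load_state_dict(interpolate_state(sd_a, sd_b, a))
        losses.append(avg_loss(probe, loader, criterion))
    return max(l - ((1 - a) * losses[0] + a * losses[-1])
               for a, l in zip(alphas, losses))

Note that for networks with batch normalization, the running statistics of each interpolated model are typically recomputed on training data before evaluation; the naive interpolation of buffers above is only a first approximation.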
