You Don't Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning

Neural Information Processing Systems

All instantiations of this paradigm were trained using strong and well-established hand-crafted data augmentations, leading to the general belief that they are required for the proper training and performance of such models.
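To make "hand-crafted data augmentations" concrete, the following Python sketch shows a typical joint-embedding SSL augmentation pipeline built with torchvision; the particular operations and parameter values are illustrative assumptions, not the exact recipe studied in the paper.

    from torchvision import transforms

    # Illustrative "hand-crafted" augmentation recipe of the kind used by
    # joint-embedding SSL methods; operations and parameters are assumptions,
    # not the exact pipeline of any particular model.
    ssl_augmentations = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomApply(
            [transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.2, hue=0.1)],
            p=0.8,
        ),
        transforms.RandomGrayscale(p=0.2),
        transforms.GaussianBlur(kernel_size=23),
        transforms.ToTensor(),
    ])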


$\boldsymbol{\lambda}$-Orthogonality Regularization for Compatible Representation Learning

Simone Ricci, Niccolò Biondi, Federico Pernici, Ioannis Patras, Alberto Del Bimbo

arXiv.org Artificial Intelligence

Retrieval systems rely on representations learned by increasingly powerful models. However, due to the high training cost and inconsistencies in learned representations, there is significant interest in facilitating communication between representations and ensuring compatibility across independently trained neural networks. In the literature, two primary approaches are commonly used to adapt different learned representations: affine transformations, which adapt well to specific distributions but can significantly alter the original representation, and orthogonal transformations, which preserve the original structure with strict geometric constraints but limit adaptability. A key challenge is adapting the latent spaces of updated models to align with those of previous models on downstream distributions while preserving the newly learned representation spaces. In this paper, we impose a relaxed orthogonality constraint, namely $\lambda$-Orthogonality regularization, while learning an affine transformation, to obtain distribution-specific adaptation while retaining the original learned representations. Extensive experiments across various architectures and datasets validate our approach, demonstrating that it preserves the model's zero-shot performance and ensures compatibility across model updates. Code available at: https://github.com/miccunifi/lambda_orthogonality
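As a rough illustration of the idea, the following Python sketch (PyTorch) trains a hypothetical affine adapter whose weight matrix is softly pushed toward orthogonality via a λ-weighted Frobenius-norm penalty; the exact loss, penalty form, and hyperparameters used in the paper may differ, so treat every name and value below as an assumption.

    import torch
    import torch.nn as nn

    class LambdaOrthogonalAffine(nn.Module):
        """Hypothetical sketch of a lambda-orthogonality-regularized affine adapter."""

        def __init__(self, dim: int, lam: float = 0.1):
            super().__init__()
            self.linear = nn.Linear(dim, dim)  # affine transformation W x + b
            self.lam = lam                     # assumed weight of the orthogonality penalty

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.linear(x)

        def orthogonality_penalty(self) -> torch.Tensor:
            # ||W W^T - I||_F^2 is zero exactly when W is orthogonal;
            # the assumed penalty relaxes that constraint instead of enforcing it.
            W = self.linear.weight
            eye = torch.eye(W.shape[0], device=W.device)
            return ((W @ W.T - eye) ** 2).sum()

    # Toy usage: align updated-model embeddings with old-model embeddings
    # on a downstream distribution (random tensors stand in for real features).
    dim, n = 128, 512
    old_emb, new_emb = torch.randn(n, dim), torch.randn(n, dim)
    adapter = LambdaOrthogonalAffine(dim, lam=0.1)
    opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        align_loss = ((adapter(new_emb) - old_emb) ** 2).mean()
        loss = align_loss + adapter.lam * adapter.orthogonality_penalty()
        loss.backward()
        opt.step()

Setting lam to zero recovers an unconstrained affine transformation, while a very large lam approaches a strict orthogonal map, which is the trade-off between adaptability and structure preservation that the abstract describes.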

