No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations
Walter Simoncini, Andrei Bursuc
Neural Information Processing Systems
Our method is simple: given any pretrained model, we first compute gradients from various self-supervised objectives for each input. These gradients are projected to a lower dimension and then concatenated with the model's output embedding. The resulting features are evaluated on k-nearest neighbor classification over 11 datasets from vision, 5 from natural language processing, and 2 from audio.
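Below is a minimal sketch of this pipeline in PyTorch. It is illustrative only: `backbone`, `PROJ_DIM`, and `gradient_feature` are invented names, the noisy-copy contrastive loss is a stand-in for the paper's actual self-supervised objectives, and a dense random projection is used for simplicity where a more memory-efficient projection would be used in practice.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Frozen pretrained backbone; resnet18 stands in for "any pretrained model".
backbone = models.resnet18(weights="IMAGENET1K_V1").eval()

# Fixed random projection mapping the flattened gradient (here, of the final
# linear layer's weights) down to a small dimension. A dense matrix is used
# for brevity; it is memory-hungry for large layers.
PROJ_DIM = 512
n_grad = backbone.fc.weight.numel()
projection = torch.randn(PROJ_DIM, n_grad) / n_grad**0.5


def gradient_feature(x: torch.Tensor) -> torch.Tensor:
    """Build [projected SSL gradient ; embedding] for one image batch (1, 3, H, W)."""
    emb = backbone(x)
    # Stand-in self-supervised objective: pull the embeddings of two "views"
    # together. The second view is just a noisy copy to keep the sketch short.
    emb_view2 = backbone(x + 0.1 * torch.randn_like(x))
    loss = -F.cosine_similarity(emb, emb_view2, dim=-1).mean()
    # Per-input gradient of the loss w.r.t. one layer's weights.
    (grad,) = torch.autograd.grad(loss, backbone.fc.weight)
    # Project the gradient to a lower dimension and concatenate it with the
    # model's output embedding to form the final feature vector.
    return torch.cat([projection @ grad.flatten(), emb.squeeze(0).detach()])
```

The resulting features plug directly into a standard k-NN evaluation; for example (`images` and `labels` are placeholders for a labeled dataset):

```python
from sklearn.neighbors import KNeighborsClassifier

feats = torch.stack([gradient_feature(img.unsqueeze(0)) for img in images])
knn = KNeighborsClassifier(n_neighbors=20).fit(feats.numpy(), labels)
```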
- Genre:
  - Research Report > New Finding (1.00)
- Industry:
  - Education (0.46)
- Technology:
  - Information Technology > Artificial Intelligence > Machine Learning
  - Neural Networks > Deep Learning (1.00)
  - Statistical Learning > Nearest Neighbor Methods (0.70)
  - Natural Language (1.00)
  - Representation & Reasoning (1.00)
  - Vision (1.00)
  - Data Science > Data Mining (1.00)
  - Sensing and Signal Processing > Image Processing (1.00)