Structured Output Regularization: a framework for few-shot transfer learning
Nicolas Ewen, Jairo Diaz-Rodriguez, Kelly Ramsay
Transfer learning is often used in deep learning when data is limited, such as in medical imaging applications (Kim et al., 2022). Foundation models, that is, large, publicly available, pre-trained models, are often fine-tuned for such tasks where little data is available (Wang et al., 2023; Zhang and Metaxas, 2024; Khan et al., 2025). Beyond freezing part of a model to reduce overfitting, various techniques such as data augmentation and self-supervised learning can effectively increase the training data. These methods can reduce overfitting (Chollet, 2021; Wang et al., 2023; Ewen and Khan, 2021), but they still struggle when very little data is available (Wang et al., 2023). We propose a new approach, Structured Output Regularization (SOR), a simple framework that adapts and prunes pretrained networks using very little labeled data. Instead of unfreezing internal weights, SOR keeps internal structures frozen, e.g., convolutional filters or higher-level blocks, and regularizes their outputs. Specifically, we freeze the internal structure weights, add new weights between each frozen structure, penalize these new weights with a lasso penalty to encourage sparsity, and train the network. Structures whose new weights are driven to zero can be removed, yielding a smaller, task-tailored model without training the full parameter set. To regularize the final-layer structures, SOR applies a group lasso penalty.
Oct-13-2025
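To make the idea concrete, the sketch below shows one possible PyTorch realization of output gating between frozen structures: per-filter gate weights inserted after frozen convolutional blocks, a lasso penalty on those gates, and a group lasso on the final layer. The two-block backbone, the penalty weights, and the pruning threshold are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Frozen conv block whose output channels are scaled by trainable gates.
    The gates play the role of the 'new weights' between frozen structures;
    an L1 (lasso) penalty on them drives unneeded filters toward zero."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv.weight.requires_grad_(False)      # freeze internal structure weights
        if self.conv.bias is not None:
            self.conv.bias.requires_grad_(False)
        self.gate = nn.Parameter(torch.ones(out_ch))  # one trainable gate per filter

    def forward(self, x):
        h = torch.relu(self.conv(x))
        return h * self.gate.view(1, -1, 1, 1)

class SORNet(nn.Module):
    """Toy two-block backbone with a linear head (illustrative only)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.block1 = GatedConvBlock(3, 16)
        self.block2 = GatedConvBlock(16, 32)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(32, num_classes)      # final layer, group-lasso regularized

    def forward(self, x):
        h = self.block2(self.block1(x))
        return self.head(self.pool(h).flatten(1))

def sor_penalty(model, lam_gate=1e-3, lam_head=1e-3):
    """Lasso on the gates plus group lasso on the head, where each group is the
    set of head weights fed by one frozen structure's output feature."""
    lasso = model.block1.gate.abs().sum() + model.block2.gate.abs().sum()
    group_lasso = model.head.weight.norm(dim=0).sum()  # column-wise L2 norms
    return lam_gate * lasso + lam_head * group_lasso

# Illustrative training step on dummy few-shot data
model = SORNet()
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y) + sor_penalty(model)
opt.zero_grad()
loss.backward()
opt.step()

# Filters whose gates have shrunk below a (hypothetical) threshold can be pruned
prunable = (model.block1.gate.abs() < 1e-2).nonzero().flatten()
```

Only the gates and the head receive gradients here, so the frozen backbone is never retrained; after training, structures with near-zero gates can be removed to obtain the smaller, task-tailored model described above.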