DiSSECT: Structuring Transfer-Ready Medical Image Representations through Discrete Self-Supervision
arXiv.org Artificial Intelligence
Self-supervised learning (SSL) has emerged as a powerful paradigm for medical image representation learning, particularly in settings with limited labeled data. However, existing SSL methods often rely on complex architectures, anatomy-specific priors, or heavily tuned augmentations, which limit their scalability and generalizability. More critically, these models are prone to shortcut learning, especially in modalities like chest X-rays, where anatomical similarity is high and pathology is subtle. In this work, we introduce DiSSECT -- Discrete Self-Supervision for Efficient Clinical Transferable Representations, a framework that integrates multi-scale vector quantization into the SSL pipeline to impose a discrete representational bottleneck. This constrains the model to learn repeatable, structure-aware features while suppressing view-specific or low-utility patterns, improving representation transfer across tasks and domains. DiSSECT achieves strong performance on both classification and segmentation tasks, requiring minimal or no fine-tuning, and shows particularly high label efficiency in low-label regimes. We validate DiSSECT across multiple public medical imaging datasets, demonstrating its robustness and generalizability compared to existing state-of-the-art approaches.
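The abstract's core mechanism is a discrete representational bottleneck: continuous encoder features are snapped to entries of a learned codebook (vector quantization), which suppresses view-specific detail. The paper's exact multi-scale scheme is not given here; the following is a minimal sketch of plain vector quantization in NumPy, with hypothetical function and variable names, not the authors' implementation.

```python
import numpy as np

def vector_quantize(features, codebook):
    """Map each continuous feature vector to its nearest codebook entry.

    features: (N, D) array of encoder outputs.
    codebook: (K, D) array of discrete code vectors.
    Returns the quantized features (N, D) and the chosen code indices (N,).
    """
    # Squared Euclidean distance between every feature and every code entry.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    # Discrete bottleneck: each feature is replaced by exactly one code.
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

# Toy usage: four 2-D features quantized against a 3-entry codebook.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
features = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.9], [0.05, 0.0]])
quantized, codes = vector_quantize(features, codebook)
# codes → [0, 1, 2, 0]; nearby features collapse onto the same code,
# which is the "repeatable, structure-aware" effect the abstract describes.
```

In a full SSL pipeline this non-differentiable argmin is typically bypassed with a straight-through gradient estimator and the codebook is learned jointly with the encoder; the sketch above shows only the forward quantization step.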
Sep-24-2025