SynthFM: Training Modality-agnostic Foundation Models for Medical Image Segmentation without Real Medical Data
Sengupta, Sourya, Chakrabarty, Satrajit, Ravi, Keerthi Sravan, Avinash, Gopal, Soni, Ravi
arXiv.org Artificial Intelligence
SYNTHFM: TRAINING MODALITY-AGNOSTIC FOUNDATION MODELS FOR MEDICAL IMAGE SEGMENTATION WITHOUT REAL MEDICAL DATA

Sourya Sengupta 1,2, Satrajit Chakrabarty 1, Keerthi Sravan Ravi 1, Gopal Avinash 1, Ravi Soni 1
1 GE HealthCare, San Ramon, CA, USA
2 University of Illinois Urbana-Champaign, Urbana, IL, USA

ABSTRACT Foundation models like the Segment Anything Model (SAM) excel at zero-shot segmentation of natural images but struggle with medical image segmentation because of differences in texture, contrast, and noise. Annotating medical images is costly and requires domain expertise, limiting the availability of large-scale annotated data. To address this, we propose SynthFM, a synthetic data generation framework that mimics the complexities of medical images, enabling foundation models to adapt without any real medical data. Using SAM's pretrained encoder and training the decoder from scratch on SynthFM's dataset, we evaluated our method on 11 anatomical structures across 9 datasets (CT, MRI, and ultrasound). SynthFM outperformed zero-shot baselines such as SAM and MedSAM, achieving superior results under different prompt settings and on out-of-distribution datasets.
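The training recipe summarized in the abstract (a frozen pretrained encoder paired with a decoder trained from scratch on purely synthetic data) can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the fixed random projection below stands in for SAM's pretrained image encoder, the linear map stands in for the mask decoder, and the random inputs and binary targets stand in for SynthFM's synthetic images and masks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): synthetic samples, input size,
# encoder feature size, and flattened "mask" size.
n_samples, in_dim, feat_dim, out_dim = 256, 64, 32, 16

# Synthetic inputs and segmentation-like binary targets -- no real data.
X = rng.normal(size=(n_samples, in_dim))
true_map = rng.normal(size=(in_dim, out_dim))
Y = (X @ true_map > 0).astype(float)

encoder = rng.normal(size=(in_dim, feat_dim))  # frozen: never updated below
decoder = np.zeros((feat_dim, out_dim))        # trained from scratch

feats = X @ encoder                            # encoder is forward-pass only
lr = 1e-3
losses = []
for step in range(200):
    pred = feats @ decoder
    err = pred - Y
    losses.append(float((err ** 2).mean()))
    # Gradient descent on the decoder alone; the encoder stays fixed.
    decoder -= lr * feats.T @ err / n_samples
```

The design choice mirrored here is that only the lightweight decoder adapts to the synthetic-data distribution, while the generic pretrained representation is reused as-is.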
Apr-14-2025
- Genre: Research Report
- Industry: Health & Medicine > Diagnostic Medicine > Imaging