CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection
Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A. Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, Zongwei Zhou
arXiv.org Artificial Intelligence
An increasing number of public datasets have shown a marked impact on automated organ segmentation and tumor detection. However, because each dataset is small and only partially labeled, and because few datasets investigate diverse tumor types, the resulting models are often limited to segmenting specific organs/tumors, ignore the semantics of anatomical structures, and cannot be extended to novel domains. To address these issues, we propose the CLIP-Driven Universal Model, which incorporates text embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models. This CLIP-based label encoding captures anatomical relationships, enabling the model to learn a structured feature embedding and segment 25 organs and 6 types of tumors. The proposed model is developed from an assembly of 14 datasets, using a total of 3,410 CT scans for training, and is then evaluated on 6,162 external CT scans from 3 additional datasets. We rank first on the Medical Segmentation Decathlon (MSD) public leaderboard and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). In addition, the Universal Model is computationally more efficient (6x faster) than dataset-specific models, generalizes better to CT scans from varying sites, and shows stronger transfer-learning performance on novel tasks.
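The core idea, using CLIP text embeddings of organ/tumor names as a label encoding that conditions per-class segmentation heads, can be sketched as follows. This is a minimal illustration assuming OpenAI's `clip` package and a CT-style prompt template; the pooled image feature, feature dimensions, and parameter-generator sizes are illustrative stand-ins, not the paper's exact implementation.

```python
# Minimal sketch: CLIP text embeddings as class (label) encodings that
# condition a per-class segmentation head. Assumes OpenAI's `clip` package
# (pip install git+https://github.com/openai/CLIP); sizes marked below are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# A subset of the 25 organ classes, embedded with a CT-specific prompt.
organs = ["liver", "kidney", "spleen", "pancreas"]
tokens = clip.tokenize(
    [f"A computerized tomography of a {name}" for name in organs]
).to(device)

with torch.no_grad():
    text_emb = model.encode_text(tokens)              # (4, 512) for ViT-B/32
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Each class embedding, fused with a pooled image feature, drives a small
# parameter generator that produces the weights of a 1x1x1 conv prediction head.
feat_dim, head_in = 256, 8                            # illustrative sizes
image_feat = torch.randn(1, feat_dim, device=device)  # stand-in for a pooled CNN feature
param_gen = torch.nn.Linear(512 + feat_dim, head_in + 1).to(device)

decoder_map = torch.randn(1, head_in, 96, 96, 96, device=device)  # stand-in decoder output
masks = []
for emb in text_emb.float():
    cond = torch.cat([emb.unsqueeze(0), image_feat], dim=-1)
    params = param_gen(cond).squeeze(0)               # conv weights + bias per class
    w = params[:head_in].view(1, head_in, 1, 1, 1)
    b = params[head_in:]
    masks.append(torch.sigmoid(F.conv3d(decoder_map, w, bias=b)))
# `masks` holds one foreground probability map per class (one-vs-all prediction),
# which lets a single model cover classes drawn from partially labeled datasets.
```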
Aug-17-2023
- Country:
- Asia > China (0.28)
- North America (0.28)
- Genre:
- Research Report (0.63)
- Industry:
- Health & Medicine
- Diagnostic Medicine > Imaging (1.00)
- Nuclear Medicine (1.00)
- Therapeutic Area > Oncology (1.00)