Improve Cross-Modality Segmentation by Treating MRI Images as Inverted CT Scans

Häntze, Hartmut, Xu, Lina, Donle, Leonhard, Dorfner, Felix J., Hering, Alessa, Adams, Lisa C., Bressem, Keno K.

arXiv.org Artificial Intelligence 

Segmentation of medical images plays a vital role in many automatic image analysis tools. While segmentation is well established for computed tomography (CT) scans, with multiple open-source models available [1, 2], multi-class segmentation of magnetic resonance imaging (MRI), especially outside the brain, lags behind. The main reason for this is that training segmentation models requires a large number of annotated images, and the more classes involved, the greater the annotation effort needed. While this problem can be partially alleviated by retraining a model on augmented CT scans with existing labels [3], implementing and training an augmented model is resource-intensive, time-consuming, and technically challenging. In this short paper, we demonstrate that image augmentation, specifically inversion, can be sufficient to bridge the gap between MRI and CT segmentation performance, and that a CT segmentation model can be used to generate masks for MR images. One key difference between MRI and CT images is that dense tissue, such as bone, appears bright (hyperdense) in CT scans but dark (hypointense) in MRI images. We attempt to minimize this difference by using negatives of the MRI images and analyze whether this affects the semantic segmentation performance of models trained solely on CT data.
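The inversion idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact preprocessing: the min-max normalization to [0, 1] before inversion, and the function name `invert_mri`, are assumptions made for the example.

```python
import numpy as np

def invert_mri(volume: np.ndarray) -> np.ndarray:
    """Invert MRI intensities so that dense tissue (hypointense, i.e.
    dark in MRI) becomes bright, mimicking the hyperdense appearance
    of bone in CT.

    Normalizing to [0, 1] before taking the negative is an assumption;
    the paper's exact preprocessing may differ.
    """
    v = volume.astype(np.float32)
    vmin, vmax = v.min(), v.max()
    if vmax > vmin:
        v = (v - vmin) / (vmax - vmin)  # scale intensities to [0, 1]
    return 1.0 - v  # the negative image: dark voxels become bright

# Toy 2x2 "MRI slice": 0.0 is the darkest voxel (e.g., cortical bone)
slice_ = np.array([[0.0, 50.0], [100.0, 200.0]])
inverted = invert_mri(slice_)
# After inversion, the formerly darkest voxel is now the brightest,
# so a CT-trained model sees a more familiar intensity profile.
assert inverted[0, 0] == inverted.max()
```

The inverted volume could then be passed unchanged to a CT segmentation model to produce candidate MRI masks.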
