Nyholm, Tufve
Using Synthetic Images to Augment Small Medical Image Datasets
Vu, Minh H., Tronchin, Lorenzo, Nyholm, Tufve, Löfstedt, Tommy
Recent years have witnessed growing academic and industrial interest in deep learning (DL) for medical imaging. To perform well, DL models require very large labeled datasets. However, most medical imaging datasets are small, with a limited number of annotated samples, usually because delineating medical images is time-consuming and demanding for oncologists. Various techniques can be used to augment a dataset, for example applying affine or elastic transformations to the available images, or adding synthetic images generated by a Generative Adversarial Network (GAN). In this work, we have developed a novel conditional variant of a current GAN method, StyleGAN2, to generate multi-modal high-resolution medical images with the purpose of augmenting small medical imaging datasets with these synthetic images. We use the synthetic and real images from six datasets to train models for the downstream task of semantic segmentation, and afterward evaluate the quality of the generated medical images and the effect of this augmentation on the segmentation performance. The results indicate that the downstream segmentation models did not benefit from the generated images. Further work and analyses are required to establish how this augmentation affects segmentation performance.
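The augmentation strategy described above can be sketched as follows. This is a minimal illustration, not the authors' code: the array shapes, the `augment_with_synthetic` helper, and the `ratio` parameter are all hypothetical, and the random arrays stand in for a real annotated dataset and GAN samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 20 real image/mask pairs (the small labeled
# dataset) and 40 synthetic pairs produced by a conditional GAN.
real_images = rng.normal(size=(20, 64, 64))
real_masks = rng.integers(0, 2, size=(20, 64, 64))
synth_images = rng.normal(size=(40, 64, 64))
synth_masks = rng.integers(0, 2, size=(40, 64, 64))

def augment_with_synthetic(real_x, real_y, synth_x, synth_y, ratio=1.0, seed=0):
    """Return a shuffled training set with at most `ratio` synthetic
    samples added per real sample."""
    n_synth = min(int(ratio * len(real_x)), len(synth_x))
    x = np.concatenate([real_x, synth_x[:n_synth]])
    y = np.concatenate([real_y, synth_y[:n_synth]])
    perm = np.random.default_rng(seed).permutation(len(x))
    return x[perm], y[perm]

train_x, train_y = augment_with_synthetic(
    real_images, real_masks, synth_images, synth_masks, ratio=1.0)
print(train_x.shape)  # (40, 64, 64)
```

The downstream segmentation model then trains on the mixed set exactly as it would on real data alone, which is what makes the effect of the synthetic images directly measurable.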
QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results
Mehta, Raghav, Filos, Angelos, Baid, Ujjwal, Sako, Chiharu, McKinley, Richard, Rebsamen, Michael, Dätwyler, Katrin, Meier, Raphael, Radojewski, Piotr, Murugesan, Gowtham Krishnan, Nalawade, Sahil, Ganesh, Chandan, Wagner, Ben, Yu, Fang F., Fei, Baowei, Madhuranthakam, Ananth J., Maldjian, Joseph A., Daza, Laura, Gomez, Catalina, Arbelaez, Pablo, Dai, Chengliang, Wang, Shuo, Reynaud, Hadrien, Mo, Yuan-han, Angelini, Elsa, Guo, Yike, Bai, Wenjia, Banerjee, Subhashis, Pei, Lin-min, AK, Murat, Rosas-Gonzalez, Sarahi, Zemmoura, Ilyess, Tauber, Clovis, Vu, Minh H., Nyholm, Tufve, Löfstedt, Tommy, Ballestar, Laura Mora, Vilaplana, Veronica, McHugh, Hugh, Talou, Gonzalo Maso, Wang, Alan, Patel, Jay, Chang, Ken, Hoebel, Katharina, Gidwani, Mishka, Arun, Nishanth, Gupta, Sharut, Aggarwal, Mehak, Singh, Praveer, Gerstner, Elizabeth R., Kalpathy-Cramer, Jayashree, Boutry, Nicolas, Huard, Alexis, Vidyaratne, Lasitha, Rahman, Md Monibor, Iftekharuddin, Khan M., Chazalon, Joseph, Puybareau, Elodie, Tochon, Guillaume, Ma, Jun, Cabezas, Mariano, Llado, Xavier, Oliver, Arnau, Valencia, Liliana, Valverde, Sergi, Amian, Mehdi, Soltaninejad, Mohammadreza, Myronenko, Andriy, Hatamizadeh, Ali, Feng, Xue, Dou, Quan, Tustison, Nicholas, Meyer, Craig, Shah, Nisarg A., Talbar, Sanjay, Weber, Marc-Andre, Mahajan, Abhishek, Jakab, Andras, Wiest, Roland, Fathallah-Shaykh, Hassan M., Nazeri, Arash, Milchenko, Mikhail, Marcus, Daniel, Kotrotsou, Aikaterini, Colen, Rivka, Freymann, John, Kirby, Justin, Davatzikos, Christos, Menze, Bjoern, Bakas, Spyridon, Gal, Yarin, Arbel, Tal
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses.
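The core idea behind the QU-BraTS score can be illustrated with a simplified filtered-Dice sketch: voxels whose uncertainty exceeds a threshold are discarded, and Dice is recomputed on what remains. This is an illustrative reduction, not the full ranking metric (which aggregates over thresholds and also penalizes filtering out correctly predicted voxels); the function names and toy data are assumptions.

```python
import numpy as np

def dice(pred, target):
    """Dice overlap of two binary arrays (defined as 1.0 when both are empty)."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 if denom == 0 else float(2.0 * inter / denom)

def filtered_dice_curve(pred, target, uncertainty, thresholds):
    """For each threshold tau, keep only voxels with uncertainty <= tau and
    compute Dice on the retained voxels. A useful uncertainty measure should
    raise Dice as tau shrinks, i.e. confident voxels should be correct."""
    return [dice(pred[uncertainty <= tau], target[uncertainty <= tau])
            for tau in thresholds]

# Toy 4-voxel example: the one erroneous prediction (index 1) is also the
# most uncertain, so filtering it out yields perfect Dice.
pred = np.array([1, 1, 0, 0], dtype=bool)
target = np.array([1, 0, 0, 0], dtype=bool)
unc = np.array([0.1, 0.9, 0.1, 0.1])
print(filtered_dice_curve(pred, target, unc, [0.5, 1.0]))
# [1.0, 0.6666666666666666]
```

The rising curve as the threshold tightens is precisely the behavior the score rewards: high confidence in correct assertions and low confidence at incorrect ones.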
End-to-End Cascaded U-Nets with a Localization Network for Kidney Tumor Segmentation
Vu, Minh H., Grimbergen, Guus, Simkó, Attila, Nyholm, Tufve, Löfstedt, Tommy
Kidney tumor segmentation emerges as a new frontier of computer vision in medical imaging. This is partly due to its challenging manual annotation and great medical impact. Within the scope of the Kidney Tumor Segmentation Challenge 2019, which aims at combined kidney and tumor segmentation, this work proposes a novel combination of 3D U-Nets, collectively denoted TuNet, utilizing the resulting kidney masks for the consecutive tumor segmentation. The proposed method achieves a Sørensen-Dice coefficient score of 0.902 for the kidney, and 0.408 for the tumor segmentation, computed from a fivefold cross-validation on the 210 patients available in the data.
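The Sørensen-Dice coefficient reported above measures the overlap between a predicted and a reference mask. A minimal numpy version, with a hypothetical 2D toy example in place of real kidney masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Sørensen-Dice coefficient for binary masks: 2|A∩B| / (|A|+|B|),
    ranging from 0.0 (no overlap) to 1.0 (perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 if denom == 0 else float(2.0 * intersection / denom)

# Toy masks: 2 overlapping voxels, 3 + 2 foreground voxels in total.
kidney_pred = np.array([[1, 1, 0], [0, 1, 0]])
kidney_true = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(kidney_pred, kidney_true))  # 0.8
```

In the challenge, this score is computed per patient on the 3D volumes and averaged over the cross-validation folds.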
TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks
Vu, Minh H., Nyholm, Tufve, Löfstedt, Tommy
Glioma is one of the most common types of brain tumors, arising in the glial cells of the human brain and spinal cord. In addition to the threat of death, glioma treatment is also very costly. Hence, automatic and accurate segmentation and measurement from the early stages are critical in order to prolong the survival rates of the patients and to reduce the costs of health care. In the present work, we propose a novel end-to-end cascaded network for semantic segmentation that utilizes the hierarchical structure of the tumor sub-regions, with ResNet-like blocks and Squeeze-and-Excitation modules after each convolution and concatenation block. By utilizing cross-validation, an average ensemble technique, and a simple post-processing technique, we obtained Dice scores of 90.34, 81.12, and 78.42 and Hausdorff Distances (95th percentile) of 4.32, 6.28, and 3.70 for the whole tumor, tumor core, and enhancing tumor, respectively, on the online validation set.
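The hierarchical structure exploited by the cascade is the nesting of the BraTS sub-regions: enhancing tumor (ET) lies inside tumor core (TC), which lies inside whole tumor (WT). A minimal sketch of enforcing this nesting as a post-processing step on binary masks; the helper name and toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def enforce_hierarchy(wt, tc, et):
    """Constrain predicted binary masks so that ET ⊆ TC ⊆ WT by
    intersecting each sub-region with its parent region."""
    tc = np.logical_and(tc, wt)
    et = np.logical_and(et, tc)
    return np.asarray(wt, dtype=bool), tc, et

# Toy 4-voxel example: voxel 3 is flagged TC/ET but lies outside WT,
# so the hierarchy constraint removes it from both sub-regions.
wt = np.array([1, 1, 1, 0], dtype=bool)
tc = np.array([1, 1, 0, 1], dtype=bool)
et = np.array([1, 0, 0, 1], dtype=bool)
wt2, tc2, et2 = enforce_hierarchy(wt, tc, et)
print(tc2.tolist(), et2.tolist())
```

In a cascade, the same idea operates during inference rather than after it: each stage's output mask restricts the region the next stage segments.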
Whole-brain substitute CT generation using Markov random field mixture models
Hildeman, Anders, Bolin, David, Wallin, Jonas, Johansson, Adam, Nyholm, Tufve, Asklund, Thomas, Yu, Jun
Computed tomography (CT) equivalent information is needed for attenuation correction in PET imaging and for dose planning in radiotherapy. Prior work has shown that Gaussian mixture models can be used to generate a substitute CT (s-CT) image from a specific set of MRI modalities. This work introduces a more flexible class of mixture models for s-CT generation that incorporates spatial dependency in the data through a Markov random field prior on the latent field of class memberships associated with a mixture model. Furthermore, the mixture distributions are extended from Gaussian to normal inverse Gaussian (NIG), allowing heavier tails and skewness. The amount of data needed to train a model for s-CT generation is of the order of 100 million voxels. The computational efficiency of the parameter estimation and prediction methods is hence paramount, especially when spatial dependency is included in the models. A stochastic Expectation Maximization (EM) gradient algorithm is proposed in order to tackle this challenge. The advantages of the spatial model and NIG distributions are evaluated in a cross-validation study based on data from 14 patients. The study shows that the proposed model enhances the predictive quality of the s-CT images by reducing the mean absolute error by 17.9%. Also, the distribution of CT values conditioned on the MR images is better explained by the proposed model, as evaluated using continuous ranked probability scores.
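The mixture-based prediction idea can be sketched in a heavily simplified 1D form: each tissue class has a prior weight, an MR intensity distribution, and a mean CT value, and the predicted s-CT value is the posterior-weighted average of the class CT means. This sketch uses plain Gaussians rather than NIG distributions, omits the Markov random field spatial prior entirely, and all numbers (class priors, means, standard deviations) are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density, evaluated element-wise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical two-class model (e.g. soft tissue vs. bone):
weights = np.array([0.6, 0.4])        # class prior probabilities
mr_means = np.array([100.0, 300.0])   # MR intensity mean per class
mr_stds = np.array([20.0, 50.0])      # MR intensity std per class
ct_means = np.array([40.0, 1000.0])   # mean CT value per class (HU)

def predict_sct(mr_value):
    """E[CT | MR] under the mixture: posterior class probabilities
    given the observed MR intensity, times the class CT means."""
    lik = weights * gaussian_pdf(mr_value, mr_means, mr_stds)
    posterior = lik / lik.sum()
    return float(posterior @ ct_means)

print(round(predict_sct(100.0), 1))  # close to 40 (soft-tissue class dominates)
print(round(predict_sct(300.0), 1))  # close to 1000 (bone class dominates)
```

The paper's contributions replace the Gaussians with NIG distributions and couple neighboring voxels' class memberships through the MRF prior, which is what makes the stochastic EM gradient algorithm necessary at the 100-million-voxel scale.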