QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results
Mehta, Raghav, Filos, Angelos, Baid, Ujjwal, Sako, Chiharu, McKinley, Richard, Rebsamen, Michael, Datwyler, Katrin, Meier, Raphael, Radojewski, Piotr, Murugesan, Gowtham Krishnan, Nalawade, Sahil, Ganesh, Chandan, Wagner, Ben, Yu, Fang F., Fei, Baowei, Madhuranthakam, Ananth J., Maldjian, Joseph A., Daza, Laura, Gomez, Catalina, Arbelaez, Pablo, Dai, Chengliang, Wang, Shuo, Reynaud, Hadrien, Mo, Yuan-han, Angelini, Elsa, Guo, Yike, Bai, Wenjia, Banerjee, Subhashis, Pei, Lin-min, AK, Murat, Rosas-Gonzalez, Sarahi, Zemmoura, Ilyess, Tauber, Clovis, Vu, Minh H., Nyholm, Tufve, Lofstedt, Tommy, Ballestar, Laura Mora, Vilaplana, Veronica, McHugh, Hugh, Talou, Gonzalo Maso, Wang, Alan, Patel, Jay, Chang, Ken, Hoebel, Katharina, Gidwani, Mishka, Arun, Nishanth, Gupta, Sharut, Aggarwal, Mehak, Singh, Praveer, Gerstner, Elizabeth R., Kalpathy-Cramer, Jayashree, Boutry, Nicolas, Huard, Alexis, Vidyaratne, Lasitha, Rahman, Md Monibor, Iftekharuddin, Khan M., Chazalon, Joseph, Puybareau, Elodie, Tochon, Guillaume, Ma, Jun, Cabezas, Mariano, Llado, Xavier, Oliver, Arnau, Valencia, Liliana, Valverde, Sergi, Amian, Mehdi, Soltaninejad, Mohammadreza, Myronenko, Andriy, Hatamizadeh, Ali, Feng, Xue, Dou, Quan, Tustison, Nicholas, Meyer, Craig, Shah, Nisarg A., Talbar, Sanjay, Weber, Marc-Andre, Mahajan, Abhishek, Jakab, Andras, Wiest, Roland, Fathallah-Shaykh, Hassan M., Nazeri, Arash, Milchenko, Mikhail, Marcus, Daniel, Kotrotsou, Aikaterini, Colen, Rivka, Freymann, John, Kirby, Justin, Davatzikos, Christos, Menze, Bjoern, Bakas, Spyridon, Gal, Yarin, Arbel, Tal
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 tasks on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and low confidence in incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by the 14 independent teams that participated in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses.
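The abstract describes a threshold-based idea: at each uncertainty threshold, the most uncertain voxels are set aside for review, the Dice score is recomputed on the remaining voxels, and the fractions of filtered-out correct assertions (true positives and true negatives) are tracked so that over-filtering of correct voxels is penalized. The sketch below illustrates this principle only; the threshold set, the [0, 100] normalization of uncertainties, and the use of a simple mean in place of an area-under-curve aggregation are assumptions for illustration, not the official challenge implementation.

```python
import numpy as np

def filtered_dice_score(pred, target, uncertainty, thresholds=(25, 50, 75, 100)):
    """Illustrative threshold-based uncertainty evaluation for one binary
    tumor sub-region. `pred` and `target` are binary arrays of equal shape;
    `uncertainty` is an array of the same shape normalized to [0, 100]
    (100 = maximally uncertain). At each threshold tau, voxels with
    uncertainty >= tau are marked for review and excluded from scoring."""
    dices, ftp_ratios, ftn_ratios = [], [], []
    tp_all = np.logical_and(pred == 1, target == 1).sum()   # unfiltered TPs
    tn_all = np.logical_and(pred == 0, target == 0).sum()   # unfiltered TNs
    for tau in thresholds:
        keep = uncertainty < tau                             # confident voxels only
        p, t = pred[keep], target[keep]
        tp = np.logical_and(p == 1, t == 1).sum()
        tn = np.logical_and(p == 0, t == 0).sum()
        fp = np.logical_and(p == 1, t == 0).sum()
        fn = np.logical_and(p == 0, t == 1).sum()
        dices.append(2 * tp / max(2 * tp + fp + fn, 1))      # Dice on kept voxels
        # fractions of originally correct voxels that were filtered out
        ftp_ratios.append((tp_all - tp) / max(tp_all, 1))
        ftn_ratios.append((tn_all - tn) / max(tn_all, 1))
    # reward high filtered Dice, penalize filtering of correct assertions
    return (np.mean(dices) + (1 - np.mean(ftp_ratios)) + (1 - np.mean(ftn_ratios))) / 3
```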
Monte-Carlo Sampling applied to Multiple Instance Learning for Histological Image Classification
Combalia, Marc, Vilaplana, Veronica
We propose a patch sampling strategy based on a sequential Monte-Carlo method for high-resolution image classification in the context of Multiple Instance Learning. Compared with grid sampling and uniform sampling techniques, it achieves higher generalization performance. We validate the strategy on two artificial datasets and two histological datasets for breast cancer and sun exposure classification.
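As a rough illustration of what sequential Monte-Carlo patch sampling for Multiple Instance Learning can look like (uniformly drawn locations are iteratively rescored and resampled toward informative regions), here is a sketch. The function names, the exploration mix, and the location jitter are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def smc_patch_sampling(score_fn, image_shape, patch_size, n_patches=64,
                       n_steps=5, explore=0.2, jitter=16, rng=None):
    """Illustrative sequential Monte-Carlo patch sampler: start from uniformly
    drawn top-left patch corners, score each patch with the current classifier
    (`score_fn(y, x)` returning a nonnegative relevance), then resample
    locations in proportion to those scores, mixed with a uniform exploration
    term, and jitter them before the next iteration."""
    rng = rng or np.random.default_rng()
    h = image_shape[0] - patch_size
    w = image_shape[1] - patch_size
    locs = np.stack([rng.integers(0, h, n_patches),
                     rng.integers(0, w, n_patches)], axis=1)
    for _ in range(n_steps):
        scores = np.clip([score_fn(y, x) for y, x in locs], 0, None).astype(float)
        weights = scores / max(scores.sum(), 1e-8)
        weights = (1 - explore) * weights + explore / n_patches   # keep exploring
        idx = rng.choice(n_patches, size=n_patches, p=weights / weights.sum())
        locs = locs[idx] + rng.integers(-jitter, jitter + 1, size=(n_patches, 2))
        locs[:, 0] = np.clip(locs[:, 0], 0, h)
        locs[:, 1] = np.clip(locs[:, 1], 0, w)
    return locs  # final locations concentrate on patches the model finds informative
```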
Brain MRI super-resolution using 3D generative adversarial networks
Sanchez, Irina, Vilaplana, Veronica
In this work, we propose an adversarial learning approach to generate high-resolution MRI scans from low-resolution images. The architecture, based on the SRGAN model, uses 3D convolutions to exploit volumetric information. The discriminator is trained with a least-squares adversarial loss to stabilize training. The generator loss combines a least-squares adversarial term with a content term based on mean squared error and image gradients, which improves the quality of the generated images. We explore different solutions for the upsampling phase. We present promising results that improve on classical interpolation, showing the potential of the approach for 3D medical imaging super-resolution. Source code is available at https://github.com/imatge-upc/3D-GAN-superresolution
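The full implementation is in the linked repository. As a minimal sketch of how such a generator objective could be assembled in PyTorch, the snippet below combines a least-squares adversarial term with an MSE content term and a finite-difference image-gradient term; the loss weights and the gradient formulation are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def gradient_3d(x):
    # finite differences along depth, height, and width of a (N, C, D, H, W) volume
    dz = x[:, :, 1:, :, :] - x[:, :, :-1, :, :]
    dy = x[:, :, :, 1:, :] - x[:, :, :, :-1, :]
    dx = x[:, :, :, :, 1:] - x[:, :, :, :, :-1]
    return dz, dy, dx

def generator_loss(d_fake, sr, hr, adv_weight=1e-3, grad_weight=1e-1):
    """Sketch of a generator objective: least-squares adversarial loss plus a
    content term made of voxel-wise MSE and an image-gradient difference.
    `d_fake` is the discriminator output on the super-resolved volume `sr`,
    `hr` is the high-resolution reference."""
    adv = torch.mean((d_fake - 1.0) ** 2)            # push D(sr) toward the "real" label
    mse = F.mse_loss(sr, hr)                          # voxel-wise content term
    grad = sum(F.l1_loss(gs, gh)                      # gradient-based content term
               for gs, gh in zip(gradient_3d(sr), gradient_3d(hr)))
    return mse + grad_weight * grad + adv_weight * adv
```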
Voxelwise nonlinear regression toolbox for neuroimage analysis: Application to aging and neurodegenerative disease modeling
Puch, Santi, Aduriz, Asier, Casamitjana, Adrià, Vilaplana, Veronica, Petrone, Paula, Operto, Grégory, Cacciaglia, Raffaele, Skouras, Stavros, Falcon, Carles, Molinuevo, José Luis, Gispert, Juan Domingo
This paper describes a new neuroimaging analysis toolbox that allows modeling nonlinear effects at the voxel level, overcoming limitations of methods based on linear models such as the general linear model (GLM). We illustrate its features with a relevant example in which distinct nonlinear trajectories of Alzheimer's disease-related brain atrophy patterns were found across the full biological spectrum of the disease. The open-source toolbox presented in this paper is available at https://github.com/imatge-upc/VNeAT.
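The toolbox's actual interface is documented in the linked repository. As a rough illustration of the underlying idea (fitting a nonlinear curve of a covariate to each voxel independently and comparing it against a linear fit), here is a sketch; the array layout, the age covariate, and the polynomial degree are assumptions, not the VNeAT API.

```python
import numpy as np

def voxelwise_poly_fit(images, age, degree=3):
    """Illustrative per-voxel nonlinear regression. `images` is an
    (n_subjects, n_voxels) array of voxel values across subjects and `age`
    is the covariate per subject. For each voxel, fit a degree-`degree`
    polynomial and a linear model, and return the explained variance (R^2)
    of each so that voxels with markedly nonlinear trajectories stand out."""
    n_subj, n_vox = images.shape
    r2_nonlin = np.empty(n_vox)
    r2_lin = np.empty(n_vox)
    for v in range(n_vox):
        y = images[:, v]
        ss_tot = np.sum((y - y.mean()) ** 2) + 1e-12   # guard against flat voxels
        for deg, out in ((degree, r2_nonlin), (1, r2_lin)):
            coeffs = np.polyfit(age, y, deg)            # least-squares curve fit
            resid = y - np.polyval(coeffs, age)
            out[v] = 1.0 - np.sum(resid ** 2) / ss_tot
    return r2_nonlin, r2_lin
```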