Pieper, Steve
The NCI Imaging Data Commons as a platform for reproducible research in computational pathology
Schacherer, Daniela P., Herrmann, Markus D., Clunie, David A., Höfener, Henning, Clifford, William, Longabaugh, William J. R., Pieper, Steve, Kikinis, Ron, Fedorov, Andrey, Homeyer, André
Background and Objectives: Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides >120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research. Methods: Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services. Results: The AUC values of different runs of the same experiment were generally consistent. However, we observed small variations in AUC values of up to 0.045, indicating a practical limit to reproducibility. Conclusions: We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.
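A minimal illustrative sketch (not from the paper) of the reproducibility check this abstract describes: the same experiment is repeated and the spread of AUC values is compared. The hypothetical run_experiment() stands in for training/evaluating the lung tumor classifier on a fixed IDC-derived dataset, with the seed modeling the run-to-run nondeterminism of separate but identically configured cloud instances; only numpy and scikit-learn are assumed.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def run_experiment(seed):
        # Placeholder for one full train/evaluate run on the fixed dataset;
        # returns ground-truth tile labels and simulated classifier scores.
        rng = np.random.default_rng(seed)
        y_true = rng.integers(0, 2, 500)
        y_score = y_true * 0.6 + rng.random(500) * 0.4
        return y_true, y_score

    # Repeat the identically configured experiment and compare AUC values.
    aucs = [roc_auc_score(*run_experiment(seed)) for seed in range(5)]
    print(f"AUC spread across runs: {max(aucs) - min(aucs):.3f}")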
DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images
Diaz-Pinto, Andres, Mehta, Pritesh, Alle, Sachidanand, Asad, Muhammad, Brown, Richard, Nath, Vishwesh, Ihsani, Alvin, Antonelli, Michela, Palkovics, Daniel, Pinter, Csaba, Alkalay, Ron, Pieper, Steve, Roth, Holger R., Xu, Daguang, Dogra, Prerna, Vercauteren, Tom, Feng, Andrew, Quraini, Abood, Ourselin, Sebastien, Cardoso, M. Jorge
Automatic segmentation of medical images is a key step for diagnostic and interventional tasks. However, achieving this requires large amounts of annotated volumes, and annotating them is a tedious and time-consuming task for expert annotators. In this paper, we introduce DeepEdit, a deep learning-based method for volumetric medical image annotation that allows automatic and semi-automatic segmentation as well as click-based refinement. DeepEdit combines two methods into a single deep learning model: a non-interactive method (i.e., automatic segmentation using nnU-Net, UNET, or UNETR) and an interactive segmentation method (i.e., DeepGrow). It allows easy integration of uncertainty-based ranking strategies (i.e., aleatoric and epistemic uncertainty computation) and active learning. We propose and implement a method for training DeepEdit that combines standard training with simulated user interactions. Once trained, DeepEdit allows clinicians to quickly segment their datasets by running the algorithm in automatic segmentation mode or by providing clicks via a user interface (e.g., 3D Slicer, OHIF). We show the value of DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic lesion segmentation and on the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset for abdominal CT segmentation, using state-of-the-art network architectures as baselines for comparison. DeepEdit could reduce the time and effort of annotating 3D medical images compared to DeepGrow alone. Source code is available at https://github.com/Project-MONAI/MONAILabel
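An illustrative sketch (not the authors' code) of the core idea behind DeepEdit: the network input is the image stacked with extra channels encoding user clicks, so a single model can run fully automatically (empty click maps) or interactively (click-based refinement). Real implementations typically smooth the click maps into guidance signals; plain PyTorch is assumed here.

    import torch

    def add_click_guidance(image, fg_clicks, bg_clicks):
        # image: (1, D, H, W); *_clicks: lists of (z, y, x) voxel coordinates.
        # Empty click lists yield zero guidance, i.e. fully automatic mode.
        fg = torch.zeros_like(image)
        bg = torch.zeros_like(image)
        for z, y, x in fg_clicks:
            fg[0, z, y, x] = 1.0
        for z, y, x in bg_clicks:
            bg[0, z, y, x] = 1.0
        return torch.cat([image, fg, bg], dim=0)  # (3, D, H, W) network input

    volume = torch.rand(1, 32, 64, 64)
    auto_input = add_click_guidance(volume, [], [])              # automatic mode
    edit_input = add_click_guidance(volume, [(16, 30, 30)], [])  # one foreground click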
FiberStars: Visual Comparison of Diffusion Tractography Data between Multiple Subjects
Franke, Loraine, Weidele, Daniel Karl I., Zhang, Fan, Cetin-Karayumak, Suheyla, Pieper, Steve, O'Donnell, Lauren J., Rathi, Yogesh, Haehn, Daniel
Tractography from high-dimensional diffusion magnetic resonance imaging (dMRI) data allows analysis of the brain's structural connectivity. Recent dMRI studies aim to compare connectivity patterns across thousands of subjects to understand subtle abnormalities in white matter connectivity across disease populations. Besides connectivity differences, researchers are also interested in investigating the distributions of biologically sensitive dMRI-derived metrics across subject groups. Existing software products either focus solely on the anatomy or are not intuitive, and they restrict the comparison of multiple subjects. In this paper, we present the design and implementation of FiberStars, a visual analysis tool for tractography data that allows interactive and scalable visualization of brain fiber clusters in 2D and 3D. With FiberStars, researchers can analyze and compare multiple subjects in large collections of brain fibers. To evaluate the usability of our software, we performed a quantitative user study. We asked non-experts to find patterns in a large tractography dataset with either FiberStars or AFQ-Browser, an existing dMRI exploration tool. Our results show that participants using FiberStars navigate extensive collections of tractography data faster and more accurately. We discuss our findings and provide an analysis of the requirements for comparative visualizations of tractography data. All of our research, software, and results are openly available.
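A hypothetical sketch of the kind of group comparison FiberStars supports: contrasting the distribution of a dMRI-derived metric (here fractional anisotropy, FA) for one fiber cluster across two subject groups. The group names and values below are invented for illustration; only numpy is assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    # Mean FA per subject for one fiber cluster, two groups of 40 subjects each.
    controls = rng.normal(0.45, 0.03, 40)
    patients = rng.normal(0.42, 0.04, 40)

    for name, fa in [("controls", controls), ("patients", patients)]:
        q1, med, q3 = np.percentile(fa, [25, 50, 75])
        print(f"{name}: median FA {med:.3f} (IQR {q1:.3f}-{q3:.3f})")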