DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images
Diaz-Pinto, Andres, Mehta, Pritesh, Alle, Sachidanand, Asad, Muhammad, Brown, Richard, Nath, Vishwesh, Ihsani, Alvin, Antonelli, Michela, Palkovics, Daniel, Pinter, Csaba, Alkalay, Ron, Pieper, Steve, Roth, Holger R., Xu, Daguang, Dogra, Prerna, Vercauteren, Tom, Feng, Andrew, Quraini, Abood, Ourselin, Sebastien, Cardoso, M. Jorge
Automatic segmentation of medical images is a key step for diagnostic and interventional tasks. However, achieving this requires large numbers of annotated volumes, which is a tedious and time-consuming task for expert annotators. In this paper, we introduce DeepEdit, a deep learning-based method for volumetric medical image annotation that allows automatic and semi-automatic segmentation as well as click-based refinement. DeepEdit combines two approaches, a non-interactive one (i.e. automatic segmentation using nnU-Net, UNET or UNETR) and an interactive one (i.e. DeepGrow), in a single deep learning model. It allows easy integration of uncertainty-based ranking strategies (i.e. aleatoric and epistemic uncertainty computation) and active learning. We propose and implement a training scheme for DeepEdit that combines standard supervised training with user interaction simulation. Once trained, DeepEdit allows clinicians to quickly segment their datasets by using the algorithm in automatic segmentation mode or by providing clicks via a user interface (e.g. 3D Slicer, OHIF). We show the value of DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic lesions and the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset for abdominal CT segmentation, using state-of-the-art network architectures as baselines for comparison. DeepEdit can reduce the time and effort required to annotate 3D medical images compared to DeepGrow alone. Source code is available at https://github.com/Project-MONAI/MONAILabel
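To make the interaction-simulation idea concrete, the following is a minimal, self-contained sketch of a DeepEdit-style training step, not the authors' implementation: the network receives the image plus two extra guidance channels for simulated foreground and background clicks, which are zeroed in automatic mode and populated from prediction/ground-truth discrepancies in interactive mode. All names, shapes, and the binary single-class setting are illustrative assumptions.

import torch
import torch.nn.functional as F


def simulate_click(error_region: torch.Tensor) -> torch.Tensor:
    """Return a guidance volume (DxHxW) with a single simulated click placed
    at a random voxel of the given binary error region."""
    guidance = torch.zeros(error_region.shape, dtype=torch.float32)
    idx = torch.nonzero(error_region)
    if len(idx) > 0:
        d, h, w = idx[torch.randint(len(idx), (1,)).item()]
        guidance[d, h, w] = 1.0
    return guidance


def training_step(model, optimizer, image, label, interactive: bool):
    """One training iteration. image: 1xDxHxW float, label: DxHxW binary.
    The model is assumed to take 3 input channels (image + foreground and
    background guidance) and to return a single logit channel."""
    fg = torch.zeros(label.shape, dtype=torch.float32)
    bg = torch.zeros(label.shape, dtype=torch.float32)
    if interactive:
        # First pass without guidance, then simulate corrective clicks on
        # false negatives (foreground) and false positives (background).
        with torch.no_grad():
            logits = model(torch.cat([image, fg[None], bg[None]])[None])
            pred = (torch.sigmoid(logits[0, 0]) > 0.5).long()
        fg = simulate_click((label == 1) & (pred == 0))
        bg = simulate_click((label == 0) & (pred == 1))
    logits = model(torch.cat([image, fg[None], bg[None]])[None])
    loss = F.binary_cross_entropy_with_logits(logits[0, 0], label.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In such a scheme, the interactive branch would be taken only for a fraction of iterations (e.g. half), so that a single network learns both the fully automatic and the click-guided behaviour.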
MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images
Diaz-Pinto, Andres, Alle, Sachidanand, Nath, Vishwesh, Tang, Yucheng, Ihsani, Alvin, Asad, Muhammad, Pérez-García, Fernando, Mehta, Pritesh, Li, Wenqi, Flores, Mona, Roth, Holger R., Vercauteren, Tom, Xu, Daguang, Dogra, Prerna, Ourselin, Sebastien, Feng, Andrew, Cardoso, M. Jorge
The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, since manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models that aim at reducing the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used off the shelf, as plug-and-play solutions for any given dataset. Significantly reduced annotation times using the interactive model were observed on two public datasets.
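As an illustration of the kind of active learning strategy mentioned above, the sketch below ranks unlabeled volumes by epistemic uncertainty estimated with Monte Carlo dropout, so that the most uncertain volumes are proposed for annotation first. This is a conceptual example under assumed inputs, not MONAI Label's own strategy API; all names are hypothetical.

import torch


def mc_dropout_uncertainty(model, image: torch.Tensor, n_samples: int = 10) -> float:
    """Run the model several times with dropout kept active and score the
    volume (CxDxHxW) by the mean voxel-wise variance of the predicted
    foreground probabilities; higher variance means more uncertain."""
    model.train()  # keep dropout layers stochastic during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.sigmoid(model(image[None]))[0, 0] for _ in range(n_samples)]
        )
    return probs.var(dim=0).mean().item()


def rank_unlabeled(model, volumes: dict) -> list:
    """Return unlabeled volume ids ordered from most to least uncertain."""
    scores = {vid: mc_dropout_uncertainty(model, img) for vid, img in volumes.items()}
    return sorted(scores, key=scores.get, reverse=True)

In an annotation loop, the top-ranked volume would be presented to the annotator next, and the model retrained once the new label is submitted.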