Automatic L3 slice detection in 3D CT images using fully-convolutional networks Artificial Intelligence

The analysis of single CT slices extracted at the third lumbar vertebra (L3) has garnered significant clinical interest in the past few years, particularly with regard to quantifying sarcopenia (muscle loss). In this paper, we propose an efficient method to automatically detect the L3 slice in 3D CT images. Our method works with images with a variety of fields of view, occlusions, and slice thicknesses. 3D CT images are first converted into 2D via Maximal Intensity Projection (MIP), reducing the dimensionality of the problem. The MIP images are then used as input to a 2D fully-convolutional network to predict the L3 slice locations in the form of 2D confidence maps. In addition, we propose a variant architecture with fewer parameters that predicts 1D confidence maps, yielding slightly faster prediction without loss of accuracy. Quantitative evaluation of our method on a dataset of 1006 3D CT images yields a median error of 1 mm, matching the inter-rater median error of 1 mm obtained from two annotators and demonstrating that our method detects the L3 slice efficiently and accurately.
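The MIP step described above collapses a 3D volume onto a 2D plane by keeping the brightest voxel along one axis. A minimal sketch of that projection, assuming a NumPy volume in (z, y, x) order and a coronal view (the function name and axis convention are illustrative, not from the paper):

```python
import numpy as np

def coronal_mip(volume):
    """Collapse a 3D CT volume (z, y, x) into a 2D coronal image by
    taking the maximum intensity along the anterior-posterior (y) axis."""
    return volume.max(axis=1)

# Toy volume: a single bright voxel should survive the projection.
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 1000.0
mip = coronal_mip(vol)  # 2D image of shape (4, 6)
```

Because bone is the brightest tissue in CT, the spine stays clearly visible in the projection, which is what makes the 2D network's job tractable.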

AI gets a backbone: Deep learning makes it easier for doctors to read spine scans


The spinal cord is the vital link between the brain and body, a superhighway of life-critical information protected by the bony vertebrae of the spinal column. The spinal column is the most common site for bone metastasis, in which tumors spread from internal organs to the bones. Estimates indicate that at least 30% and as many as 70% of patients with cancer will experience spread of cancer to their spine. Each year, about 10,000 Americans develop primary or metastatic spinal cord tumors. "The spine is a place where people can miss lesions, especially very small metastases, which could be the difference between finding an early, treatable tumor and finding a tumor too late to be treatable," says Dr. Michael Fanariotis, chief radiologist for CT at the Telemark Hospital in Skien, Norway.


AAAI Conferences

Spine Surgery

P. Merloz, Service de Chirurgie Orthopedique, CHU Grenoble, 38700 La Tronche, France

Abstract: Computer-assisted spine surgery follows the basic ideas developed for Computer Assisted Medical Intervention (CAMI). Quantitative analysis of medical images makes it possible to localize anatomical structures with great accuracy, which is fruitfully used to drive guiding systems. This approach tends to minimize invasiveness and increase the quality of surgical interventions. In this article we present our methodology and report results leading to clinical experimentation.

Automatic Pulmonary Lobe Segmentation Using Deep Learning Artificial Intelligence

Pulmonary lobe segmentation is an important task for pulmonary disease related Computer Aided Diagnosis systems (CADs). Classical methods for lobe segmentation rely on successful detection of fissures and other anatomical information such as the location of blood vessels and airways. With the success of deep learning in recent years, Deep Convolutional Neural Networks (DCNNs) have been widely applied to analyze medical images such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI); this, however, requires a large number of ground truth annotations. In this work, we release 50 manually labeled CT scans randomly chosen from the LUNA16 dataset and explore the use of deep learning on this task. We propose pre-processing the CT images by cropping the region covered by the convex hull of the lungs, in order to mitigate the influence of noise from outside the lungs. Moreover, we design a hybrid loss function that combines dice loss, to tackle the extreme class imbalance, with focal loss, to force the model to focus on voxels that are hard to discriminate. To validate the robustness and performance of our proposed framework trained with a small number of training examples, we further tested our model on CT scans from an independent dataset. Experimental results show the robustness of the proposed approach, which consistently improves performance across different datasets by up to 5.87% compared to a baseline model.
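The hybrid loss mentioned above pairs a soft Dice term (robust to class imbalance) with a focal term (down-weights easy voxels). A minimal NumPy sketch of the standard forms of these losses, with function names and the balancing weight `alpha` as illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    # Soft Dice loss over flat arrays of foreground probabilities and
    # binary targets: 1 - 2|P*T| / (|P| + |T|).
    inter = np.sum(probs * targets)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(targets) + eps)

def focal_loss(probs, targets, gamma=2.0, eps=1e-6):
    # Focal loss: cross-entropy scaled by (1 - p_t)^gamma, so confident,
    # correctly-classified voxels contribute almost nothing.
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps))

def hybrid_loss(probs, targets, alpha=1.0):
    # Weighted sum of the two terms; alpha is a tunable hyperparameter.
    return dice_loss(probs, targets) + alpha * focal_loss(probs, targets)
```

In practice a framework like PyTorch would compute these on GPU tensors, but the arithmetic is the same.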

Leveraging Pre-Trained 3D Object Detection Models For Fast Ground Truth Generation Artificial Intelligence

Training 3D object detectors for autonomous driving has been limited to small datasets due to the effort required to generate annotations. Reducing both task complexity and the amount of task switching done by annotators is key to reducing the effort and time required to generate 3D bounding box annotations. This paper introduces a novel ground truth generation method that combines human supervision with pretrained neural networks to generate per-instance 3D point cloud segmentation, 3D bounding boxes, and class annotations. The annotators provide object anchor clicks, which serve as seeds to generate instance segmentation results in 3D. The points belonging to each instance are then used to regress object centroids, bounding box dimensions, and object orientation. Our proposed annotation scheme reduces human annotation time by a factor of 30. We use the KITTI 3D object detection dataset to evaluate the efficiency and the quality of our annotation scheme. We also test the proposed scheme on previously unseen data from the Autonomoose self-driving vehicle to demonstrate the generalization capabilities of the network.
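The pipeline above turns each instance's segmented points into box parameters. The paper regresses centroid, dimensions, and orientation with a network; as a point of reference, the purely geometric (axis-aligned, orientation omitted) version of that step is just min/max/mean statistics over the instance's points. A sketch under those simplifying assumptions, with the function name being illustrative:

```python
import numpy as np

def box_from_instance(points):
    """Axis-aligned box from an instance's segmented 3D points (N, 3):
    returns the centroid and per-axis extents. Orientation regression,
    which the learned approach also handles, is omitted here."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return points.mean(axis=0), hi - lo

# Two opposite corners of a toy "instance" point cloud.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0]])
centroid, dims = box_from_instance(pts)
```

The learned regression improves on this geometric baseline mainly when the point cloud is partial (e.g. only one face of a car is visible to the LiDAR), so the true box extends beyond the observed points.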