Spatiotemporal Learning with Context-aware Video Tubelets for Ultrasound Video Analysis
Li, Gary Y., Chen, Li, Hicks, Bryson, Schnittke, Nikolai, Kessler, David O., Shupp, Jeffrey, Parker, Maria, Baloescu, Cristiana, Moore, Christopher, Gregory, Cynthia, Gregory, Kenton, Raju, Balasundar, Kruecker, Jochen, Chen, Alvin
Computer-aided pathology detection algorithms for video-based imaging modalities must accurately interpret complex spatiotemporal information by integrating findings across multiple frames. Current state-of-the-art methods classify video sub-volumes (tubelets) but often lose global spatial context by focusing only on local regions within detection ROIs. Here we propose a lightweight framework for tubelet-based object detection and video classification that preserves both global spatial context and fine spatiotemporal features. To address the loss of global context, we embed tubelet location, size, and confidence as inputs to the classifier. Additionally, we use ROI-aligned feature maps from a pre-trained detection model, leveraging learned feature representations to increase the receptive field and reduce computational complexity. Our method is efficient, with the spatiotemporal tubelet classifier comprising only 0.4M parameters. We apply our approach to detect and classify lung consolidation and pleural effusion in ultrasound videos. Five-fold cross-validation on 14,804 videos from 828 patients shows our method outperforms previous tubelet-based approaches and is suited for real-time workflows.
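The core idea of embedding tubelet location, size, and confidence alongside ROI-aligned features can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the 5-dimensional context encoding, and the frame-wise concatenation are assumptions about one plausible way to restore global spatial context to a local-ROI classifier.

```python
import numpy as np

def build_tubelet_input(roi_features, box, confidence, frame_size):
    """Concatenate ROI-aligned features with normalized tubelet context.

    roi_features: (T, D) pooled feature vectors, one per frame of the tubelet.
    box: (x, y, w, h) of the tubelet in pixels.
    confidence: detection score of the tubelet.
    frame_size: (width, height) of the full frame, used for normalization.
    """
    w_img, h_img = frame_size
    x, y, w, h = box
    # Normalized location/size restore the global spatial context that is
    # otherwise lost when features are cropped to the detection ROI.
    context = np.array(
        [x / w_img, y / h_img, w / w_img, h / h_img, confidence],
        dtype=np.float32,
    )
    # Broadcast the 5-dim context to every frame and append channel-wise,
    # so the downstream classifier sees both local features and global cues.
    context_tiled = np.tile(context, (roi_features.shape[0], 1))
    return np.concatenate([roi_features, context_tiled], axis=1)  # (T, D + 5)

feats = np.random.rand(8, 64).astype(np.float32)  # 8 frames, 64-dim ROI features
x = build_tubelet_input(feats, box=(40, 60, 32, 24), confidence=0.9,
                        frame_size=(256, 192))
print(x.shape)  # (8, 69)
```

A classifier this small (the paper reports 0.4M parameters) can then consume the augmented per-frame vectors directly, since the added context costs only five extra channels.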
Ultrasound Image Synthesis Using Generative AI for Lung Ultrasound Detection
Chou, Yu-Cheng, Li, Gary Y., Chen, Li, Zahiri, Mohsen, Balaraju, Naveen, Patil, Shubham, Hicks, Bryson, Schnittke, Nikolai, Kessler, David O., Shupp, Jeffrey, Parker, Maria, Baloescu, Cristiana, Moore, Christopher, Gregory, Cynthia, Gregory, Kenton, Raju, Balasundar, Kruecker, Jochen, Chen, Alvin
Developing reliable healthcare AI models requires training with representative and diverse data. In imbalanced datasets, model performance tends to plateau on the more prevalent classes while remaining low on less common cases. To overcome this limitation, we propose DiffUltra, the first generative AI technique capable of synthesizing realistic Lung Ultrasound (LUS) images with extensive lesion variability. Specifically, we condition the generative model on the proposed Lesion-anatomy Bank, which captures the lesion's structural and positional properties from real patient data to guide the image synthesis. We demonstrate that DiffUltra improves consolidation detection by 5.6% in AP compared to models trained solely on real patient data. More importantly, DiffUltra increases data diversity and the prevalence of rare cases, leading to a 25% AP improvement in detecting rare instances such as large lung consolidations, which make up only 10% of the dataset.
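The role of the Lesion-anatomy Bank described above can be illustrated with a minimal sketch. Everything here is hypothetical scaffolding: the bank entries, the binary-mask conditioning format, and the function names are assumptions, chosen only to show how stored structural and positional properties from real data could guide a generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lesion-anatomy bank: each entry stores a lesion's pixel size
# and normalized center position, as measured from real patient images.
lesion_bank = [
    {"size": (12, 20), "center": (0.30, 0.60)},
    {"size": (30, 45), "center": (0.55, 0.40)},
]

def sample_condition_mask(bank, image_shape=(128, 128)):
    """Draw one bank entry and rasterize it into a binary conditioning mask."""
    entry = bank[rng.integers(len(bank))]
    h, w = image_shape
    lh, lw = entry["size"]
    cy = int(entry["center"][0] * h)
    cx = int(entry["center"][1] * w)
    mask = np.zeros(image_shape, dtype=np.float32)
    # The lesion footprint tells the generator where and how large the
    # synthesized lesion should be, anchoring it to realistic anatomy.
    y0 = max(cy - lh // 2, 0)
    x0 = max(cx - lw // 2, 0)
    mask[y0:y0 + lh, x0:x0 + lw] = 1.0
    return mask

mask = sample_condition_mask(lesion_bank)
print(mask.shape)  # (128, 128)
```

Because rare configurations (e.g. large consolidations) can be oversampled from the bank, the synthetic data shifts the class balance toward the cases where real data is scarce.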