Spatiotemporal Learning with Context-aware Video Tubelets for Ultrasound Video Analysis

Gary Y. Li, Li Chen, Bryson Hicks, Nikolai Schnittke, David O. Kessler, Jeffrey Shupp, Maria Parker, Cristiana Baloescu, Christopher Moore, Cynthia Gregory, Kenton Gregory, Balasundar Raju, Jochen Kruecker, Alvin Chen

arXiv.org Artificial Intelligence 

Computer-aided pathology detection algorithms for video-based imaging modalities must accurately interpret complex spatiotemporal information by integrating findings across multiple frames. Current state-of-the-art methods classify video sub-volumes (tubelets), but they often lose global spatial context by focusing only on local regions within detection ROIs. Here we propose a lightweight framework for tubelet-based object detection and video classification that preserves both global spatial context and fine spatiotemporal features. To address the loss of global context, we embed tubelet location, size, and detection confidence as inputs to the classifier. Additionally, we reuse ROI-aligned feature maps from a pre-trained detection model, leveraging its learned feature representations to enlarge the receptive field while reducing computational complexity. Our method is efficient: the spatiotemporal tubelet classifier comprises only 0.4M parameters. We apply the approach to detecting and classifying lung consolidation and pleural effusion in ultrasound videos. In five-fold cross-validation on 14,804 videos from 828 patients, our method outperforms previous tubelet-based approaches and is well suited to real-time clinical workflows.
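The core idea of restoring global context can be illustrated with a minimal sketch: pooled ROI-aligned features from each frame of a tubelet are concatenated with a normalized context vector (tubelet location, size, and detector confidence) before classification. This is a simplified NumPy illustration under stated assumptions, not the authors' implementation; the function names, the 5-dimensional context encoding, and the linear classification head are all hypothetical choices for exposition.

```python
import numpy as np

def embed_tubelet_context(box, confidence, img_size):
    """Encode tubelet location, size, and detector confidence as a
    normalized context vector (hypothetical encoding; the paper's exact
    embedding is not specified in the abstract)."""
    x, y, w, h = box
    img_w, img_h = img_size
    return np.array([x / img_w, y / img_h, w / img_w, h / img_h, confidence],
                    dtype=np.float32)

def classify_tubelet(roi_features, context, weights, bias):
    """Toy tubelet classifier: average ROI-aligned features over frames,
    concatenate the global-context embedding, apply a linear head."""
    # roi_features: (T, C) pooled ROI-aligned features, one row per frame
    pooled = roi_features.mean(axis=0)         # temporal average -> (C,)
    fused = np.concatenate([pooled, context])  # reattach global context
    return weights @ fused + bias              # logits, shape (num_classes,)

# Toy usage: a tubelet spanning 8 frames, 16-dim pooled ROI features,
# classified into 2 classes (e.g., consolidation vs. effusion).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16)).astype(np.float32)
ctx = embed_tubelet_context(box=(100, 80, 50, 40), confidence=0.9,
                            img_size=(640, 480))
head_w = rng.standard_normal((2, 16 + 5)).astype(np.float32)
logits = classify_tubelet(feats, ctx, head_w, np.zeros(2, dtype=np.float32))
print(logits.shape)
```

In the full method, the linear head would be replaced by a small spatiotemporal network and the ROI features would come directly from the pre-trained detector's feature maps, which is what keeps the classifier lightweight.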