Dysphagia
Video-SwinUNet: Spatio-temporal Deep Learning Framework for VFSS Instance Segmentation
Zeng, Chengxi, Yang, Xinyu, Smithard, David, Mirmehdi, Majid, Gambaruto, Alberto M, Burghardt, Tilo
This paper presents a deep learning framework for medical video segmentation. Convolutional neural network (CNN) and transformer-based methods have achieved great milestones in medical image segmentation tasks due to their strong semantic feature encoding and global information comprehension abilities. However, most existing approaches ignore a salient aspect of medical video data - the temporal dimension. Our proposed framework explicitly extracts features from neighbouring frames across the temporal dimension and incorporates them with a temporal feature blender, which then tokenises the high-level spatio-temporal feature to form a strong global feature encoded via a Swin Transformer. The final segmentation results are produced via a UNet-like encoder-decoder architecture. Our model outperforms other approaches by a significant margin and improves the segmentation benchmarks on the VFSS2022 dataset, achieving Dice coefficients of 0.8986 and 0.8186 on the two datasets tested. Our studies also show the efficacy of the temporal feature blending scheme and the cross-dataset transferability of learned capabilities. Code and models are fully available at https://github.com/SimonZeng7108/Video-SwinUNet.
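The abstract describes blending features extracted from neighbouring frames into one spatio-temporal feature before tokenisation. The following is a minimal sketch of one plausible blending scheme - a convex combination of per-frame feature vectors with softmax-normalised weights. The function names and the fixed weights are illustrative assumptions, not the paper's actual implementation (which operates on CNN feature maps with learned parameters).

```python
import math

def softmax(ws):
    """Normalise raw weights so the blend is a convex combination."""
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

def blend_temporal_features(frame_features, raw_weights):
    """Blend per-frame feature vectors into one fused feature.

    frame_features: list of equally sized feature vectors, one per
    neighbouring frame (centre frame included).
    raw_weights: one scalar per frame (learned in practice; fixed
    here for the sketch).
    """
    weights = softmax(raw_weights)
    fused = [0.0] * len(frame_features[0])
    for w, feat in zip(weights, frame_features):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused

# Three neighbouring frames with 4-dim features; the centre frame
# gets the largest raw weight, so it dominates the fused feature.
frames = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]]
fused = blend_temporal_features(frames, [0.0, 1.0, 0.0])
```

In the real pipeline the fused feature map would then be tokenised and passed to the Swin Transformer encoder; here the sketch only shows the blending step itself.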
A Prototype Intelligent Assistant to Help Dysphagia Patients Eat Safely At Home
Freed, Michael (SRI International) | Burns, Brian (SRI International) | Heller, Aaron (SRI International) | Sanchez, Daniel (SRI International) | Beaumont-Bowman, Sharon (Brooklyn College)
For millions of people with swallowing disorders, preventing potentially deadly aspiration pneumonia requires following prescribed safe eating strategies. But adherence is poor, and caregivers’ ability to encourage adherence is limited by the onerous and socially aversive need to monitor another person’s eating. We have developed an early prototype of an intelligent assistant that monitors adherence and provides feedback to the patient, and tested monitoring precision with healthy subjects for one strategy called a “chin tuck.” Results indicate that adaptations of current-generation machine vision and personal assistant technologies could effectively monitor chin tuck adherence, and suggest the feasibility of a more general assistant that encourages adherence to a wide range of safe eating strategies.
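A chin tuck lowers the chin toward the chest, which in a machine-vision pipeline would appear as a downward head pitch. As a hypothetical sketch of the adherence check such an assistant might run - assuming an upstream pose estimator supplies per-frame pitch angles, with the threshold and hold duration invented for illustration, not taken from the paper:

```python
# Hypothetical chin-tuck adherence check. Assumes an upstream
# machine-vision head-pose estimator yields a pitch angle per video
# frame (degrees; more negative = chin tucked further down).
PITCH_THRESHOLD_DEG = -15.0   # chin considered tucked at or below this pitch
MIN_HOLD_FRAMES = 10          # posture must be held this many consecutive frames

def chin_tuck_adherent(pitch_angles):
    """Return True if the chin tuck was held long enough in the clip."""
    run = best = 0
    for pitch in pitch_angles:
        run = run + 1 if pitch <= PITCH_THRESHOLD_DEG else 0
        best = max(best, run)
    return best >= MIN_HOLD_FRAMES

# A clip where the tucked posture is held for 12 consecutive frames.
clip = [0.0] * 5 + [-20.0] * 12 + [0.0] * 3
ok = chin_tuck_adherent(clip)
```

A real system would add feedback timing and robustness to pose-estimation noise, but the core adherence decision can be as simple as a thresholded run length like this.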