dysphagia


Deep Learning-Enabled Swallowing Monitoring and Postoperative Recovery Biosensing System

Tsai, Chih-Ning, Yang, Pei-Wen, Huang, Tzu-Yen, Chen, Jung-Chih, Tseng, Hsin-Yi, Wu, Che-Wei, Sarmah, Amrit, Lin, Tzu-En

arXiv.org Artificial Intelligence

This study introduces an innovative 3D-printed dry electrode tailored for biosensing in postoperative recovery scenarios. Fabricated through a drop-coating process, the electrode incorporates a novel 2D material.


Video-SwinUNet: Spatio-temporal Deep Learning Framework for VFSS Instance Segmentation

Zeng, Chengxi, Yang, Xinyu, Smithard, David, Mirmehdi, Majid, Gambaruto, Alberto M, Burghardt, Tilo

arXiv.org Artificial Intelligence

This paper presents a deep learning framework for medical video segmentation. Convolutional neural network (CNN) and transformer-based methods have achieved great milestones in medical image segmentation tasks owing to their strong semantic feature encoding and global information comprehension abilities. However, most existing approaches ignore a salient aspect of medical video data: the temporal dimension. Our proposed framework explicitly extracts features from neighbouring frames across the temporal dimension and combines them with a temporal feature blender, which then tokenises the high-level spatio-temporal features to form a strong global feature encoded via a Swin Transformer. The final segmentation results are produced via a UNet-like encoder-decoder architecture. Our model outperforms other approaches by a significant margin and improves the segmentation benchmarks on the VFSS2022 dataset, achieving Dice coefficients of 0.8986 and 0.8186 on the two datasets tested. Our studies also show the efficacy of the temporal feature blending scheme and the cross-dataset transferability of learned capabilities. Code and models are fully available at https://github.com/SimonZeng7108/Video-SwinUNet.
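The Dice coefficient reported above is the standard overlap metric for segmentation masks: twice the intersection of prediction and ground truth, divided by the sum of their sizes. A minimal sketch of the metric (not the authors' evaluation code; the function name and epsilon smoothing term are illustrative choices):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|), smoothed by eps
    so that two empty masks score ~1 instead of dividing by zero.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 2x3 binary masks overlapping in 2 pixels.
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
score = dice_coefficient(a, b)  # 2*2 / (3 + 3) ≈ 0.6667
print(score)
```

A Dice score of 1.0 means perfect overlap; the 0.8986 reported above indicates predicted masks agreeing closely with expert annotations.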


A Prototype Intelligent Assistant to Help Dysphagia Patients Eat Safely At Home

Freed, Michael (SRI International) | Burns, Brian (SRI International) | Heller, Aaron (SRI International) | Sanchez, Daniel (SRI International) | Beaumont-Bowman, Sharon (Brooklyn College)

AAAI Conferences

For millions of people with swallowing disorders, preventing potentially deadly aspiration pneumonia requires following prescribed safe-eating strategies. But adherence is poor, and caregivers’ ability to encourage adherence is limited by the onerous and socially aversive need to monitor another person’s eating. We have developed an early prototype of an intelligent assistant that monitors adherence and provides feedback to the patient, and tested monitoring precision with healthy subjects for one strategy called a “chin tuck.” Results indicate that adaptations of current-generation machine vision and personal assistant technologies could effectively monitor chin tuck adherence, and suggest the feasibility of a more general assistant that encourages adherence to a wide range of safe-eating strategies.
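The adherence monitoring described above reduces, at its core, to checking whether the head is pitched forward (chin toward chest) at each swallow event and aggregating those checks into an adherence rate. A minimal sketch of that aggregation step, assuming an upstream machine-vision component has already estimated head pitch — the `HeadPose` type, the 20-degree threshold, and all names here are hypothetical illustrations, not the paper's system:

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    """Estimated head orientation; positive pitch = chin toward chest."""
    pitch_deg: float

# Hypothetical threshold for counting a posture as a chin tuck;
# an illustrative value, not taken from the paper.
CHIN_TUCK_MIN_PITCH = 20.0

def is_chin_tucked(pose: HeadPose) -> bool:
    """Classify a single pose as tucked / not tucked."""
    return pose.pitch_deg >= CHIN_TUCK_MIN_PITCH

def adherence_rate(poses_at_swallows: list[HeadPose]) -> float:
    """Fraction of swallow events performed with the chin tucked."""
    if not poses_at_swallows:
        return 0.0
    tucked = sum(is_chin_tucked(p) for p in poses_at_swallows)
    return tucked / len(poses_at_swallows)

# Four swallow events from one meal: three tucked, one upright.
events = [HeadPose(25.0), HeadPose(5.0), HeadPose(30.0), HeadPose(22.0)]
print(adherence_rate(events))  # 3 of 4 events → 0.75
```

In a full assistant, a rate like this would drive the feedback loop to the patient; the real system would also need per-event swallow detection, which this sketch takes as given.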