
Collaborating Authors: Yanik, Erim


Cognitive-Motor Integration in Assessing Bimanual Motor Skills

arXiv.org Artificial Intelligence

Biomedical Engineering Department, Rensselaer Polytechnic Institute, NY, USA

Accurate assessment of bimanual motor skills is essential across various professions, yet traditional methods often rely on subjective judgments or focus solely on motor actions, overlooking the integral role of cognitive processes. This study introduces a novel approach that leverages deep neural networks (DNNs) to analyze and integrate both cognitive decision-making and motor execution. We tested this methodology by assessing laparoscopic surgery skills within the Fundamentals of Laparoscopic Surgery (FLS) program, a prerequisite for general surgery certification. Using video capture of motor actions and non-invasive functional near-infrared spectroscopy (fNIRS) to measure neural activations, our approach accurately classifies subjects by expertise level and predicts FLS behavioral performance scores, significantly surpassing traditional single-modality assessments. In addition, we conduct a direct statistical comparison between neural activations and motor actions for assessing bimanual motor skills using DNNs, and we explore the synergy of these modalities in multimodal analysis applied to precision- and cognition-demanding tasks within the FLS program (Figure 1).
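To make the multimodal idea concrete, below is a minimal sketch of a late-fusion network that combines video-derived motor features with fNIRS neural activations and feeds both an expertise classifier and an FLS-score regressor. The layer sizes, feature dimensions, and fusion-by-concatenation strategy are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical late-fusion model: two modality encoders, concatenated,
# with a classification head (expertise level) and a regression head
# (FLS behavioral performance score). Dimensions are assumptions.
import torch
import torch.nn as nn

class MultimodalSkillNet(nn.Module):
    def __init__(self, video_dim=512, fnirs_dim=64, hidden=128, n_classes=3):
        super().__init__()
        # Separate encoder per modality.
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.fnirs_enc = nn.Sequential(nn.Linear(fnirs_dim, hidden), nn.ReLU())
        # Fused representation drives both assessment heads.
        self.classifier = nn.Linear(2 * hidden, n_classes)
        self.regressor = nn.Linear(2 * hidden, 1)

    def forward(self, video_feats, fnirs_feats):
        fused = torch.cat([self.video_enc(video_feats),
                           self.fnirs_enc(fnirs_feats)], dim=-1)
        return self.classifier(fused), self.regressor(fused)

# Example: a batch of 8 subjects with precomputed per-trial feature vectors.
model = MultimodalSkillNet()
logits, score = model(torch.randn(8, 512), torch.randn(8, 64))
```

Late fusion is only one plausible design; attention-based or intermediate fusion would follow the same two-encoder pattern with a different combination step.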


One-shot domain adaptation in video-based assessment of surgical skills

arXiv.org Artificial Intelligence

Deep learning (DL) has enabled automatic, objective assessment of surgical skills. However, the applicability of DL models is often hampered by their substantial data requirements and confinement to specific training domains, which prevents them from transitioning to new tasks where data is scarce. Domain adaptation therefore emerges as a critical element for the practical implementation of DL in real-world scenarios. Herein, we introduce A-VBANet, a novel meta-learning model capable of delivering domain-agnostic surgical skill classification via one-shot learning. A-VBANet has been rigorously developed and tested on five diverse laparoscopic and robotic surgical simulators, and we extend its validation to operating room (OR) videos of laparoscopic cholecystectomy. Our model successfully adapts with accuracies of up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks, and 89.7% for laparoscopic cholecystectomy. This research marks the first instance of a domain-agnostic methodology for surgical skill assessment, paving the way for more precise and accessible training evaluation across diverse high-stakes environments, such as real-life surgery, where data is scarce.
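The abstract does not detail A-VBANet's internals, but a common meta-learning recipe for one-shot classification is prototype matching over learned embeddings. The sketch below shows that recipe under stated assumptions: the embedding network, the Euclidean distance metric, and the two-class novice/expert setup are all illustrative, not the paper's actual method.

```python
# Hedged sketch of one-shot skill classification via class prototypes.
# Assumes clips from a new surgical domain have already been embedded
# by some feature extractor; the distance metric is an assumption.
import torch
import torch.nn.functional as F

def one_shot_classify(support, support_labels, query, n_classes):
    """support: (n_classes, d) one embedded example per class;
    query: (q, d) embedded clips from the new domain to grade."""
    # With one shot per class, each support embedding is its class prototype.
    prototypes = torch.zeros(n_classes, support.size(1))
    prototypes[support_labels] = support
    # Assign each query to the nearest prototype (softmax over -distance).
    dists = torch.cdist(query, prototypes)
    return F.softmax(-dists, dim=-1).argmax(dim=-1)

# Example: adapt to a new simulator with one "novice" and one "expert" clip.
emb_dim = 128
support = torch.randn(2, emb_dim)   # one embedding per class
labels = torch.tensor([0, 1])       # 0 = novice, 1 = expert
query = torch.randn(5, emb_dim)     # unlabeled clips to classify
print(one_shot_classify(support, labels, query, n_classes=2))
```

In a few-shot setting, the only change is that each prototype becomes the mean of that class's support embeddings rather than a single example.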


Deep Neural Networks for the Assessment of Surgical Skills: A Systematic Review

arXiv.org Artificial Intelligence

Surgical training in medical school residency programs has traditionally followed the apprenticeship model, in which learning and assessment are inherently subjective and time-consuming. Thus, there is a need for objective methods to assess surgical skills. Here, we use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to systematically survey the literature on the use of deep neural networks (DNNs) for automated and objective surgical skill assessment, with a focus on kinematic data as putative markers of surgical competency. There is considerable recent interest in DNNs due to the availability of powerful algorithms, multiple datasets (some publicly available), and efficient computational hardware to train and host them. We reviewed 530 papers, of which we selected 25 for this systematic review. Based on this review, we conclude that DNNs are powerful tools for automated, objective surgical skill assessment using both kinematic and video data. The field would benefit from large, publicly available, annotated datasets that are representative of surgical trainee and expert demographics, and from multimodal data beyond kinematics and videos.