grasper


Neural-Augmented Kelvinlet for Real-Time Soft Tissue Deformation Modeling

Shahbazi, Ashkan, Pereira, Kyvia, Heiselman, Jon S., Akbari, Elaheh, Benson, Annie C., Seifi, Sepehr, Liu, Xinyuan, Johnston, Garrison L., Wu, Jie Ying, Simaan, Nabil, Miga, Michael L., Kolouri, Soheil

arXiv.org Artificial Intelligence

Accurate and efficient modeling of soft-tissue interactions is fundamental for advancing surgical simulation, surgical robotics, and model-based surgical automation. To achieve real-time latency, classical Finite Element Method (FEM) solvers are often replaced with neural approximations; however, naively training such models in a fully data-driven manner without incorporating physical priors frequently leads to poor generalization and physically implausible predictions. We present a novel physics-informed neural simulation framework that enables real-time prediction of soft-tissue deformations under complex single- and multi-grasper interactions. Our approach integrates Kelvinlet-based analytical priors with large-scale FEM data, capturing both linear and nonlinear tissue responses. This hybrid design improves predictive accuracy and physical plausibility across diverse neural architectures while maintaining the low-latency performance required for interactive applications. We validate our method on challenging surgical manipulation tasks involving standard laparoscopic grasping tools, demonstrating substantial improvements in deformation fidelity and temporal stability over existing baselines. These results establish Kelvinlet-augmented learning as a principled and computationally efficient paradigm for real-time, physics-aware soft-tissue simulation in surgical AI.
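The Kelvinlet prior referenced above has a closed analytical form. A minimal sketch of the classical regularized Kelvinlet response (de Goes & James, 2017) is below; the material parameters and regularization radius are illustrative defaults, not the paper's settings, and the paper layers a learned neural correction on top of this prior.

```python
import math

def kelvinlet_displacement(r, f, mu=1.0, nu=0.45, eps=0.1):
    """Regularized Kelvinlet displacement at offset r from a point load f.

    Classical closed form: u(r) = [(a-b)/r_e + a*eps^2/(2 r_e^3)] f
                                  + (b/r_e^3) (r . f) r,
    with a = 1/(4 pi mu), b = a/(4(1-nu)), r_e = sqrt(|r|^2 + eps^2).
    mu is the shear modulus, nu the Poisson ratio (illustrative values).
    """
    a = 1.0 / (4.0 * math.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    r2 = sum(c * c for c in r)
    r_eps = math.sqrt(r2 + eps * eps)
    rf = sum(rc * fc for rc, fc in zip(r, f))  # r . f
    s1 = (a - b) / r_eps + a * eps * eps / (2.0 * r_eps ** 3)
    s2 = b / r_eps ** 3
    return [s1 * fc + s2 * rf * rc for rc, fc in zip(r, f)]
```

At the load point the displacement is parallel to the applied force and the response decays smoothly with distance, which is what makes it a cheap, well-behaved physics prior for a neural model.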


Enhancing Surgical Documentation through Multimodal Visual-Temporal Transformers and Generative AI

Georgenthum, Hugo, Cosentino, Cristian, Marozzo, Fabrizio, Liò, Pietro

arXiv.org Artificial Intelligence

The automatic summarization of surgical videos is essential for enhancing procedural documentation, supporting surgical training, and facilitating post-operative analysis. This paper presents a novel method at the intersection of artificial intelligence and medicine, aiming to develop machine learning models with direct real-world applications in surgical contexts. We propose a multi-modal framework that leverages recent advancements in computer vision and large language models to generate comprehensive video summaries. The approach is structured in three key stages. First, surgical videos are divided into clips, and visual features are extracted at the frame level using visual transformers. This step focuses on detecting tools, tissues, organs, and surgical actions. Second, the extracted features are transformed into frame-level captions via large language models. These are then combined with temporal features, captured using a ViViT-based encoder, to produce clip-level summaries that reflect the broader context of each video segment. Finally, the clip-level descriptions are aggregated into a full surgical report using a dedicated LLM tailored for the summarization task. We evaluate our method on the CholecT50 dataset, using instrument and action annotations from 50 laparoscopic videos. The results show strong performance, achieving 96% precision in tool detection and a BERTScore of 0.74 for temporal context summarization. This work contributes to the advancement of AI-assisted tools for surgical reporting, offering a step toward more intelligent and reliable clinical documentation.


One Patient's Annotation is Another One's Initialization: Towards Zero-Shot Surgical Video Segmentation with Cross-Patient Initialization

Mousavi, Seyed Amir, Ozbulak, Utku, Tozzi, Francesca, Rashidian, Nikdokht, Willaert, Wouter, Vankerschaver, Joris, De Neve, Wesley

arXiv.org Artificial Intelligence

Video object segmentation is an emerging technology that is well-suited for real-time surgical video segmentation, offering valuable clinical assistance in the operating room by ensuring consistent frame tracking. However, its adoption is limited by the need for manual intervention to select the tracked object, making it impractical in surgical settings. In this work, we tackle this challenge with an innovative solution: using previously annotated frames from other patients as the tracking frames. We find that this unconventional approach can match or even surpass the performance of using patients' own tracking frames, enabling more autonomous and efficient AI-assisted surgical workflows. Furthermore, we analyze the benefits and limitations of this approach, highlighting its potential to enhance segmentation accuracy while reducing the need for manual input. Our findings provide insights into key factors influencing performance, offering a foundation for future research on optimizing cross-patient frame selection for real-time surgical video analysis.


Less is More? Revisiting the Importance of Frame Rate in Real-Time Zero-Shot Surgical Video Segmentation

Ozbulak, Utku, Mousavi, Seyed Amir, Tozzi, Francesca, Rashidian, Nikdokht, Willaert, Wouter, De Neve, Wesley, Vankerschaver, Joris

arXiv.org Artificial Intelligence

Real-time video segmentation is a promising feature for AI-assisted surgery, providing intraoperative guidance by identifying surgical tools and anatomical structures. However, deploying state-of-the-art segmentation models, such as SAM2, in real-time settings is computationally demanding, which makes it essential to balance frame rate and segmentation performance. In this study, we investigate the impact of frame rate on zero-shot surgical video segmentation, evaluating SAM2's effectiveness across multiple frame sampling rates for cholecystectomy procedures. Surprisingly, our findings indicate that in conventional evaluation settings, frame rates as low as a single frame per second can outperform 25 FPS, as fewer frames smooth out segmentation inconsistencies. However, when assessed in a real-time streaming scenario, higher frame rates yield superior temporal coherence and stability, particularly for dynamic objects such as surgical graspers. Finally, we investigate human perception of real-time surgical video segmentation among professionals who work closely with such data and find that respondents consistently prefer high FPS segmentation mask overlays, reinforcing the importance of real-time evaluation in AI-assisted surgery.
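The frame-rate comparison above amounts to subsampling a fixed-rate video stream before segmentation. A minimal sketch of that selection step is below; the 25 FPS source rate matches the abstract, but the uniform-stride policy is an assumption, not the paper's exact protocol.

```python
def sample_frame_indices(n_frames, source_fps=25, target_fps=1):
    """Indices of frames to keep when subsampling a source_fps stream
    down to target_fps using a uniform stride (illustrative policy)."""
    stride = max(1, round(source_fps / target_fps))
    return list(range(0, n_frames, stride))
```

At 1 FPS the segmenter sees 1 frame in 25, which trades temporal coherence for throughput, the central tension the study measures.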


Surgical Triplet Recognition via Diffusion Model

Liu, Daochang, Hu, Axel, Shah, Mubarak, Xu, Chang

arXiv.org Artificial Intelligence

Surgical triplet recognition is an essential building block to enable next-generation context-aware operating rooms. The goal is to identify the combinations of instruments, verbs, and targets presented in surgical video frames. In this paper, we propose DiffTriplet, a new generative framework for surgical triplet recognition employing the diffusion model, which predicts surgical triplets via iterative denoising. To handle the challenge of triplet association, two unique designs are proposed in our diffusion framework, i.e., association learning and association guidance. During training, we optimize the model in the joint space of triplets and individual components to capture the dependencies among them. At inference, we integrate association constraints into each update of the iterative denoising process, which refines the triplet prediction using the information of individual components. Experiments on the CholecT45 and CholecT50 datasets show the superiority of the proposed method in achieving a new state-of-the-art performance for surgical triplet recognition. Our code will be released.
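The association-guidance idea, refining a joint triplet prediction toward agreement with independently predicted instrument, verb, and target marginals, can be caricatured with a simple iterative update. This is a toy stand-in: the multiplicative-blend rule, step count, and `lam` weight below are illustrative assumptions, not the gradient-based guidance used in the actual diffusion sampler.

```python
import numpy as np

def associate(triplet_scores, inst_p, verb_p, targ_p, steps=5, lam=0.5):
    """Toy association guidance: iteratively pull a joint triplet
    distribution (axes: instrument, verb, target) toward consistency
    with component marginals by blending in the reweighted product."""
    p = np.asarray(triplet_scores, dtype=float)
    p = p / p.sum()
    # Outer product of component marginals as a consistency prior.
    prior = np.einsum("i,j,k->ijk", inst_p, verb_p, targ_p)
    for _ in range(steps):
        guided = p * prior
        guided = guided / guided.sum()
        p = (1 - lam) * p + lam * guided  # partial update per step
        p = p / p.sum()
    return p
```

Each step moves probability mass toward triplet cells whose components are individually likely, mirroring how guidance injects component information into every denoising update.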


Grasper: A Generalist Pursuer for Pursuit-Evasion Problems

Li, Pengdeng, Li, Shuxin, Wang, Xinrun, Cerny, Jakub, Zhang, Youzhi, McAleer, Stephen, Chan, Hau, An, Bo

arXiv.org Artificial Intelligence

Pursuit-evasion games (PEGs) model interactions between a team of pursuers and an evader in graph-based environments such as urban street networks. Recent advancements have demonstrated the effectiveness of the pre-training and fine-tuning paradigm in PSRO to improve scalability in solving large-scale PEGs. However, these methods primarily focus on specific PEGs with fixed initial conditions that may vary substantially in real-world scenarios, which significantly hinders the applicability of the traditional methods. To address this issue, we introduce Grasper, a GeneRAlist purSuer for Pursuit-Evasion pRoblems, capable of efficiently generating pursuer policies tailored to specific PEGs. Our contributions are threefold: First, we present a novel architecture that offers high-quality solutions for diverse PEGs, comprising critical components such as (i) a graph neural network (GNN) to encode PEGs into hidden vectors, and (ii) a hypernetwork to generate pursuer policies based on these hidden vectors. As a second contribution, we develop an efficient three-stage training method involving (i) a pre-pretraining stage for learning robust PEG representations through self-supervised graph learning techniques like GraphMAE, (ii) a pre-training stage utilizing heuristic-guided multi-task pre-training (HMP) where heuristic-derived reference policies (e.g., through Dijkstra's algorithm) regularize pursuer policies, and (iii) a fine-tuning stage that employs PSRO to generate pursuer policies on designated PEGs. Finally, we perform extensive experiments on synthetic and real-world maps, showcasing Grasper's significant superiority over baselines in terms of solution quality and generalizability. We demonstrate that Grasper provides a versatile approach for solving pursuit-evasion problems across a broad range of scenarios, enabling practical deployment in real-world situations.
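The GNN-plus-hypernetwork design above can be sketched at its core: a game embedding generates the weights of a small policy network. The sizes, the single hidden layer, and the fixed random hyper-weights below are illustrative assumptions for a minimal sketch, not Grasper's actual architecture (which trains the hypernetwork end to end on GNN embeddings).

```python
import numpy as np

def hypernetwork_policy(z, obs, hidden=16, n_actions=4, seed=0):
    """Minimal hypernetwork sketch: an embedding z (standing in for a
    GNN encoding of the PEG) generates the parameters of a small policy
    MLP, which is then evaluated on an observation vector obs."""
    rng = np.random.default_rng(seed)
    d_z, d_obs = len(z), len(obs)
    # Hyper-layer: map z to the flattened parameters of the policy net.
    n_params = d_obs * hidden + hidden + hidden * n_actions + n_actions
    H = rng.standard_normal((n_params, d_z)) / np.sqrt(d_z)
    theta = H @ z
    # Unflatten into the policy MLP's weights and biases.
    i = 0
    W1 = theta[i:i + d_obs * hidden].reshape(d_obs, hidden); i += d_obs * hidden
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + hidden * n_actions].reshape(hidden, n_actions); i += hidden * n_actions
    b2 = theta[i:]
    # Forward pass of the generated policy; softmax over actions.
    h = np.tanh(obs @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())
    return p / p.sum()
```

The appeal of this design for a generalist pursuer is that one trained hypernetwork emits a different policy for every game instance without per-instance retraining.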


A Bioinspired Synthetic Nervous System Controller for Pick-and-Place Manipulation

Li, Yanjun, Sukhnandan, Ravesh, Gill, Jeffrey P., Chiel, Hillel J., Webster-Wood, Victoria, Quinn, Roger D.

arXiv.org Artificial Intelligence

The Synthetic Nervous System (SNS) is a biologically inspired neural network (NN). Due to its capability of capturing complex mechanisms underlying neural computation, an SNS model is a candidate for building compact and interpretable NN controllers for robots. Previous work on SNSs has focused on applying the model to the control of legged robots and the design of functional subnetworks (FSNs) to realize dynamical systems. However, the FSN approach has previously relied on the analytical solution of the governing equations, which is difficult for designing more complex NN controllers. Incorporating plasticity into SNSs and using learning algorithms to tune the parameters offers a promising solution for systematic design in this situation. In this paper, we theoretically analyze the computational advantages of SNSs compared with other classical artificial neural networks. We then use learning algorithms to develop compact subnetworks for implementing addition, subtraction, division, and multiplication. We also combine the learning-based methodology with a bioinspired architecture to design an interpretable SNS for the pick-and-place control of a simulated gantry system. Finally, we show that the SNS controller is successfully transferred to a real-world robotic platform without further tuning of the parameters, verifying the effectiveness of our approach.


Towards Surgical Context Inference and Translation to Gestures

Hutchinson, Kay, Li, Zongyu, Reyes, Ian, Alemzadeh, Homa

arXiv.org Artificial Intelligence

Manual labeling of gestures in robot-assisted surgery is labor intensive, prone to errors, and requires expertise or training. We propose a method for automated and explainable generation of gesture transcripts that leverages the abundance of data for image segmentation. Surgical context is detected using segmentation masks by examining the distances and intersections between the tools and objects. Next, context labels are translated into gesture transcripts using knowledge-based Finite State Machine (FSM) and data-driven Long Short Term Memory (LSTM) models. We evaluate the performance of each stage of our method by comparing the results with the ground truth segmentation masks, the consensus context labels, and the gesture labels in the JIGSAWS dataset. Our results show that our segmentation models achieve state-of-the-art performance in recognizing needle and thread in Suturing and we can automatically detect important surgical states with high agreement with crowd-sourced labels (e.g., contact between graspers and objects in Suturing). We also find that the FSM models are more robust to poor segmentation and labeling performance than LSTMs. Our proposed method can significantly shorten the gesture labeling process (~2.8 times).
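The knowledge-based FSM stage above maps detected surgical context to gesture symbols, emitting a transition only when the context changes. A tiny sketch of that translation is below; the context and gesture names are made-up placeholders, while the paper's FSM is built from JIGSAWS-specific states and labels.

```python
def contexts_to_gestures(context_labels):
    """Tiny FSM sketch: translate a sequence of per-frame context labels
    into a gesture transcript, emitting a gesture only on state change."""
    # Hypothetical mapping from detected context to a gesture symbol.
    rules = {
        "grasper_touch_needle": "G2_pick_needle",
        "needle_in_tissue": "G3_push_needle",
        "grasper_touch_thread": "G4_pull_thread",
    }
    transcript, prev = [], None
    for ctx in context_labels:
        gesture = rules.get(ctx, "G_idle")
        if gesture != prev:  # new state -> emit one transcript entry
            transcript.append(gesture)
        prev = gesture
    return transcript
```

Collapsing repeated frames into single transitions is also what makes the FSM robust to momentary segmentation noise, as long as the noisy frames map to the same context.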


LapGym -- An Open Source Framework for Reinforcement Learning in Robot-Assisted Laparoscopic Surgery

Scheikl, Paul Maria, Gyenes, Balázs, Younis, Rayan, Haas, Christoph, Neumann, Gerhard, Wagner, Martin, Mathis-Ullrich, Franziska

arXiv.org Artificial Intelligence

Recent advances in reinforcement learning (RL) have increased the promise of introducing cognitive assistance and automation to robot-assisted laparoscopic surgery (RALS). However, progress in algorithms and methods depends on the availability of standardized learning environments that represent skills relevant to RALS. We present LapGym, a framework for building RL environments for RALS that models the challenges posed by surgical tasks, and sofa_env, a diverse suite of 12 environments. Motivated by surgical training, these environments are organized into 4 tracks: Spatial Reasoning, Deformable Object Manipulation & Grasping, Dissection, and Thread Manipulation. Each environment is highly parametrizable for increasing difficulty, resulting in a high performance ceiling for new algorithms. We use Proximal Policy Optimization (PPO) to establish a baseline for model-free RL algorithms, investigating the effect of several environment parameters on task difficulty. Finally, we show that many environments and parameter configurations reflect well-known, open problems in RL research, allowing researchers to continue exploring these fundamental problems in a surgical context. We aim to provide a challenging, standard environment suite for further development of RL for RALS, ultimately helping to realize the full potential of cognitive surgical robotics.


SLUGBOT, an Aplysia-inspired Robotic Grasper for Studying Control

Dai, Kevin, Sukhnandan, Ravesh, Bennington, Michael, Whirley, Karen, Bao, Ryan, Li, Lu, Gill, Jeffrey P., Chiel, Hillel J., Webster-Wood, Victoria A.

arXiv.org Artificial Intelligence

Living systems can use a single periphery to perform a variety of tasks and adapt to a dynamic environment. This multifunctionality is achieved through the use of neural circuitry that adaptively controls the reconfigurable musculature. Current robotic systems struggle to flexibly adapt to unstructured environments. Through mimicry of the neuromechanical coupling seen in living organisms, robotic systems could potentially achieve greater autonomy. The tractable neuromechanics of the sea slug Aplysia californica's feeding apparatus, or buccal mass, make it an ideal candidate for applying neuromechanical principles to the control of a soft robot. In this work, a robotic grasper was designed to mimic specific morphology of the Aplysia feeding apparatus. These include the use of soft actuators akin to biological muscle, a deformable grasping surface, and a similar muscular architecture. A previously developed Boolean neural controller was then adapted for the control of this soft robotic system. The robot was capable of qualitatively replicating swallowing behavior by cyclically ingesting a plastic tube. The robot's normalized translational and rotational kinematics of the odontophore followed profiles observed in vivo despite morphological differences. This brings Aplysia-inspired control in roboto one step closer to multifunctional neural control schema in vivo and in silico. Future additions may improve SLUGBOT's viability as a neuromechanical research platform.