Kang, Jin U.
Autonomous Robotic System with Optical Coherence Tomography Guidance for Vascular Anastomosis
Haworth, Jesse, Biswas, Rishi, Opfermann, Justin, Kam, Michael, Wang, Yaning, Pantalone, Desire, Creighton, Francis X., Yang, Robin, Kang, Jin U., Krieger, Axel
Vascular anastomosis, the surgical connection of blood vessels, is essential in procedures such as organ transplants and reconstructive surgeries. The precision required limits access to these procedures, since extensive training is needed, and manual suturing yields variable outcomes with revision rates of up to 7.9%. Existing robotic systems, while promising, are either fully teleoperated or lack the capabilities necessary for autonomous vascular anastomosis. We present the Micro Smart Tissue Autonomous Robot (micro-STAR), an autonomous robotic system designed to perform vascular anastomosis on small-diameter vessels. The micro-STAR system integrates a novel suturing tool equipped with an Optical Coherence Tomography (OCT) fiber-optic sensor and a microcamera, enabling real-time tissue detection and classification. Our system autonomously places sutures and manipulates tissue with minimal human intervention. In an ex vivo study, micro-STAR achieved outcomes competitive with experienced surgeons in terms of leak pressure, lumen reduction, and suture placement variation, completing 90% of sutures without human intervention. This represents the first instance of a robotic system autonomously performing vascular anastomosis on real tissue, offering significant potential for improving surgical precision and expanding access to high-quality care.
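The abstract does not describe how micro-STAR's OCT fiber-optic sensor is used for tissue detection, so the following is only a minimal sketch of one generic approach: estimating the tissue surface depth from a single OCT A-scan by finding the first intensity peak above the noise floor. All names, thresholds, and the pixel size are assumptions, not values from the paper.

```python
# Hypothetical sketch: tissue-surface depth from one OCT A-scan.
# Generic threshold-crossing detection; not micro-STAR's actual algorithm.
import numpy as np

def detect_tissue_surface(ascan: np.ndarray, pixel_size_um: float = 2.0,
                          snr_threshold: float = 4.0) -> float | None:
    """Return the depth (micrometers) of the first strong reflection,
    or None if no tissue-like signal is present in the beam path."""
    noise_floor = np.median(ascan)                       # robust background level
    noise_std = np.std(ascan[ascan < np.percentile(ascan, 90)])
    threshold = noise_floor + snr_threshold * noise_std
    above = np.flatnonzero(ascan > threshold)
    if above.size == 0:
        return None                                      # no tissue detected
    return float(above[0] * pixel_size_um)               # first crossing = surface

# Example: synthetic A-scan with a reflection starting at pixel 150
ascan = np.random.rand(1024) * 0.1
ascan[150:160] += 5.0
print(detect_tissue_surface(ascan))                      # ~300 um at 2 um/pixel
```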
Towards Deep Learning Guided Autonomous Eye Surgery Using Microscope and iOCT Images
Kim, Ji Woong, Wei, Shuwen, Zhang, Peiyao, Gehlbach, Peter, Kang, Jin U., Iordachita, Iulian, Kobilarov, Marin
Recent advancements in retinal surgery have paved the way for a modern operating room equipped with a surgical robot, a microscope, and intraoperative optical coherence tomography (iOCT), a depth sensor widely used in retinal surgery. Integrating these tools raises the fundamental question of how to effectively combine them to enable surgical autonomy. In this work, we tackle this question by developing a unified framework that facilitates real-time autonomous surgical workflows leveraging these devices. The system features: (1) a novel imaging system that integrates the microscope and iOCT in real-time by dynamically tracking the surgical instrument via a small iOCT scanning region, providing real-time depth feedback; (2) convolutional neural networks (CNNs) that automatically detect and segment task-relevant information for surgical autonomy; (3) intuitive selection of goal waypoints within both the microscope and iOCT views through simple mouse-click interactions; and (4) model predictive control (MPC) for trajectory generation, ensuring patient safety by enforcing safety-related kinematic constraints. The system's utility is demonstrated by automating subretinal injection (SI), a challenging procedure with high accuracy and depth perception requirements. We validate our system by conducting 30 successful SI trials on pig eyes, achieving a mean needle insertion accuracy of 26 micrometers across various subretinal goals and a mean duration of 55 seconds. Preliminary comparisons to a human operator performing SI in robot-assisted mode highlight the enhanced safety of our system. Project website: https://sites.google.com/view/eyesurgerymicroscopeoct/home
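The abstract names MPC-based trajectory generation with safety-related kinematic constraints but gives no formulation, so here is a minimal sketch of that idea under assumed dynamics: simple integrator motion of the needle tip toward a clicked goal, with a tip-speed limit and a maximum insertion depth as the safety bound. Horizon length, limits, and weights are illustrative, not the paper's values.

```python
# Hypothetical sketch of MPC trajectory generation with kinematic safety
# constraints, using integrator dynamics x+ = x + u*dt (an assumption).
import cvxpy as cp
import numpy as np

def plan_step(x0, goal, n=10, dt=0.1, v_max=1.0, z_max=0.5):
    """Solve one MPC horizon; return the first velocity command."""
    x = cp.Variable((n + 1, 3))                    # tip positions over horizon
    u = cp.Variable((n, 3))                        # velocity commands
    cost = cp.sum_squares(x[n] - goal) + 1e-2 * cp.sum_squares(u)
    cons = [x[0] == x0]
    for k in range(n):
        cons += [x[k + 1] == x[k] + dt * u[k],     # integrator dynamics
                 cp.norm(u[k], 2) <= v_max,        # tip-speed limit
                 x[k + 1, 2] <= z_max]             # depth safety constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0]                              # apply first control, replan

x0 = np.zeros(3)
goal = np.array([0.3, 0.2, 0.45])
print(plan_step(x0, goal))                         # depth-safe command toward goal
```

Re-solving each cycle and applying only the first command is the standard receding-horizon pattern, which is presumably what allows the real system to react to updated iOCT depth feedback.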
Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy
Song, Hyunwoo, Yang, Shuojue, Wu, Zijian, Moradi, Hamid, Taylor, Russell H., Kang, Jin U., Salcudean, Septimiu E., Boctor, Emad M.
Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, the integration of a transrectal ultrasound (TRUS) imaging system, the most widely used imaging modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. Methods: An optimization-based algorithm is proposed to co-register the images from the two imaging modalities. The algorithm incorporates the principles of light propagation and the uncertainty in PM detection to improve its stability and accuracy. The algorithm is validated using our previously developed US/PA image-guided system with a da Vinci surgical robot. Results: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice. The result is also comparable with our previous approach, and the proposed method can be implemented with a standard white-light stereo camera and does not require highly accurate localization of the PM. Conclusion: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
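The abstract's arc-to-line optimization is not spelled out, so as a point of reference here is a minimal sketch of the simpler baseline problem it generalizes: rigid registration between PM positions localized in the US/PA frame and the same markers seen by the stereo camera, solved in closed form with the standard Kabsch least-squares alignment. The function names and the TRE helper are illustrative only.

```python
# Hypothetical sketch: rigid US/PA-to-camera registration from corresponding
# photoacoustic-marker (PM) points via the Kabsch algorithm. This is a generic
# baseline, not the paper's arc-to-line method.
import numpy as np

def register_rigid(p_us: np.ndarray, p_cam: np.ndarray):
    """Find R, t minimizing ||R @ p_us + t - p_cam|| over N corresponding points."""
    mu_us, mu_cam = p_us.mean(axis=0), p_cam.mean(axis=0)
    H = (p_us - mu_us).T @ (p_cam - mu_cam)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_cam - R @ mu_us
    return R, t

def tre(R, t, p_us, p_cam) -> float:
    """Target registration error: mean residual after alignment."""
    return float(np.linalg.norm(p_us @ R.T + t - p_cam, axis=1).mean())
```

A closed-form alignment like this assumes accurately localized PMs; the paper's optimization-based formulation, by modeling light propagation and PM detection uncertainty, is precisely what relaxes that requirement.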