Supplementary Material for Text Promptable Surgical Instrument Segmentation with Vision-Language Models Zijian Zhou
These prompts are used in our experiments section. OpenAI GPT-4 based prompts. The input template for OpenAI GPT-4 is defined as: Please describe the appearance of [class_name] in endoscopic surgery, and change the description to a phrase with subject, and not use colons. The dataset consists of both training and test cases; each video is recorded at 25 FPS and carries annotations for instruments and operation phases. For EndoVis2019, the results are shown in Tab. 1: our method (input size 448) notably surpasses the competition's top performers, with a +3% increase in DSC and a +2% improvement in NSD, demonstrating the superiority of our method.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.46)
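The prompt template above can be filled in programmatically for each instrument class. The sketch below shows this substitution; the helper name and the example class name are hypothetical, only the template text itself comes from the paper.

```python
# Minimal sketch of filling the GPT-4 prompt template described above.
# PROMPT_TEMPLATE is quoted from the paper; build_prompt is a hypothetical helper.

PROMPT_TEMPLATE = (
    "Please describe the appearance of {class_name} in endoscopic surgery, "
    "and change the description to a phrase with subject, and not use colons."
)

def build_prompt(class_name: str) -> str:
    """Substitute an instrument class name into the template."""
    return PROMPT_TEMPLATE.format(class_name=class_name)

print(build_prompt("bipolar forceps"))
```

The resulting string would be sent as the user message to the language model; generating one description per instrument class keeps the text prompts consistent across the dataset.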
Robotic Constrained Imitation Learning for the Peg Transfer Task in Fundamentals of Laparoscopic Surgery
Kawaharazuka, Kento, Okada, Kei, Inaba, Masayuki
In this study, we present an implementation strategy for a robot that performs peg transfer tasks in Fundamentals of Laparoscopic Surgery (FLS) via imitation learning, aimed at the development of an autonomous robot for laparoscopic surgery. Robotic laparoscopic surgery presents two main challenges: (1) the need to manipulate forceps using ports established on the body surface as fulcrums, and (2) difficulty in perceiving depth information when working with a monocular camera that displays its images on a monitor. In particular, regarding issue (2), most prior research has assumed the availability of depth images or models of the target to be operated on. Therefore, in this study, we achieve more accurate imitation learning with only monocular images by extracting motion constraints from one exemplary motion of skilled operators, collecting data based on these constraints, and conducting imitation learning on the collected data. We implemented an overall system using two Franka Emika Panda robot arms and validated its effectiveness.
- North America > United States > Maryland (0.05)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
Hysteresis Compensation of Flexible Continuum Manipulator using RGBD Sensing and Temporal Convolutional Network
Park, Junhyun, Jang, Seonghyeok, Park, Hyojae, Bae, Seongjun, Hwang, Minho
Flexible continuum manipulators are valued for minimally invasive surgery, offering access to confined spaces through nonlinear paths. However, cable-driven manipulators face control difficulties due to hysteresis from cabling effects such as friction, elongation, and coupling. These effects are difficult to model due to their nonlinearity, and the difficulties become even more evident when dealing with long, coupled, multi-segmented manipulators. This paper proposes a data-driven approach based on Deep Neural Networks (DNN) to capture these nonlinear and previous-state-dependent characteristics of cable actuation. We collect physical joint configurations corresponding to commanded joint configurations using RGBD sensing and 7 fiducial markers to model the hysteresis of the proposed manipulator. Results of a study comparing the estimation performance of four DNN models show that the Temporal Convolutional Network (TCN) demonstrates the highest predictive capability. Leveraging trained TCNs, we build a control algorithm to compensate for hysteresis. Tracking tests in task space using unseen trajectories show that the proposed control algorithm reduces the average position and orientation error by 61.39% (from 13.7 mm to 5.29 mm) and 64.04% (from 31.17° to 11.21°), respectively. This result implies that the proposed calibrated controller effectively reaches the desired configurations by estimating the hysteresis of the manipulator. Applying this method in real surgical scenarios has the potential to enhance control precision and improve surgical performance.
- Health & Medicine > Health Care Technology (0.68)
- Health & Medicine > Surgery (0.46)
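The error reductions reported in the abstract are simple relative-change figures and can be checked directly. The sketch below reproduces them from the before/after values given in the paper; the function name is illustrative.

```python
def percent_reduction(before: float, after: float) -> float:
    """Relative error reduction, in percent: (before - after) / before * 100."""
    return (before - after) / before * 100.0

# Position error: 13.7 mm -> 5.29 mm
pos = percent_reduction(13.7, 5.29)
# Orientation error: 31.17 deg -> 11.21 deg
ori = percent_reduction(31.17, 11.21)
print(f"position: {pos:.2f}%  orientation: {ori:.2f}%")
```

Both values round to the 61.39% and 64.04% quoted in the abstract, confirming the reported figures are consistent with the raw errors.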
Automatic Tissue Traction with Haptics-Enabled Forceps for Minimally Invasive Surgery
Liu, Tangyou, Wang, Xiaoyi, Katupitiya, Jay, Wang, Jiaole, Wu, Liao
A common limitation of autonomous tissue manipulation in robotic minimally invasive surgery (MIS) is the absence of force sensing and control at the tool level. Recently, our team has developed haptics-enabled forceps that can simultaneously measure the grasping and pulling forces during tissue manipulation. Based on this design, here we further present a method to automate tissue traction with controlled grasping and pulling forces. Specifically, the grasping stage relies on a controlled grasping force, while the pulling stage is under the guidance of a controlled pulling force. Notably, during the pulling process, the simultaneous control of both grasping and pulling forces is also enabled for more precise tissue traction, achieved through force decoupling. The force controller is built upon a static model of tissue manipulation, considering the interaction between the haptics-enabled forceps and soft tissue. The efficacy of this force control approach is validated through a series of experiments comparing targeted, estimated, and actual reference forces. To verify the feasibility of the proposed method in surgical applications, various tissue resections are conducted on ex vivo tissues employing a dual-arm robotic setup. Finally, we discuss the benefits of multi-force control in tissue traction, evidenced through comparative analyses of various ex vivo tissue resections. The results affirm the feasibility of implementing automatic tissue traction using micro-sized forceps with multi-force control, suggesting its potential to promote autonomous MIS. A video demonstrating the experiments can be found at https://youtu.be/8fe8o8IFrjE.
- Oceania > Australia > New South Wales > Sydney (0.14)
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Heilongjiang Province > Harbin (0.04)
Computer Vision for Increased Operative Efficiency via Identification of Instruments in the Neurosurgical Operating Room: A Proof-of-Concept Study
Zachem, Tanner J., Chen, Sully F., Venkatraman, Vishal, Sykes, David AW, Prakash, Ravi, Spellicy, Samantha, Suarez, Alexander D, Ross, Weston, Codd, Patrick J.
Objectives Computer vision (CV) is a field of artificial intelligence that enables machines to interpret and understand images and videos. CV has the potential to assist in the operating room (OR) by tracking surgical instruments. We built a CV algorithm for identifying surgical instruments in the neurosurgical operating room as a potential solution for surgical instrument tracking and management, to decrease surgical waste and the opening of unnecessary tools. Methods We collected 1660 images of 27 commonly used neurosurgical instruments. Images were labeled using the VGG Image Annotator and split into 80% training and 20% testing sets in order to train a U-Net convolutional neural network using 5-fold cross-validation. Results Our U-Net achieved a tool identification accuracy of 80-100% when distinguishing 25 classes of instruments, with 19/25 classes having accuracy over 90%. The model performance was not adequate for subclassifying Adson, Gerald, and Debakey forceps, which had accuracies of 60-80%. Conclusions We demonstrated the viability of using machine learning to accurately identify surgical instruments. Instrument identification could help optimize surgical tray packing, decrease tool usage and waste, decrease the incidence of instrument misplacement events, and assist in the timing of routine instrument maintenance. More training data will be needed to increase accuracy across all surgical instruments that would appear in a neurosurgical operating room. Such technology has the potential to establish which tools are truly needed in each type of operation, allowing surgeons across the world to do more with less.
- North America > United States > North Carolina > Durham County > Durham (0.04)
- North America > United States > California > Santa Clara County > Cupertino (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Alpes-Maritimes > Nice (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Surgery (1.00)
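The 80/20 split with 5-fold cross-validation described in the Methods can be sketched with the standard library alone. The file names and fold layout below are illustrative, not the study's actual pipeline; only the 1660-image count and the split ratios come from the abstract.

```python
import random

def split_train_test(items, test_frac=0.2, seed=0):
    """Shuffle and split items into train/test sets (80/20, as in the study)."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

def k_folds(train_items, k=5):
    """Partition the training set into k roughly equal folds for cross-validation."""
    return [train_items[i::k] for i in range(k)]

images = [f"img_{i:04d}.png" for i in range(1660)]  # 1660 images, as reported
train, test = split_train_test(images)
folds = k_folds(train, k=5)
print(len(train), len(test), [len(f) for f in folds])
```

Each cross-validation round would then hold out one fold for validation and train the U-Net on the remaining four.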
Forceps with direct torque control
INTRODUCTION Minimally Invasive Surgery (MIS) is a modern surgical approach that utilizes advanced techniques and specialized instruments to perform procedures with minimal damage to surrounding tissues. One commonly used tool in MIS is the laparoscopic instrument, which is inserted through small incisions in the body for tissue manipulation or dissection. Conventional laparoscopic forceps use the handle opening angle to control the jaw opening angle. A common limitation of laparoscopic instruments is ambiguous haptic feedback, which prevents the user from feeling the actual texture or resistance of the tissue being grasped. Surgeons can only estimate the amount of applied force through visual cues and proprioception.
Haptics-Enabled Forceps with Multi-Modal Force Sensing: Towards Task-Autonomous Surgery
Liu, Tangyou, Zhang, Tinghua, Katupitiya, Jay, Wang, Jiaole, Wu, Liao
Many robotic surgical systems have been developed with micro-sized biopsy forceps for tissue manipulation. However, these systems often lack force sensing at the tool side. This paper presents a vision-based force sensing method for micro-sized biopsy forceps. A miniature sensing module adaptive to common biopsy forceps is proposed, consisting of a flexure, a camera, and a customised target. The deformation of the flexure is obtained by the camera estimating the pose variation of the top-mounted target. Then, the external force applied to the sensing module is calculated using the flexure's displacement and stiffness matrix. Integrating the sensing module into the biopsy forceps, in conjunction with a single-axial force sensor at the proximal end, we equip the forceps with haptic sensing capabilities. Mathematical equations are derived to estimate the multi-modal force sensing of the haptics-enabled forceps, including pushing/pulling forces (Mode-I) and grasping forces (Mode-II). A series of experiments on phantoms and ex vivo tissues are conducted to verify the feasibility of the proposed design and method. Results indicate that the haptics-enabled forceps can achieve multi-modal force estimation effectively and potentially realize autonomous robotic tissue grasping procedures with controlled forces.
- Oceania > Australia > New South Wales > Sydney (0.14)
- Asia > China > Guangdong Province > Shenzhen (0.05)
- Asia > China > Heilongjiang Province > Harbin (0.04)
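The abstract above states that the external force on the sensing module is computed from the flexure's displacement and its stiffness matrix, i.e. a wrench of the form F = K·x. The sketch below illustrates that mapping; the 6×6 stiffness values are hypothetical placeholders, whereas in the paper they would come from calibrating the flexure.

```python
import numpy as np

# Illustrative diagonal stiffness matrix: translational terms in N/m,
# rotational terms in N·m/rad. Values are placeholders, not from the paper.
K = np.diag([500.0, 500.0, 800.0, 2.0, 2.0, 3.0])

def external_force(displacement: np.ndarray) -> np.ndarray:
    """Map a 6-DoF flexure displacement [dx, dy, dz, rx, ry, rz] to a wrench F = K @ x."""
    return K @ displacement

# Pose variation of the target as estimated by the camera (illustrative values)
x = np.array([0.001, 0.0, 0.002, 0.0, 0.01, 0.0])
print(external_force(x))
```

In practice K would be dense rather than diagonal if the flexure's axes are coupled, and the displacement would come from the camera's pose estimate of the top-mounted target.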