puncture
A Deep Learning-Driven Autonomous System for Retinal Vein Cannulation: Validation Using a Chicken Embryo Model
Wang, Yi, Zhang, Peiyao, Esfandiari, Mojtaba, Gehlbach, Peter, Iordachita, Iulian I.
Retinal vein cannulation (RVC) is a minimally invasive microsurgical procedure for treating retinal vein occlusion (RVO), a leading cause of vision impairment. However, the small size and fragility of retinal veins, coupled with the need for high-precision, tremor-free needle manipulation, create significant technical challenges. These limitations highlight the need for robotic assistance to improve accuracy and stability. This study presents an automated robotic system with a top-down microscope and B-scan optical coherence tomography (OCT) imaging for precise depth sensing. Deep learning-based models enable real-time needle navigation, contact detection, and vein puncture recognition, using a chicken embryo model as a surrogate for human retinal veins. The experiments demonstrate notable reductions in navigation and puncture times compared to manual methods. Our results demonstrate the potential of integrating advanced imaging and deep learning to automate microsurgical tasks, providing a pathway for safer and more reliable RVC procedures with enhanced precision and reproducibility.

I. INTRODUCTION
Retinal vein occlusion (RVO) occurs due to the blockage of a retinal vein by a thrombus, leading to transient or permanent vision loss [1]. Current treatments focus on managing complications, but no standardized surgical approach exists for thrombus removal. A 2015 meta-analysis identified RVO as the second most prevalent retinal vascular disease globally, affecting 28.06 million people aged 30-89, including 23.38 million branch RVO (BRVO) and 4.67 million central RVO (CRVO) [2]. Retinal vein cannulation (RVC) involves inserting a micro-needle into the occluded retinal vein, followed by injecting a thrombolytic agent to dissolve the clot [3].
A Feasible Workflow for Retinal Vein Cannulation in Ex Vivo Porcine Eyes with Robotic Assistance
Zhang, Peiyao, Gehlbach, Peter, Kobilarov, Marin, Iordachita, Iulian
A potential Retinal Vein Occlusion (RVO) treatment involves Retinal Vein Cannulation (RVC), which requires the surgeon to insert a microneedle into the affected retinal vein and administer a clot-dissolving drug. This procedure presents significant challenges due to human physiological limitations, such as hand tremors, prolonged tool-holding periods, and constraints in depth perception using a microscope. This study proposes a robot-assisted workflow for RVC to overcome these limitations. The test robot is operated through a keyboard. An intraoperative Optical Coherence Tomography (iOCT) system is used to verify successful venous puncture before infusion. The workflow is validated using 12 ex vivo porcine eyes. These early results demonstrate a success rate of 10 out of 12 cannulations (83.33%), affirming the feasibility of the proposed workflow.
- North America > United States > Maryland > Baltimore (0.05)
- Asia (0.05)
- Oceania > Australia (0.04)
- Europe (0.04)
- Workflow (1.00)
- Research Report > New Finding (0.67)
Autocompletion of Chief Complaints in the Electronic Health Records using Large Language Models
Islam, K M Sajjadul, Nipu, Ayesha Siddika, Madiraju, Praveen, Deshpande, Priya
The Chief Complaint (CC) is a crucial component of a patient's medical record as it describes the main reason or concern for seeking medical care. It provides critical information for healthcare providers to make informed decisions about patient care. However, documenting CCs can be time-consuming for healthcare providers, especially in busy emergency departments. To address this issue, an autocompletion tool that suggests accurate and well-formatted phrases or sentences for clinical notes can be a valuable resource for triage nurses. In this study, we utilized text generation techniques to develop machine learning models using CC data. In our proposed work, we train a Long Short-Term Memory (LSTM) model and fine-tune three different variants of Biomedical Generative Pretrained Transformers (BioGPT), namely microsoft/biogpt, microsoft/BioGPT-Large, and microsoft/BioGPT-Large-PubMedQA. Additionally, we tune a prompt by incorporating exemplar CC sentences, utilizing the OpenAI API of GPT-4. We evaluate the models' performance based on the perplexity score, modified BERTScore, and cosine similarity score. The results show that BioGPT-Large exhibits superior performance compared to the other models. It consistently achieves a remarkably low perplexity score of 1.65 when generating CC, whereas the baseline LSTM model's best perplexity score is 170. Further, we evaluate the proposed models' performance against the outcome of GPT-4. Our study demonstrates that utilizing LLMs such as BioGPT leads to the development of an effective autocompletion tool for generating CC documentation in healthcare settings.
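The perplexity metric that the abstract above uses to compare models can be illustrated in a few lines: perplexity is the exponential of the mean negative log-likelihood per token, so a model that assigns every token probability 1/170 scores exactly 170. This is a generic sketch of the metric, not the authors' evaluation code.

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model assigning each token probability 1/170 has perplexity 170,
# the scale of the LSTM baseline reported above.
uniform_170 = [math.log(1 / 170)] * 10
print(round(perplexity(uniform_170), 2))  # 170.0
```

Lower perplexity means the model spreads less probability mass away from the tokens that actually occur, which is why BioGPT-Large's score of 1.65 indicates a far better fit than the LSTM baseline.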
- North America > United States > Wisconsin > Milwaukee County > Milwaukee (0.04)
- Asia > India (0.04)
Design and Assessment of a Bimanual Haptic Epidural Needle Insertion Simulator
Davidor, Nitsan, Binyamin, Yair, Hayuni, Tamar, Nisky, Ilana
The case experience of anesthesiologists is one of the leading causes of accidental dural punctures and failed epidurals - the most common complications of epidural analgesia used for pain relief during delivery. We designed a bimanual haptic simulator to train anesthesiologists and optimize epidural analgesia skill acquisition. We present an assessment study conducted with 22 anesthesiologists of different competency levels from several Israeli hospitals. Our simulator emulates the forces applied to the epidural (Tuohy) needle, held by one hand, and those applied to the Loss of Resistance (LOR) syringe, held by the other. The resistance is calculated based on a model of the epidural region layers parameterized by the weight of the patient. We measured the movements of both haptic devices and quantified outcome rates (success, failed epidural, and dural puncture), insertion strategies, and the participants' answers to questionnaires about their perception of the simulation realism. We demonstrated good construct validity by showing that the simulator can distinguish between real-life novices and experts. Face and content validity were examined by studying users' impressions regarding the simulator's realism and fulfillment of purpose. We found differences in strategies between anesthesiologists of different levels, and suggest trainee-based instruction in advanced training stages.
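The layered, weight-parameterized resistance model the abstract describes can be sketched as a piecewise force profile along the insertion path. The layer names, depths, forces, and the weight scaling below are all hypothetical placeholders for illustration; the simulator's actual model is not specified here.

```python
def needle_resistance(depth_mm, weight_kg):
    """Piecewise resistance along the epidural insertion path.

    Hypothetical layer boundaries (mm) and forces (N); only the fat-layer
    thickness is scaled with patient weight, as a toy parameterization.
    """
    fat_thickness = 0.1 * weight_kg  # toy scaling of the fat layer
    layers = [
        ("skin", 5.0, 2.0),
        ("fat", 5.0 + fat_thickness, 0.8),
        ("ligament", 9.0 + fat_thickness, 3.5),
        ("epidural space", 10.0 + fat_thickness, 0.1),  # loss of resistance
    ]
    for name, boundary, force in layers:
        if depth_mm <= boundary:
            return name, force
    return "dura", 4.0  # advancing further risks a dural puncture

print(needle_resistance(16.5, 70))  # ('epidural space', 0.1)
```

The abrupt force drop on entering the epidural space is exactly the loss-of-resistance cue the LOR syringe hand is trained to feel, and going past it reproduces the dural-puncture failure mode the study measures.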
- Asia > Middle East > Israel > Southern District > Beer-Sheva (0.04)
- South America > Brazil (0.04)
- North America > United States (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (0.88)
- Research Report > Strength High (0.67)
- Health & Medicine > Surgery (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.93)
Micromanipulation in Surgery: Autonomous Needle Insertion Inside the Eye for Targeted Drug Delivery
Kim, Ji Woong, Zhang, Peiyao, Gehlbach, Peter, Iordachita, Iulian, Kobilarov, Marin
We consider a micromanipulation problem in eye surgery, specifically retinal vein cannulation (RVC). RVC involves inserting a microneedle into a retinal vein for the purpose of targeted drug delivery. The procedure requires accurately guiding a needle to a target vein and inserting it while avoiding damage to the surrounding tissues. RVC can be considered similar to the reach or push task studied in robotics manipulation, but with additional constraints related to precision and safety while interacting with soft tissues. Prior works have mainly focused on developing robotic hardware and sensors to enhance the surgeons' accuracy, leaving the automation of RVC largely unexplored. In this paper, we present the first autonomous strategy for RVC while relying on a minimal setup: a robotic arm, a needle, and monocular images. Our system exclusively relies on monocular vision to achieve precise navigation, gentle placement on the target vein, and safe insertion without causing tissue damage. Throughout the procedure, we employ machine learning for perception and to identify key surgical events such as needle-vein contact and vein punctures. Detecting these events guides our task and motion planning framework, which generates safe trajectories using model predictive control to complete the procedure. We validate our system through 24 successful autonomous trials on 4 cadaveric pig eyes. We show that our system can navigate to target veins within 22 micrometers of XY accuracy and under 35 seconds, and consistently puncture the target vein without causing tissue damage. Preliminary comparison to a human demonstrates the superior accuracy and reliability of our system.
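The event detection the abstract describes (needle-vein contact, vein puncture) can be illustrated with a toy heuristic: a puncture "pop" shows up as a sudden drop in a contact signal. The paper uses learned models on images, not this threshold rule; the function and signal below are illustrative stand-ins.

```python
def detect_puncture(signal, drop_threshold=0.5):
    """Return the index of the first sharp drop in a contact signal.

    A simple stand-in for the learned puncture detector described above:
    interaction force builds during contact, then falls abruptly once the
    needle tip breaks through the vein wall.
    """
    for i in range(1, len(signal)):
        if signal[i - 1] - signal[i] > drop_threshold:
            return i
    return None  # no puncture event found

# Hypothetical tool-tissue interaction trace: rise, then the puncture "pop".
force = [0.0, 0.2, 0.5, 0.9, 1.2, 0.3, 0.3]
print(detect_puncture(force))  # 5
```

In the autonomous pipeline, an event like this is what switches the planner from insertion to holding still for infusion, which is why reliable detection is safety-critical.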
- North America > United States > Maryland > Baltimore (0.05)
- Oceania > Australia (0.04)
- Europe (0.04)
- Asia (0.04)
Deep Learning Guided Autonomous Surgery: Guiding Small Needles into Sub-Millimeter Scale Blood Vessels
Kim, Ji Woong, Zhang, Peiyao, Gehlbach, Peter, Iordachita, Iulian, Kobilarov, Marin
We propose a general strategy for autonomous guidance and insertion of a needle into a retinal blood vessel. The main challenges underpinning this task are the accurate placement of the needle-tip on the target vein and a careful needle insertion maneuver to avoid double-puncturing the vein, while dealing with challenging kinematic constraints and depth-estimation uncertainty. Following how surgeons perform this task purely based on visual feedback, we develop a system which relies solely on \emph{monocular} visual cues by combining data-driven kinematic and contact estimation, visual-servoing, and model-based optimal control. By relying on both known kinematic models, as well as deep-learning based perception modules, the system can localize the surgical needle tip and detect needle-tissue interactions and venipuncture events. The outputs from these perception modules are then combined with a motion planning framework that uses visual-servoing and optimal control to cannulate the target vein, while respecting kinematic constraints that consider the safety of the procedure. We demonstrate that we can reliably and consistently perform needle insertion in the domain of retinal surgery, specifically in performing retinal vein cannulation. Using cadaveric pig eyes, we demonstrate that our system can navigate to target veins within 22$\mu m$ XY accuracy and perform the entire procedure in less than 35 seconds on average, and all 24 trials performed on 4 pig eyes were successful. A preliminary comparison study against a human operator shows that our system is consistently more accurate and safer, especially during safety-critical needle-tissue interactions. To the best of the authors' knowledge, this work accomplishes a first demonstration of autonomous retinal vein cannulation in a clinically relevant setting using animal tissues.
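The visual-servoing component described above can be sketched as a proportional control loop in image coordinates: each step moves the needle tip a fraction of the remaining error toward the target, clipped to a maximum step as a stand-in for kinematic safety constraints. The gain, step limit, and micrometer units below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def servo_step(tip_xy, target_xy, gain=0.5, max_step=10.0):
    """One proportional visual-servoing step in image coordinates (um).

    Moves the tip a fraction of the remaining error toward the target,
    clipped to a maximum step size as a toy kinematic safety constraint.
    """
    error = np.asarray(target_xy, float) - np.asarray(tip_xy, float)
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return np.asarray(tip_xy, float) + step

# Converge from (0, 0) to (100, 0) until within the reported 22 um accuracy.
pos, target = np.array([0.0, 0.0]), np.array([100.0, 0.0])
steps = 0
while np.linalg.norm(target - pos) > 22.0 and steps < 100:
    pos = servo_step(pos, target)
    steps += 1
```

The step clipping is the part that matters for safety: near delicate tissue, bounding per-step motion keeps the closed loop gentle even when the perception module reports a large residual error.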
- Oceania > Australia (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- Europe > Denmark (0.04)
- Asia (0.04)
- Health & Medicine > Therapeutic Area > Ophthalmology/Optometry (0.93)
- Health & Medicine > Surgery (0.69)
MIT researchers are one step closer to perfecting self-repairing robot bees
"Hated in the Nation," an episode of Netflix's dystopian sci-fi series "Black Mirror," predicted it: Thousands of robotic bees buzz from flower to flower, pollinating plants to make up for declining insect populations. And while the episode's robots eventually turn against their human inventors, killing over 387,000 people by ramming their artificial stingers into victims' heads, the MIT scientists working on perfecting today's aerial robots likely believe we don't need to worry about that. Despite the show's foreboding take on robotic bees, researchers at the Massachusetts Institute of Technology are one step closer to perfecting the artificial aerial critters. In a paper published March 15, a group of researchers at MIT showed that using resilient muscle-like actuators and self-repairing technology can vastly improve the robustness of robotic bees. "Insects flying are incredibly difficult to understand," said Kevin Chen, an assistant professor at MIT, head of the institute's Soft and Micro Robotics Laboratory, and the senior author of the paper.
Characterizing 4-string contact interaction using machine learning
Erbin, Harold, Fırat, Atakan Hilmi
The geometry of 4-string contact interaction of closed string field theory is characterized using machine learning. We obtain Strebel quadratic differentials on 4-punctured spheres as a neural network by performing unsupervised learning with a custom-built loss function. This allows us to solve for local coordinates and compute their associated mapping radii numerically. We also train a neural network distinguishing the vertex region from the Feynman region. As a check, the 4-tachyon contact term in the tachyon potential is computed and good agreement with the results in the literature is observed. We argue that our algorithm is manifestly independent of the number of punctures, and scaling it to characterize the geometry of $n$-string contact interaction is feasible.
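The training setup the abstract describes, unsupervised learning driven by a custom loss, can be illustrated with a toy example: no labels, only a loss that penalizes violation of a constraint the model output must satisfy. The quadratic model and unit-output constraint below are hypothetical stand-ins; the paper's loss instead encodes the Strebel condition.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # parameters of a tiny quadratic "model"

def custom_loss(w, x):
    """Constraint-style loss with no target labels: drive y^2 toward 1."""
    y = w[0] * x**2 + w[1] * x + w[2]
    return np.mean((y**2 - 1.0) ** 2)

x = np.linspace(-1.0, 1.0, 50)
loss_before = custom_loss(w, x)

# Plain gradient descent using central finite differences.
lr, eps = 0.05, 1e-5
for _ in range(2000):
    grad = np.array([
        (custom_loss(w + eps * e, x) - custom_loss(w - eps * e, x)) / (2 * eps)
        for e in np.eye(3)
    ])
    w -= lr * grad

loss_after = custom_loss(w, x)
```

The point of the pattern is that the loss itself encodes the geometric condition to be solved, so minimizing it yields the desired object (here a function with |y| near 1; in the paper, a Strebel differential) without any ground-truth data.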
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- Europe > France (0.04)
Finnish innovators look for cure to healthcare challenges
Aalto University and Bayer in February announced they have expanded their collaboration on artificial intelligence-based solutions for enhancing the safety and efficacy of clinical drug research by embarking on a three-year project with HUS Helsinki University Hospital. The methods and algorithms developed as part of the collaboration will be applied to the patient data of the university hospital. "Combining real-world data and clinical research data involves several challenges," said Jussi Leinonen, principal clinical data scientist at Bayer. "With AI, it can be done much faster, more efficiently and also more reliably." The project partners believe artificial intelligence is a means to address numerous challenges associated with drug development, including its resource-intensive nature.
- Europe > Finland > Uusimaa > Helsinki (0.27)
- Europe > Switzerland (0.05)
- Europe > Germany (0.05)
- (4 more...)
The Most Cringe-Inducing Surgical Robots from IEEE's Intelligent Robots Conference
We're still sifting through the more than 1,200 presentations at IROS 2017, IEEE's massive intelligent robots conference held last month in Vancouver. This week we found some terrifying gems: surgical robots that snake up the nose, puncture the breast, and suction intestinal tissue with motions so jarring they will make any patient glad (or longing) to be passed out during the procedure. We previously highlighted 20 of our favorite videos from the conference. Now, with Halloween approaching, we give you: the five most gruesome (maniacal laugh). Sadistically named the "Stormram," this robot punctures the breast in one slow, ominous motion, to extract a tissue sample.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.05)
- Europe > France > Grand Est > Bas-Rhin > Strasbourg (0.05)
- Health & Medicine > Health Care Technology (0.86)
- Health & Medicine > Diagnostic Medicine > Imaging (0.32)