
Collaborating Authors: Armand, Mehran


Toward Process Controlled Medical Robotic System

arXiv.org Artificial Intelligence

Medical errors, defined as unintended acts of either omission or commission that cause the failure of medical actions, are the third leading cause of death in the United States. The application of autonomy and robotics can alleviate some causes of medical errors by improving accuracy and providing the means to precisely follow planned procedures. However, for robotic applications to improve safety, they must maintain constant operating conditions in the presence of disturbances and provide reliable measurements, evaluation, and control for each state of the procedure. This article addresses the need for process control in medical robotic systems and proposes a standardized design cycle toward its automation. Monitoring and controlling the changing conditions in a medical or surgical environment necessitates a clear definition of workflows and their procedural dependencies. We propose integrating process control into medical robotic workflows to identify changes in the states of the system and environment, the possible operations, and the transitions to new states. In this way, the system translates clinician experience and procedure workflows into machine-interpretable languages. The design cycle based on a hierarchical finite state machine (hFSM) formulation can be a deterministic process, which opens up possibilities for higher-level automation in medical robotics. As shown in our work, a standardized design cycle and software paradigm pave the way toward controlled workflows that can be generated automatically. Additionally, a modular robotic system architecture that integrates the hFSM provides straightforward software and hardware integration. This article discusses the system design, software implementation, and example applications to Robot-Assisted Transcranial Magnetic Stimulation (TMS) and robot-assisted femoroplasty. We also assess these two example systems by testing their robotic tool placement.
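
The abstract above does not include implementation details. Purely as an illustration of the hFSM idea, the following Python sketch encodes a hypothetical robot-assisted TMS workflow as a hierarchical finite state machine; all state names, event names, and the StateMachine class are invented for this example and are not taken from the paper.

    # Minimal hierarchical finite state machine (hFSM) sketch for a surgical workflow.
    # All state and event names are hypothetical; they only illustrate translating a
    # clinician-defined workflow into a machine-interpretable form.

    class StateMachine:
        def __init__(self, name, initial, transitions, substates=None):
            self.name = name
            self.state = initial
            self.transitions = transitions      # {(state, event): next_state}
            self.substates = substates or {}    # {state: nested StateMachine}

        def dispatch(self, event):
            # Try the nested machine of the current state first (hierarchy).
            child = self.substates.get(self.state)
            if child and child.dispatch(event):
                return True
            key = (self.state, event)
            if key in self.transitions:
                self.state = self.transitions[key]
                print(f"[{self.name}] -> {self.state}")
                return True
            return False


    # Hypothetical registration sub-procedure nested inside the top-level workflow.
    registration = StateMachine(
        "registration",
        initial="collect_landmarks",
        transitions={
            ("collect_landmarks", "landmarks_done"): "compute_transform",
            ("compute_transform", "transform_ok"): "verified",
        },
    )

    workflow = StateMachine(
        "tms_workflow",
        initial="idle",
        transitions={
            ("idle", "patient_ready"): "registration",
            ("registration", "registration_done"): "coil_placement",
            ("coil_placement", "target_reached"): "stimulation",
            ("stimulation", "session_complete"): "idle",
        },
        substates={"registration": registration},
    )

    if __name__ == "__main__":
        for e in ["patient_ready", "landmarks_done", "transform_ok",
                  "registration_done", "target_reached", "session_complete"]:
            workflow.dispatch(e)

Because every transition is an explicit (state, event) pair, a workflow written this way can be checked and regenerated deterministically, which is the property the design cycle relies on.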


SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM

arXiv.org Artificial Intelligence

The advent of large language models (LLMs) has led to significant progress in image analysis, with potential for further advancements. SAM [Kirillov et al., 2023] is a revolutionary foundation model for image segmentation and has already demonstrated the capability to handle diverse segmentation tasks. SAM is especially strong in zero-shot domain generalization compared with existing elaborate models fine-tuned on specific domains. An important prospect for the application of SAM is its adaptation to the complex task of segmenting medical images, which exhibit significant inter-subject variation and a low signal-to-noise ratio. Segmentation separates different structures in medical images, which are then used to detect regions of interest or reconstruct multi-dimensional anatomical models [Sinha and Dolz, 2021]. Existing AI-based segmentation methods, however, do not fully bridge the domain gap among imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US) [Wang et al., 2020]. The domain gap refers to the difference in data format across image modalities, as each modality offers a distinct advantage in visualizing anatomical structures and related pathologies (e.g., tumor, bone fracture). This difference makes it challenging to train AI systems to perform a common analysis without a comprehensive dataset that includes all relevant domains from the various image modalities.
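
As a rough, illustrative complement to the abstract above, the sketch below shows the kind of zero-shot, prompt-based segmentation SAM provides on a single 2D slice of a medical volume. It is not the SAMM / 3D Slicer integration itself; it assumes the public segment-anything package, the nibabel library, a downloaded ViT-B checkpoint, and placeholder file names and prompt coordinates.

    # Prompt-based SAM segmentation on one axial slice of a CT volume.
    # File names and the prompt point below are placeholders.
    import numpy as np
    import nibabel as nib
    from segment_anything import sam_model_registry, SamPredictor

    # Load one axial slice and rescale it to an 8-bit RGB image, as SAM expects.
    volume = nib.load("ct_volume.nii.gz").get_fdata()
    slice_2d = volume[:, :, volume.shape[2] // 2]
    lo, hi = slice_2d.min(), slice_2d.max()
    slice_u8 = np.clip((slice_2d - lo) / (hi - lo + 1e-6) * 255, 0, 255).astype(np.uint8)
    rgb = np.stack([slice_u8] * 3, axis=-1)

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(rgb)

    # A single foreground point prompt (x, y in image coordinates).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[256, 256]]),   # placeholder prompt location
        point_labels=np.array([1]),            # 1 = foreground
        multimask_output=True,
    )
    best_mask = masks[np.argmax(scores)]
    print("Mask pixels:", int(best_mask.sum()))

The same prompt-then-predict loop is what makes an interactive integration (such as driving SAM from a slice viewer) practical, since only the prompts change between user interactions once the image embedding is computed.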


Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation

arXiv.org Artificial Intelligence

Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well established, the incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity (corridor, activity, view, and frame value) and simulates the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from the detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 93.8% on simulated sequences and 67.57% on cadaver data across all granularity levels, with up to 88% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
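
The paper's simulation parameters are not reproduced here. Purely to illustrate the idea of modeling a surgical workflow as a Markov process that yields fully annotated phase sequences, the toy sketch below uses invented phases and transition probabilities.

    # Toy simulation of a surgical workflow as a Markov chain that emits labeled
    # phase sequences. Phases and transition probabilities are invented and do
    # not come from the Pelphix paper.
    import numpy as np

    phases = ["position_wire", "acquire_view", "insert_wire", "place_screw", "done"]
    # Row i: probability of moving from phases[i] to each phase.
    P = np.array([
        [0.60, 0.30, 0.10, 0.00, 0.00],
        [0.20, 0.50, 0.30, 0.00, 0.00],
        [0.00, 0.10, 0.60, 0.30, 0.00],
        [0.00, 0.00, 0.00, 0.70, 0.30],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ])

    def simulate_sequence(rng, max_steps=50):
        """Sample a phase sequence; each step would correspond to one simulated X-ray frame."""
        state = 0
        seq = [phases[state]]
        for _ in range(max_steps):
            state = rng.choice(len(phases), p=P[state])
            seq.append(phases[state])
            if phases[state] == "done":
                break
        return seq

    rng = np.random.default_rng(0)
    print(simulate_sequence(rng))

In a full pipeline, each sampled phase label would be paired with a rendered X-ray frame, giving the fully annotated sequences used to train the recognition model.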


Design and Fabrication of a Fiber Bragg Grating Shape Sensor for Shape Reconstruction of a Continuum Manipulator

arXiv.org Artificial Intelligence

Continuum dexterous manipulators (CDMs) are suitable for performing tasks in constrained environments due to their high dexterity and maneuverability. Despite the inherent advantages of CDMs in minimally invasive surgery, real-time control of a CDM's shape during non-constant-curvature bending is still challenging. This study presents a novel approach for the design and fabrication of a large-deflection fiber Bragg grating (FBG) shape sensor embedded within the lumens inside the walls of a CDM with a large instrument channel. The shape sensor consisted of two fibers, each with three FBG nodes. A shape-sensing model was introduced to reconstruct the centerline of the CDM from the FBG wavelengths. Different experiments, including shape sensor tests and CDM shape reconstruction tests, were conducted to assess the overall accuracy of the shape sensing. The FBG sensor evaluation revealed a linear curvature-wavelength relationship, with detection of curvatures as large as 0.045 mm^-1 at a 90-degree bending angle and a sensitivity of up to 5.50 nm/mm in each bending direction. The CDM shape reconstruction experiments in a free environment demonstrated a shape-tracking accuracy of 0.216 ± 0.126 mm for positive/negative deflections. The CDM shape reconstruction errors for three cases of bending with obstacles were 0.436 ± 0.370 mm for the proximal case, 0.485 ± 0.418 mm for the middle case, and 0.312 ± 0.261 mm for the distal case. This study indicates the adequate performance of the FBG sensor and the effectiveness of the model for tracking the shape of a large-deflection CDM with non-constant-curvature bending in minimally invasive orthopaedic applications.
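
As a highly simplified, planar illustration of the shape-sensing idea described above, the sketch below converts FBG wavelength shifts into a 2D centerline under an assumed linear curvature-wavelength calibration and piecewise-constant curvature between sensing nodes. The calibration slope, node spacing, and wavelength shifts are placeholders, not values from the paper.

    # Simplified planar centerline reconstruction from FBG wavelength shifts,
    # assuming a linear curvature-wavelength calibration and piecewise-constant
    # curvature between sensing nodes. All numbers below are placeholders.
    import numpy as np

    def reconstruct_centerline(d_lambda_nm, calib_nm_per_inv_mm, seg_len_mm, pts_per_seg=20):
        """Integrate curvature segment by segment to get 2D centerline points (mm)."""
        kappa = np.asarray(d_lambda_nm) / calib_nm_per_inv_mm  # curvature in 1/mm
        x, y, theta = [0.0], [0.0], 0.0
        ds = seg_len_mm / pts_per_seg
        for k in kappa:
            for _ in range(pts_per_seg):
                theta += k * ds                 # bending angle accumulates with arc length
                x.append(x[-1] + ds * np.cos(theta))
                y.append(y[-1] + ds * np.sin(theta))
        return np.column_stack([x, y])

    # Three FBG nodes with hypothetical wavelength shifts (nm).
    centerline = reconstruct_centerline(
        d_lambda_nm=[0.10, 0.15, 0.05],
        calib_nm_per_inv_mm=5.0,    # placeholder slope of the linear calibration
        seg_len_mm=12.0,            # placeholder spacing between FBG nodes
    )
    print("Tip position (mm):", centerline[-1])

A full implementation would calibrate one slope per fiber and bending direction, combine the fibers to resolve the bending plane, and integrate in 3D, but the segment-wise integration of curvature shown here is the core of the centerline reconstruction.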