Bergeles, Christos
A Generalized Modeling Approach to Liquid-driven Ballooning Membranes
Ismayilov, Mirroyal, Merlin, Jeref, Bergeles, Christos, Lindenroth, Lukas
Soft robotics is advancing the use of flexible materials for adaptable robotic systems. Membrane-actuated soft robots address the limitations of traditional soft robots by using pressurized, extensible membranes to achieve stable, large deformations, yet control and state estimation remain challenging due to their complex deformation dynamics. This paper presents a novel modeling approach for liquid-driven ballooning membranes, employing an ellipsoid approximation to model shape and stretch under planar deformation. Relying solely on intrinsic feedback from pressure data and controlled liquid volume, this approach enables accurate membrane state estimation. We demonstrate the effectiveness of the proposed model for ballooning-membrane-based actuators through experimental validation, obtaining an indentation depth error of $RMSE_{h_2}=0.80\;$mm, corresponding to $23\%$ of the indentation range and $6.67\%$ of the unindented actuator height range. For force estimation, the error is $RMSE_{F}=0.15\;$N, corresponding to $10\%$ of the measured force range.
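For intuition, here is a minimal sketch of the geometric relation an ellipsoid approximation of this kind can rest on, assuming the inflated membrane is treated as a half-ellipsoid of base radius $a$ and apex height $c$ (our own illustration, not necessarily the paper's exact formulation):

```latex
% Half-ellipsoid (spheroidal cap) volume: base radius a, apex height c.
% With a fixed by the membrane's clamped boundary, the controlled
% liquid volume V directly determines the apex height c, i.e. the
% membrane's shape state:
\[
  V = \tfrac{2}{3}\pi a^{2} c
  \quad\Longrightarrow\quad
  c = \frac{3V}{2\pi a^{2}} .
\]
```

Pressure feedback would then provide the complementary information about membrane stretch and external load on top of such a volumetric relation.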
Evaluating Robotic Approach Techniques for the Insertion of a Straight Instrument into a Vitreoretinal Surgery Trocar
Henry, Ross, Huber, Martin, Mablekos-Alexiou, Anestis, Seneci, Carlo, Abdelaziz, Mohamed, Natalius, Hans, da Cruz, Lyndon, Bergeles, Christos
INTRODUCTION: Advances in vitreoretinal surgery have enabled interventions with precision previously deemed infeasible [1], with certified systems appearing on the market, e.g. the Preceyes Surgical System offering 20 μm accuracy [2]. Despite their benefits, such systems add delay to the interventional workflow, which may be hindering their widespread adoption. One source of delay is the time required to introduce the system's micro-precise tool into the eye via the Trocar Entry Point (TEP). We compare three approaches that control the tool's position and orientation through combinations of co-manipulation and teleoperation. The goal is to place a 0.5 mm stainless steel rod within a 1 mm custom trocar inserted into the inferior position of a Bioniko Fundus Advanced Eye Phantom [4]. The task is complete when the participant deems the docking sufficient and extrudes the rod into the phantom via the trocar.
Learning-Based Autonomous Navigation, Benchmark Environments and Simulation Framework for Endovascular Interventions
Karstensen, Lennart, Robertshaw, Harry, Hatzl, Johannes, Jackson, Benjamin, Langejürgen, Jens, Breininger, Katharina, Uhl, Christian, Sadati, S. M. Hadi, Booth, Thomas, Bergeles, Christos, Mathis-Ullrich, Franziska
Endovascular interventions are a life-saving treatment for many diseases, yet suffer from drawbacks such as radiation exposure and a potential scarcity of proficient physicians. Robotic assistance during these interventions could help address these problems. Research on autonomous endovascular interventions using artificial-intelligence-based methods is gaining popularity. However, variability in assessment environments hinders comparison of different approaches, primarily because each study employs a unique evaluation framework. In this study, we present deep reinforcement learning-based autonomous endovascular device navigation on three distinct digital benchmark interventions: BasicWireNav, ArchVariety, and DualDeviceNav. The benchmark interventions were implemented with our modular simulation framework stEVE (simulated EndoVascular Environment). Autonomous controllers were trained solely in simulation and evaluated both in simulation and on physical test benches with camera and fluoroscopy feedback. Autonomous control for BasicWireNav and ArchVariety reached high success rates and transferred successfully from the simulated training environment to the physical test benches, while autonomous control for DualDeviceNav reached a moderate success rate. The experiments demonstrate the feasibility of stEVE and its potential for transferring controllers trained in simulation to real-world scenarios, while also revealing areas that offer opportunities for future research. By providing open-source training scripts, benchmarks, and the stEVE framework, this study demonstrates the transferability of autonomous controllers from simulation to the real world in endovascular navigation, lowers entry barriers, and increases the comparability of research on endovascular assistance systems.
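As a hedged illustration of the training loop such benchmarks plug into (a toy stand-in, not stEVE's actual interface; the environment, observation, and reward below are all assumptions):

```python
# Toy sketch of the RL loop a framework like stEVE enables. The
# environment, observation contents, and reward shaping are assumptions,
# not stEVE's real API; see the open-source repository for the latter.
import numpy as np

class GuidewireEnv:
    """Toy stand-in for a simulated endovascular navigation environment."""
    def reset(self):
        self.tip = np.zeros(2)
        self.target = np.array([10.0, 5.0])
        return self._obs()

    def step(self, action):
        # action: planar displacement of the guidewire tip, clipped to
        # mimic limited base translation/rotation per control step
        self.tip += np.clip(action, -1.0, 1.0)
        dist = np.linalg.norm(self.target - self.tip)
        done = dist < 0.5
        reward = -dist + (100.0 if done else 0.0)
        return self._obs(), reward, done

    def _obs(self):
        return np.concatenate([self.tip, self.target])

env = GuidewireEnv()
obs = env.reset()
for _ in range(200):
    action = (obs[2:] - obs[:2]) * 0.1   # naive proportional policy
    obs, reward, done = env.step(action)
    if done:
        break
```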
Semi-Autonomous Laparoscopic Robot Docking with Learned Hand-Eye Information Fusion
Tian, Huanyu, Huber, Martin, Mower, Christopher E., Han, Zhe, Li, Changsheng, Duan, Xingguang, Bergeles, Christos
In this study, we introduce a novel shared-control system for keyhole docking operations, combining a commercial camera with occlusion-robust pose estimation and a hand-eye information fusion technique. This system is used to enhance docking precision and force-compliance safety. To train a hand-eye information fusion network model, we generated a self-supervised dataset using this docking system. After training, our pose estimation method showed improved accuracy compared to traditional methods, including observation-only approaches, hand-eye calibration, and conventional state estimation filters. In real-world phantom experiments, our approach demonstrated its effectiveness with reduced position dispersion ($1.23 \pm 0.81$ mm vs. $2.47 \pm 1.22$ mm) and force dispersion ($0.78 \pm 0.57$ N vs. $1.15 \pm 0.97$ N) compared to the control group. These advancements enhance interaction and stability in semi-autonomous co-manipulation scenarios. The study presents an interference-resistant, stable, and precise solution with potential applications extending beyond laparoscopic surgery to other minimally invasive procedures.
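For intuition, here is a minimal sketch of one classical way to fuse a camera pose estimate with a kinematics-derived one, covariance weighting; the paper's learned fusion network replaces such hand-tuned weights, and all values below are illustrative assumptions:

```python
# Hedged sketch: covariance-weighted fusion of two position estimates,
# as a classical stand-in for the paper's learned hand-eye fusion
# network (which this is not). All covariances are illustrative.
import numpy as np

def fuse_positions(p_cam, cov_cam, p_kin, cov_kin):
    """Covariance-weighted fusion of two 3D position estimates."""
    w_cam = np.linalg.inv(cov_cam)
    w_kin = np.linalg.inv(cov_kin)
    cov = np.linalg.inv(w_cam + w_kin)
    p = cov @ (w_cam @ p_cam + w_kin @ p_kin)
    return p, cov

p_cam = np.array([0.102, -0.051, 0.300])   # camera observation (m)
p_kin = np.array([0.100, -0.050, 0.305])   # forward-kinematics prediction (m)
cov_cam = np.diag([2e-6, 2e-6, 8e-6])      # camera assumed noisier in depth
cov_kin = np.diag([4e-6, 4e-6, 4e-6])
p_fused, cov_fused = fuse_positions(p_cam, cov_cam, p_kin, cov_kin)
```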
Rethinking Low-quality Optical Flow in Unsupervised Surgical Instrument Segmentation
Wu, Peiran, Liu, Yang, Huo, Jiayu, Zhang, Gongyu, Bergeles, Christos, Sparks, Rachel, Dasgupta, Prokar, Granados, Alejandro, Ourselin, Sebastien
Video-based surgical instrument segmentation plays an important role in robot-assisted surgery. Unlike supervised settings, unsupervised segmentation relies heavily on motion cues, which are challenging to discern because optical flow in surgical footage is typically of lower quality than in natural scenes. This places a considerable burden on the advancement of unsupervised segmentation techniques. In our work, we address the challenge of enhancing model performance despite the inherent limitations of low-quality optical flow. Our methodology employs a three-pronged approach: extracting boundaries directly from the optical flow, selectively discarding frames with inferior flow quality, and fine-tuning with variable frame rates. We thoroughly evaluate our strategy on the EndoVis2017 VOS dataset and the EndoVis2017 Challenge dataset, where our model demonstrates promising results, achieving mean Intersection-over-Union (mIoU) scores of 0.75 and 0.72, respectively. Our findings suggest that our approach can greatly decrease the need for manual annotations in clinical environments and may facilitate the annotation process for new datasets. The code is available at https://github.com/wpr1018001/Rethinking-Low-quality-Optical-Flow.git
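As a hedged sketch of one plausible flow-quality gate, forward-backward consistency, in the spirit of "selectively discarding frames with inferior flow quality"; the paper's actual criterion may differ, and the threshold below is an assumption:

```python
# Forward-backward consistency as a generic flow-quality measure; not
# necessarily the paper's criterion. Threshold is an assumed value.
import numpy as np

def fb_consistency_error(flow_fw, flow_bw):
    """Mean forward-backward endpoint error of a dense flow field.

    flow_fw, flow_bw: (H, W, 2) arrays, forward and backward optical flow.
    """
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where does each pixel land under the forward flow?
    x2 = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    # Backward flow sampled at the landing point should cancel forward flow.
    err = flow_fw + flow_bw[y2, x2]
    return float(np.linalg.norm(err, axis=-1).mean())

def keep_frame(flow_fw, flow_bw, threshold=1.5):
    """Discard a frame pair whose flow is too inconsistent to trust."""
    return fb_consistency_error(flow_fw, flow_bw) < threshold
```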
Excitation Trajectory Optimization for Dynamic Parameter Identification Using Virtual Constraints in Hands-on Robotic System
Tian, Huanyu, Huber, Martin, Mower, Christopher E., Han, Zhe, Li, Changsheng, Duan, Xingguang, Bergeles, Christos
This paper proposes a novel, more computationally efficient method for optimizing robot excitation trajectories for dynamic parameter identification, with an emphasis on self-collision avoidance. This addresses the system-identification challenge of obtaining high-quality training data from co-manipulated robotic arms that can be equipped with a variety of tools, a common scenario in industrial as well as clinical and research contexts. Using the Unified Robot Description Format (URDF) to drive a symbolic Python implementation of the Recursive Newton-Euler Algorithm (RNEA), the approach estimates dynamic parameters such as inertia through regression on data from real robots. The optimized excitation trajectory achieved identification criteria on par with state-of-the-art reported results, which did not consider self-collision or tool calibration. Furthermore, physical Human-Robot Interaction (pHRI) admittance control experiments conducted in a surgical context to evaluate the derived inverse dynamics model showed a 30.1\% workload reduction as measured by the NASA TLX questionnaire.
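For context, excitation trajectories for dynamic identification are commonly parameterized as finite Fourier series; the sketch below illustrates that standard parameterization (coefficients, base frequency, and limits are illustrative assumptions, not the paper's optimized values):

```python
# Finite Fourier-series excitation trajectory, the standard
# parameterization for dynamic identification; values are illustrative.
import numpy as np

def fourier_traj(t, q0, a, b, w0=0.1 * 2 * np.pi):
    """Joint trajectory q(t) built from Fourier coefficients a_k, b_k."""
    q = np.full_like(t, q0)
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        q += (ak / (w0 * k)) * np.sin(w0 * k * t) \
           - (bk / (w0 * k)) * np.cos(w0 * k * t)
    return q

t = np.linspace(0.0, 10.0, 1000)
q = fourier_traj(t, q0=0.0, a=[0.3, -0.1, 0.05], b=[0.2, 0.1, -0.05])
# An optimizer would tune a, b per joint to condition the identification
# regressor well, subject to joint limits and, as emphasized in the
# paper, self-collision constraints.
```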
LBR-Stack: ROS 2 and Python Integration of KUKA FRI for Med and IIWA Robots
Huber, Martin, Mower, Christopher E., Ourselin, Sebastien, Vercauteren, Tom, Bergeles, Christos
The LBR-Stack is a collection of packages that simplify the usage and extend the capabilities of KUKA's Fast Robot Interface (FRI) (Schreiber et al., 2010). It is designed for mission-critical hard real-time applications. It supports the KUKA LBR Med7/14 and KUKA LBR IIWA7/14 robots, both in Gazebo simulation (Koenig & Howard, 2004) and for communication with real hardware. An overview of the software architecture is shown in Figure 2. At the LBR-Stack's core are two packages, including fri, which integrates KUKA's original FRI client library into CMake.
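As a hedged example of how a user-side ROS 2 node might consume the joint states such a stack publishes (the topic name below is an assumption for illustration, not LBR-Stack's documented interface):

```python
# Hypothetical minimal ROS 2 node reading joint states; the topic name
# '/lbr/joint_states' is an assumption, not LBR-Stack's documented API.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointStateLogger(Node):
    def __init__(self):
        super().__init__('joint_state_logger')
        self.create_subscription(
            JointState, '/lbr/joint_states', self.on_joint_state, 10)

    def on_joint_state(self, msg: JointState):
        # Log the measured joint positions at the incoming rate.
        self.get_logger().info(f'q = {list(msg.position)}')

def main():
    rclpy.init()
    rclpy.spin(JointStateLogger())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```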
OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control
Mower, Christopher E., Moura, João, Behabadi, Nazanin Zamani, Vijayakumar, Sethu, Vercauteren, Tom, Bergeles, Christos
This paper presents OpTaS, a task-specification Python library for Trajectory Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and MPC are receiving increasing interest in optimal control, in particular for handling dynamic environments. While a flurry of software libraries exists to handle such problems, they either provide interfaces that are limited to a specific problem formulation (e.g. TracIK, CHOMP), or are large and statically specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the other hand, allows a user to specify custom nonlinear constrained problem formulations in a single Python script, with controller parameters that can be modified during execution. The library provides interfaces to several open-source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy) to facilitate integration with established workflows in robotics. Further benefits of OpTaS are highlighted through a thorough comparison with common libraries. An additional key advantage of OpTaS is the ability to define optimal control tasks in the joint space, the task space, or both simultaneously. OpTaS is easily installed via pip, and the source code with examples can be found at https://github.com/cmower/optas.
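To illustrate the problem class OpTaS targets, here is a minimal constrained program mixing task-space and joint-space terms, written directly against SciPy rather than OpTaS's own API (the arm geometry, weights, and bounds are assumptions):

```python
# Generic sketch of a joint-space + task-space nonlinear program of the
# kind OpTaS formulates; this uses SciPy directly and is NOT OpTaS's API.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.4, 0.3                      # assumed planar 2-link arm geometry

def fk(q):
    """Task-space end-effector position of the 2-link arm."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

target = np.array([0.5, 0.2])          # task-space goal
q_nominal = np.array([0.0, 0.5])       # joint-space preference

def cost(q):
    # Task-space tracking term plus a joint-space regularizer.
    return np.sum((fk(q) - target) ** 2) + 1e-2 * np.sum((q - q_nominal) ** 2)

res = minimize(cost, q_nominal,
               bounds=[(-np.pi, np.pi), (-np.pi, np.pi)])  # joint limits
print('q* =', res.x, 'reached', fk(res.x))
```

In OpTaS, the same ingredients (models, cost terms, constraints, solver choice) are declared through the library's builder interface, and parameters such as the target can be updated at runtime for MPC-style use.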
ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and Human-Robot Interaction
Mower, Christopher E., Stouraitis, Theodoros, Moura, João, Rauch, Christian, Yan, Lei, Behabadi, Nazanin Zamani, Gienger, Michael, Vercauteren, Tom, Bergeles, Christos, Vijayakumar, Sethu
Reliable contact simulation plays a key role in the development of (semi-)autonomous robots, especially in contact-rich manipulation scenarios, an active robotics research topic. Beyond simulation, components such as sensing, perception, data collection, robot hardware control, and human interfaces are all key enablers for applying machine learning algorithms or model-based approaches in real-world systems. However, there is a lack of software connecting reliable contact simulation with the larger robotics ecosystem (e.g. ROS, Orocos) to support a more seamless application of novel approaches from the literature to existing robotic hardware. In this paper, we present the ROS-PyBullet Interface, a framework that provides a bridge between the reliable contact/impact simulator PyBullet and the Robot Operating System (ROS). Furthermore, we provide additional utilities that facilitate Human-Robot Interaction (HRI) in the simulated environment. We also present several use cases that highlight the capabilities and usefulness of our framework. The video, source code, and examples are included in the supplementary material; our full code base is open source and can be found at https://github.com/cmower/ros_pybullet_interface.
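As a hedged sketch of the kind of contact information PyBullet exposes and that a bridge like this relays to ROS (the objects and settling time below are illustrative, not taken from the framework's examples):

```python
# Minimal PyBullet contact query; a ROS bridge could publish these
# contact points as a topic. Objects and step counts are illustrative.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                    # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane = p.loadURDF('plane.urdf')
cube = p.loadURDF('cube_small.urdf', basePosition=[0, 0, 0.5])

for _ in range(240):                   # ~1 s at the default 240 Hz step
    p.stepSimulation()

# Contact points between the cube and the plane after settling.
for c in p.getContactPoints(cube, plane):
    print('position:', c[5], 'normal force:', c[9])

p.disconnect()
```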