Ferraguti, Federica
SAR-RARP50: Segmentation of surgical instrumentation and Action Recognition on Robot-Assisted Radical Prostatectomy Challenge
Psychogyios, Dimitrios, Colleoni, Emanuele, Van Amsterdam, Beatrice, Li, Chih-Yang, Huang, Shu-Yu, Li, Yuchong, Jia, Fucang, Zou, Baosheng, Wang, Guotai, Liu, Yang, Boels, Maxence, Huo, Jiayu, Sparks, Rachel, Dasgupta, Prokar, Granados, Alejandro, Ourselin, Sebastien, Xu, Mengya, Wang, An, Wu, Yanan, Bai, Long, Ren, Hongliang, Yamada, Atsushi, Harai, Yuriko, Ishikawa, Yuto, Hayashi, Kazuyuki, Simoens, Jente, DeBacker, Pieter, Cisternino, Francesco, Furnari, Gabriele, Mottrie, Alex, Ferraguti, Federica, Kondo, Satoshi, Kasai, Satoshi, Hirasawa, Kousuke, Kim, Soohee, Lee, Seung Hyun, Lee, Kyu Eun, Kong, Hyoun-Joong, Fu, Kui, Li, Chao, An, Shan, Krell, Stefanie, Bodenstedt, Sebastian, Ayobi, Nicolas, Perez, Alejandra, Rodriguez, Santiago, Puentes, Juanita, Arbelaez, Pablo, Mohareri, Omid, Stoyanov, Danail
Surgical tool segmentation and action recognition are fundamental building blocks in many computer-assisted intervention applications, ranging from surgical skills assessment to decision support systems. Learning-based action recognition and segmentation approaches now outperform classical methods, but they rely on large annotated datasets. Furthermore, action recognition and tool segmentation algorithms are often trained and make predictions in isolation from each other, without exploiting potential cross-task relationships. With the EndoVis 2022 SAR-RARP50 challenge, we release the first multimodal, publicly available, in-vivo dataset for surgical action recognition and semantic instrumentation segmentation, containing 50 suturing video segments of Robot-Assisted Radical Prostatectomy (RARP). The aim of the challenge is twofold: first, to enable researchers to leverage the scale of the provided dataset and develop robust, highly accurate single-task action recognition and tool segmentation approaches in the surgical domain; second, to further explore the potential of multitask learning approaches and determine their comparative advantage over their single-task counterparts. A total of 12 teams participated in the challenge, contributing 7 action recognition methods, 9 instrument segmentation techniques, and 4 multitask approaches that integrated both action recognition and instrument segmentation. The complete SAR-RARP50 dataset is available at: https://rdr.ucl.ac.uk/projects/SARRARP50_Segmentation_of_surgical_instrumentation_and_Action_Recognition_on_Robot-Assisted_Radical_Prostatectomy_Challenge/191091
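For readers unfamiliar with the multitask setup the challenge investigates, the sketch below shows one way a shared encoder can feed both a per-pixel tool-segmentation head and a frame-level action-recognition head. It is a minimal illustrative PyTorch toy, not any participating team's architecture; the layer sizes and class counts are placeholders.

```python
# Minimal sketch of a multitask network with a shared encoder, a per-pixel
# segmentation head, and a frame-level action head. Purely illustrative:
# it does not reproduce any challenge entry, and the class counts and layer
# sizes below are placeholders.
import torch
import torch.nn as nn


class MultiTaskSurgicalNet(nn.Module):
    def __init__(self, num_seg_classes: int = 10, num_actions: int = 8):
        super().__init__()
        # Shared convolutional encoder (toy-sized for the sketch).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation head: 1x1 classifier, then upsample to input resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, num_seg_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        # Action head: pool spatial features into one class vector per frame.
        self.action_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_actions)
        )

    def forward(self, frames: torch.Tensor):
        features = self.encoder(frames)
        return self.seg_head(features), self.action_head(features)


if __name__ == "__main__":
    model = MultiTaskSurgicalNet()
    batch = torch.randn(2, 3, 256, 256)           # two RGB frames
    seg_logits, action_logits = model(batch)
    print(seg_logits.shape, action_logits.shape)  # (2, 10, 256, 256), (2, 8)
```

The shared encoder is what lets the two tasks exchange information; whether that sharing actually helps, compared with training each head on its own backbone, is exactly the question the multitask track of the challenge examines.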
Energy Tank-based Control Framework for Satisfying the ISO/TS 15066 Constraint
Benzi, Federico, Ferraguti, Federica, Secchi, Cristian
The technical specification ISO/TS 15066 provides the foundational elements for assessing the safety of collaborative human-robot cells, which are a cornerstone of the modern industrial paradigm. The standard implementation of the ISO/TS 15066 procedure, however, often results in conservative robot motions and, consequently, low performance of the cell. In this paper, we propose an energy tank-based approach that directly satisfies the energetic bounds imposed by ISO/TS 15066, thus avoiding the introduction of conservative models and assumptions. The proposed approach has been successfully validated in simulation.
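As background, the energetic constraint such an approach targets can be illustrated with the power-and-force-limiting relations of ISO/TS 15066: the energy that may be transferred in a transient contact is bounded, which translates into a speed limit through the two-body reduced mass. The sketch below computes that limit and saturates the commanded speed. It only illustrates the energetic bound with placeholder numbers; it is not the tank-based controller proposed in the paper.

```python
# Minimal sketch of energy-based speed limiting in the spirit of ISO/TS 15066
# power and force limiting: the robot speed is saturated so that the kinetic
# energy available for a transient contact stays below the permissible value
# for the exposed body region. This is NOT the energy tank-based controller
# of the paper; all numerical values are placeholders.
import math


def reduced_mass(m_robot: float, m_body: float) -> float:
    """Two-body reduced mass used to model a transient (free) contact."""
    return 1.0 / (1.0 / m_robot + 1.0 / m_body)


def max_safe_speed(e_max: float, m_robot: float, m_body: float) -> float:
    """Largest relative speed whose contact energy does not exceed e_max."""
    mu = reduced_mass(m_robot, m_body)
    return math.sqrt(2.0 * e_max / mu)


def saturate_command(v_desired: float, v_safe: float) -> float:
    """Clip the commanded speed to the energy-compliant limit."""
    return max(-v_safe, min(v_safe, v_desired))


if __name__ == "__main__":
    # Placeholder numbers: 10 kg effective robot mass, 4.4 kg body-part mass,
    # 0.49 J permissible transfer energy for the assumed body region.
    v_safe = max_safe_speed(e_max=0.49, m_robot=10.0, m_body=4.4)
    print(f"energy-compliant speed limit: {v_safe:.2f} m/s")
    print(f"commanded 1.5 m/s -> executed {saturate_command(1.5, v_safe):.2f} m/s")
```

The appeal of formulating the constraint in terms of energy, as the paper does with an energy tank, is that the robot is slowed only when the actual energy budget demands it, rather than by a fixed worst-case speed cap.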