Smidt, Finn-Henri
CholecTriplet2022: Show me a tool and tell me the triplet -- an endoscopic vision challenge for surgical action triplet detection
Nwoye, Chinedu Innocent, Yu, Tong, Sharma, Saurav, Murali, Aditya, Alapatt, Deepak, Vardazaryan, Armine, Yuan, Kun, Hajek, Jonas, Reiter, Wolfgang, Yamlahi, Amine, Smidt, Finn-Henri, Zou, Xiaoyang, Zheng, Guoyan, Oliveira, Bruno, Torres, Helena R., Kondo, Satoshi, Kasai, Satoshi, Holm, Felix, Özsoy, Ege, Gui, Shuangchun, Li, Han, Raviteja, Sista, Sathish, Rachana, Poudel, Pranav, Bhattarai, Binod, Wang, Ziheng, Rui, Guo, Schellenberg, Melanie, Vilaça, João L., Czempiel, Tobias, Wang, Zhenkun, Sheet, Debdoot, Thapa, Shrawan Kumar, Berniker, Max, Godau, Patrick, Morais, Pedro, Regmi, Sudarshan, Tran, Thuy Nuong, Fonseca, Jaime, Nölke, Jan-Hinrich, Lima, Estevão, Vazquez, Eduard, Maier-Hein, Lena, Navab, Nassir, Mascagni, Pietro, Seeliger, Barbara, Gonzalez, Cristians, Mutter, Didier, Padoy, Nicolas
Formalizing surgical activities as triplets of the instrument used, the action performed, and the target anatomy is becoming a gold-standard approach for surgical activity modeling. This formalization yields a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts, including the CholecTriplet challenge introduced in 2021, brought together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly-supervised bounding box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool-activity as an <instrument, verb, target> triplet.
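To make the triplet formalization concrete, the sketch below shows one way a detected tool-activity could be represented in code: a triplet label plus a bounding box for the instrument. The class name, label values, and box convention are illustrative assumptions, not the challenge's actual data format.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical representation of a detected tool-activity: an
# <instrument, verb, target> triplet together with a bounding box
# for the instrument (the key actor), in normalized image coordinates.
@dataclass
class TripletDetection:
    instrument: str                          # e.g. "grasper"
    verb: str                                # e.g. "retract"
    target: str                              # e.g. "gallbladder"
    box: Tuple[float, float, float, float]   # (x, y, width, height), assumed convention
    score: float                             # detection confidence

detection = TripletDetection("grasper", "retract", "gallbladder",
                             box=(0.42, 0.31, 0.18, 0.22), score=0.87)
print(detection)
```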
Self-distillation for surgical action recognition
Yamlahi, Amine, Tran, Thuy Nuong, Godau, Patrick, Schellenberg, Melanie, Michael, Dominik, Smidt, Finn-Henri, Noelke, Jan-Hinrich, Adler, Tim, Tizabi, Minu Dietlinde, Nwoye, Chinedu, Padoy, Nicolas, Maier-Hein, Lena
Surgical scene understanding is a key prerequisite for context-aware decision support in the operating room. While deep learning-based approaches have already reached or even surpassed human performance in various fields, the task of surgical action recognition remains a major challenge. With this contribution, we are the first to investigate the concept of self-distillation as a means of addressing class imbalance and potential label ambiguity in surgical video analysis. Our proposed method is a heterogeneous ensemble of three models that use Swin Transformers as backbone and the concepts of self-distillation and multi-task learning as core design choices. According to ablation studies performed with the CholecT45 challenge data via cross-validation, the biggest performance boost is achieved by the use of soft labels obtained by self-distillation. External validation of our method on an independent test set was achieved by providing a Docker container of our inference model to the challenge organizers. According to their analysis, our method outperforms all other solutions submitted to the latest challenge in the field. Our approach thus shows the potential of self-distillation for becoming an important tool in medical image analysis applications.
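The sketch below illustrates the general self-distillation idea referred to in the abstract: a student network is trained against soft labels produced by a previously trained teacher of the same architecture, blended with the hard ground-truth labels. It is a minimal PyTorch example, not the authors' implementation; the loss weighting, temperature, and 100-class head are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           hard_targets: torch.Tensor,
                           alpha: float = 0.5,
                           temperature: float = 2.0) -> torch.Tensor:
    """Blend hard-label cross-entropy with KL divergence to the teacher's
    softened predictions (the soft labels). alpha and temperature are
    illustrative hyperparameters, not values reported in the paper."""
    hard_loss = F.cross_entropy(student_logits, hard_targets)
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Usage sketch: the teacher is a frozen, previously trained copy of the same
# architecture (self-distillation), evaluated without gradients on the batch.
student_logits = torch.randn(8, 100, requires_grad=True)  # assumed 100 classes
with torch.no_grad():
    teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = self_distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```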