Holm, Felix
Robust Tumor Segmentation with Hyperspectral Imaging and Graph Neural Networks
Lotfy, Mayar, Alperovich, Anna, Giannantonio, Tommaso, Barz, Bjorn, Zhang, Xiaohan, Holm, Felix, Navab, Nassir, Boehm, Felix, Schwamborn, Carolin, Hoffmann, Thomas K., Schuler, Patrick J.
Segmenting the boundary between tumor and healthy tissue during surgical cancer resection poses a significant challenge. In recent years, Hyperspectral Imaging (HSI) combined with Machine Learning (ML) has emerged as a promising solution. However, due to the extensive information contained within the spectral domain, most ML approaches classify individual HSI (super-)pixels, or tiles, without taking their spatial context into account. In this paper, we propose an improved methodology that leverages the spatial context of tiles for more robust and smoother segmentation. To address the irregular shapes of tiles, we utilize Graph Neural Networks (GNNs) to propagate context information across neighboring regions. The features for each tile within the graph are extracted using a Convolutional Neural Network (CNN), which is trained jointly with the subsequent GNN. Moreover, we incorporate local image quality metrics into the loss function to make the training procedure more robust against low-quality regions in the training images. We demonstrate the superiority of our proposed method on a clinical ex vivo dataset consisting of 51 HSI images from 30 patients. Despite the limited dataset, the GNN-based model significantly outperforms context-agnostic approaches, accurately distinguishing between healthy and tumor tissues, even in images from previously unseen patients. Furthermore, we show that our carefully designed loss function, accounting for local image quality, yields additional improvements. Our findings demonstrate that context-aware GNN algorithms can robustly find tumor demarcations on HSI images, ultimately contributing to surgical success and better patient outcomes.
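As a rough illustration of the architecture described in this abstract (not the authors' implementation), the sketch below shows a per-tile CNN feature extractor feeding a GNN over a tile-adjacency graph, trained with a quality-weighted loss. It assumes PyTorch and PyTorch Geometric; the band count, feature dimensions, tile adjacency `edge_index`, and the particular quality-weighting scheme are all hypothetical.

```python
# Minimal sketch, assuming hypothetical inputs: tile crops of shape (N, bands, H, W),
# a tile-adjacency edge_index, per-tile labels, and per-tile quality scores in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class TileCNN(nn.Module):
    """Per-tile spectral-spatial feature extractor (illustrative layer sizes)."""
    def __init__(self, bands: int = 100, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # one feature vector per tile
        )

    def forward(self, tiles):                 # tiles: (N, bands, H, W)
        return self.net(tiles).flatten(1)     # (N, feat_dim)


class TileGNN(nn.Module):
    """CNN features per tile, then message passing over neighboring tiles."""
    def __init__(self, bands: int = 100, feat_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.cnn = TileCNN(bands, feat_dim)
        self.gnn1 = GCNConv(feat_dim, feat_dim)
        self.gnn2 = GCNConv(feat_dim, num_classes)

    def forward(self, tiles, edge_index):
        x = self.cnn(tiles)                   # per-tile features (trained jointly)
        x = F.relu(self.gnn1(x, edge_index))  # propagate spatial context to neighbors
        return self.gnn2(x, edge_index)       # per-tile class logits


def quality_weighted_loss(logits, labels, quality):
    """Down-weight tiles from low-quality image regions (one plausible weighting)."""
    per_tile = F.cross_entropy(logits, labels, reduction="none")
    return (quality * per_tile).sum() / quality.sum().clamp_min(1e-8)
```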
CholecTriplet2022: Show me a tool and tell me the triplet -- an endoscopic vision challenge for surgical action triplet detection
Nwoye, Chinedu Innocent, Yu, Tong, Sharma, Saurav, Murali, Aditya, Alapatt, Deepak, Vardazaryan, Armine, Yuan, Kun, Hajek, Jonas, Reiter, Wolfgang, Yamlahi, Amine, Smidt, Finn-Henri, Zou, Xiaoyang, Zheng, Guoyan, Oliveira, Bruno, Torres, Helena R., Kondo, Satoshi, Kasai, Satoshi, Holm, Felix, Özsoy, Ege, Gui, Shuangchun, Li, Han, Raviteja, Sista, Sathish, Rachana, Poudel, Pranav, Bhattarai, Binod, Wang, Ziheng, Rui, Guo, Schellenberg, Melanie, Vilaça, João L., Czempiel, Tobias, Wang, Zhenkun, Sheet, Debdoot, Thapa, Shrawan Kumar, Berniker, Max, Godau, Patrick, Morais, Pedro, Regmi, Sudarshan, Tran, Thuy Nuong, Fonseca, Jaime, Nölke, Jan-Hinrich, Lima, Estevão, Vazquez, Eduard, Maier-Hein, Lena, Navab, Nassir, Mascagni, Pietro, Seeliger, Barbara, Gonzalez, Cristians, Mutter, Didier, Padoy, Nicolas
Formalizing surgical activities as triplets of the instruments used, the actions performed, and the target anatomies is becoming a gold-standard approach for surgical activity modeling. This formalization helps to obtain a more detailed understanding of tool-tissue interactions, which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts, including the CholecTriplet challenge introduced in 2021, have brought together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise, context-aware intraoperative decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly-supervised bounding box localization of every visible surgical instrument (or tool), as the key actors, and the modeling of each tool activity in the form of an <instrument, verb, target> triplet.