Patherea: Cell Detection and Classification for the 2020s
Štepec, Dejan, Jerše, Maja, Đokić, Snežana, Jeruc, Jera, Zidar, Nina, Skočaj, Danijel
This paper presents Patherea, a framework for point-based cell detection and classification that provides a complete solution for developing and evaluating state-of-the-art approaches. We introduce a large-scale dataset collected to directly replicate a clinical workflow for Ki-67 proliferation index estimation and use it to develop an efficient approach that directly produces point-based predictions, without the need for intermediate representations. The proposed approach effectively utilizes point proposal candidates with a hybrid Hungarian matching strategy and a flexible architecture that enables the use of various backbones and (pre)training strategies. We report state-of-the-art results on the existing public datasets (Lizard, BRCA-M2C, and BCData) as well as on the newly proposed Patherea dataset. We show that performance on the existing public datasets is saturated and that the newly proposed Patherea dataset poses a significantly harder challenge for recently proposed approaches. We also demonstrate the effectiveness of recently proposed pathology foundation models, which our approach can natively utilize and benefit from. We additionally revisit the evaluation protocol used in the broader field of cell detection and classification and identify errors in how performance metrics are commonly calculated. Patherea provides a benchmarking utility that addresses the identified issues and enables a fair comparison of different approaches. The dataset and the code will be publicly released upon acceptance.
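To make the matching-based evaluation concrete, below is a minimal sketch of one-to-one Hungarian matching between predicted and ground-truth cell centers, followed by a detection F1 computed from the resulting pairs. It illustrates the generic principle only, not the paper's specific hybrid matching strategy or the Patherea benchmarking utility; the function names (match_points, detection_f1) and the max_dist threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def match_points(pred_pts, gt_pts, max_dist=12.0):
    """One-to-one Hungarian matching between predicted and ground-truth
    cell centers; pairs farther apart than max_dist (in pixels) are
    discarded after the assignment."""
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return []  # no matches possible
    cost = cdist(pred_pts, gt_pts)            # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]


def detection_f1(pred_pts, gt_pts, max_dist=12.0):
    """Point-detection F1 with one-to-one matching: each ground-truth
    point can be claimed by at most one prediction, so true positives
    are counted once per matched pair rather than once per nearby hit."""
    matches = match_points(pred_pts, gt_pts, max_dist)
    tp = len(matches)
    fp = len(pred_pts) - tp
    fn = len(gt_pts) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0


# Example: three predictions against two ground-truth centers.
pred = np.array([[10.0, 10.0], [50.0, 52.0], [200.0, 200.0]])
gt = np.array([[12.0, 9.0], [48.0, 50.0]])
print(detection_f1(pred, gt))  # 2 TP, 1 FP, 0 FN -> F1 = 0.8
```

Enforcing one-to-one assignment before counting true positives avoids the double-counting that greedy nearest-neighbor matching can introduce, which is one way metric calculations in this setting go wrong.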
Why is the winner the best?
Eisenmann, Matthias, Reinke, Annika, Weru, Vivienn, Tizabi, Minu Dietlinde, Isensee, Fabian, Adler, Tim J., Ali, Sharib, Andrearczyk, Vincent, Aubreville, Marc, Baid, Ujjwal, Bakas, Spyridon, Balu, Niranjan, Bano, Sophia, Bernal, Jorge, Bodenstedt, Sebastian, Casella, Alessandro, Cheplygina, Veronika, Daum, Marie, de Bruijne, Marleen, Depeursinge, Adrien, Dorent, Reuben, Egger, Jan, Ellis, David G., Engelhardt, Sandy, Ganz, Melanie, Ghatwary, Noha, Girard, Gabriel, Godau, Patrick, Gupta, Anubha, Hansen, Lasse, Harada, Kanako, Heinrich, Mattias, Heller, Nicholas, Hering, Alessa, Huaulmé, Arnaud, Jannin, Pierre, Kavur, Ali Emre, Kodym, Oldřich, Kozubek, Michal, Li, Jianning, Li, Hongwei, Ma, Jun, Martín-Isla, Carlos, Menze, Bjoern, Noble, Alison, Oreiller, Valentin, Padoy, Nicolas, Pati, Sarthak, Payette, Kelly, Rädsch, Tim, Rafael-Patiño, Jonathan, Bawa, Vivek Singh, Speidel, Stefanie, Sudre, Carole H., van Wijnen, Kimberlin, Wagner, Martin, Wei, Donglai, Yamlahi, Amine, Yap, Moi Hoon, Yuan, Chun, Zenk, Maximilian, Zia, Aneeq, Zimmerer, David, Aydogan, Dogu Baran, Bhattarai, Binod, Bloch, Louise, Brüngel, Raphael, Cho, Jihoon, Choi, Chanyeol, Dou, Qi, Ezhov, Ivan, Friedrich, Christoph M., Fuller, Clifton, Gaire, Rebati Raman, Galdran, Adrian, Faura, Álvaro García, Grammatikopoulou, Maria, Hong, SeulGi, Jahanifar, Mostafa, Jang, Ikbeom, Kadkhodamohammadi, Abdolrahim, Kang, Inha, Kofler, Florian, Kondo, Satoshi, Kuijf, Hugo, Li, Mingxing, Luu, Minh Huan, Martinčič, Tomaž, Morais, Pedro, Naser, Mohamed A., Oliveira, Bruno, Owen, David, Pang, Subeen, Park, Jinah, Park, Sung-Hong, Płotka, Szymon, Puybareau, Elodie, Rajpoot, Nasir, Ryu, Kanghyun, Saeed, Numan, Shephard, Adam, Shi, Pengcheng, Štepec, Dejan, Subedi, Ronast, Tochon, Guillaume, Torres, Helena R., Urien, Helene, Vilaça, João L., Wahid, Kareem Abdul, Wang, Haojie, Wang, Jiacheng, Wang, Liansheng, Wang, Xiyue, Wiestler, Benedikt, Wodzinski, Marek, Xia, Fangfang, Xie, Juanying, Xiong, Zhiwei, Yang, Sen, Yang, Yanwu, Zhao, Zixuan, Maier-Hein, Klaus, Jäger, Paul F., Kopp-Schneider, Annette, Maier-Hein, Lena
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses of comprehensive descriptions of the submitted algorithms, linked to their ranks and the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core development strategies stood out among highly ranked teams: reflecting the evaluation metrics in the method design and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.