Wang, Xiyue
WSI-LLaVA: A Multimodal Large Language Model for Whole Slide Image
Liang, Yuci, Lyu, Xinheng, Ding, Meidan, Chen, Wenting, Zhang, Jipeng, Ren, Yuexiang, He, Xiangjian, Wu, Song, Yang, Sen, Wang, Xiyue, Xing, Xiaohan, Shen, Linlin
Recent advancements in computational pathology have produced patch-level Multi-modal Large Language Models (MLLMs), but these models are limited by their inability to analyze whole slide images (WSIs) comprehensively and their tendency to bypass crucial morphological features that pathologists rely on for diagnosis. To address these challenges, we first introduce WSI-Bench, a large-scale morphology-aware benchmark containing 180k VQA pairs from 9,850 WSIs across 30 cancer types, designed to evaluate MLLMs' understanding of morphological characteristics crucial for accurate diagnosis. Building upon this benchmark, we present WSI-LLaVA, a novel framework for gigapixel WSI understanding that employs a three-stage training approach: WSI-text alignment, feature space alignment, and task-specific instruction tuning. To better assess model performance in pathological contexts, we develop two specialized WSI metrics: WSI-Precision and WSI-Relevance. Experimental results demonstrate that WSI-LLaVA outperforms existing models across all capability dimensions, with a significant improvement in morphological analysis, establishing a clear correlation between morphological understanding and diagnostic accuracy.
Federated contrastive learning models for prostate cancer diagnosis and Gleason grading
Kong, Fei, Xiang, Jinxi, Wang, Xiyue, Wang, Xinran, Yue, Meng, Zhang, Jun, Yang, Sen, Zhao, Junhan, Han, Xiao, Dong, Yuhan, Liu, Yueping
Artificial intelligence (AI) has shown remarkable results in medical imaging, but training robust AI models requires large datasets, and data collection faces communication, ethical, and privacy-protection constraints. Fortunately, federated learning can address these problems by coordinating multiple clients to train a model without sharing the original data. In this study, we design a federated contrastive learning framework (FCL) for large-scale pathology images that addresses their heterogeneity challenges. It enhances the model's generalization ability by maximizing the attention consistency between the local client and server models. To mitigate privacy leakage when transferring parameters, and to verify the robustness of FCL, we further protect the model with differential privacy by adding noise. We evaluate the effectiveness of FCL on the cancer diagnosis and Gleason grading tasks using 19,635 prostate cancer WSIs from multiple clients. In the diagnosis task, where the categories are relatively balanced, the average AUC across 7 clients is 95%, while FCL achieves 97%. In the Gleason grading task, the average Kappa across 6 clients is 0.74, while FCL reaches 0.84. We further validate the robustness of the model on external datasets (one public dataset and two private datasets). In addition, to better explain the model's classifications, we draw heatmaps showing whether the model focuses on the lesion area. Overall, FCL brings a robust, accurate, low-cost AI training paradigm to biomedical research while effectively protecting medical data privacy.
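The abstract above combines federated averaging with differential-privacy noise added to parameters before they leave the client. A minimal sketch of that pattern is below; the clipping norm, noise scale, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np


def dp_noise_update(local_params, clip_norm=1.0, sigma=0.1, rng=None):
    """Clip a client's parameters to a global L2 norm and add Gaussian
    noise before upload (illustrative DP-style protection; constants are
    hypothetical, not those used in FCL)."""
    rng = rng or np.random.default_rng(0)
    flat = np.concatenate([p.ravel() for p in local_params])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [p * scale + rng.normal(0.0, sigma * clip_norm, p.shape)
            for p in local_params]


def fedavg(client_updates, weights=None):
    """Server-side weighted average of the (already noised) client
    parameter lists, as in standard federated averaging."""
    n = len(client_updates)
    weights = weights or [1.0 / n] * n
    return [sum(w * u[i] for w, u in zip(weights, client_updates))
            for i in range(len(client_updates[0]))]
```

The server never sees raw client parameters, only clipped, noised copies; aggregation then proceeds exactly as in plain federated averaging.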
Why is the winner the best?
Eisenmann, Matthias, Reinke, Annika, Weru, Vivienn, Tizabi, Minu Dietlinde, Isensee, Fabian, Adler, Tim J., Ali, Sharib, Andrearczyk, Vincent, Aubreville, Marc, Baid, Ujjwal, Bakas, Spyridon, Balu, Niranjan, Bano, Sophia, Bernal, Jorge, Bodenstedt, Sebastian, Casella, Alessandro, Cheplygina, Veronika, Daum, Marie, de Bruijne, Marleen, Depeursinge, Adrien, Dorent, Reuben, Egger, Jan, Ellis, David G., Engelhardt, Sandy, Ganz, Melanie, Ghatwary, Noha, Girard, Gabriel, Godau, Patrick, Gupta, Anubha, Hansen, Lasse, Harada, Kanako, Heinrich, Mattias, Heller, Nicholas, Hering, Alessa, Huaulmé, Arnaud, Jannin, Pierre, Kavur, Ali Emre, Kodym, Oldřich, Kozubek, Michal, Li, Jianning, Li, Hongwei, Ma, Jun, Martín-Isla, Carlos, Menze, Bjoern, Noble, Alison, Oreiller, Valentin, Padoy, Nicolas, Pati, Sarthak, Payette, Kelly, Rädsch, Tim, Rafael-Patiño, Jonathan, Bawa, Vivek Singh, Speidel, Stefanie, Sudre, Carole H., van Wijnen, Kimberlin, Wagner, Martin, Wei, Donglai, Yamlahi, Amine, Yap, Moi Hoon, Yuan, Chun, Zenk, Maximilian, Zia, Aneeq, Zimmerer, David, Aydogan, Dogu Baran, Bhattarai, Binod, Bloch, Louise, Brüngel, Raphael, Cho, Jihoon, Choi, Chanyeol, Dou, Qi, Ezhov, Ivan, Friedrich, Christoph M., Fuller, Clifton, Gaire, Rebati Raman, Galdran, Adrian, Faura, Álvaro García, Grammatikopoulou, Maria, Hong, SeulGi, Jahanifar, Mostafa, Jang, Ikbeom, Kadkhodamohammadi, Abdolrahim, Kang, Inha, Kofler, Florian, Kondo, Satoshi, Kuijf, Hugo, Li, Mingxing, Luu, Minh Huan, Martinčič, Tomaž, Morais, Pedro, Naser, Mohamed A., Oliveira, Bruno, Owen, David, Pang, Subeen, Park, Jinah, Park, Sung-Hong, Płotka, Szymon, Puybareau, Elodie, Rajpoot, Nasir, Ryu, Kanghyun, Saeed, Numan, Shephard, Adam, Shi, Pengcheng, Štepec, Dejan, Subedi, Ronast, Tochon, Guillaume, Torres, Helena R., Urien, Helene, Vilaça, João L., Wahid, Kareem Abdul, Wang, Haojie, Wang, Jiacheng, Wang, Liansheng, Wang, Xiyue, Wiestler, Benedikt, Wodzinski, Marek, Xia, Fangfang, Xie, Juanying, Xiong, Zhiwei, Yang, Sen, Yang, Yanwu, Zhao, Zixuan, Maier-Hein, Klaus, Jäger, Paul F., Kopp-Schneider, Annette, Maier-Hein, Lena
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study covering all 80 competitions conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their ranks and the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the metrics in the method design and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting
Graham, Simon, Vu, Quoc Dang, Jahanifar, Mostafa, Weigert, Martin, Schmidt, Uwe, Zhang, Wenhua, Zhang, Jun, Yang, Sen, Xiang, Jinxi, Wang, Xiyue, Rumberger, Josef Lorenz, Baumann, Elias, Hirsch, Peter, Liu, Lihao, Hong, Chenyang, Aviles-Rivero, Angelica I., Jain, Ayushi, Ahn, Heeyoung, Hong, Yiyu, Azzuni, Hussam, Xu, Min, Yaqub, Mohammad, Blache, Marie-Claire, Piégu, Benoît, Vernay, Bertrand, Scherr, Tim, Böhland, Moritz, Löffler, Katharina, Li, Jiachen, Ying, Weiqin, Wang, Chixin, Kainmueller, Dagmar, Schönlieb, Carola-Bibiane, Liu, Shuolin, Talsania, Dhairya, Meda, Yughender, Mishra, Prakash, Ridzuan, Muhammad, Neumann, Oliver, Schilling, Marcel P., Reischl, Markus, Mikut, Ralf, Huang, Banban, Chien, Hsiang-Chin, Wang, Ching-Ping, Lee, Chia-Yen, Lin, Hong-Kun, Liu, Zaiyi, Pan, Xipeng, Han, Chu, Cheng, Jijun, Dawood, Muhammad, Deshpande, Srijay, Bashir, Raja Muhammad Saad, Shephard, Adam, Costa, Pedro, Nunes, João D., Campilho, Aurélio, Cardoso, Jaime S., S, Hrishikesh P, Puthussery, Densen, G, Devika R, C, Jiji V, Zhang, Ye, Fang, Zijie, Lin, Zhifan, Zhang, Yongbing, Lin, Chunhui, Zhang, Liukun, Mao, Lijian, Wu, Min, Vo, Vi Thi-Tuong, Kim, Soo-Hyung, Lee, Taebum, Kondo, Satoshi, Kasai, Satoshi, Dumbhare, Pranay, Phuse, Vedant, Dubey, Yash, Jamthikar, Ankush, Vuong, Trinh Thi Le, Kwak, Jin Tae, Ziaei, Dorsa, Jung, Hyun, Miao, Tianyi, Snead, David, Raza, Shan E Ahmed, Minhas, Fayyaz, Rajpoot, Nasir M.
Nuclear detection, segmentation and morphometric profiling are essential for understanding the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition, with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis of the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.