Domain Generalization for Mammographic Image Analysis with Contrastive Learning
Li, Zheren, Cui, Zhiming, Zhang, Lichi, Wang, Sheng, Lei, Chenjin, Ouyang, Xi, Chen, Dongdong, Zhao, Xiangyu, Gu, Yajia, Liu, Zaiyi, Liu, Chunling, Shen, Dinggang, Cheng, Jie-Zhi
Deep learning techniques have been shown to effectively address several image analysis tasks in computer-aided diagnosis schemes for mammography. Training an efficacious deep learning model requires large amounts of data with diverse styles and qualities. The diversity of data often comes from the use of scanners from various vendors. In practice, however, it is impractical to collect a sufficient amount of diverse data for training. To this end, a novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability. Specifically, a multi-style and multi-view unsupervised self-learning scheme is carried out to learn feature embeddings that are robust to style diversity, yielding a pretrained model. Afterward, the pretrained network is fine-tuned for downstream tasks, e.g., mass detection, matching, BI-RADS rating, and breast density classification. The proposed method has been evaluated extensively and rigorously on mammograms from various vendor style domains and several public datasets. The experimental results suggest that the proposed domain generalization method can effectively improve the performance of four mammographic image analysis tasks on data from both seen and unseen domains, and outperforms many state-of-the-art (SOTA) generalization methods.
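To make the pretraining idea concrete, below is a minimal sketch (not the authors' released code) of one contrastive step over two style-augmented views of the same mammogram batch, using an NT-Xent (SimCLR-style) loss; the encoder, the style augmentation, the embedding size, and the temperature are all illustrative assumptions.

    # Minimal sketch of multi-style contrastive pretraining (assumed NT-Xent loss):
    # two style views of each image are pulled together in embedding space.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """z1, z2: (N, D) embeddings of two style views of the same N images."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) unit vectors
        sim = z @ z.t() / temperature                        # scaled cosine similarities
        n = z1.shape[0]
        sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
        # The positive for row i is the other style view of the same image.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    # Toy usage: embeddings of a batch of 8 images under two vendor styles.
    z_style_a, z_style_b = torch.randn(8, 128), torch.randn(8, 128)
    loss = nt_xent_loss(z_style_a, z_style_b)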
Cluster-Induced Mask Transformers for Effective Opportunistic Gastric Cancer Screening on Non-contrast CT Scans
Yuan, Mingze, Xia, Yingda, Chen, Xin, Yao, Jiawen, Wang, Junli, Qiu, Mingyan, Dong, Hexin, Zhou, Jingren, Dong, Bin, Lu, Le, Zhang, Li, Liu, Zaiyi, Zhang, Ling
Gastric cancer is the third leading cause of cancer-related mortality worldwide, but no guideline-recommended screening test exists. Existing methods can be invasive, expensive, and lack the sensitivity to identify early-stage gastric cancer. In this study, we explore the feasibility of using a deep learning approach on non-contrast CT scans for gastric cancer detection. We propose a novel cluster-induced Mask Transformer that jointly segments the tumor and classifies abnormality in a multi-task manner. Our model incorporates learnable clusters that encode the texture and shape prototypes of gastric cancer, utilizing self- and cross-attention to interact with convolutional features. In our experiments, the proposed method achieves a sensitivity of 85.0% and a specificity of 92.6% for detecting gastric tumors on a hold-out test set consisting of 100 patients with cancer and 148 normal cases. In comparison, two radiologists have an average sensitivity of 73.5% and specificity of 84.3%. We also obtain a specificity of 97.7% on an external test set with 903 normal cases. Our approach performs comparably to established state-of-the-art gastric cancer screening tools such as blood testing and endoscopy, while being more sensitive in detecting early-stage cancer. This demonstrates the potential of our approach as a novel, non-invasive, low-cost, and accurate method for opportunistic gastric cancer screening.
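As a rough illustration of the cluster-query design described above, the following sketch (an assumption-laden simplification, not the released model) shows learnable cluster prototypes cross-attending to a convolutional feature map and jointly producing per-cluster masks and an abnormality logit; the 2-D features (rather than 3-D CT volumes), layer sizes, and head counts are all hypothetical.

    # Hypothetical sketch of a cluster-induced mask head: learnable prototypes
    # read image features via cross-attention, interact via self-attention, and
    # drive both segmentation masks and a classification logit.
    import torch
    import torch.nn as nn

    class ClusterMaskHead(nn.Module):
        def __init__(self, num_clusters: int = 8, dim: int = 256, num_classes: int = 2):
            super().__init__()
            self.clusters = nn.Parameter(torch.randn(num_clusters, dim))  # learnable prototypes
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
            self.self_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
            self.classifier = nn.Linear(num_clusters * dim, num_classes)

        def forward(self, feats: torch.Tensor):
            """feats: (B, C, H, W) convolutional feature map."""
            b, c, h, w = feats.shape
            pixels = feats.flatten(2).transpose(1, 2)             # (B, HW, C)
            q = self.clusters.unsqueeze(0).expand(b, -1, -1)      # (B, K, C)
            q, _ = self.cross_attn(q, pixels, pixels)             # queries read the image
            q, _ = self.self_attn(q, q, q)                        # queries interact
            masks = torch.einsum("bkc,bnc->bkn", q, pixels).view(b, -1, h, w)  # per-cluster masks
            logits = self.classifier(q.flatten(1))                # abnormality classification
            return masks, logits

    masks, logits = ClusterMaskHead()(torch.randn(2, 256, 16, 16))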
Devil is in the Queries: Advancing Mask Transformers for Real-world Medical Image Segmentation and Out-of-Distribution Localization
Yuan, Mingze, Xia, Yingda, Dong, Hexin, Chen, Zifan, Yao, Jiawen, Qiu, Mingyan, Yan, Ke, Yin, Xiaoli, Shi, Yu, Chen, Xin, Liu, Zaiyi, Dong, Bin, Zhou, Jingren, Lu, Le, Zhang, Ling, Zhang, Li
Real-world medical image segmentation has tremendous long-tailed complexity of objects, among which tail conditions correlate with relatively rare diseases and are clinically significant. A trustworthy medical AI algorithm should demonstrate its effectiveness on tail conditions to avoid clinically dangerous failures on these out-of-distribution (OOD) cases. In this paper, we adopt the concept of object queries in Mask Transformers to formulate semantic segmentation as a soft cluster assignment. The queries fit the feature-level cluster centers of inliers during training. Therefore, when performing inference on a medical image in real-world scenarios, the similarity between pixels and the queries detects and localizes OOD regions. We term this OOD localization MaxQuery. Furthermore, the foregrounds of real-world medical images, whether OOD objects or inliers, are lesions. The difference between them is less than that between the foreground and background, possibly misleading the object queries to focus redundantly on the background. Thus, we propose a query-distribution (QD) loss to enforce clear boundaries between segmentation targets and other regions at the query level, improving the inlier segmentation and OOD indication. Our proposed framework is tested on two real-world segmentation tasks, i.e., segmentation of pancreatic and liver tumors, outperforming previous state-of-the-art algorithms by an average of 7.39% on AUROC, 14.69% on AUPR, and 13.79% on FPR95 for OOD localization. In addition, our framework improves the performance of inlier segmentation by an average of 5.27% DSC when compared with the leading baseline nnUNet.
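The MaxQuery idea lends itself to a compact sketch: a pixel whose best (maximum) similarity to all object queries is low is flagged as OOD. The snippet below is a hedged, self-contained illustration; the tensor shapes and the choice of cosine similarity are assumptions rather than the paper's exact formulation.

    # Hedged sketch of MaxQuery-style OOD scoring: low maximum query similarity
    # at a pixel is taken as evidence that the pixel is out-of-distribution.
    import torch
    import torch.nn.functional as F

    def maxquery_ood_score(queries: torch.Tensor, pixel_feats: torch.Tensor) -> torch.Tensor:
        """queries: (K, D) learned cluster centers; pixel_feats: (D, H, W) features.
        Returns an (H, W) map where larger values indicate more OOD-like pixels."""
        d, h, w = pixel_feats.shape
        pixels = pixel_feats.flatten(1).t()                  # (HW, D)
        sim = F.normalize(pixels, dim=1) @ F.normalize(queries, dim=1).t()  # (HW, K) cosine sims
        return (-sim.max(dim=1).values).view(h, w)           # negate: low max-sim => high OOD score

    # Toy usage with 16 queries over a 32x32 feature map of dimension 64.
    score_map = maxquery_ood_score(torch.randn(16, 64), torch.randn(64, 32, 32))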
CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting
Graham, Simon, Vu, Quoc Dang, Jahanifar, Mostafa, Weigert, Martin, Schmidt, Uwe, Zhang, Wenhua, Zhang, Jun, Yang, Sen, Xiang, Jinxi, Wang, Xiyue, Rumberger, Josef Lorenz, Baumann, Elias, Hirsch, Peter, Liu, Lihao, Hong, Chenyang, Aviles-Rivero, Angelica I., Jain, Ayushi, Ahn, Heeyoung, Hong, Yiyu, Azzuni, Hussam, Xu, Min, Yaqub, Mohammad, Blache, Marie-Claire, Piégu, Benoît, Vernay, Bertrand, Scherr, Tim, Böhland, Moritz, Löffler, Katharina, Li, Jiachen, Ying, Weiqin, Wang, Chixin, Kainmueller, Dagmar, Schönlieb, Carola-Bibiane, Liu, Shuolin, Talsania, Dhairya, Meda, Yughender, Mishra, Prakash, Ridzuan, Muhammad, Neumann, Oliver, Schilling, Marcel P., Reischl, Markus, Mikut, Ralf, Huang, Banban, Chien, Hsiang-Chin, Wang, Ching-Ping, Lee, Chia-Yen, Lin, Hong-Kun, Liu, Zaiyi, Pan, Xipeng, Han, Chu, Cheng, Jijun, Dawood, Muhammad, Deshpande, Srijay, Bashir, Raja Muhammad Saad, Shephard, Adam, Costa, Pedro, Nunes, João D., Campilho, Aurélio, Cardoso, Jaime S., S, Hrishikesh P, Puthussery, Densen, G, Devika R, C, Jiji V, Zhang, Ye, Fang, Zijie, Lin, Zhifan, Zhang, Yongbing, Lin, Chunhui, Zhang, Liukun, Mao, Lijian, Wu, Min, Vo, Vi Thi-Tuong, Kim, Soo-Hyung, Lee, Taebum, Kondo, Satoshi, Kasai, Satoshi, Dumbhare, Pranay, Phuse, Vedant, Dubey, Yash, Jamthikar, Ankush, Vuong, Trinh Thi Le, Kwak, Jin Tae, Ziaei, Dorsa, Jung, Hyun, Miao, Tianyi, Snead, David, Raza, Shan E Ahmed, Minhas, Fayyaz, Rajpoot, Nasir M.
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.