Chao, Hanqing
From Histopathology Images to Cell Clouds: Learning Slide Representations with Hierarchical Cell Transformer
Yang, Zijiang, Qiu, Zhongwei, Lin, Tiancheng, Chao, Hanqing, Chang, Wanxing, Yang, Yelin, Zhang, Yunshuo, Jiao, Wenpei, Shen, Yixuan, Liu, Wenbin, Fu, Dongmei, Jin, Dakai, Yan, Ke, Lu, Le, Jiang, Hui, Bian, Yun
It is clinically crucial, and potentially very beneficial, to directly analyze and model the spatial distributions of cells in histopathology whole slide images (WSIs). However, most existing WSI datasets lack cell-level annotations, owing to the extremely high annotation cost of giga-pixel images. Thus, it remains an open question whether deep learning models can directly and effectively analyze WSIs from the semantic aspect of cell distributions. In this work, we construct a large-scale WSI dataset with more than 5 billion cell-level annotations, termed WSI-Cell5B, and propose a novel hierarchical Cell Cloud Transformer (CCFormer) to tackle these challenges. WSI-Cell5B is built on 6,998 WSIs of 11 cancers from The Cancer Genome Atlas Program, and every WSI is annotated at the cell level with coordinates and types. To the best of our knowledge, WSI-Cell5B is the first large-scale WSI-level dataset integrating cell-level annotations. CCFormer, in turn, formulates the collection of cells in each WSI as a cell cloud and models the spatial distribution of cells. Specifically, Neighboring Information Embedding (NIE) is proposed to characterize the distribution of cells within the neighborhood of each cell, and a novel Hierarchical Spatial Perception (HSP) module is proposed to learn the spatial relationships among cells in a bottom-up manner. Our clinical analysis indicates that WSI-Cell5B can be used to design cell-counting-based clinical evaluation metrics that effectively assess the survival risk of patients. Extensive experiments on survival prediction and cancer staging show that learning from cell spatial distribution alone already achieves state-of-the-art (SOTA) performance, i.e., CCFormer strongly outperforms competing methods.
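For illustration, a minimal sketch of the neighborhood-embedding idea follows: each cell token combines its coordinates with a histogram of cell types among its k nearest neighbors. The class name, the k-NN histogram design, and all dimensions are assumptions for exposition; the paper's actual NIE and HSP modules are not reproduced here.

```python
# Hypothetical sketch of a Neighboring-Information-Embedding-style encoder for
# a "cell cloud": each cell is (x, y, type). All design choices are assumptions.
import torch
import torch.nn as nn

class NeighborhoodEmbedding(nn.Module):
    def __init__(self, num_types: int, k: int = 16, dim: int = 64):
        super().__init__()
        self.num_types, self.k = num_types, k
        # coordinates (2) + neighbor type histogram (num_types) -> token embedding
        self.mlp = nn.Sequential(nn.Linear(2 + num_types, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, xy: torch.Tensor, types: torch.Tensor) -> torch.Tensor:
        # xy: (N, 2) cell coordinates; types: (N,) integer cell types
        d = torch.cdist(xy, xy)                                  # (N, N) pairwise distances
        knn = d.topk(self.k + 1, largest=False).indices[:, 1:]  # k nearest, dropping self
        one_hot = nn.functional.one_hot(types, self.num_types).float()
        hist = one_hot[knn].mean(dim=1)                          # (N, num_types) neighbor type frequencies
        return self.mlp(torch.cat([xy, hist], dim=-1))

cells_xy = torch.rand(500, 2)             # toy cloud of 500 cells
cells_type = torch.randint(0, 5, (500,))  # 5 hypothetical cell types
tokens = NeighborhoodEmbedding(num_types=5)(cells_xy, cells_type)
print(tokens.shape)  # torch.Size([500, 64])
```

The resulting per-cell tokens could then feed a transformer operating on the cloud, which is where a hierarchical module such as HSP would aggregate spatial relationships bottom-up.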
Multimodal Neurodegenerative Disease Subtyping Explained by ChatGPT
Reyes, Diego Machado, Chao, Hanqing, Hahn, Juergen, Shen, Li, Yan, Pingkun
Alzheimer's disease (AD) is the most prevalent neurodegenerative disease, yet currently available treatments are limited to stopping disease progression. Moreover, the effectiveness of these treatments is not guaranteed due to the heterogeneity of the disease. Therefore, it is essential to be able to identify disease subtypes at a very early stage. Current data-driven approaches can classify the subtypes at later stages of AD or related disorders, but struggle when predicting at the asymptomatic or prodromal stage. Moreover, most existing models either lack explainability behind the classification or use only a single modality for the assessment, limiting the scope of the analysis. Thus, we propose a multimodal framework that uses early-stage indicators such as imaging, genetics, and clinical assessments to classify AD patients into subtypes at early stages. In addition, we build prompts and use large language models, such as ChatGPT, to interpret the findings of our model. Within our framework, we propose a tri-modal co-attention mechanism (Tri-COAT) to explicitly learn the cross-modal feature associations. Our proposed model outperforms baseline models and provides insight into key cross-modal feature associations supported by known biological mechanisms.
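A minimal sketch of a tri-modal co-attention block in the spirit of Tri-COAT is given below; the use of nn.MultiheadAttention, the pattern of each modality querying the other two, and the concatenation-based fusion are assumptions, not the paper's exact design.

```python
# Sketch of tri-modal co-attention over imaging, genetics, and clinical tokens.
# Dimensions and the fusion step are illustrative assumptions.
import torch
import torch.nn as nn

class TriModalCoAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(3)
        )

    def forward(self, img, gen, clin):
        mods = [img, gen, clin]            # each: (B, tokens, dim)
        fused = []
        for i, q in enumerate(mods):
            # each modality queries the concatenation of the other two modalities
            kv = torch.cat([m for j, m in enumerate(mods) if j != i], dim=1)
            out, _ = self.attn[i](q, kv, kv)
            fused.append(out.mean(dim=1))  # pool tokens per modality
        return torch.cat(fused, dim=-1)    # (B, 3 * dim) joint representation

x = lambda t: torch.randn(2, t, 64)        # batch of 2, variable token counts
joint = TriModalCoAttention()(x(8), x(12), x(5))
print(joint.shape)  # torch.Size([2, 192])
```

A subtype classifier head would then operate on the joint representation, and the attention weights offer a natural hook for the LLM-based interpretation step.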
Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation
Zhang, Jiajin, Chao, Hanqing, Dhurandhar, Amit, Chen, Pin-Yu, Tajer, Ali, Xu, Yangyang, Yan, Pingkun
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) come from different distributions. Unsupervised Domain Adaptation (UDA) techniques have been proposed to adapt models trained in the source domain to the target domain. However, those methods require a large number of images from the target domain for model training. In this paper, we propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target-domain samples are available for training. To accomplish this challenging task, we first introduce a spectral sensitivity map to characterize the generalization weaknesses of models in the frequency domain. We then develop a Sensitivity-guided Spectral Adversarial MixUp (SAMix) method that generates target-style images to effectively suppress the model sensitivity, leading to improved model generalizability in the target domain. We demonstrate the proposed method and rigorously evaluate its performance on multiple tasks using several public datasets. The source code is available at https://github.com/RPIDIAL/SAMix.
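The core spectral-mixup operation can be sketched as follows: mix the Fourier amplitude of a source image with that of a target-domain image while keeping the source phase, which preserves content but shifts style. The sensitivity-guided, per-frequency mixing in SAMix is simplified here to a single scalar lambda, which is an assumption.

```python
# Sketch of Fourier amplitude mixup: transfer target-domain style to a source
# image by mixing amplitude spectra and keeping the source phase.
import torch

def spectral_mixup(src: torch.Tensor, tgt: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    # src, tgt: (C, H, W) images; returns a target-style version of src
    f_src, f_tgt = torch.fft.fft2(src), torch.fft.fft2(tgt)
    amp = (1 - lam) * f_src.abs() + lam * f_tgt.abs()  # mixed amplitude spectrum
    pha = f_src.angle()                                # source phase (content) kept
    mixed = amp * torch.exp(1j * pha)
    return torch.fft.ifft2(mixed).real

styled = spectral_mixup(torch.rand(3, 64, 64), torch.rand(3, 64, 64))
print(styled.shape)  # torch.Size([3, 64, 64])
```

In the full method, the mixing weights would be chosen adversarially per frequency, guided by the sensitivity map, rather than by a fixed scalar.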
When Neural Networks Fail to Generalize? A Model Sensitivity Perspective
Zhang, Jiajin, Chao, Hanqing, Dhurandhar, Amit, Chen, Pin-Yu, Tajer, Ali, Xu, Yangyang, Yan, Pingkun
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions. This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG), where only a single source domain is available for training. To tackle this challenge, we first seek to understand when neural networks fail to generalize. We empirically ascertain a property of a model that correlates strongly with its generalization, which we coin "model sensitivity". Based on our analysis, we propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies. Models trained with these hard-to-learn samples can effectively suppress the sensitivity in the frequency space, which leads to improved generalization performance. Extensive experiments on multiple public datasets demonstrate the superiority of our approach, which surpasses state-of-the-art Single-DG methods.
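One way to probe frequency-space sensitivity, sketched below under stated assumptions, is to perturb one Fourier frequency of the input at a time and record how far the model output moves. The probe amplitude and the L2 output distance are illustrative choices of my own; SADA's targeted augmentation built on top of such a map is omitted.

```python
# Brute-force sketch of a per-frequency model sensitivity map. Slow but simple:
# one forward pass per frequency bin, suitable only for small inputs.
import torch

def sensitivity_map(model, x: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    # x: (C, H, W); returns an (H, W) map of output change per perturbed frequency
    model.eval()
    with torch.no_grad():
        base = model(x.unsqueeze(0))
        H, W = x.shape[-2:]
        sens = torch.zeros(H, W)
        freq = torch.fft.fft2(x)
        for u in range(H):
            for v in range(W):
                probe = freq.clone()
                probe[..., u, v] += eps * H * W       # bump one frequency bin
                x_p = torch.fft.ifft2(probe).real
                sens[u, v] = (model(x_p.unsqueeze(0)) - base).norm()
    return sens

# toy model on 16x16 inputs, just to exercise the probe
toy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 16 * 16, 10))
print(sensitivity_map(toy, torch.rand(3, 16, 16)).shape)  # torch.Size([16, 16])
```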
AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-ray
Agu, Nkechinyere N., Wu, Joy T., Chao, Hanqing, Lourentzou, Ismini, Sharma, Arjun, Moradi, Mehdi, Yan, Pingkun, Hendler, James
Radiologists usually examine the anatomical regions of a chest X-ray image as well as the overall image before making a decision. However, most existing deep learning models only look at the entire X-ray image for classification, failing to utilize important anatomical information. In this paper, we propose a novel multi-label chest X-ray classification model that accurately classifies the image findings and also localizes them to their correct anatomical regions. Specifically, our model consists of two modules, a detection module and an anatomical dependency module. The latter utilizes graph convolutional networks, which enable our model to learn not only the label dependencies but also the relationships between the anatomical regions in the chest X-ray. We further present a method to efficiently create an adjacency matrix for the anatomical regions using the correlation of labels across the different regions. Detailed experiments and analysis show the effectiveness of our method compared with the current state-of-the-art multi-label chest X-ray image classification methods, while also providing accurate location information.
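The adjacency-matrix construction described above can be sketched as follows: estimate conditional co-occurrence of findings between regions from binary region-level labels, threshold, and row-normalize before a graph-convolution step. The threshold value, the 18-region count, and the toy region features are assumptions for illustration.

```python
# Sketch of building a region adjacency matrix from label co-occurrence, then
# applying one graph-convolution propagation step over per-region features.
import torch

def region_adjacency(labels: torch.Tensor, thresh: float = 0.3) -> torch.Tensor:
    # labels: (num_images, num_regions) binary finding indicators per region
    co = labels.T.float() @ labels.float()      # region-pair co-occurrence counts
    freq = labels.float().sum(dim=0).clamp(min=1)
    cond = co / freq.unsqueeze(1)               # P(finding in region j | region i)
    adj = (cond > thresh).float()               # binarize by threshold
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj / deg                            # row-normalize

labels = (torch.rand(100, 18) > 0.7).long()     # toy data: 100 images, 18 regions
A = region_adjacency(labels)
feats = torch.randn(18, 256)                    # per-region features from a detector
W = torch.nn.Linear(256, 256)
out = torch.relu(W(A @ feats))                  # one GCN propagation step
print(out.shape)  # torch.Size([18, 256])
```

In the full model, the per-region features would come from the detection module rather than random tensors.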
Towards a Precipitation Bias Corrector against Noise and Maldistribution
Xu, Xiaoyang, Liu, Yiqun, Chao, Hanqing, Luo, Youcheng, Chu, Hai, Chen, Lei, Zhang, Junping, Ma, Leiming
With broad applications in public services such as aviation management and urban disaster warning, numerical precipitation prediction plays a crucial role in weather forecasting. However, constrained by the limitations of observations and conventional meteorological models, numerical precipitation predictions are often highly biased. To correct this bias, classical correction methods depend heavily on experts with profound knowledge of aerodynamics, thermodynamics, and meteorology. Since precipitation can be influenced by countless factors, however, the performance of these expert-driven methods can drop drastically when some un-modeled factors change. To address this issue, this paper presents a data-driven deep learning model consisting of two main blocks, i.e., a Denoising Autoencoder Block and an Ordinal Regression Block. To the best of our knowledge, it is the first expert-free model for bias correction. The proposed model can effectively correct the numerical precipitation prediction based on 37 basic meteorological variables from the European Centre for Medium-Range Weather Forecasts (ECMWF). Experiments indicate that, compared with several classical machine learning algorithms and deep learning models, our method achieves the best correction performance on the meteorological index, namely the threat score (TS), while also producing satisfactory visualizations.
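A minimal sketch of the two-block design follows, under assumed layer sizes, noise level, and bin count: a denoising autoencoder over the 37 input variables, with an ordinal-regression head that treats precipitation correction as cumulative "rain >= level_k" threshold classification.

```python
# Sketch of a denoising-autoencoder encoder shared with an ordinal regression
# head for precipitation bias correction. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class BiasCorrector(nn.Module):
    def __init__(self, n_vars: int = 37, hidden: int = 64, n_bins: int = 10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_vars, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, n_vars)        # reconstruction of clean input
        self.ord_head = nn.Linear(hidden, n_bins)   # one logit per "rain >= bin_k"

    def forward(self, x: torch.Tensor):
        z = self.enc(x + 0.1 * torch.randn_like(x))  # denoising: corrupt the input
        recon = self.dec(z)                          # trained to recover clean x
        ord_logits = self.ord_head(z)                # cumulative threshold logits
        return recon, ord_logits

model = BiasCorrector()
x = torch.randn(8, 37)                               # batch of 37-variable inputs
recon, logits = model(x)
# predicted precipitation bin = number of thresholds exceeded
pred_bin = (torch.sigmoid(logits) > 0.5).sum(dim=1)
print(recon.shape, pred_bin.shape)  # torch.Size([8, 37]) torch.Size([8])
```

The ordinal formulation respects the natural ordering of precipitation intensity levels, which plain multi-class classification would ignore.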