Huang, Yongxiang
Dual Risk Minimization: Towards Next-Level Robustness in Fine-tuning Zero-Shot Models
Li, Kaican, Xie, Weiyan, Huang, Yongxiang, Deng, Didan, Hong, Lanqing, Li, Zhenguo, Silva, Ricardo, Zhang, Nevin L.
Fine-tuning foundation models often compromises their robustness to distribution shifts. To remedy this, most robust fine-tuning methods aim to preserve the pre-trained features. However, not all pre-trained features are robust, and those methods are largely indifferent to which ones to preserve. We propose dual risk minimization (DRM), which combines empirical risk minimization with worst-case risk minimization, to better preserve the core features of downstream tasks. In particular, we utilize core-feature descriptions generated by LLMs to induce core-based zero-shot predictions, which then serve as proxies to estimate the worst-case risk. DRM balances two crucial aspects of model robustness: expected performance and worst-case performance, establishing a new state of the art on various real-world benchmarks. DRM significantly improves the out-of-distribution performance of CLIP ViT-L/14@336 on ImageNet (75.9 to 77.1), WILDS-iWildCam (47.1 to 51.8), and WILDS-FMoW (50.7 to 53.1), opening up new avenues for robust fine-tuning. Our code is available at https://github.com/vaynexie/DRM.
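A minimal sketch of the dual-risk objective described above, assuming a standard PyTorch classifier. The helper name dual_risk_loss, the weight lam, and the use of a KL term against frozen core-based zero-shot logits as the worst-case proxy are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def dual_risk_loss(logits, labels, core_zero_shot_logits, lam=0.5):
        """Combine the empirical risk with a proxy worst-case risk.

        logits:                predictions of the model being fine-tuned
        labels:                ground-truth class indices
        core_zero_shot_logits: frozen zero-shot predictions induced by
                               LLM-generated core-feature descriptions,
                               used here as a proxy for the worst case
        """
        # Standard empirical risk (expected performance).
        erm = F.cross_entropy(logits, labels)

        # Proxy worst-case risk: keep the fine-tuned model consistent with
        # the core-based zero-shot predictions (a KL term is one plausible
        # choice; the paper's exact estimator may differ).
        worst = F.kl_div(F.log_softmax(logits, dim=-1),
                         F.softmax(core_zero_shot_logits, dim=-1),
                         reduction="batchmean")

        return erm + lam * worst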
Towards Fine-Grained Explainability for Heterogeneous Graph Neural Network
Li, Tong, Deng, Jiale, Shen, Yanyan, Qiu, Luyu, Huang, Yongxiang, Cao, Caleb Chen
Recently, heterogeneous graph neural networks (HGNs) have become one of the standard paradigms for modeling rich semantics of heterogeneous graphs in various application domains such as e-commerce, finance, and healthcare (Lv et al. 2021; Wang et al. 2022). In parallel with the proliferation of HGNs, understanding the reasons behind the predictions from HGNs is urgently demanded in order to build trust and confidence in the models for both users and stakeholders. For example, a customer would be satisfied if an HGN-based recommender system accompanies recommended items with explanations; a bank manager may want [...] Their goal is to learn or search for optimal graph objects that maximize mutual information with the predictions. While such explanations answer the question "what is salient to the prediction", they fail to unveil "how the salient objects affect the prediction". In particular, there may exist multiple paths in the graph to propagate the information of the salient objects to the target object and affect its prediction. Without distinguishing these different influential paths, the answer to the "how" question remains unclear, which could compromise the utility of the explanation. This issue becomes more prominent when it comes to explaining HGNs due to the complex semantics of heterogeneous graphs.
Model Debiasing via Gradient-based Explanation on Representation
Zhang, Jindi, Wang, Luning, Su, Dan, Huang, Yongxiang, Cao, Caleb Chen, Chen, Lei
Machine learning systems can produce results biased against certain demographic groups, an issue known as the fairness problem. Recent approaches to tackle this problem learn a latent code (i.e., representation) through disentangled representation learning and then discard the latent code dimensions correlated with sensitive attributes (e.g., gender). Nevertheless, these approaches may suffer from incomplete disentanglement and overlook proxy attributes (proxies for sensitive attributes) when processing real-world data, especially unstructured data, causing performance degradation in fairness and loss of useful information for downstream tasks. In this paper, we propose a novel fairness framework that performs debiasing with regard to both sensitive attributes and proxy attributes, and boosts the prediction performance of downstream task models without requiring complete disentanglement. The main idea is to, first, leverage gradient-based explanation to find two model focuses, 1) one for predicting sensitive attributes and 2) the other for predicting downstream task labels, and second, use them to perturb the latent code so as to guide the training of downstream task models towards fairness and utility goals. We show empirically that our framework works with both disentangled and non-disentangled representation learning methods and achieves a better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
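A minimal sketch of the gradient-based focus-and-perturb idea, assuming a latent code z with separate heads for the downstream task and the sensitive attribute. The saliency measure (absolute gradient) and the sigmoid re-weighting rule are illustrative assumptions rather than the paper's exact procedure.

    import torch
    import torch.nn.functional as F

    def debias_latent_code(z, task_head, sens_head, task_labels, sens_labels,
                           alpha=1.0):
        """Perturb latent code z so that dimensions the sensitive-attribute
        head focuses on (but the task head does not) are suppressed."""
        z = z.detach().requires_grad_(True)

        # Gradient-based explanation: how much each latent dimension matters
        # to the sensitive-attribute prediction vs. the task prediction.
        sens_loss = F.cross_entropy(sens_head(z), sens_labels)
        sens_focus = torch.autograd.grad(sens_loss, z, retain_graph=True)[0].abs()

        task_loss = F.cross_entropy(task_head(z), task_labels)
        task_focus = torch.autograd.grad(task_loss, z)[0].abs()

        # Down-weight dimensions that matter more to the sensitive attribute
        # than to the downstream task (one plausible perturbation rule).
        weight = torch.sigmoid(alpha * (task_focus - sens_focus))
        return z.detach() * weight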
A Causal Framework to Unify Common Domain Generalization Approaches
Zhang, Nevin L., Li, Kaican, Gao, Han, Xie, Weiyan, Lin, Zhi, Li, Zhenguo, Wang, Luning, Huang, Yongxiang
Domain generalization (DG) is about learning models that generalize well to new domains that are related to, but different from, the training domain(s). It is a fundamental problem in machine learning and has attracted much attention in recent years. A large number of approaches have been proposed, motivated from different perspectives, which makes it difficult to gain an overall understanding of the area. In this paper, we propose a causal framework for domain generalization and present an understanding of common DG approaches within that framework. Our work sheds new light on the following questions: (1) What are the key ideas behind each DG method? (2) Why is it theoretically expected to improve generalization to new domains? (3) How are different DG methods related to each other, and what are their relative advantages and limitations? By providing a unified perspective on DG, we hope to help researchers better understand the underlying principles and develop more effective approaches for this critical problem in machine learning.
Contrastive Domain Generalization via Logit Attribution Matching
Gao, Han, Li, Kaican, Huang, Yongxiang, Wang, Luning, Cao, Caleb Chen, Zhang, Nevin L.
Domain Generalization (DG) is an important open problem in machine learning. Deep models are susceptible to domain shifts of even minute degrees, which severely compromises their reliability in real applications. To alleviate the issue, most existing methods enforce various invariant constraints across multiple training domains. However, such an approach provides little performance guarantee for novel test domains in general. In this paper, we investigate a different approach named Contrastive Domain Generalization (CDG), which exploits semantic invariance exhibited by strongly contrastive data pairs in lieu of multiple domains. We present a causal DG theory that shows the potential capability of CDG, together with a regularization technique, Logit Attribution Matching (LAM), for realizing it. We empirically show that LAM outperforms state-of-the-art DG methods with only a small portion of paired data and that LAM helps models better focus on semantic features, which are crucial to DG.
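A minimal sketch of a logit-attribution-matching regularizer for a strongly contrastive pair, assuming a PyTorch classifier. Gradient-times-input is used here as the attribution and mean squared error as the distance; the paper's exact attribution and matching objective may differ.

    import torch
    import torch.nn.functional as F

    def lam_regularizer(model, x, x_pair, labels):
        """Penalize mismatched attributions of the true-class logit between
        a training sample and its semantically equivalent counterpart."""
        x = x.requires_grad_(True)
        x_pair = x_pair.requires_grad_(True)

        logit = model(x).gather(1, labels[:, None]).sum()
        logit_pair = model(x_pair).gather(1, labels[:, None]).sum()

        # Gradient-times-input attribution of the ground-truth class logit.
        attr = torch.autograd.grad(logit, x, create_graph=True)[0] * x
        attr_pair = torch.autograd.grad(logit_pair, x_pair, create_graph=True)[0] * x_pair

        # Encourage the two attributions to agree (semantic invariance).
        return F.mse_loss(attr, attr_pair)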
Edge-variational Graph Convolutional Networks for Uncertainty-aware Disease Prediction
Huang, Yongxiang, Chung, Albert C. S.
There is a rising need for computational models that can complementarily leverage data of different modalities while investigating associations between subjects for population-based disease analysis. Despite the success of convolutional neural networks in representation learning for imaging data, integrating imaging data with non-imaging data for such analysis remains a very challenging task. In this paper, we propose a generalizable framework that can automatically integrate imaging data with non-imaging data in populations for uncertainty-aware disease prediction. At its core is a learnable adaptive population graph with variational edges, which we mathematically prove is optimizable in conjunction with graph convolutional neural networks. To estimate the predictive uncertainty related to the graph topology, we propose the novel concept of Monte-Carlo edge dropout. Experimental results on four databases show that our method can consistently and significantly improve the diagnostic accuracy for Autism spectrum disorder, Alzheimer's disease, and ocular diseases, indicating its generalizability in leveraging multimodal data for computer-aided diagnosis.
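A minimal sketch of Monte-Carlo edge dropout at inference time, assuming a graph convolutional model callable as gcn(x, edge_index) in the PyTorch Geometric style; the number of samples, the drop probability, and the variance-based uncertainty score are illustrative assumptions.

    import torch

    def mc_edge_dropout_predict(gcn, x, edge_index, n_samples=20, drop_prob=0.1):
        """Estimate predictive uncertainty related to the graph topology by
        running the GCN several times with randomly dropped edges and
        measuring the spread of the resulting predictions."""
        probs = []
        for _ in range(n_samples):
            # Keep each edge independently with probability 1 - drop_prob.
            keep = torch.rand(edge_index.size(1)) > drop_prob
            dropped_edge_index = edge_index[:, keep]
            with torch.no_grad():
                probs.append(torch.softmax(gcn(x, dropped_edge_index), dim=-1))
        probs = torch.stack(probs)               # [n_samples, n_nodes, n_classes]
        mean_prob = probs.mean(dim=0)            # averaged prediction
        uncertainty = probs.var(dim=0).sum(-1)   # per-node predictive variance
        return mean_prob, uncertainty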