Wu, Shenghao
Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas
Deng, Chengyuan, Duan, Yiqun, Jin, Xin, Chang, Heng, Tian, Yijun, Liu, Han, Zou, Henry Peng, Jin, Yiqiao, Xiao, Yijia, Wang, Yichen, Wu, Shenghao, Xie, Zongxing, Gao, Kuofeng, He, Sihong, Zhuang, Jun, Cheng, Lu, Wang, Haohan
Large Language Models (LLMs) have achieved unparalleled success across diverse language modeling tasks in recent years. However, this progress has also intensified ethical concerns that affect the deployment of LLMs in everyday contexts. This paper provides a comprehensive survey of ethical challenges associated with LLMs, from longstanding issues such as copyright infringement, systematic bias, and data privacy, to emerging problems like truthfulness and social norms. We critically analyze existing research aimed at understanding, examining, and mitigating these ethical risks. Our survey underscores the importance of integrating ethical standards and societal values into the development of LLMs, thereby guiding the creation of responsible and ethically aligned language models.
Counterfactual Generative Models for Time-Varying Treatments
Wu, Shenghao, Zhou, Wenbin, Chen, Minshuo, Zhu, Shixiang
Estimating the counterfactual outcome of treatment is essential for decision-making in public health, clinical science, and other fields. Often, treatments are administered in a sequential, time-varying manner, leading to an exponentially increased number of possible counterfactual outcomes. Furthermore, in modern applications, the outcomes are high-dimensional, and conventional average treatment effect estimation fails to capture disparities across individuals. To tackle these challenges, we propose a novel conditional generative framework capable of producing counterfactual samples under time-varying treatment, without the need for explicit density estimation. Our method carefully addresses the distribution mismatch between the observed and counterfactual distributions via a loss function based on inverse probability weighting. We present a thorough evaluation using both synthetic and real-world data. The results demonstrate that our method generates high-quality counterfactual samples and outperforms state-of-the-art baselines.
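To illustrate the core idea of the inverse-probability-weighted loss, here is a minimal sketch assuming binary time-varying treatments and known per-step propensity scores. The function names and the simple product-of-propensities weighting are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def ipw_weights(treatments, propensities, eps=1e-8):
    """Inverse-probability weights for observed treatment sequences.

    treatments:   (n, T) binary array of time-varying treatments.
    propensities: (n, T) array, P(A_t = 1 | history) at each step.

    The weight for sample i is 1 over the product across time of the
    probability of the treatment actually taken, which reweights the
    observed-data distribution toward the counterfactual one.
    """
    p_taken = np.where(treatments == 1, propensities, 1.0 - propensities)
    return 1.0 / np.clip(np.prod(p_taken, axis=1), eps, None)

def ipw_generative_loss(per_sample_loss, treatments, propensities):
    """Self-normalized IPW average of a per-sample generative loss."""
    w = ipw_weights(treatments, propensities)
    return np.sum(w * per_sample_loss) / np.sum(w)
```

For example, two samples whose observed treatment sequences each have probability 0.25 receive equal weights, so the weighted loss reduces to a plain average in that case.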