Deng, Guanzhi
Privacy in LLM-based Recommendation: Recent Advances and Future Directions
Luo, Sichun, Shao, Wei, Yao, Yuxuan, Xu, Jian, Liu, Mingyang, Li, Qintong, He, Bowei, Wang, Maolin, Deng, Guanzhi, Hou, Hanxu, Zhang, Xinyi, Song, Linqi
Large language models (LLMs) are increasingly integrated with conventional recommendation models to improve recommendation performance. However, while most existing work focuses on improving model performance, privacy issues have received comparatively little attention. In this paper, we review recent advances in privacy for LLM-based recommendation, categorizing them into privacy attacks and protection mechanisms. We also highlight several open challenges and propose future directions for the community to address these critical problems.
Can LLM Substitute Human Labeling? A Case Study of Fine-grained Chinese Address Entity Recognition Dataset for UAV Delivery
Yao, Yuxuan, Luo, Sichun, Zhao, Haohan, Deng, Guanzhi, Song, Linqi
We present CNER-UAV, a fine-grained \textbf{C}hinese \textbf{N}amed \textbf{E}ntity \textbf{R}ecognition dataset specifically designed for the task of address resolution in \textbf{U}nmanned \textbf{A}erial \textbf{V}ehicle delivery systems. The dataset encompasses a diverse range of five categories, enabling comprehensive training and evaluation of NER models. To construct this dataset, we sourced the data from a real-world UAV delivery system and conducted a rigorous data cleaning and desensitization process to ensure privacy and data integrity. The resulting dataset, consisting of around 12,000 samples, was annotated by both human experts and a \textbf{L}arge \textbf{L}anguage \textbf{M}odel. We evaluated classical NER models on our dataset and provided an in-depth analysis. The dataset and models are publicly available at \url{https://github.com/zhhvvv/CNER-UAV}.
Evaluator for Emotionally Consistent Chatbots
Liu, Chenxiao, Deng, Guanzhi, Ji, Tao, Tang, Difei, Zheng, Silai
One challenge for evaluating current sequence- or dialogue-level chatbots, such as Empathetic Open-domain Conversation Models, is to determine whether the chatbot performs in an emotionally consistent way. The most recent work only evaluates on the aspects of context coherence, language fluency, response diversity, or logical self-consistency between dialogues. In this research, we aim to train an evaluator that can effectively evaluate the emotional consistency of chatbots.