Yao, Yifei
Class Incremental Fault Diagnosis under Limited Fault Data via Supervised Contrastive Knowledge Distillation
Zhang, Hanrong, Yao, Yifei, Wang, Zixuan, Su, Jiayuan, Li, Mengxuan, Peng, Peng, Wang, Hongwei
--Class-incremental fault diagnosis requires a model to adapt to new fault classes while retaining previous knowledge. However, limited research exists for imbalanced and long-tailed data. Extracting discriminative features from few-shot fault data is challenging, and adding new fault classes often demands costly model retraining. To tackle these issues, we introduce a Supervised Contrastive knowledge distiLlation for class Incremental Fault Diagnosis (SCLIFD) framework, which proposes supervised contrastive knowledge distillation for improved representation learning and less forgetting, a novel prioritized exemplar selection method for sample replay to alleviate catastrophic forgetting, and a Random Forest classifier to address class imbalance. Extensive experimentation on simulated and real-world industrial datasets across various imbalance ratios demonstrates the superiority of SCLIFD over existing approaches.

Data-driven fault diagnosis techniques have gained significant prominence over the past two decades [1-5]. However, most of them require sufficient training data to achieve reliable modeling performance [6-9]. Unfortunately, fault data is typically limited in comparison to normal data. This is because engineering equipment primarily operates under normal conditions, and the probabilities of faults vary across different working environments. Besides, fault simulation experiments are costly and inevitably deviate to some extent from real industrial environments. These factors consequently contribute to class imbalance and a long-tailed distribution among different conditions [10]. Model performance typically suffers as the model tends to prioritize the normal class, consequently neglecting fault classes or tail classes.
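The supervised contrastive objective named in the abstract can be illustrated with a minimal sketch. This is not the SCLIFD implementation, only a generic supervised contrastive loss in the style of Khosla et al.: for each anchor, same-class samples act as positives and all other samples as negatives; the function name, temperature default, and NumPy formulation are assumptions for illustration.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Generic supervised contrastive loss sketch.

    features: (n, d) array of L2-normalized embeddings.
    labels:   (n,) array of integer class labels.
    Anchors are pulled toward same-class samples and pushed away
    from different-class ones via a softmax over similarities.
    """
    n = features.shape[0]
    sim = features @ features.T / temperature          # pairwise cosine similarities
    mask_self = np.eye(n, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)            # exclude self-similarity
    # log-softmax over all other samples for each anchor
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives: same label, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    # average negative log-probability over each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Well-separated same-class embeddings yield a lower loss than embeddings whose positives lie in a different cluster, which is the property that encourages discriminative features under limited fault data.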
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
Zhang, Hanrong, Huang, Jingyuan, Mei, Kai, Yao, Yifei, Wang, Zhenting, Zhan, Chenlu, Wang, Hongwei, Zhang, Yongfeng
Although agents powered by Large Language Models (LLMs) can use external tools and memory mechanisms to solve complex real-world tasks, they may also introduce critical security vulnerabilities. However, the existing literature does not comprehensively evaluate attacks and defenses against LLM-based agents. To address this, we introduce Agent Security Bench (ASB), a comprehensive framework designed to formalize, benchmark, and evaluate the attacks and defenses of LLM-based agents, including 10 scenarios (e.g., e-commerce, autonomous driving, finance), 10 agents targeting the scenarios, over 400 tools, 23 different types of attack/defense methods, and 8 evaluation metrics. Based on ASB, we benchmark 10 prompt injection attacks, a memory poisoning attack, a novel Plan-of-Thought backdoor attack, a mixed attack, and 10 corresponding defenses across 13 LLM backbones with nearly 90,000 testing cases in total. Our benchmark results reveal critical vulnerabilities in different stages of agent operation, including system prompt, user prompt handling, tool usage, and memory retrieval, with a highest average attack success rate of 84.30%, but limited effectiveness shown in current defenses, highlighting important open problems in agent security for the community. Our code can be found at https://github.com/agiresearch/ASB.
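To make the benchmarking setup concrete, the sketch below shows how a prompt-injection test case might be constructed and scored by attack success rate. This is a hypothetical illustration, not ASB's actual API: the function names, the marker-string success criterion, and the naive injection template are all assumptions.

```python
def build_injected_input(user_task: str, injected_instruction: str) -> str:
    """Hypothetical naive injection: the attacker appends an overriding
    instruction to data the agent will read (e.g., a tool result)."""
    return f"{user_task}\nIGNORE PREVIOUS INSTRUCTIONS. {injected_instruction}"

def attack_succeeded(agent_output: str, attack_marker: str) -> bool:
    """Score one case: the attack counts as successful if the agent's
    output shows it carried out the injected instruction (detected here
    by a simple marker string for illustration)."""
    return attack_marker in agent_output

def attack_success_rate(agent_outputs, attack_marker: str) -> float:
    """Percentage of test cases in which the attack succeeded."""
    hits = [attack_succeeded(out, attack_marker) for out in agent_outputs]
    return 100.0 * sum(hits) / len(hits)
```

Real benchmarks typically use richer success criteria than substring matching (e.g., checking whether a specific attacker tool was invoked), but the aggregate metric, successes over total test cases, has this shape.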