Qi, Lianyong
TAD-Bench: A Comprehensive Benchmark for Embedding-Based Text Anomaly Detection
Cao, Yang, Yang, Sikun, Li, Chen, Xiang, Haolong, Qi, Lianyong, Liu, Bo, Li, Rongsheng, Liu, Ming
Existing studies often lack systematic evaluations of how different embeddings perform across diverse anomaly types, raising questions about their generalization capabilities in complex, real-world scenarios such as multilingual settings or domain-specific anomalies. Recent efforts, such as AD-NLP (Bejan et al., 2023) and NLP-ADBench (Li et al., 2024), have significantly advanced anomaly detection in NLP. AD-NLP provides valuable insights into different types of anomalies, while NLP-ADBench expands evaluations to a wide range of algorithms and datasets.
Anomaly detection is a critical task in machine learning, with applications ranging from fraud detection and content moderation to user behavior analysis (Pang et al., 2021). Within natural language processing (NLP), anomaly detection has become increasingly relevant for identifying outliers such as harmful content, phishing attempts, and spam reviews. However, while AD tasks in structured data (e.g., tabular, time series, graphs) (Steinbuss and Böhm, 2021; Blázquez-García et al., 2021; Qiao et al., 2024) have achieved significant maturity,
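As a minimal sketch of the embedding-based detection pipeline such benchmarks evaluate, one can embed each text and score points by distance to their nearest neighbours. This is a generic illustration, not TAD-Bench's protocol; `knn_anomaly_scores` is a hypothetical helper, and random vectors stand in for real text embeddings:

```python
import numpy as np

def knn_anomaly_scores(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Score each point by its mean distance to its k nearest neighbours.

    Higher scores mean the point lies far from the bulk of the embedding
    cloud, i.e. it is a likelier anomaly.
    """
    # Pairwise Euclidean distances (n x n).
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Sort each row; skip column 0, which is the zero self-distance.
    nearest = np.sort(dists, axis=1)[:, 1:k + 1]
    return nearest.mean(axis=1)

# Toy "embeddings": a tight cluster plus one distant outlier at index 20.
rng = np.random.default_rng(0)
vectors = np.vstack([rng.normal(0.0, 0.1, size=(20, 8)),
                     np.full((1, 8), 5.0)])
scores = knn_anomaly_scores(vectors, k=3)
print(scores.argmax())
```

In a real setting the vectors would come from a sentence-embedding model, and the detector could be any of the shallow or deep methods the benchmarks compare.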
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices
Jia, Yongzhe, Zhang, Xuyun, Hu, Hongsheng, Choo, Kim-Kwang Raymond, Qi, Lianyong, Xu, Xiaolong, Beheshti, Amin, Dou, Wanchun
Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data. In this paper, we propose a heterogeneous FL framework, DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module that produces personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a dedicated aggregation algorithm for combining heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while achieving significant model volume reductions of 20% to 80%. Our code is available at: https://github.com/jyzgh/DapperFL.
OptIForest: Optimal Isolation Forest for Anomaly Detection
Xiang, Haolong, Zhang, Xuyun, Hu, Hongsheng, Qi, Lianyong, Dou, Wanchun, Dras, Mark, Beheshti, Amin, Xu, Xiaolong
Anomaly detection plays an increasingly important role in various fields for critical tasks such as intrusion detection in cybersecurity, financial risk detection, and human health monitoring. A variety of anomaly detection methods have been proposed, and the category based on the isolation forest mechanism stands out due to its simplicity, effectiveness, and efficiency; e.g., iForest is often employed as a state-of-the-art detector in real deployments. While most isolation forests use a binary tree structure, the LSHiForest framework has demonstrated that a multi-fork isolation tree structure can lead to better detection performance. However, no theoretical work has answered the fundamental and practically important question of the optimal tree structure for an isolation forest with respect to the branching factor. In this paper, we establish a theory of isolation efficiency to answer this question and determine the optimal branching factor for an isolation tree. Based on this theoretical underpinning, we design a practical optimal isolation forest, OptIForest, incorporating clustering-based learning to hash, which enables more information to be learned from data for better isolation quality. The rationale of our approach relies on a better bias-variance trade-off achieved by bias reduction in OptIForest. Extensive experiments on a series of benchmarking datasets for comparative and ablation studies demonstrate that our approach can efficiently and robustly achieve better detection performance in general than state-of-the-art methods, including deep learning based ones.
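The isolation principle this line of work builds on can be sketched with a toy binary isolation tree: anomalies sit in sparse regions, so random axis-aligned splits separate them from the rest of the data in fewer steps. This is an illustrative sketch of plain binary isolation, not OptIForest's multi-fork, learning-to-hash construction; the function and data below are invented for the example:

```python
import numpy as np

def isolation_path_length(points: np.ndarray, point: np.ndarray,
                          rng: np.random.Generator, depth: int = 0) -> int:
    """Depth at which `point` is isolated by random axis-aligned splits."""
    if len(points) <= 1 or depth >= 50:
        return depth
    feature = rng.integers(point.size)
    lo, hi = points[:, feature].min(), points[:, feature].max()
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    left = points[:, feature] < split
    # Recurse into whichever side of the split contains `point`.
    branch = points[left] if point[feature] < split else points[~left]
    return isolation_path_length(branch, point, rng, depth + 1)

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0, 1, size=(200, 2)), [[8.0, 8.0]]])

def avg_path(point: np.ndarray, trees: int = 100) -> float:
    # Average over many random trees, as a forest does.
    return float(np.mean([isolation_path_length(data, point, rng)
                          for _ in range(trees)]))

outlier_depth = avg_path(np.array([8.0, 8.0]))
inlier_depth = avg_path(np.array([0.0, 0.0]))
print(outlier_depth, inlier_depth)  # the outlier isolates at a smaller depth
```

The paper's question is, roughly, how the expected isolation behaviour changes when each node splits into more than two branches, and which branching factor is optimal.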