
Collaborating Authors

 Chen, Yingyu


FedPalm: A General Federated Learning Framework for Closed- and Open-Set Palmprint Verification

arXiv.org Artificial Intelligence

Current deep learning (DL)-based palmprint verification models rely on centralized training with large datasets, which raises significant privacy concerns due to biometric data's sensitive and immutable nature. Federated learning (FL), a privacy-preserving distributed learning paradigm, offers a compelling alternative by enabling collaborative model training without the need for data sharing. However, FL-based palmprint verification faces critical challenges, including data heterogeneity from diverse identities and the absence of standardized evaluation benchmarks. This paper addresses these gaps by establishing a comprehensive benchmark for FL-based palmprint verification, which explicitly defines and evaluates two practical scenarios: closed-set and open-set verification. We propose FedPalm, a unified FL framework that balances local adaptability with global generalization. Each client trains a personalized textural expert tailored to local data and collaboratively contributes to a shared global textural expert for extracting generalized features. To further enhance verification performance, we introduce a Textural Expert Interaction Module that dynamically routes textural features among experts to generate refined side textural features. Learnable parameters are employed to model relationships between original and side features, fostering cross-texture-expert interaction and improving feature discrimination. Extensive experiments validate the effectiveness of FedPalm, demonstrating robust performance across both scenarios and providing a promising foundation for advancing FL-based palmprint verification research.
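In such a setup, only the shared global textural expert's parameters are aggregated on the server, while each client's personalized expert never leaves the device. A minimal sketch of the standard FedAvg aggregation rule commonly used for this (the function name and dict-of-arrays weight layout are illustrative assumptions, not FedPalm's actual implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of the shared global expert's parameters.

    client_weights: list of {param_name: ndarray} dicts, one per client
                    (only the *global* expert's weights are sent up;
                    personalized experts stay local).
    client_sizes:   number of training samples on each client.
    """
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }
```

A client holding three times as much data pulls the average three times as hard, which is the usual FedAvg behavior under non-IID identity distributions.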


Trustworthy Hate Speech Detection Through Visual Augmentation

arXiv.org Artificial Intelligence

The surge of hate speech on social media platforms poses a significant challenge, with hate speech detection (HSD) becoming increasingly critical. Current HSD methods focus on enriching contextual information to enhance detection performance, but they overlook the inherent uncertainty of hate speech. We propose a novel HSD method, named trustworthy hate speech detection method through visual augmentation (TrusV-HSD), which enhances semantic information through integration with diffused visual images and mitigates uncertainty with trustworthy loss. TrusV-HSD learns semantic representations by effectively extracting trustworthy information through multi-modal connections without paired data. Our experiments on public HSD datasets demonstrate the effectiveness of TrusV-HSD, showing remarkable improvements over conventional methods.
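The abstract does not spell out the trustworthy loss, but the general idea of such losses is to let uncertain samples contribute less to training. A generic sketch of that weighting scheme, under the stated assumption that per-sample uncertainty scores in [0, 1] are available (this is not TrusV-HSD's exact formulation):

```python
import numpy as np

def trustworthy_weighted_loss(probs, labels, uncertainty):
    """Cross-entropy down-weighted by per-sample uncertainty.

    probs:       (N, C) predicted class probabilities
    labels:      (N,) integer class labels
    uncertainty: (N,) scores in [0, 1]; 1 = fully uncertain
    A confident sample (u ~ 0) keeps its full loss; an uncertain
    one (u ~ 1) is nearly ignored.
    """
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return np.mean((1.0 - uncertainty) * ce)
```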


EVIL: Evidential Inference Learning for Trustworthy Semi-supervised Medical Image Segmentation

arXiv.org Artificial Intelligence

Recently, uncertainty-aware methods have attracted increasing attention in semi-supervised medical image segmentation. However, current methods usually suffer from the drawback that it is difficult to balance computational cost, estimation accuracy, and theoretical support in a unified framework. To alleviate this problem, we introduce the Dempster-Shafer Theory of Evidence (DST) into semi-supervised medical image segmentation, dubbed EVidential Inference Learning (EVIL). EVIL provides a theoretically guaranteed solution to infer accurate uncertainty quantification in a single forward pass. Trustworthy pseudo labels on unlabeled data are generated after uncertainty estimation. The recently proposed consistency regularization-based training paradigm is adopted in our framework, which enforces consistency on the perturbed predictions to enhance generalization with few labeled data. Experimental results show...

Since these methods rely heavily on the prediction of pseudo labels, false predictions will severely degrade the segmentation performance. To improve the quality of pseudo labels, several uncertainty-aware methods have been proposed, including Monte Carlo dropout (MC-dropout)-based [9], Information-Entropy-based [10], and Prediction-Variance-based [11] methods. However, these methods suffer from some problems: (1) although MC-dropout is mathematically guaranteed by Bayesian theory, its training process is costly due to the multiple sampling operations; (2) due to the limited sampling times, MC-dropout cannot obtain accurate uncertainty quantification; (3) the other two uncertainty estimation methods have advantages in computational cost, but they lack theoretical support, leading to unstable pseudo-label generation.
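The single-forward-pass uncertainty that EVIL builds on follows standard evidential deep learning: non-negative evidence parameterizes a Dirichlet distribution, and the residual uncertainty mass is K/S, where S is the Dirichlet strength. A framework-agnostic NumPy sketch of that computation (the actual model would apply it to per-pixel segmentation logits in a deep learning framework; all names here are illustrative):

```python
import numpy as np

def evidential_uncertainty(logits):
    """Subjective-logic uncertainty from a single forward pass.

    logits: (N, K, ...) raw network outputs, K = number of classes.
    Returns expected class probabilities and the uncertainty mass
    u = K / S, which is high when total evidence is low.
    """
    evidence = np.log1p(np.exp(logits))          # softplus: e_k >= 0
    alpha = evidence + 1.0                        # Dirichlet parameters
    strength = alpha.sum(axis=1, keepdims=True)   # S = sum_k alpha_k
    prob = alpha / strength                       # expected probabilities
    num_classes = logits.shape[1]
    uncertainty = num_classes / strength          # u = K / S, in (0, 1]
    return prob, uncertainty
```

Because uncertainty is a closed-form function of one forward pass, no repeated MC-dropout sampling is needed; thresholding `u` then selects trustworthy pixels for pseudo-labeling.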