Deep Face Recognition
Supplementary for UniTSFace
We have derived three sample-to-sample based losses in the manuscript, i.e., the USS loss and the sample-to-sample based softmax and BCE losses. The experimental evaluations of these marginal losses are included in Sec. In our work, we choose the cosine function to represent the similarity of two features, i.e., g(x, x′). The learning rate starts at 0.1 and is reduced by a factor of 10 at scheduled epochs. All models in the ablation and parameter studies were trained on CASIA-WebFace. For Glint360K, we train the models (ResNet-100) for 20 epochs using a batch size of 1024. The UniTSFace under the 'Large' protocol of MegaFace Challenge 1 (as shown in Table 4) was trained on Glint360K.
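The similarity function g referenced above (its expression is truncated in the text) is the standard cosine similarity between two feature vectors. A minimal sketch, with illustrative variable names:

```python
import math

def cosine_similarity(x1, x2):
    """Cosine of the angle between two feature vectors.

    This is the standard choice for g(., .) when comparing face
    embeddings; variable names are illustrative, not from the paper.
    """
    dot = sum(a * b for a, b in zip(x1, x2))
    n1 = math.sqrt(sum(a * a for a in x1))
    n2 = math.sqrt(sum(b * b for b in x2))
    return dot / (n1 * n2)

# Identical directions give similarity 1.0; orthogonal directions give 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # → 0.0
```

On L2-normalized embeddings this reduces to a plain dot product, which is why margin-based losses operate directly on cos θ.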
ExpFace: Exponential Angular Margin Loss for Deep Face Recognition
Face recognition is an open-set problem requiring high discriminative power to ensure that intra-class distances remain smaller than inter-class distances. Margin-based softmax losses, such as SphereFace, CosFace, and ArcFace, have been widely adopted to enhance intra-class compactness and inter-class separability, yet they overlook the impact of noisy samples. By examining the distribution of samples in the angular space, we observe that clean samples predominantly cluster in the center region, whereas noisy samples tend to shift toward the peripheral region. Motivated by this observation, we propose the Exponential Angular Margin Loss (ExpFace), which introduces an angular exponential term as the margin. This design applies a larger penalty in the center region and a smaller penalty in the peripheral region within the angular space, thereby emphasizing clean samples while suppressing noisy samples. We present a unified analysis of ExpFace and classical margin-based softmax losses in terms of margin embedding forms, similarity curves, and gradient curves, showing that ExpFace not only avoids the training instability of SphereFace and the non-monotonicity of ArcFace, but also exhibits a similarity curve that applies penalties in the same manner as the decision boundary in the angular space. Extensive experiments demonstrate that ExpFace achieves state-of-the-art performance. To facilitate future research, we have released the source code at: https://github.com/dfr-code/ExpFace.
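For reference, the classical margin-based softmax losses named above differ only in how they modify the target-class logit as a function of the angle θ. The sketch below shows those classical forms only; ExpFace's exact exponential margin is defined in the paper and repository and is deliberately not reproduced here. Note that ArcFace's penalty grows toward the peripheral region, the profile ExpFace is designed to invert.

```python
import math

def target_logit(theta, m, kind):
    """Modified target-class logit for classical margin-based softmax losses.

    theta: angle (radians) between a feature and its class center.
    m:     margin hyperparameter.
    These forms come from the cited papers; ExpFace's exponential margin
    is intentionally omitted (see the linked repository for its definition).
    """
    if kind == "sphereface":  # multiplicative angular margin: cos(m * theta)
        return math.cos(m * theta)
    if kind == "cosface":     # additive cosine margin: cos(theta) - m
        return math.cos(theta) - m
    if kind == "arcface":     # additive angular margin: cos(theta + m)
        return math.cos(theta + m)
    raise ValueError(f"unknown loss: {kind}")

# Penalty relative to the plain logit cos(theta), for a near-center
# sample (theta = 0.2) versus a peripheral one (theta = 1.2):
for theta in (0.2, 1.2):
    penalty = math.cos(theta) - target_logit(theta, 0.5, "arcface")
    print(f"theta={theta}: ArcFace penalty {penalty:.3f}")
```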
Review for NeurIPS paper: Identifying Mislabeled Data using the Area Under the Margin Ranking
The authors use the margin ranking to find data with noisy labels. As far as we know, the concept of a margin has long been used in classification tasks, e.g., face recognition [1], [2] and semi-supervised learning [3]. The methods of [4], which employ the memorization effect to select confident (small-loss) samples, also share similar ideas: data with small loss have clean labels with high confidence and also have a larger margin ranking. The authors do not discuss or compare against these existing works.
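The "small-loss" selection idea the review alludes to can be sketched in a few lines: treat the fraction of samples with the smallest current loss as confidently clean. This is an illustrative sketch of the general trick, not any specific cited method:

```python
def select_small_loss(losses, keep_ratio):
    """Return (sorted) indices of the `keep_ratio` fraction of samples with
    the smallest loss -- the 'small-loss trick' in which low-loss samples
    are treated as confidently clean. Illustrative only.
    """
    k = max(1, int(len(losses) * keep_ratio))
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:k])

# Samples 1 and 3 have large losses, so they are treated as likely noisy.
losses = [0.1, 2.5, 0.3, 3.1, 0.2]
print(select_small_loss(losses, 0.6))  # → [0, 2, 4]
```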
UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition
Li, Qiufu, Jia, Xi, Zhou, Jiancan, Shen, Linlin, Duan, Jinming
Sample-to-class based face recognition models cannot fully explore the cross-sample relationships among large amounts of facial images, while sample-to-sample based models require sophisticated pairing processes for training. Furthermore, neither method satisfies the requirements of real-world face verification applications, which expect a unified threshold separating positive from negative facial pairs. In this paper, we propose a unified threshold integrated sample-to-sample based loss (USS loss), which features an explicit unified threshold for distinguishing positive from negative pairs. Inspired by our USS loss, we also derive the sample-to-sample based softmax and BCE losses and discuss their relationship. Extensive evaluation on multiple benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace, demonstrates that the proposed USS loss is highly efficient and can work seamlessly with sample-to-class based losses. The combined loss (USS and sample-to-class softmax loss) overcomes the pitfalls of previous approaches, and the trained facial model, UniTSFace, exhibits exceptional performance, outperforming state-of-the-art methods such as CosFace, ArcFace, VPL, AnchorFace, and UNPG. Our code is available.
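The core idea of a unified threshold can be illustrated with a toy pairwise loss: a single shared threshold t above which all positive-pair similarities should lie and below which all negative-pair similarities should fall. This is only a sketch of the concept using softplus penalties; the exact USS formulation is given in the paper:

```python
import math

def softplus(z):
    """Smooth hinge: log(1 + exp(z))."""
    return math.log1p(math.exp(z))

def unified_threshold_loss(pos_sims, neg_sims, t):
    """Toy pairwise loss with one shared threshold t: positive-pair
    similarities are pushed above t, negative-pair similarities below it.
    Illustrates the unified-threshold idea only; not the USS loss itself.
    """
    loss = sum(softplus(t - s) for s in pos_sims)   # penalize positives below t
    loss += sum(softplus(s - t) for s in neg_sims)  # penalize negatives above t
    return loss / (len(pos_sims) + len(neg_sims))
```

The loss shrinks as positives rise above t and negatives sink below it, so at test time the same t can separate all pairs.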
#026 VGGFace: Deep Face Recognition in PyTorch by Oxford VGG
Highlights: Is your goal to do face recognition in photographs or in videos? The distinguished 2015 paper Deep Face Recognition proposed a novel solution to this. Although the period was very fruitful with contributions in the face recognition area, VGGFace introduced novelties that earned it a large number of citations and worldwide recognition. Here, we will present an overview of the paper and provide PyTorch code to implement it. The paper comes from the famous VGG group at the University of Oxford, whose researchers competed with tech giants such as Google.
An Experimental Evaluation on Deepfake Detection using Deep Face Recognition
Ramachandran, Sreeraj, Nadimpalli, Aakash Varma, Rattani, Ajita
Significant advances in deep learning have achieved hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, known as deepfakes, posing a threat to privacy, democracy, and national security. Most current methods treat deepfake detection as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods are based on detecting visual artifacts and temporal or color inconsistencies produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics++ deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. Reported results show a maximum Area Under the Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is 16.6% lower than the EERs obtained for the two-class CNN and the ocular modality on the same dataset. Further, on the FaceForensics++ dataset, an AUC of 0.99 and an EER of 2.04% were obtained. Using biometric facial recognition has the advantage of bypassing the need for a large amount of fake data for model training and achieving better generalizability to evolving deepfake creation techniques.
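The biometric approach evaluated above can be sketched as identity verification: compare a probe face embedding against genuine reference embeddings of the claimed identity, and flag the probe when its best match falls below a verification threshold. The embeddings and threshold below are placeholders, not the paper's models or operating points:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def flag_deepfake(probe_embedding, reference_embeddings, threshold):
    """Flag a probe face as a likely deepfake when its best match against
    genuine reference embeddings of the claimed identity falls below a
    verification threshold. A sketch of the general biometric approach;
    embeddings and threshold here are hypothetical placeholders.
    """
    best = max(cosine(probe_embedding, r) for r in reference_embeddings)
    return best < threshold

refs = [[1.0, 0.1], [0.9, 0.2]]
print(flag_deepfake([1.0, 0.15], refs, 0.8))  # probe matches identity → False
print(flag_deepfake([-0.2, 1.0], refs, 0.8))  # probe mismatches → True
```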
SphereFace2: Binary Classification is All You Need for Deep Face Recognition
Wen, Yandong, Liu, Weiyang, Weller, Adrian, Raj, Bhiksha, Singh, Rita
State-of-the-art deep face recognition methods are mostly trained with a softmax-based multi-class classification framework. Despite being popular and effective, these methods still have a few shortcomings that limit empirical performance. In this paper, we first identify the discrepancy between training and evaluation in the existing multi-class classification framework and then discuss the potential limitations caused by the "competitive" nature of softmax normalization. Motivated by these limitations, we propose a novel binary classification training framework, termed SphereFace2. In contrast to existing methods, SphereFace2 circumvents the softmax normalization, as well as the corresponding closed-set assumption. This effectively bridges the gap between training and evaluation, enabling the representations to be improved individually by each binary classification task. Besides designing a specific well-performing loss function, we summarize a few general principles for this "one-vs-all" binary classification framework so that it can outperform current competitive methods. We conduct comprehensive experiments on popular benchmarks to demonstrate that SphereFace2 can consistently outperform current state-of-the-art deep face recognition methods.
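The "one-vs-all" framework can be sketched as replacing the shared softmax with an independent sigmoid/BCE term per class, so class scores are not forced to compete. This omits SphereFace2's margin and weighting terms and is only a schematic of the framing:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_vs_all_bce(logits, target_class):
    """One-vs-all binary loss in the spirit of SphereFace2's framework:
    each class logit gets an independent sigmoid/BCE term instead of a
    shared softmax normalization. The paper's margin and weighting terms
    are omitted; this is only a schematic.
    """
    loss = 0.0
    for j, z in enumerate(logits):
        y = 1.0 if j == target_class else 0.0
        p = sigmoid(z)
        loss -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return loss / len(logits)
```

Because each term is independent, raising one class's score does not suppress the others, which is the "non-competitive" property contrasted with softmax above.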