Supplementary for UniTSFace

Neural Information Processing Systems

We have derived three sample-to-sample based losses in the manuscript, i.e., the USS, sample-to-sample based softmax, and BCE losses. The experimental evaluations of such marginal losses are included in Sec. In our work, we choose the cosine function to represent the similarity of two features, i.e., g(x, x The learning rate starts at 0.1 and is reduced by a factor of 10 at the All models in the ablation and parameter studies were trained on CASIA-WebFace. For Glint360K, we train the models (ResNet-100) for 20 epochs with a batch size of 1024. The UniTSFace model under the 'Large' protocol of MegaFace Challenge 1 (as shown in Table 4) was trained on Glint360K.
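The cosine function mentioned above for the feature similarity g(x, x') can be sketched as follows; this is a minimal illustration of cosine similarity between two embedding vectors, not the authors' released code:

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine similarity between two feature vectors,
    i.e., the dot product of the L2-normalized embeddings."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Identical directions give a similarity of 1, orthogonal directions give 0, which is why a single threshold on this value can separate facial pairs.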



UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition

Neural Information Processing Systems

Sample-to-class-based face recognition models cannot fully explore the cross-sample relationship among large amounts of facial images, while sample-to-sample-based models require sophisticated pairing processes for training. Furthermore, neither method satisfies the requirements of real-world face verification applications, which expect a unified threshold separating positive from negative facial pairs. In this paper, we propose a unified threshold integrated sample-to-sample based loss (USS loss), which features an explicit unified threshold for distinguishing positive from negative pairs. Inspired by our USS loss, we also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship. Extensive evaluation on multiple benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace, demonstrates that the proposed USS loss is highly efficient and can work seamlessly with sample-to-class-based losses. The embedded loss (USS and sample-to-class softmax loss) overcomes the pitfalls of previous approaches, and the trained facial model UniTSFace exhibits exceptional performance, outperforming state-of-the-art methods such as CosFace, ArcFace, VPL, AnchorFace, and UNPG.
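The idea of an explicit unified threshold can be illustrated with a simple pair loss: positive-pair similarities are pushed above a shared threshold and negative-pair similarities below it. This is a hedged sketch of the concept only; the symbols t (threshold) and s (scale), and the log-sum-exp form, are illustrative assumptions, not the paper's exact USS formulation:

```python
import numpy as np

def uss_style_loss(pos_sims, neg_sims, t=0.3, s=64.0):
    """Illustrative unified-threshold sample-to-sample loss (assumed form).

    pos_sims: cosine similarities of positive (same-identity) pairs
    neg_sims: cosine similarities of negative (different-identity) pairs
    t: the single shared threshold both pair types are measured against
    s: scale factor sharpening the log-sum-exp penalties
    """
    pos = np.asarray(pos_sims, dtype=float)
    neg = np.asarray(neg_sims, dtype=float)
    # Penalize positives that fall below t, and negatives that rise above t.
    pos_term = np.log1p(np.sum(np.exp(s * (t - pos))))
    neg_term = np.log1p(np.sum(np.exp(s * (neg - t))))
    return float(pos_term + neg_term)
```

When every positive pair scores above t and every negative pair below it, both terms vanish, so at verification time the one threshold t separates all pairs, which is the property the abstract says real-world applications expect.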


UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition

Qiufu Li, Xi Jia, Jiancan Zhou, Linlin Shen, Jinming Duan

arXiv.org Artificial Intelligence

Sample-to-class-based face recognition models cannot fully explore the cross-sample relationship among large amounts of facial images, while sample-to-sample-based models require sophisticated pairing processes for training. Furthermore, neither method satisfies the requirements of real-world face verification applications, which expect a unified threshold separating positive from negative facial pairs. In this paper, we propose a unified threshold integrated sample-to-sample based loss (USS loss), which features an explicit unified threshold for distinguishing positive from negative pairs. Inspired by our USS loss, we also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship. Extensive evaluation on multiple benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace, demonstrates that the proposed USS loss is highly efficient and can work seamlessly with sample-to-class-based losses. The embedded loss (USS and sample-to-class softmax loss) overcomes the pitfalls of previous approaches, and the trained facial model UniTSFace exhibits exceptional performance, outperforming state-of-the-art methods such as CosFace, ArcFace, VPL, AnchorFace, and UNPG. Our code is available.