facenet


DreamID: High-Fidelity and Fast diffusion-based Face Swapping via Triplet ID Group Learning

Ye, Fulong, Hua, Miao, Zhang, Pengze, Li, Xinghui, Sun, Qichao, Zhao, Songtao, He, Qian, Wu, Xinglong

arXiv.org Artificial Intelligence

In this paper, we introduce DreamID, a diffusion-based face swapping model that achieves high ID similarity, attribute preservation, image fidelity, and fast inference speed. Unlike the typical face swapping training process, which often relies on implicit supervision and struggles to achieve satisfactory results, DreamID establishes explicit supervision for face swapping by constructing Triplet ID Group data, significantly enhancing identity similarity and attribute preservation. The iterative nature of diffusion models poses challenges for efficient image-space loss functions, as performing time-consuming multi-step sampling to obtain the generated image during training is impractical. To address this, we leverage the accelerated diffusion model SD Turbo, reducing inference to a single step and enabling efficient pixel-level end-to-end training with explicit Triplet ID Group supervision. Additionally, we propose an improved diffusion-based model architecture comprising SwapNet, FaceNet, and ID Adapter; this robust architecture fully unlocks the power of the explicit Triplet ID Group supervision. Finally, to further extend our method, we explicitly modify the Triplet ID Group data during training to fine-tune for and preserve specific attributes, such as glasses and face shape. Extensive experiments demonstrate that DreamID outperforms state-of-the-art methods in identity similarity, pose and expression preservation, and image fidelity. Overall, DreamID achieves high-quality face swapping at 512×512 resolution in just 0.6 seconds and performs exceptionally well in challenging scenarios such as complex lighting, large angles, and occlusions.
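The core idea of Triplet ID Group supervision is that single-step generation makes a direct image-space loss tractable: the swapped output can be compared pixel-by-pixel against a pseudo ground truth, alongside an identity term. The sketch below is illustrative only (the function name, weights, and loss form are assumptions, not the authors' implementation):

```python
import numpy as np

def triplet_id_group_loss(generated, pseudo_gt, id_emb_gen, id_emb_source,
                          w_pixel=1.0, w_id=1.0):
    """Illustrative pixel + identity loss for explicit triplet supervision.

    generated:  swapped face produced in a single (SD-Turbo-style) step
    pseudo_gt:  the pseudo ground-truth image from the Triplet ID Group
    id_emb_*:   L2-normalized face-ID embedding vectors
    """
    pixel_loss = np.mean(np.abs(generated - pseudo_gt))       # L1 in image space
    id_loss = 1.0 - float(np.dot(id_emb_gen, id_emb_source))  # cosine ID loss
    return w_pixel * pixel_loss + w_id * id_loss
```

With multi-step sampling, backpropagating such a loss through every denoising step would be prohibitively expensive, which is why the single-step setup matters.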


Toward Face Biometric De-identification using Adversarial Examples

Ghafourian, Mahdi, Fierrez, Julian, Gomez, Luis Felipe, Vera-Rodriguez, Ruben, Morales, Aythami, Rezgui, Zohra, Veldhuis, Raymond

arXiv.org Artificial Intelligence

The remarkable success of face recognition (FR) has endangered the privacy of internet users, particularly on social media. Recently, researchers have turned to adversarial examples as a countermeasure. In this paper, we assess the effectiveness of two widely known adversarial methods (BIM and ILLC) for de-identifying personal images. We discovered, contrary to previous claims in the literature, that it is not easy to achieve a high protection success rate (suppressing the identification rate) with adversarial perturbations imperceptible to the human visual system. Finally, we found that the transferability of adversarial examples is highly affected by the training parameters of the network with which they are generated.
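BIM (the Basic Iterative Method) takes repeated small signed-gradient steps and clips the result to an epsilon-ball around the original image, which is exactly what bounds how perceptible the perturbation can be. A minimal sketch, assuming `grad_fn` stands in for the gradient of an identification loss obtained by backprop through a real recognizer:

```python
import numpy as np

def bim_perturb(x, grad_fn, eps=0.03, alpha=0.005, steps=10):
    """Basic Iterative Method (BIM): repeated signed-gradient steps,
    clipped to an eps-ball around the original image x in [0, 1].

    grad_fn(x_adv) must return the gradient of the identification loss
    with respect to the image (a stand-in here; in practice it comes
    from backpropagation through the face recognition network).
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # stay in the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                 # stay a valid image
    return x_adv
```

The paper's finding can be read directly off this loop: a small `eps` keeps the perturbation imperceptible but also caps how far the image can move away from its identity, limiting the protection rate.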


Addressing Bias in Face Detectors using Decentralised Data collection with incentives

Ahan, M. R., Lehmann, Robin, Blythman, Richard

arXiv.org Artificial Intelligence

Recent developments in machine learning have shown that successful models rely not only on huge amounts of data but on the right kind of data. We show in this paper how this data-centric approach can be facilitated in a decentralized manner to enable efficient data collection for algorithms. Face detectors are a class of models that suffer heavily from bias issues, as they have to work on a large variety of data. We also propose a face detection and anonymization approach using a hybrid Multi-Task Cascaded CNN with FaceNet embeddings, benchmarking multiple datasets to describe and evaluate model bias across ethnicities, genders, and age groups. In addition, we outline ways to improve fairness through a decentralized system of data labeling, correction, and verification by users, creating a robust pipeline for model retraining.
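The anonymization half of such a pipeline is straightforward once a detector (e.g. MTCNN) has produced a bounding box: the face region is destroyed, typically by pixelation or blurring. A minimal, detector-agnostic sketch (the function name and block size are illustrative; the detector itself is outside this snippet):

```python
import numpy as np

def pixelate_box(image, box, block=8):
    """Anonymize a detected face by pixelating its bounding box.

    image: H x W x 3 float array
    box:   (x1, y1, x2, y2) pixel coordinates, as a detector such as
           MTCNN would return for a face.
    """
    x1, y1, x2, y2 = box
    face = image[y1:y2, x1:x2].copy()
    h, w = face.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = face[by:by + block, bx:bx + block]
            patch[:] = patch.mean(axis=(0, 1))  # replace patch with its mean color
    out = image.copy()
    out[y1:y2, x1:x2] = face
    return out
```

In a decentralized labeling setting, anonymizing detected faces before images are shared with annotators is one way to collect diverse data without exposing contributors' identities.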


#025 FaceNet: A Unified Embedding for Face Recognition and Clustering in PyTorch - Master Data Science 05.01.2022

#artificialintelligence

Highlights: Face recognition has been an active area of research for more than three decades. This paper, FaceNet, published in 2015, introduced a number of novelties and significantly improved performance on face recognition, verification, and clustering tasks. Here, we explore this interesting framework, which became popular for introducing 1) a 128-dimensional face embedding vector and 2) the triplet loss function. In addition to the theoretical background, we give an outline of how this network can be implemented in PyTorch. The FaceNet method developed a novel design for the final layer of the CNN to embed the face image. This so-called embedding vector has 128 elements.
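The triplet loss named above has a compact form: for an anchor image, a positive (same identity), and a negative (different identity), it penalizes the anchor-positive distance exceeding the anchor-negative distance minus a margin. A minimal numpy sketch on L2-normalized embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss on L2-normalized embeddings: pull the
    anchor-positive pair together, push the anchor-negative pair apart
    by at least the margin."""
    d_ap = np.sum((anchor - positive) ** 2)  # squared distance, same identity
    d_an = np.sum((anchor - negative) ** 2)  # squared distance, different identity
    return max(0.0, d_ap - d_an + margin)
```

When the negative is already farther away than the positive by more than the margin, the loss is zero and the triplet contributes no gradient, which is why FaceNet's triplet mining strategy matters in practice.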


Master Data Science

#artificialintelligence

Highlight: In this post, we will be discussing Variational Autoencoders (VAE). In order to fully understand the underlying ideas, we...
Highlight: Over the past few years in machine learning we've seen dramatic progress in the field of generative models.
Highlights: GANs and classical Deep Learning methods (classification, object detection) are similar, but they are also fundamentally different in nature...
Highlight: How did famous tennis players respond to the Djokovic visa saga? Sportsmanship in tennis as revealed by Artificial Intelligence software. What do famous tennis players REALLY think and feel?
Highlights: Is your goal to do face recognition in photographs or in videos?


#026 VGGFace: Deep Face Recognition in PyTorch by Oxford VGG

#artificialintelligence

Highlights: Is your goal to do face recognition in photographs or in videos? This distinguished 2015 paper, Deep Face Recognition, proposed a novel solution to this. Although the period was very fruitful, with many contributions to the face recognition area, VGGFace presented novelties that earned it a large number of citations and worldwide recognition. Here, we present a paper overview and provide PyTorch code to implement it. This paper comes from the famous VGG group at the University of Oxford, whose researchers competed with tech giants such as Google.


Deep Learning in Practice III: Face Recognition - CouponED

#artificialintelligence

Deep Learning in Practice III: Face Recognition. Get started with face recognition using MTCNN and FaceNet with Tensorflow and Keras. About the course: Welcome to the course on Deep Learning in Practice III on Face Recognition. I am Anis Koubaa, and I will be your instructor in this course. This course is the third in the series Deep Learning in Practice. It provides a fast and easy-to-follow introduction to face recognition with deep learning, using MTCNN for face extraction and FaceNet for face recognition. My two previous courses deal with object classification and transfer learning with Tensorflow and Keras. In this course, you will learn the whole loop of face recognition systems: it starts by extracting the face from an image, localizing it by its bounding box; then the extracted face is processed through a convolutional neural network, FaceNet in our case, to create a fingerprint of the face, which we call a face embedding.
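The last step of that loop, comparing two face embeddings, reduces to a distance check against a threshold. A minimal sketch (the threshold value is illustrative and must be tuned for the specific embedding model):

```python
import numpy as np

def verify_same_person(emb_a, emb_b, threshold=1.1):
    """Compare two FaceNet-style embeddings by Euclidean distance after
    L2 normalization; a distance below the threshold is treated as the
    same identity. The threshold here is illustrative, not model-specific."""
    emb_a = emb_a / np.linalg.norm(emb_a)
    emb_b = emb_b / np.linalg.norm(emb_b)
    distance = float(np.linalg.norm(emb_a - emb_b))
    return distance < threshold, distance
```

In a full pipeline, `emb_a` and `emb_b` would come from running the MTCNN-cropped faces through the FaceNet network; the comparison logic itself stays this simple.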


Face Recognition System using DEEPFACE(With Python Codes)

#artificialintelligence

Face recognition as a means of establishing identity is a critical capability in today's world. Facial identification and recognition find use in many real-life contexts, whether for your identity card, passport, or any other credential of significant importance. It has become quite a popular tool for authenticating an individual's identity, and it is also used across sectors and industries to prevent ID fraud and identity theft. Your smartphone likewise uses face recognition to unlock.


Bias Mitigation of Face Recognition Models Through Calibration

Salvador, Tiago, Cairns, Stephanie, Voleti, Vikram, Marshall, Noah, Oberman, Adam

arXiv.org Machine Learning

Face recognition models suffer from bias: for example, the probability of a false positive (incorrect face match) strongly depends on sensitive attributes like ethnicity. As a result, these models may disproportionately and negatively impact minority groups when used in law enforcement. In this work, we introduce the Bias Mitigation Calibration (BMC) method, which (i) increases model accuracy (improving the state-of-the-art), (ii) produces fairly-calibrated probabilities, (iii) significantly reduces the gap in the false positive rates, and (iv) does not require knowledge of the sensitive attribute.
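One simple way to see why calibration can reduce false-positive-rate gaps: if each group gets its own decision threshold chosen from impostor (non-match) scores, the FPR can be equalized across groups. The sketch below illustrates per-group threshold selection only; it is a simplified stand-in, not the paper's BMC method:

```python
import numpy as np

def per_group_thresholds(scores, labels, groups, target_fpr=0.1):
    """For each group, pick the match-score threshold whose false-positive
    rate on impostor pairs (label 0) is closest to target_fpr.

    Simplified illustration of per-group threshold calibration; the BMC
    method in the paper additionally calibrates probabilities and does not
    require the sensitive attribute at test time.
    """
    thresholds = {}
    for g in set(groups):
        imp = np.sort([s for s, l, gg in zip(scores, labels, groups)
                       if gg == g and l == 0])
        # index such that a fraction target_fpr of impostor scores lie above
        idx = int(np.ceil((1.0 - target_fpr) * len(imp))) - 1
        thresholds[g] = float(imp[min(max(idx, 0), len(imp) - 1)])
    return thresholds
```

Scores above a group's threshold are declared matches; by construction, roughly the same fraction of impostor pairs crosses the threshold in every group.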


A Gentle Introduction to Deep Learning for Face Recognition

#artificialintelligence

Face recognition is the problem of identifying and verifying people in a photograph by their face. It is a task that is trivially performed by humans, even under varying light and when faces are changed by age or obstructed by accessories and facial hair. Nevertheless, it remained a challenging computer vision problem for decades, until recently. Deep learning methods are able to leverage very large datasets of faces and learn rich, compact representations, allowing modern models first to match and later to outperform the face recognition capabilities of humans. In this post, you will discover the problem of face recognition and how deep learning methods can achieve superhuman performance.