deepface
FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
Khorzooghi, Seyyed Mohammad Sadegh Moosavi, Thota, Poojitha, Singhal, Mohit, Asudeh, Abolfazl, Das, Gautam, Nilizadeh, Shirin
The lack of a common platform and benchmark datasets for evaluating face obfuscation methods has been a challenge, with every method being tested using arbitrary experiments, datasets, and metrics. While prior work has demonstrated that face recognition systems exhibit bias against some demographic groups, there exists a substantial gap in our understanding regarding the fairness of face obfuscation methods. Fair face obfuscation methods can ensure equitable protection across diverse demographic groups, especially since they can be used to preserve the privacy of vulnerable populations. To address these gaps, this paper introduces a comprehensive framework, named FairDeFace, designed to assess the adversarial robustness and fairness of face obfuscation methods. The framework introduces a set of modules encompassing benchmark datasets, face detection and recognition algorithms, adversarial models, utility detection models, and fairness metrics. FairDeFace serves as a versatile platform into which any face obfuscation method can be integrated, allowing for rigorous testing and comparison against other state-of-the-art methods. In its current implementation, FairDeFace incorporates six attacks and several privacy, utility, and fairness metrics. Using FairDeFace, and by conducting more than 500 experiments, we evaluated and compared the adversarial robustness of seven face obfuscation methods. This extensive analysis yielded many interesting findings, both about the degree of robustness of existing methods and about their biases against some gender or racial groups. FairDeFace also visualizes the focus areas of both obfuscation and verification attacks, showing not only which facial regions are most altered during obfuscation for some demographics, but also, by comparing the focus areas of obfuscation and verification, why obfuscation fails for them.
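The abstract mentions fairness metrics that compare how well an obfuscation method protects different demographic groups. A minimal sketch of one such check, assuming per-face attack outcomes are available (the function name, groups, and data below are invented for illustration, not taken from FairDeFace):

```python
# Hypothetical demographic-parity-style fairness check for an obfuscation
# method: compare per-group "protection rates" (the fraction of obfuscated
# faces an attacker fails to re-identify). All data here is invented.
from collections import defaultdict

def protection_rate_gap(results):
    """results: list of (group, protected: bool) pairs.
    Returns ({group: protection_rate}, max_gap_between_groups)."""
    totals = defaultdict(int)
    protected = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        protected[group] += ok
    rates = {g: protected[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# 90% of group_a faces stay protected, but only 60% of group_b faces do.
results = [("group_a", True)] * 9 + [("group_a", False)] * 1 \
        + [("group_b", True)] * 6 + [("group_b", False)] * 4
rates, gap = protection_rate_gap(results)
print(rates, round(gap, 2))  # gap of 0.3 between the two groups
```

A large gap indicates the method protects one group noticeably better than another, which is exactly the kind of bias the paper reports.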
A Sociotechnical Lens for Evaluating Computer Vision Models: A Case Study on Detecting and Reasoning about Gender and Emotion
Luo, Sha, Kim, Sang Jung, Duan, Zening, Chen, Kaiping
In the evolving landscape of computer vision (CV) technologies, the automatic detection and interpretation of gender and emotion in images is a critical area of study. This paper investigates social biases in CV models, emphasizing the limitations of traditional evaluation metrics such as precision, recall, and accuracy. These metrics often fall short in capturing the complexities of gender and emotion, which are fluid and culturally nuanced constructs. Our study proposes a sociotechnical framework for evaluating CV models, incorporating both technical performance measures and considerations of social fairness. Using a dataset of 5,570 images related to vaccination and climate change, we empirically compared the performance of various CV models, including traditional models like DeepFace and FER, and generative models like GPT-4 Vision. Our analysis involved manually validating the gender and emotional expressions in a subset of images to serve as benchmarks. Our findings reveal that while GPT-4 Vision outperforms other models in technical accuracy for gender classification, it exhibits discriminatory biases, particularly in response to transgender and non-binary personas. Furthermore, the model's emotion detection skews heavily toward positive emotions, with a notable bias toward associating female images with happiness, especially when prompted by male personas. These findings underscore the necessity of developing more comprehensive evaluation criteria that address both validity and discriminatory biases in CV models. Our proposed framework provides guidelines for researchers to critically assess CV tools, ensuring their application in communication research is both ethical and effective. The significant contribution of this study lies in its emphasis on a sociotechnical approach, advocating for CV technologies that support social good and mitigate biases rather than perpetuate them.
Build a Deep Face Detection Model with Python and Tensorflow
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion, and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, and Dlib. Experiments show that human beings achieve 97.53% accuracy on facial recognition tasks, and these models have already reached and surpassed that level. The easiest way to install deepface is from PyPI; this installs the library itself along with its prerequisites.
How Deepface is Changing the AI Generator Game with its Own Social Network
In the world of generative AI, many advancements are being made in conversational AI (ChatGPT) and image generation (DALL-E, Stable Diffusion, MidJourney). One particularly useful application of image generation is "model fine-tuning," which lets people train an AI model to generate images based on their own face. This has given rise to the concept of "AI avatars," or AI-generated artistic images of people. So far, the classic model has had users pay for a batch of randomly generated AI avatars, with all creative power belonging to the AI generator itself. The app "Deepface" is revolutionizing the world of generative AI by introducing a social-network aspect to the creation of AI avatars.
How to Find False Negatives in Facial Recognition with Neo4j - Sefik Ilkin Serengil
Current cutting-edge facial recognition models offer human-level accuracy. Still, we can improve their accuracy further if we represent classifications in a graph. In this post, we are going to find false negative classifications of facial recognition models with the Neo4j graph database. We recently focused on detecting false positives in facial recognition with Neo4j; false positives are misclassifications that verify different persons as the same person.
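The post itself uses Neo4j, but the underlying graph idea can be sketched in plain Python (all image names below are invented): within a connected component of "verified as same person" edges, any pair the model did not directly verify is a candidate false negative.

```python
# Plain-Python sketch of the graph idea: if A matches B and B matches C but
# the model never matched A and C, the pair (A, C) is a candidate false
# negative worth re-examining.
from itertools import combinations

def false_negative_candidates(verified_pairs):
    # Build an adjacency map from the verified "same person" pairs.
    adj = {}
    for a, b in verified_pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    # Find connected components with an iterative DFS.
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    # Pairs inside a component that lack a direct verified edge.
    edges = {frozenset(p) for p in verified_pairs}
    return [tuple(sorted(p)) for comp in components
            for p in combinations(sorted(comp), 2)
            if frozenset(p) not in edges]

print(false_negative_candidates([("img1", "img2"), ("img2", "img3")]))
# ('img1', 'img3') was never directly verified -> candidate false negative
```

In Neo4j the same query would follow paths of verification relationships between nodes that lack a direct edge; the Python version just makes the logic explicit.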
Decide Whom Your Child Looks Like with Facial Recognition: Mommy or Daddy? - Sefik Ilkin Serengil
Parents often discuss whom their child looks like, but such discussions rarely settle the question. Luckily, we now have very powerful facial recognition technology to learn the real, unbiased answer. In this post, we are going to use deepface to decide which parent a child looks more like. We normally use facial recognition technology to verify whether face pairs are the same person or different persons. Face pairs are represented as multi-dimensional vectors by facial recognition models such as FaceNet.
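The comparison behind this idea can be sketched directly: a model such as FaceNet maps each face to a vector, and two faces count as "more similar" when the cosine distance between their vectors is smaller. The vectors and threshold below are invented for illustration, not real embeddings:

```python
# Sketch of embedding comparison: whichever parent's embedding is closer to
# the child's embedding (by cosine distance) is the better "match".
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def same_person(emb1, emb2, threshold=0.40):  # hypothetical threshold
    return cosine_distance(emb1, emb2) < threshold

child = [0.1, 0.9, 0.3]
mom   = [0.1, 0.8, 0.4]   # direction close to the child's vector
dad   = [0.9, 0.1, 0.2]
print(cosine_distance(child, mom) < cosine_distance(child, dad))  # True
```

Real embeddings have hundreds of dimensions, and each model pairs its embeddings with its own tuned distance threshold, but the decision rule is the same.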
Late Fusion with Triplet Margin Objective for Multimodal Ideology Prediction and Analysis
Qiu, Changyuan, Wu, Winston, Zhang, Xinliang Frederick, Wang, Lu
Prior work on ideology prediction has largely focused on single modalities, i.e., text or images. In this work, we introduce the task of multimodal ideology prediction, where a model predicts binary or five-point scale ideological leanings, given a text-image pair with political content. We first collect five new large-scale datasets with English documents and images along with their ideological leanings, covering news articles from a wide range of US mainstream media and social media posts from Reddit and Twitter. We conduct in-depth analyses of news articles and reveal differences in image content and usage across the political spectrum. Furthermore, we perform extensive experiments and ablation studies, demonstrating the effectiveness of targeted pretraining objectives on different model components. Our best-performing model, a late-fusion architecture pretrained with a triplet objective over multimodal content, outperforms the state-of-the-art text-only model by almost 4% and a strong multimodal baseline with no pretraining by over 3%.
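The triplet objective mentioned in the abstract can be sketched in a few lines (embeddings and margin below are invented; real training uses learned multimodal representations): the loss pushes an anchor's embedding closer to a positive example than to a negative one by at least a margin.

```python
# Minimal sketch of a triplet margin objective: the loss is zero once the
# anchor is closer to the positive than to the negative by at least `margin`.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

anchor, positive, negative = [0.0, 0.0], [0.1, 0.0], [3.0, 0.0]
print(triplet_margin_loss(anchor, positive, negative))  # 0.0: already separated
```

During pretraining, minimizing this loss over many (anchor, positive, negative) triplets shapes the embedding space so that ideologically similar text-image pairs cluster together.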
Deepface Face recognition
DeepFace is one of the most popular open-source facial recognition libraries. Facial recognition has been a hot topic for several decades, and while different facial recognition libraries are available, DeepFace has become widely popular and is used in numerous face recognition applications. DeepFace is a lightweight face recognition and facial attribute analysis library for Python. The open-sourced DeepFace library includes leading-edge AI models for face recognition and automatically handles all facial recognition procedures in the background.
Council Post: Artificial Intelligence: Investment In Innovation Is A Key To Success
I lead digital acceleration for companies across all industries. The Encyclopedia Britannica defines artificial intelligence (AI) as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." In simple terms, AI allows computers to perform tasks generally done by humans. While much is said about what AI is, less is said about what exactly it can achieve for your company. In addition to automating workflows and tasks, AI is fulfilling functions of great importance in the daily processes of some of the best-valued companies worldwide.
How to Detect Emotions in Images using Python
One of the easiest, and yet also most effective, ways of analyzing how people feel is to look at their facial expressions. Most of the time, our face best describes how we feel in a particular moment. Framed this way, emotion recognition is a multiclass classification problem: we analyze a person's face and assign it to a class, where each class represents a particular emotion. In Python, we can use the DeepFace and FER libraries to detect emotions in images.
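The multiclass decision itself is simple: given one score per emotion class, as emotion analyzers typically report, the predicted emotion is the class with the highest score. The scores below are invented for illustration:

```python
# Sketch of the multiclass decision: pick the emotion class with the
# highest score from a per-class score dictionary.
def dominant_emotion(scores):
    return max(scores, key=scores.get)

scores = {"angry": 1.2, "happy": 83.5, "sad": 4.1, "neutral": 11.2}
print(dominant_emotion(scores))  # happy
```

The hard part, of course, is producing good per-class scores from pixels, which is what the trained models inside DeepFace and FER do.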