faceswap
Detecting Facial Image Manipulations with Multi-Layer CNN Models
Montejano, Alejandro Marco, Perez, Angela Sanchez, Barrachina, Javier, Ortiz-Perez, David, Benavent-Lledo, Manuel, Garcia-Rodriguez, Jose
The rapid evolution of digital image manipulation techniques poses significant challenges for content verification, with models such as Stable Diffusion and Midjourney producing highly realistic, yet synthetic, images that can deceive human perception. This research develops and evaluates convolutional neural networks (CNNs) specifically tailored for the detection of these manipulated images. The study implements a comparative analysis of three progressively complex CNN architectures, assessing their ability to classify and localize manipulations across various facial image modifications. Regularization and optimization techniques were systematically incorporated to improve feature extraction and performance. The results indicate that the proposed models achieve an accuracy of up to 76% in distinguishing manipulated images from genuine ones, surpassing traditional approaches. This research not only highlights the potential of CNNs in enhancing the robustness of digital media verification tools, but also provides insights into effective architectural adaptations and training strategies for low-computation environments. Future work will build on these findings by extending the architectures to handle more diverse manipulation techniques and integrating multi-modal data for improved detection capabilities.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
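The abstract above centers on convolutional feature extraction. As a minimal sketch of the 2D convolution operation underlying such CNN layers (all names here are illustrative, not taken from the paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel responds where pixel intensity changes,
# i.e. at vertical edges in the image:
edge_response = conv2d_valid(np.array([[0., 0., 1., 1.],
                                       [0., 0., 1., 1.]]),
                             np.array([[-1., 1.]]))
```

Real detection models stack many such learned kernels with nonlinearities and pooling; this sketch only illustrates the single primitive they are built from.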
Hindi audio-video-Deepfake (HAV-DF): A Hindi language-based Audio-video Deepfake Dataset
Kaur, Sukhandeep, Buhari, Mubashir, Khandelwal, Naman, Tyagi, Priyansh, Sharma, Kiran
Deepfakes offer great potential for innovation and creativity, but they also pose significant risks to privacy, trust, and security. With a vast Hindi-speaking population, India is particularly vulnerable to deepfake-driven misinformation campaigns. Fake videos or speeches in Hindi can have an enormous impact on rural and semi-urban communities, where digital literacy tends to be lower and people are more inclined to trust video content. The development of effective frameworks and detection tools to combat deepfake misuse requires high-quality, diverse, and extensive datasets. Existing popular datasets such as FF-DF (FaceForensics++) and DFDC (DeepFake Detection Challenge) are based on the English language. Hence, this paper aims to create the first Hindi deepfake dataset, named "Hindi audio-video-Deepfake" (HAV-DF). The dataset has been generated using faceswap, lip-syncing, and voice cloning methods. This multi-step process allows us to create a rich, varied dataset that captures the nuances of Hindi speech and facial expressions, providing a robust foundation for training and evaluating deepfake detection models in a Hindi language context. It is unique of its kind, as all previous datasets contain either deepfake videos or synthesized audio; this dataset can be used to train a detector on both deepfake video and audio. Notably, the newly introduced HAV-DF dataset yields lower detection accuracies across existing detection methods such as Headpose and Xception-c40, compared to other well-known datasets FF-DF and DFDC. This trend suggests that the HAV-DF dataset is harder to detect, possibly due to its focus on Hindi language content and diverse manipulation techniques. The HAV-DF dataset fills the gap in Hindi-specific deepfake datasets, aiding multilingual deepfake detection development.
- North America > United States (0.46)
- Asia > India > Haryana (0.04)
- Asia > Middle East > UAE (0.04)
Reddit Bans 'SFW' Deepfake Group - Channel969
Over the weekend, apparently sometime around Sunday, Reddit banned the nominally 'SFW' (Safe For Work) deepfakes community titled r/deepfakesfw. The subreddit was one of several early responses to the social media giant's prompt deletion of the original, AI-porn-ridden r/deepfakes sub in 2018. The (comparatively) boilerplate message that now greets (archive snapshot taken Monday, June 13) anyone visiting the sub explains that it 'was banned due to a violation of Reddit's rules against involuntary pornography'. The r/deepfakesfw sub was not frequently archived by popular preservation platforms, but the most recent WayBack Machine snapshot, taken around ten days ago (on 3rd June 2022), indicates that the sub had 3,095 readers at that time. That is actually a higher number of subscribers than r/DeepFakesSFW (note the extra 's'), which currently has 2,827 readers (archive snapshot taken Monday, June 13, 2022).
- Information Technology > Security & Privacy (0.99)
- Media > News (0.86)
A New and Simpler Deepfake Method That Outperforms Prior Approaches
A collaboration between a Chinese AI research group and US-based researchers has developed what may be the first real innovation in deepfakes technology since the phenomenon emerged four years ago. The new method can perform faceswaps that outperform all other existing frameworks on standard perceptual tests, without needing to exhaustively gather and curate large dedicated datasets and train them for up to a week for just a single identity. For the examples presented in the new paper, models were trained on the entirety of two popular celebrity datasets, on one NVIDIA Tesla P40 GPU for about three days. In this sample from a video in supplementary materials provided by one of the authors of the new paper, Scarlett Johansson's face is transferred onto the source video. CihaNet removes the problem of edge-masking when performing a swap by forming and enacting deeper relationships between the source and target identities, meaning an end to 'obvious borders' and other superimposition glitches that occur in traditional deepfake approaches.
- North America > United States > Massachusetts > Hampshire County > Amherst (0.05)
- Asia > China > Sichuan Province > Chengdu (0.05)
An Experimental Evaluation on Deepfake Detection using Deep Face Recognition
Ramachandran, Sreeraj, Nadimpalli, Aakash Varma, Rattani, Ajita
Significant advances in deep learning have obtained hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, also known as deepfakes, posing a threat to privacy, democracy, and national security. Most current deepfake detection methods treat the task as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods are based on detecting visual artifacts, or temporal or color inconsistencies, produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics++ deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. Reported results suggest a maximum Area Under Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is lower by 16.6% compared to the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further, on the FaceForensics++ dataset, an AUC of 0.99 and an EER of 2.04% were obtained. The use of biometric facial recognition technology has the advantage of bypassing the need for a large amount of fake data for model training and obtaining better generalizability to evolving deepfake creation techniques.
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- Asia > Japan > Honshū > Chūbu > Nagano Prefecture > Nagano (0.04)
- Africa > Central African Republic > Ombella-M'Poko > Bimbo (0.04)
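The abstract above reports performance as AUC and Equal Error Rate (EER). As a reminder of what EER measures, here is a small, hypothetical sketch (not the authors' evaluation code) that finds the operating point where the false accept and false reject rates coincide:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Return the EER: the error rate at the threshold where the
    false accept rate (FAR) and false reject rate (FRR) are closest."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    best_gap, eer = np.inf, 1.0
    # Sweep every observed score as a candidate decision threshold.
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine samples wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

A lower EER means the two score distributions overlap less, which is why the paper's 7.1% on Celeb-DF indicates stronger separation than the two-class CNN baseline.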
Detection of face Manipulated Videos using Deep Learning
FaceForensics data was collected by the Visual Computing Group, an active research group in computer vision, computer graphics, and machine learning. This data contains 1000 pristine (real) videos that were selectively downloaded from YouTube such that all videos have clear face visibility (mostly videos like newsreaders reading the news). These pristine videos are manipulated using three state-of-the-art video manipulation techniques: DeepFakes, FaceSwap, and Face2Face. To understand more about the data, please refer to this paper. I have downloaded a total of 100 raw videos (49 real, 51 fake) covering all the categories, and these videos are extracted into images. To download and extract the images, please go through this GitHub page and read the instructions carefully. Before building any machine learning / deep learning models, we need to understand the data with some data analysis. Let's get an idea of how this data is organized:
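To make the "how is this organized" step concrete, here is a minimal sketch that tallies extracted frames per category. The FaceForensics++-style directory layout assumed in the docstring is hypothetical and should be adapted to wherever the frames were actually extracted:

```python
from collections import Counter
from pathlib import Path

def summarize_dataset(root):
    """Tally extracted frames per category under an assumed layout:

        root/original_sequences/<video_id>/<frame>.png
        root/manipulated_sequences/<method>/<video_id>/<frame>.png
    """
    root = Path(root)
    counts = Counter()
    for img in root.rglob("*.png"):
        parts = img.relative_to(root).parts
        if parts[0] == "original_sequences":
            counts["real"] += 1
        elif parts[0] == "manipulated_sequences":
            # Count per manipulation method, e.g. Deepfakes, FaceSwap, Face2Face
            counts[parts[1]] += 1
    return counts
```

Printing the resulting `Counter` gives a quick check that real frames and each manipulation method are represented in roughly the expected proportions before any model training starts.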
deepfakes/faceswap
FaceSwap is a tool that utilizes deep learning to recognize and swap faces in pictures and videos. Make sure you check out INSTALL.md before getting started. When faceswapping was first developed and published, the technology was groundbreaking: it was a huge step in AI development. It was also completely ignored outside of academia because the code was confusing and fragmentary. It required a thorough understanding of complicated AI techniques and took a lot of effort to figure out.