Doctored videos, or deepfakes, have been one of the key weapons used in propaganda battles for quite some time now. Donald Trump taunting Belgium for remaining in the Paris climate agreement, David Beckham speaking fluently in nine languages, Mao Zedong singing 'I Will Survive', or Jeff Bezos and Elon Musk in a pilot episode of Star Trek… all these videos have gone viral despite being fake, or rather because they were deepfakes. Last year, Marco Rubio, the Republican senator from Florida, said deepfakes could be as potent as nuclear weapons in waging war on a democracy. "In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply," Forbes quoted him as saying.
While some forms of media are clearly edited, other alterations can be harder to spot. You may have heard the term "deepfake" recently. It was coined in 2017 to describe videos and images that use deep learning algorithms to produce convincingly realistic fabrications.
Deepfakes have started to appear everywhere – from viral celebrity face swaps to impersonations of political leaders. Millions got their first taste of the technology when they saw former US president Barack Obama using an expletive to describe then-president Donald Trump, or actor Bill Hader shape-shifting on a late-night talk show. Earlier this week, social media went into a frenzy after deepfakes surfaced of actor Tom Cruise in a series of TikTok videos that appear to show him doing a magic trick and playing golf, all with a smoothness that was unsettlingly realistic. As one Twitter user put it: "This isn't even a super high quality deepfake and I'm willing to bet that it could fool most people. Now imagine the quality of deepfake a government agency could produce."
When compared to unimodal systems, multimodal biometric systems have several advantages, including a lower error rate, higher accuracy, and larger population coverage. However, multimodal systems face increased demands for integrity and privacy because they must store multiple biometric traits associated with each user. In this paper, we present a deep learning framework for feature-level fusion that generates a secure multimodal template from each user's face and iris biometrics. We integrate a deep hashing (binarization) technique into the fusion architecture to generate a robust binary multimodal shared latent representation. Further, we employ a hybrid secure architecture by combining cancelable biometrics with secure sketch techniques and integrate it with the deep hashing framework, which makes it computationally prohibitive to forge a combination of multiple biometrics that passes authentication. The efficacy of the proposed approach is demonstrated on a multimodal database of face and iris, where the matching performance improves due to the fusion of multiple biometrics. Furthermore, the proposed approach provides cancelability and unlinkability of the templates along with improved privacy of the biometric data. Additionally, we test the proposed hashing function for an image retrieval application using a benchmark dataset. The main goal of this paper is to develop a method for integrating multimodal fusion, deep hashing, and biometric security, with an emphasis on structural data from modalities like face and iris. The proposed approach is not a general biometric security framework applicable to all biometric modalities; further research is needed to extend it to other unconstrained biometric modalities.
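The core pipeline the abstract describes, feature-level fusion of two modality embeddings followed by hashing into a binary template, can be sketched in a few lines. This is a hypothetical toy illustration: the embeddings are random vectors, and the projection matrix `W` stands in for the trained deep-hashing network, which in the actual paper is learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_hash(face_emb, iris_emb, W):
    """Feature-level fusion followed by binarization (toy sketch).

    Concatenate the two modality embeddings, project them with W
    (a stand-in for the trained deep-hashing layers), and take the
    sign to obtain a compact binary template.
    """
    fused = np.concatenate([face_emb, iris_emb])   # feature-level fusion
    projected = np.tanh(W @ fused)                 # bounded activations
    return (projected > 0).astype(np.uint8)        # binary template

def hamming_distance(a, b):
    return int(np.sum(a != b))

# Toy 128-d face and 64-d iris embeddings for two users.
face_a, iris_a = rng.standard_normal(128), rng.standard_normal(64)
face_b, iris_b = rng.standard_normal(128), rng.standard_normal(64)

W = rng.standard_normal((64, 192))  # 64-bit template from a 192-d fused vector

t_a  = fuse_and_hash(face_a, iris_a, W)
t_a2 = fuse_and_hash(face_a + 0.01 * rng.standard_normal(128),
                     iris_a + 0.01 * rng.standard_normal(64), W)  # noisy re-capture
t_b  = fuse_and_hash(face_b, iris_b, W)

# A genuine (slightly noisy) re-capture of user A should land much
# closer in Hamming distance than a different user's template.
print(hamming_distance(t_a, t_a2), hamming_distance(t_a, t_b))
```

Matching then reduces to thresholding a Hamming distance over binary codes, which is what makes the templates cheap to store and compare; the cancelability and secure-sketch layers described in the abstract would sit on top of this and are not sketched here.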
Media manipulation through images and videos has been around for decades. For example, during WWII Mussolini released a propaganda image of himself on a horse with his horse handler edited out; the goal was to make himself seem more impressive and powerful. These tricks can have significant impacts given the scale at which such images are seen, especially in the internet era. DARPA has an entire program, Media Forensics (MediFor), dedicated to developing methods for detecting manipulated media.
In 2018, a big fan of Nicolas Cage showed us what The Fellowship of the Ring would look like if Cage starred as Frodo, Aragorn, Gimli, and Legolas. The technology he used was deepfake, a type of application that uses artificial intelligence algorithms to manipulate videos. Deepfakes are best known for their capability to swap actors' faces from one video to another. They first appeared in late 2017 and quickly rose to notoriety after they were used to modify adult videos to feature the faces of Hollywood actors and politicians. In the past couple of years, deepfakes have raised serious concern about a new wave of AI-doctored videos that can spread fake news and enable forgers and scammers.
Encountering altered videos and photoshopped images is almost a rite of passage on the internet. It's rare these days that you'd visit social media and not come across some form of edited content -- whether that be a simple selfie with a filter, a highly embellished meme or a video edited to add a soundtrack or enhance certain elements. But while some forms of media are obviously edited, other alterations may be harder to spot. You may have heard the term "deepfake" in recent years -- it first came about in 2017 to describe videos and images that use deep learning algorithms to fabricate realistic-looking media. For example, take the deepfake in which former president Richard Nixon delivers the contingency "moon disaster" speech, as if the Apollo 11 crew had crashed into the lunar surface.
The word deepfake combines the terms "deep learning" and "fake," and describes a form of artificial intelligence. In simple terms, deepfakes are falsified videos made by means of deep learning, said Paul Barrett, adjunct professor of law at New York University. Deep learning is "a subset of AI," and refers to arrangements of algorithms that can learn and make intelligent decisions on their own. More specifically, deepfake refers to manipulated videos, or other digital representations produced by sophisticated artificial intelligence, whose fabricated images and sounds appear to be real. The danger is that "the technology can be used to make people believe something is real when it is not," said Peter Singer, cybersecurity and defense-focused strategist and senior fellow at the New America think tank.
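The face-swapping technique behind most deepfake videos commonly uses one shared encoder and a separate decoder per identity: the encoder learns a latent code for pose and expression, and decoding a frame of person A with person B's decoder renders B's face in A's pose. The sketch below only shows that dataflow; the networks are untrained random linear maps here, whereas real pipelines train deep convolutional networks on many frames of each person.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dimensions: a flattened 8x8 grayscale face patch and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder, plus one decoder per identity. Random weights here,
# purely to illustrate the architecture's shape.
encoder   = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    # Shared latent code: after training, this captures pose and expression.
    return encoder @ face

def swap_to_b(face_of_a):
    """Encode a frame of person A, then decode with B's decoder.

    Because the encoder is shared, the latent code carries A's pose and
    expression, and B's decoder re-renders them with B's appearance.
    """
    return decoder_b @ encode(face_of_a)

frame_a = rng.standard_normal(FACE_DIM)  # stand-in for one video frame of A
fake_b  = swap_to_b(frame_a)             # face-sized output from B's decoder
```

Running this frame by frame over a whole video, with trained networks and blending of the rendered face back into each frame, is what yields the seamless swaps described above.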
Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content. Chances are you've seen a deepfake; Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of the computer-generated replications. A deepfake is a video or an audio clip where deep learning models create versions of people saying and doing things that have never actually happened. A good deepfake can chip away at our ability to discern fact from fiction, testing whether seeing is really believing.
Last week at the Black Hat cybersecurity conference in Las Vegas, the Democratic National Committee tried to raise awareness of the dangers of AI-doctored videos by displaying a deepfaked video of DNC Chair Tom Perez. Deepfakes are videos that have been manipulated, using deep learning tools, to superimpose a person's face onto a video of someone else. As the 2020 presidential election draws near, there's increasing concern over the potential threats deepfakes pose to the democratic process. In June, the U.S. House Permanent Select Committee on Intelligence held a hearing to discuss the threats of deepfakes and other AI-manipulated media. But there's doubt over whether tech companies are ready to deal with deepfakes.