'Deepfakes' Are Videos Designed to Trick You Into Thinking They're Real. But There's a Way to Detect Them

TIME - Tech

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that "seeing is believing." Most deepfakes are made by showing a computer algorithm many images of a person and then having it use what it saw to generate new face images. At the same time, the person's voice is synthesized, so the result both looks and sounds like the person said something new. One of the most famous deepfakes sounds a warning.
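In very rough terms, many face-swap deepfakes rely on an autoencoder with a shared encoder and one decoder per identity. The PyTorch sketch below illustrates that general idea only; the class names, the 64x64 image size and the training details are assumptions for illustration, not taken from any particular deepfake tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind many
# face-swap deepfakes. All class and variable names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

# Training (sketched): minimize reconstruction loss for each identity separately,
# sharing the encoder so it learns identity-agnostic pose and expression features.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of person A's face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode frames of person A, then decode them with person B's
# decoder, yielding person B's face with person A's pose and expression.
fake_b = decoder_b(encoder(faces_a))
```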


Examining a video's changes over time helps flag deepfakes

#artificialintelligence

It used to be that only Hollywood production companies with deep pockets and teams of skilled artists and technicians could make deepfake videos, realistic fabrications appearing to show people doing and saying things they never actually did or said. Not anymore -- software freely available online lets anyone with a computer and some time on their hands create convincing fake videos. Whether used for personal revenge, to harass celebrities or to influence public opinion, deepfakes render untrue the age-old axiom that "seeing is believing." My research team and I at the University of Southern California Information Sciences Institute are developing ways to tell the difference between realistic-looking fakes and genuine videos that show actual events as they happened. Our recent research has found a new and apparently more accurate way to detect deepfake videos.
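The article does not spell out the USC ISI detector's internals, but one generic way to exploit "changes over time" is to feed per-frame face features into a recurrent classifier and score the whole clip. The sketch below is a hypothetical illustration of that idea, not the authors' model; the feature dimension, architecture and clip length are assumptions.

```python
# Generic temporal-consistency detector sketch: per-frame features -> LSTM ->
# a single "fake" probability per clip. Dimensions are illustrative only.
import torch
import torch.nn as nn

class TemporalDeepfakeDetector(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        # Per-frame features would normally come from a pretrained CNN backbone;
        # here each frame is assumed to already be a feat_dim vector.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, frame_features):
        # frame_features: (batch, num_frames, feat_dim)
        _, (h_n, _) = self.lstm(frame_features)
        return torch.sigmoid(self.classifier(h_n[-1]))  # probability of "fake"

detector = TemporalDeepfakeDetector()
clips = torch.rand(2, 30, 512)   # two 30-frame clips of per-frame features
print(detector(clips))           # two scores in [0, 1]
```

The intuition is that frame-by-frame forgeries often leave small inconsistencies from one frame to the next that a sequence model can pick up even when each individual frame looks plausible.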


Detecting 'deepfake' videos in the blink of an eye

#artificialintelligence

A new form of misinformation is poised to spread through online communities as the 2018 midterm election campaigns heat up. Called "deepfakes" after the pseudonymous online account that popularized the technique – which may have chosen its name because the process uses a technical method called "deep learning" – these fake videos look very realistic. So far, people have used deepfake videos in pornography and satire to make it appear that famous people are doing things they wouldn't normally. But it's almost certain deepfakes will appear during the campaign season, purporting to depict candidates saying things or going places the real candidate wouldn't. Because these techniques are so new, people are having trouble telling the difference between real videos and deepfakes.
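The headline alludes to blink patterns as a telltale sign: generators trained mostly on open-eyed photos tend to produce faces that blink unnaturally rarely. A common way to quantify blinking is the eye aspect ratio (EAR) computed from eye landmarks. The sketch below assumes the six standard eye-contour landmarks per frame are already available from some facial-landmark detector, and the 0.2 "closed" threshold is an illustrative value, not one taken from the article.

```python
# Eye-aspect-ratio (EAR) blink counting sketch. Landmark source and threshold
# are assumptions for illustration.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks ordered around the eye contour."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(per_frame_eyes, closed_threshold=0.2):
    """Count transitions from open to closed across a sequence of frames."""
    blinks, eye_was_open = 0, True
    for eye in per_frame_eyes:
        closed = eye_aspect_ratio(eye) < closed_threshold
        if closed and eye_was_open:
            blinks += 1
        eye_was_open = not closed
    return blinks

# People typically blink every few seconds on camera, so an implausibly low
# blink count over a long clip is one warning sign that a video may be fake.
```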


Can AI Detect Deepfakes To Help Ensure Integrity of U.S. 2020 Elections?

IEEE Spectrum Robotics

A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life, a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen upon news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technologies.


Using The Power Of Blockchain To Combat Deepfake Videos - Liwaiwai

#artificialintelligence

Besides "fake news", there's been another term which has caused many debates around it: deepfake. These seemingly realistic videos that are, in fact, manipulated have become more problematic lately, casting a shadow on the trust people have in media. Can blockchain and artificial intelligence (AI) be used to combat deepfake, to restore the public confidence back into the system? What is "deepfake", a term which combines'deep learning' and'fake'? According to Wikipedia, it's a technique for human image synthesis based on AI.