The Weekly Deepfakes -- Believe at Your Own Risk

#artificialintelligence

Watch -- very closely -- as an ambitious group of A.I. engineers and machine-learning specialists try to mimic reality with such accuracy that you may not be able to tell what's real from what's not. If successful, they'll have created the ultimate deepfake, an ultrarealistic video that makes people appear to say and do things they haven't. Experts warn it may only be a matter of time before someone creates a bogus video that's convincing enough to fool millions of people. Over several months, "The Weekly" embedded with a team of creative young engineers developing the perfect deepfake -- not to manipulate markets or game an election, but to warn the public about the dangers of technology meant to dupe them. The team picked one of the internet's most recognizable personalities, the comedian and podcaster Joe Rogan, who unwittingly provided the inspiration for the engineers' deepfake moonshot.


Detecting Deepfakes by Looking Closely Reveals a Way to Protect Against Them

#artificialintelligence

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that "seeing is believing." Most deepfakes are made by showing a computer algorithm many images of a person, and then having it use what it saw to generate new face images. At the same time, the person's voice is synthesized, so it both looks and sounds like the person has said something new. Some of my research group's earlier work allowed us to detect deepfake videos that did not include a person's normal amount of eye blinking – but the latest generation of deepfakes has adapted, so our research has continued to advance.
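The blink cue described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual method: it assumes six eye landmarks per frame (eye corners plus upper and lower lid points, a common layout produced by off-the-shelf face-landmark detectors) and uses the eye aspect ratio (EAR), which collapses toward zero when the eye closes. An unusually low blink count over a clip was the tell-tale sign in early deepfakes.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio from six (x, y) landmarks:
    p1/p4 are the eye corners, p2/p3 the upper lid, p6/p5 the lower lid.
    EAR is large for an open eye and drops sharply during a blink."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_count(ear_series, threshold=0.2):
    """Count blinks in a per-frame EAR series: each run of frames with
    EAR below the threshold is counted as one blink. The 0.2 threshold
    is an illustrative value, not a calibrated one."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

# Open eye: lids well apart -> EAR = (2 + 2) / (2 * 4) = 0.5
open_ear = eye_aspect_ratio((0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1))
# Closed eye: lids nearly touching -> EAR = (0.2 + 0.2) / 8 = 0.05
closed_ear = eye_aspect_ratio((0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1))
```

A real detector would compare the blink rate over many seconds of video against typical human rates (roughly 15–20 blinks per minute); as the article notes, newer deepfakes now blink convincingly, so this cue alone no longer suffices.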


California cracks down on political and pornographic deepfakes

#artificialintelligence

Deepfake videos can be fun, but not when it comes to politics and pornography. Now, the state of California is doing something about it with two new bills signed into law last week by Governor Gavin Newsom. The first makes it illegal to post any manipulated videos that could, for instance, replace a candidate's face or speech in order to discredit them, within 60 days of an election. The other will allow residents of the state to sue anyone who puts their image into a pornographic video using deepfake technology. Deepfake videos have become more convincing as of late, especially recent ones from Ctrl Shift Face that show comedian and actor Bill Hader's face replaced by Tom Cruise's.


'Deepfake' celebrity porn has crept back onto PornHub

#artificialintelligence

The furor around deepfakes, porn videos that use machine learning to convincingly edit celebrities into sex scenes, has largely died down since many hosting sites banned the clips months ago. But deepfakes are still out there, even on sites where they're not technically allowed. Popular streaming site PornHub, which classifies deepfakes as nonconsensual and theoretically doesn't permit them, still hosts dozens of the videos. BuzzFeed's Charlie Warzel wrote on Wednesday that he'd found more than 100 deepfake videos on PornHub, and they weren't particularly well-hidden. Searches like "deepfake" and "fake deeps" brought up dozens of clips.


Google Battles Controversial Deepfakes By Releasing Thousands Of Its Own Deepfakes

#artificialintelligence

How do you defeat "deepfakes"? According to Google, you develop more of them. Google just released a large, free database of deepfake videos to help researchers develop detection tools. Google collaborated with Jigsaw, a tech incubator founded by Google, and the FaceForensics benchmark program at the Technical University of Munich and the University Federico II of Naples. They worked with several paid actors to create hundreds of real videos and then used popular deepfake technologies to generate thousands of fake videos.