An unnerving new video that appears to show Steve Buscemi's face seamlessly molded onto Jennifer Lawrence's body is yet another example of the worrying advancement of 'deepfake' videos. The clip will sound familiar to anyone who remembers Lawrence's speech backstage at the 2016 Golden Globes – but the words are instead coming out of Buscemi's mouth. Horrified social media viewers have been sharing the clip of 'Jennifer Buscemi' across the internet this week, with many calling it the stuff of nightmares. The clip was first posted by Reddit user VillainGuy at the beginning of January and has since been shared thousands of times.
The fight against videos altered by the use of artificial intelligence just got a new ally. According to researchers at UC Berkeley and the University of Southern California, a new algorithm can help spot whether a video has been manipulated via a process known as 'deepfaking.' Counter-intuitively, the tool that scientists say will aid them in their crusade against faked videos happens to be the very same one that helps make the videos in the first place: artificial intelligence. Deepfakes are so named because they utilize deep learning, a form of artificial intelligence, to create fake videos.
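The core deep-learning trick behind face-swap deepfakes is to train one shared encoder (which captures pose and expression) alongside a separate decoder for each identity; swapping a face means encoding one person's frame and decoding it with the other person's decoder. The sketch below is a deliberately simplified linear, closed-form analogue of that architecture, using synthetic vectors in place of images and least-squares fits in place of trained neural networks; all the names (`Enc`, `Dec_a`, `Dec_b`) are illustrative, not from any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

d_pose, d_face, n = 4, 16, 200
# A shared "pose/expression" factor drives how both identities' faces look.
Z = rng.normal(size=(n, d_pose))
Wa = rng.normal(size=(d_pose, d_face))  # how identity A renders a given pose
Wb = rng.normal(size=(d_pose, d_face))  # how identity B renders the same pose
A = Z @ Wa  # synthetic "faces" of identity A
B = Z @ Wb  # synthetic "faces" of identity B

# Shared encoder: map faces of BOTH identities to the identity-agnostic pose
# latent (fit jointly, mimicking the shared encoder in deepfake training).
X = np.vstack([A, B])
Zs = np.vstack([Z, Z])
Enc, *_ = np.linalg.lstsq(X, Zs, rcond=None)

# Per-identity decoders: pose latent -> that identity's face.
Dec_a, *_ = np.linalg.lstsq(A @ Enc, A, rcond=None)
Dec_b, *_ = np.linalg.lstsq(B @ Enc, B, rcond=None)

# The swap: take an unseen face of A, encode its pose, decode as identity B.
z_new = rng.normal(size=(1, d_pose))
face_a = z_new @ Wa
fake_b = face_a @ Enc @ Dec_b
true_b = z_new @ Wb  # what B would genuinely look like in that pose
print(np.allclose(fake_b, true_b, atol=1e-6))
```

In this linear toy, the swap is exact: the fabricated frame matches what identity B would genuinely produce in that pose. Real deepfakes replace the least-squares fits with convolutional autoencoders trained on thousands of frames, but the encode-with-shared-weights, decode-with-the-target's-weights structure is the same.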
How do you do, fellow kids? Steve Buscemi had some thoughts about the latest viral "deepfake" video that's been floating around. You know the one: It's Steve Buscemi's face creepily pasted onto Jennifer Lawrence's body. Speaking to Stephen Colbert on Wednesday night, a befuddled Buscemi deadpanned, "I've never looked better." He added, "It makes me sad that somebody spent that much time on that."
In the modern age of digitalization, the world is always eager to welcome new technologies that offer people unprecedented advantages and make modern life a bit easier. In recent years, as more and more enterprises ride the wave of digitalization and integrate technologies such as cloud computing into their digital infrastructure, the use of Artificial Intelligence (AI) and Machine Learning (ML) has risen in the cybersecurity world as a staple of enterprise security in an increasingly complex threat landscape. Unfortunately, however, AI technology has often been exploited against enterprises, with statistics depicting a bleak picture as more and more cyber-criminals turn to AI to launch increasingly sophisticated cyberattacks. One such dark side of AI reveals itself in "deepfakes." If you've been following even the slightest bit of cybersecurity news, chances are you're familiar with the term.
Deepfake videos could be commonplace across the media and online platforms within six months, according to a leading expert. The videos are designed to look completely real, showing people doing things they never did. They are created with complex computing and artificial intelligence, and have recently caused outrage. Moving images can be created from just a single image of a person, and US politician Nancy Pelosi, Facebook founder Mark Zuckerberg and even the Mona Lisa have already appeared in convincing clips. The video that kicked off the concern last month was a doctored clip of Nancy Pelosi, the speaker of the US House of Representatives.