
Synthetic Media: How deepfakes could soon change our world


You may never have heard the term "synthetic media," more commonly known as "deepfakes," but our military, law enforcement and intelligence agencies certainly have. Deepfakes are hyper-realistic video and audio recordings that use artificial intelligence and "deep" learning to create "fake" content. The U.S. government has grown increasingly concerned about their potential to be used to spread disinformation and commit crimes. That's because the creators of deepfakes have the power to make people appear to say or do anything, at least on our screens. Most Americans have no idea how far the technology has come in just the last four years, or the danger, disruption and opportunities that come with it.

The impact of deepfakes: How do you know when a video is real?


In a world where seeing is increasingly no longer believing, experts are warning that society must take a multi-pronged approach to combat the potential harms of computer-generated media. As Bill Whitaker reports this week on 60 Minutes, artificial intelligence can manipulate faces and voices to make it look like someone said something they never said. The result is videos of things that never happened, called "deepfakes." Often, they look so real that people watching can't tell. Just this month, Justin Bieber was tricked by a series of deepfake videos on the social media platform TikTok that appeared to show Tom Cruise.

The societal threat is terrifying, but deepfakes needn't provoke deep pessimism - The EE


There can be no doubt that the ability of AI to create fake multimedia content that is utterly convincing to humans represents a real and present threat to society, says Tim Winchcomb, head of technology strategy in wireless and digital services at Cambridge Consultants. The democratisation of manipulation techniques means that YouTubers already aspire to Hollywood-grade visual effects, while malicious individuals across the world stand ready to weaponise their synthetic realities. Yet all is not lost: industry players are stepping up to meet the deepfakes challenge, convinced that a collaborative response will allow technology, and ultimately society, to prevail. The term "deepfakes" is a construct of "deep learning" (essentially multi-layered neural networks) and "fake," which of course refers to misleading and usually harmful content that purports to represent reality. It is particularly alarming that these bogus moving and still images, audio and written text can be created in real time.
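To make the "deep learning" half of the term concrete: "deep" simply means several stacked layers of a neural network, each transforming the data it receives. The toy sketch below (plain NumPy, illustrative only — real deepfake systems use far larger architectures such as autoencoders or GANs, and all names here are invented for this example) shows a tiny multi-layered forward pass that compresses an input and expands it back, the basic shape behind face-swapping encoders and decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    """One fully connected layer with a ReLU activation (toy example)."""
    w = rng.normal(scale=0.1, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return np.maximum(0.0, x @ w + b)

# "Deep" = several layers stacked; each reshapes the representation.
x = rng.normal(size=(1, 64))   # stand-in for a tiny flattened image patch
h1 = layer(x, 64, 32)          # encoder-like compression...
h2 = layer(h1, 32, 8)          # ...down to a small latent code
out = layer(h2, 8, 64)         # decoder-like expansion back to input size

print(out.shape)  # (1, 64)
```

In an actual deepfake pipeline, the weights would be trained on many images of the target face rather than drawn at random; this sketch only illustrates the layered structure the term refers to.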

Deepfakes and the 2020 US elections: what (did not) happen Artificial Intelligence

In retrospect, Nisos experts made the right forecast. However, this was a clear minority opinion. Before and after their report, dozens of politicians and institutions drew considerable attention to the approaching danger: 'imagine a scenario where, on the eve of next year's presidential election, the Democratic nominee appears in a video where he or she endorses President Trump. Now, imagine it the other way around.' (Sprangler, 2019). It is fair to say that deepfakes' high potential for disinformation was noticed long before these hypothetical consequences were evoked, mainly because they were revealed to be highly credible. Two examples: 'In an online quiz, 49 percent of people who visited our site said they incorrectly believed Nixon's synthetically altered face was real and 65 percent thought his voice was real' (Panetta et al., 2020), or 'Two-thirds of participants believed that one day it would be impossible to discern a real video from a fake one.'

Deepfakes in cyberattacks aren't coming. They're already here.


The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. In March, the FBI released a report declaring that malicious actors almost certainly will leverage "synthetic content" for cyber and foreign influence operations in the next 12-18 months. This synthetic content includes deepfakes: audio or video that is either wholly created or altered by artificial intelligence or machine learning to convincingly misrepresent someone as doing or saying something that was not actually done or said. We've all heard the story about the CEO whose voice was imitated convincingly enough to initiate a wire transfer of $243,000. Now, the constant Zoom meetings of the anywhere-workforce era have created a wealth of audio and video data that can be fed into a machine learning system to create a compelling duplicate.