deepfake


AI Synthetic Media: What to expect and what it will mean

#artificialintelligence

AI learns from seen data to make predictions about unseen data. What is utterly remarkable is that prediction can underpin extraordinary creativity and mimicry. These developments have the potential to unleash an explosion of creativity at scale -- delivering content design and production tools into the hands of the mass market that have hitherto been available only to large corporations with hefty budgets. Even now -- when we are still in the infancy of AI media generation -- there are demos, apps and subscription-based services to faceswap individuals into movies (see Zao), turn rough sketches into photorealistic images (try the GauGAN demo here), convert one voice into another (see Respeecher), personalise marketing videos (try the Synthesia demo here), alter the age and emotion of faces in images (see Photoshop's new Neural Filters), generate face-synched videos of new or translated scripts (see Canny AI), play a video game with characters speaking any of 10 face-synched languages (see Cyberpunk 2077), and play a text-based adventure game with endless dialogue generated by AI (try out the free version of AI Dungeon here). Moreover, the same AI techniques will spawn new applications in a wide range of fields: advertising, architecture, interior design, gaming, song-writing, web design, education, even software development and pure mathematics -- in fact, anywhere structured or constrained creativity is key.



These are the AI risks we should be focusing on

#artificialintelligence

Since the dawn of the computer age, humans have viewed the approach of artificial intelligence (AI) with some degree of apprehension. Popular AI depictions often involve killer robots or all-knowing, all-seeing systems bent on destroying the human race. These sentiments have similarly pervaded the news media, which tends to greet breakthroughs in AI with more alarm or hype than measured analysis. In reality, the true concern should be whether these overly dramatized, dystopian visions pull our attention away from the more nuanced -- yet equally dangerous -- risks posed by the misuse of AI applications that are already available or being developed today. AI permeates our everyday lives, influencing which media we consume, what we buy, where and how we work, and more. AI technologies are sure to continue disrupting our world, from automating routine office tasks to solving urgent challenges like climate change and hunger.


How Deepfakes could help implant false memories in our minds

#artificialintelligence

The human brain is a complex, miraculous thing. As best we can tell, it's the epitome of biological evolution. But it doesn't come with any security software preinstalled. And that makes it ridiculously easy to hack. We like to imagine the human brain as a giant neural network that speaks its own language.


Explained: Why it is becoming more difficult to detect deepfake videos, and what the implications are

#artificialintelligence

Doctored videos, or deepfakes, have been one of the key weapons used in propaganda battles for quite some time now. Donald Trump taunting Belgium for remaining in the Paris climate agreement, David Beckham speaking fluently in nine languages, Mao Zedong singing 'I Will Survive', or Jeff Bezos and Elon Musk in a pilot episode of Star Trek -- all these videos went viral despite being fake, or precisely because they were deepfakes. Marco Rubio, the Republican senator from Florida, has said deepfakes are as potent as nuclear weapons in waging wars in a democracy. "In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply," Forbes quoted him as saying.


Pinaki Laskar on LinkedIn: #deepfake #neuralnetworks #AI

#artificialintelligence

AI Researcher, Cognitive Technologist, Inventor - AI Thinking, Think Chain Innovator - AIoT, XAI, Autonomous Cars, IIoT, Founder Fisheyebox, Spatial Computing Savant, Transformative Leader, Industry X.0 Practitioner. Deepfakes are an applied form of artificial imagination -- the artificial simulation of human imagination by special-purpose ML/DL or artificial #neuralnetworks. Is deepfake the future of content creation? Work by Kris McGuffie and Alex Newhouse, with examples of lies and conspiracy theories parroted by GPT-3, shows that OpenAI's GPT-3 language model is a leading deepfake #AI/ML system for stochastically parroting text data. Primed with data about QAnon, it produces deepfake news -- lies and conspiracy theories -- at mass scale. Will advanced deepfake #technology create a whole new kind of cybercrime? Cybercriminals and fraudsters will weaponise deepfake technology to commit all sorts of cybercrimes. Beyond fake news, such synthetic media threaten the spread of misinformation, the proliferation of fake political news on social media sites, distrust of reality, mass automation of creative and journalistic jobs, and a complete retreat into a machine-generated fantasy world.


Beware: Deepfake Videos Can Fool You with Fake Content

#artificialintelligence

Yes, these are amazing places. I'm sure you've used one at least once. Yet, while some types of media edits are obvious, others can be much harder to spot. You may have heard the term "deepfake videos" recently. The term emerged in 2017 to describe videos and images created with deep learning algorithms to look convincingly real.


FBI Issues Guidance on Identifying 'Deepfake' Content - Executive Gov

#artificialintelligence

The FBI has released guidance aimed at helping cybersecurity professionals and the general public identify "deepfake" content, which adversaries may use to sway public opinion. The FBI released the Private Industry Notification guidance on Wednesday in partnership with the Cybersecurity and Infrastructure Security Agency (CISA). According to the guidance, foreign actors are likely to use synthetic content, including deepfakes, in the coming months as part of influence campaigns and social engineering tactics. Deepfakes, produced with techniques such as generative adversarial networks (GANs), use artificial intelligence and machine learning to manipulate digital content for fraudulent purposes. The FBI expects malicious actors to use deepfakes to support spearphishing techniques and Business Identity Compromise attacks designed to imitate corporate personas and authority figures.
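The adversarial setup behind GANs can be illustrated with a toy example. The sketch below is purely illustrative (it is not from the FBI notice, and all names and numbers are invented): a one-parameter generator is pitted against a logistic-regression discriminator on 1-D data, and the two opposing losses that GAN training alternates between are computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: samples from a Gaussian the generator must imitate.
real = rng.normal(loc=4.0, scale=1.0, size=64)

# Generator: an affine map from noise z to a sample, g(z) = a*z + b.
a, b = 1.0, 0.0
z = rng.normal(size=64)
fake = a * z + b

# Discriminator: logistic regression, D(x) = sigmoid(w*x + c),
# trained to output 1 on real samples and 0 on fakes.
w, c = 0.1, 0.0
d_real = sigmoid(w * real + c)
d_fake = sigmoid(w * fake + c)

# Discriminator loss: binary cross-entropy on real-vs-fake labels.
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

# Generator loss (non-saturating form): reward fooling the discriminator.
g_loss = -np.mean(np.log(d_fake))
```

Training alternates gradient steps that lower `d_loss` and `g_loss` in turn; deepfake image generators follow the same scheme with deep convolutional networks in place of these scalar maps.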


How to spot deepfakes? Look at light reflection in the eyes

#artificialintelligence

University at Buffalo computer scientists have developed a tool that automatically identifies deepfake photos by analyzing light reflections in the eyes. The tool proved 94% effective with portrait-like photos in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing, to be held in June in Toronto, Canada. "The cornea is almost like a perfect semisphere and is very reflective," says the paper's lead author, Siwei Lyu, Ph.D., SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. "So, anything that is coming to the eye with light emitted from those sources will have an image on the cornea. The two eyes should have very similar reflective patterns because they're seeing the same thing."
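The core cue -- matching corneal highlights across the two eyes -- can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' actual code: it assumes two already-aligned grayscale eye crops and compares their specular highlights by intersection-over-union.

```python
import numpy as np

def highlight_mask(eye, frac=0.9):
    """Binary mask of the specular highlight: pixels near peak brightness."""
    return eye >= frac * eye.max()

def reflection_similarity(left_eye, right_eye):
    """IoU of the two highlight masks. Real portraits should score high;
    GAN composites often score low because the two reflections disagree."""
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return np.logical_and(a, b).sum() / union

# Synthetic demo: dark 8x8 "eye crops" with a single bright highlight.
left = np.full((8, 8), 0.1); left[2, 2] = 1.0          # highlight at (2, 2)
right = np.full((8, 8), 0.1); right[2, 2] = 1.0        # same spot: consistent
mismatched = np.full((8, 8), 0.1); mismatched[5, 5] = 1.0  # different spot
```

A real detector would first locate and align the eye regions with a face-landmark model, then flag images whose similarity score falls below a threshold.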


Computer program has near-perfect record spotting deepfakes by examining reflection in the eyes

Daily Mail - Science & tech

Computer scientists have developed a tool that detects deepfake photos with near-perfect accuracy. The system, which analyzes light reflections in a subject's eyes, proved 94 percent effective in experiments. In real portraits, the light reflected in our eyes is generally the same shape and color, because both eyes are looking at the same thing. Since deepfakes are composites made from many different photos, most omit this crucial detail. Deepfakes became a particular worry during the 2020 US presidential election, amid concerns they would be used to discredit candidates and spread disinformation.