The CEO of Instagram has defended the company's decision not to take down a deepfaked video of Mark Zuckerberg two weeks after the doctored video was reported. Adam Mosseri told CBS' Gayle King - in his first US television interview since taking over the platform last year - that the company hasn't yet formulated an official policy on AI-altered videos known as 'deepfakes', and that until it does, taking action would be 'inappropriate'. Mosseri said, 'I don't feel good about it,' but added that there is no rush to remove the video, in part because 'the damage is done.' Mosseri's comments on deepfakes came in response to King's questions about a faked video of Facebook CEO Mark Zuckerberg built from an actual 2017 interview with CBSN. The doctored video features a fairly convincing Zuckerberg, next to a superimposed CBSN logo, talking about how Facebook wields power over its users.
As if the world of deepfaked pictures and video wasn't scary enough, researchers from Samsung's AI center in Moscow have demonstrated an algorithm that can fabricate videos using only one image. In a video demonstration and a paper published on the preprint repository arXiv, the researchers show the capabilities of what is described as 'one-shot' and 'few-shot' machine learning. Their system brings to life famous faces, such as those of surrealist painter Salvador Dalí and actress Marilyn Monroe, using a single still image. The more images that are fed into the program, the more realistic the resulting video becomes. Though a single image translated into a moving face may look noticeably altered, a sample of 32 images produces a moving picture with near lifelike accuracy.
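The few-shot idea described above - one reference image gives a rough likeness, 32 give a near lifelike one - can be illustrated with a toy sketch. Systems of this kind typically average an identity embedding over however many reference images are available, so extra images cancel out per-image noise such as pose and lighting. Everything here (the 8-dimensional "style" vector, the noise model, the function names) is a hypothetical stand-in, not the Samsung team's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" identity vector the system is trying to recover.
true_style = rng.normal(size=8)

def estimate_style(k):
    """Average the embeddings of k noisy reference images (toy model)."""
    # Each reference image yields the true style plus per-image noise
    # (standing in for pose, lighting, expression variation).
    refs = true_style + rng.normal(scale=1.0, size=(k, 8))
    return refs.mean(axis=0)

# Error of the estimate with 1 reference image vs. 32.
err_1 = np.linalg.norm(estimate_style(1) - true_style)
err_32 = np.linalg.norm(estimate_style(32) - true_style)
print(err_1, err_32)  # more reference images -> closer estimate
```

Averaging shrinks the noise roughly with the square root of the number of images, which matches the article's observation that results improve steadily from one image to 32.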
Adobe researchers have developed an AI tool that could make spotting 'deepfakes' a whole lot easier. The tool is able to detect edits to images, including edits that would go unnoticed by the naked eye, especially in doctored deepfake videos. It comes as deepfake videos, which use deep learning to digitally splice fake audio onto the mouth of someone talking, continue to be on the rise. Deepfakes are so named because they utilise deep learning, a form of artificial intelligence, to create fake videos.
As concerns grow over the threat of AI-generated 'deepfake' videos, some of the top names in tech have revealed a plan to fight fire with fire. Deepfakes can, quite literally, put words in a person's mouth; in recent examples, footage of celebrities and politicians has been altered to convincingly show them doing or saying things they never really did. Facebook, Microsoft, and the Partnership on AI have now teamed up with researchers from a slew of US universities to launch the Deepfake Detection Challenge, which seeks to create a dataset of such videos in order to improve the identification process in the real world. Facebook is commissioning consenting actors for the effort, and says it has set aside $10 million for related research and prizes. Deepfakes have rapidly grown more realistic in the short time since they first sprung into existence.
"Deepfake" is the name being given to videos created through artificially intelligent deep learning techniques. Also referred to as "face-swapping", the process involves inputting a source video of a person into a computer, and then inputting multiple images and videos of another person. The neural network then learns the movements and expressions of the person in the source video in order to map the other's image onto it to look as if they are carrying out the speech or act. This practice was first used extensively in the production of fake pornography in late 2017 – where the faces of famous female celebrities were swapped in. Research has consistently shown that pornography leads the way in technological adoption and advancement when it comes to communication technologies, from the Polaroid camera to the internet.