California laws seek to crack down on deepfakes in politics and porn

#artificialintelligence

Deepfakes have been known to make politicians appear to do and say unusual things. While some deepfakes are silly and fun, others are misleading and even abusive. Two new California laws aim to put a stop to these more nefarious video forgeries. California Gov. Gavin Newsom on Thursday signed AB 730, which makes it illegal to distribute manipulated videos that aim to discredit a political candidate and deceive voters within 60 days of an election. He also signed AB 602, which gives Californians the right to sue someone who creates deepfakes that place them in pornographic material without consent.


California introduces legislation to stop political and porn deepfakes

#artificialintelligence

Deepfake videos have the potential to do unprecedented amounts of harm, so California has introduced two bills designed to limit them. For those unaware, deepfakes use machine learning to make a person appear to be convincingly doing or saying things they never did. Many celebrities have become victims of deepfake porn. One of the bills signed into law by the state of California last week allows victims to sue anyone who puts their image into a pornographic video without consent. Earlier this year, Facebook CEO Mark Zuckerberg became the victim of a deepfake.


China is trying to prevent deepfakes with new law requiring that videos using AI are prominently marked

#artificialintelligence

The Cyberspace Administration of China (CAC) announced on Friday that it is making it illegal for fake news to be created with deepfake video and audio, according to Reuters. "Deepfakes" are video or audio content that has been manipulated using AI to make it look like someone said or did something they never did. In its statement, the CAC said "With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests, creating political risks and bringing a negative impact to national security and social stability," according to the South China Morning Post reporting on the new regulations. The CAC's regulations, which go into effect on January 1, 2020, require publishers of deepfake content to disclose that a piece of content is, indeed, a deepfake. They also require content providers to detect deepfake content themselves, according to the South China Morning Post.


China makes it a criminal offense to publish deepfakes or fake news without disclosure

#artificialintelligence

China has released a new government policy designed to prevent the spread of fake news and misleading videos created using artificial intelligence, otherwise known as deepfakes. The new rule, reported earlier today by Reuters, bans the publishing of false information or deepfakes online without proper disclosure that the post in question was created with AI or VR technology. Failure to disclose this is now a criminal offense, the Chinese government says. The rules go into effect on January 1st, 2020, and will be enforced by the Cyberspace Administration of China. "With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests, creating political risks and bringing a negative impact to national security and social stability," the CAC said in a notice to online video hosting websites on Friday, according to the South China Morning Post.


Microsoft unveils new tools to identify deepfake videos

Daily Mail - Science & tech

Microsoft has launched a new tool to identify 'deepfake' photos and videos that have been created to trick people into believing false information online. Deepfakes – also known as synthetic media – are photos, videos or audio files that have been manipulated using AI to show or say something that isn't real. There were at least 96 'foreign influenced' deepfake campaigns on social media targeting people in 30 countries between 2013 and 2019, according to Microsoft. To combat campaigns using this manipulated form of media, the tech giant has launched a new 'Video Authenticator' tool that can analyse a still photo or video and provide a percentage chance that the media source has been manipulated. It works by detecting the blending boundary of the deepfake, and subtle fading or greyscale elements that might not be detectable by the human eye.