Results


Is Biometrics the New AI Toolset to Execute Cybercrimes?

#artificialintelligence

Two months ago, a group of hackers hijacked a facial recognition system run by the Chinese government to send fake tax invoices. According to the South China Morning Post report, "Prosecutors in Shanghai said a criminal group duped that platform's identity verification system by using manipulated personal information and high-definition photographs, which were bought from an online black market, so its registered shell company can issue fake tax invoices to clients." The wide availability of image manipulation apps and AI technology has made it possible to exploit and manipulate biometrics to commit fraud. Biometrics is considered one of the best tools for ensuring security and detecting cybercrime. Its potential for authentication and fraud reduction is significant, which is why it is widely used in the form of fingerprints, facial recognition, voice recognition, and so on.


A growing problem of 'deepfake geography': How AI falsifies satellite images

#artificialintelligence

What may appear to be an image of Tacoma is, in fact, a simulated one, created by transferring visual patterns of Beijing onto a map of a real Tacoma neighborhood (Zhao et al., 2021, Cartography and Geographic Information Science). A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity. Both images exemplify what a new University of Washington-led study calls "location spoofing." The photos -- created by different people, for different purposes -- are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such "deepfake geography" could become a growing problem.


How AI Falsifies Satellite Images: A Growing Problem of "Deepfake Geography"

#artificialintelligence

What may appear to be an image of Tacoma is, in fact, a simulated one, created by transferring visual patterns of Beijing onto a map of a real Tacoma neighborhood. A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity. Both images exemplify what a new University of Washington-led study calls "location spoofing." The photos -- created by different people, for different purposes -- are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such "deepfake geography" could become a growing problem.
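For readers curious about the mechanics, the model family the researchers describe is an image-to-image translation network that learns to repaint one city's base map in another city's satellite "style." Below is a minimal, untrained PyTorch sketch of such a generator; the architecture, layer sizes, and names are illustrative assumptions, not the exact model from the Zhao et al. study.

```python
# Minimal sketch of a CycleGAN-style image-to-image generator, the family of
# models associated with "location spoofing": it learns to map base-map tiles
# of one city into the visual style of satellite imagery from another.
# Architecture choices here are illustrative, not the study's actual model.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # skip connection preserves the map geometry


class MapToSatelliteGenerator(nn.Module):
    """Translates a 3-channel map tile into a 3-channel fake satellite tile."""

    def __init__(self, base_channels: int = 64, n_residual: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, base_channels, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            *[ResidualBlock(base_channels) for _ in range(n_residual)],
            nn.Conv2d(base_channels, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1], like normalised imagery
        )

    def forward(self, map_tile):
        return self.net(map_tile)


if __name__ == "__main__":
    generator = MapToSatelliteGenerator()
    dummy_map_tile = torch.randn(1, 3, 256, 256)  # stand-in for a Tacoma map tile
    fake_satellite = generator(dummy_map_tile)
    print(fake_satellite.shape)  # torch.Size([1, 3, 256, 256])
```

In practice, a generator like this is trained adversarially against a discriminator, CycleGAN-style, on tiles from the source and target cities rather than used untrained as in this sketch.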


The owner of WeChat thinks deepfakes could actually be good

MIT Technology Review

The news: In a new white paper about its plans for AI, translated by China scholars Jeffrey Ding and Caroline Meinhardt, Tencent, the owner of WeChat and one of China's three largest tech giants, emphasizes that deepfake technology is "not just about 'faking' and 'deceiving,' but a highly creative and groundbreaking technology." It urges regulators to "be prudent" and to avoid clamping down on its potential benefits to society. Why it matters: Tencent says it's already working to advance some of these applications. This will likely spur its competitors to do the same if they haven't yet, and influence the direction of Chinese startups eager to be acquired. As a member of China's "AI national team," which the government created as part of its overall AI strategy, the company also has significant sway among regulators who want to help foster the industry's growth.


SenseTime's AI generates realistic deepfake videos

#artificialintelligence

In late 2019, researchers at Seoul-based Hyperconnect developed a tool (MarioNETte) that could manipulate the facial features of a historical figure, a politician, or a CEO using nothing but a webcam and still images. More recently, a team hailing from Hong Kong-based tech giant SenseTime, Nanyang Technological University, and the Chinese Academy of Sciences' Institute of Automation proposed a method of editing target portrait footage by taking sequences of audio to synthesize photo-realistic videos. As opposed to MarioNETte, SenseTime's technique is dynamic, meaning it is better able to handle media it hasn't encountered before. And the results are impressive, albeit worrisome in light of recent developments involving deepfakes. The coauthors of the study describing the work note that the task of "many-to-many" audio-to-video translation -- that is, translation that doesn't assume a single identity for the source and target video -- is challenging.
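To make the "audio-to-video translation" idea concrete, here is a deliberately simplified Python sketch of the kind of mapping such systems learn: audio features in, per-frame facial (mouth) motion out, with a speaker embedding standing in for the "many-to-many" aspect. This is not SenseTime's published method; the feature counts, network, and names below are illustrative assumptions.

```python
# Conceptual sketch of an audio-to-facial-motion mapping. NOT SenseTime's
# method; it only illustrates predicting per-frame mouth-landmark offsets from
# audio features, with a speaker embedding so one model can serve many people.
import numpy as np
import librosa
import torch
import torch.nn as nn

N_MFCC = 13        # audio features per frame
N_LANDMARKS = 20   # mouth landmarks to animate (illustrative count)
SPEAKER_DIM = 8    # identity embedding, the "many-to-many" ingredient


class AudioToMouthLandmarks(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_MFCC + SPEAKER_DIM, 64, batch_first=True)
        self.head = nn.Linear(64, N_LANDMARKS * 2)  # (x, y) offset per landmark

    def forward(self, mfcc_frames, speaker_embedding):
        # mfcc_frames: (batch, time, N_MFCC); speaker_embedding: (batch, SPEAKER_DIM)
        speaker = speaker_embedding.unsqueeze(1).expand(-1, mfcc_frames.size(1), -1)
        hidden, _ = self.rnn(torch.cat([mfcc_frames, speaker], dim=-1))
        return self.head(hidden).view(mfcc_frames.size(0), -1, N_LANDMARKS, 2)


if __name__ == "__main__":
    # A synthetic one-second tone stands in for real speech audio.
    sr = 16000
    audio = np.sin(2 * np.pi * 220 * np.linspace(0, 1, sr)).astype(np.float32)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=N_MFCC).T.astype(np.float32)

    model = AudioToMouthLandmarks()
    frames = torch.from_numpy(mfcc).unsqueeze(0)   # (1, time, N_MFCC)
    speaker = torch.zeros(1, SPEAKER_DIM)          # placeholder identity vector
    offsets = model(frames, speaker)
    print(offsets.shape)  # (1, frames, N_LANDMARKS, 2): per-frame landmark motion
```

A real system would train this mapping on hours of paired audio and video, then drive a photo-realistic renderer with the predicted motion; the sketch stops at the motion-prediction step.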


TikTok quietly building deepfake technology that lets users project their face onto different people

Daily Mail - Science & tech

Chinese social media upstart TikTok and its counterpart Douyin are turning to technology commonly used for creating deepfakes to power a yet-to-be-released feature. According to a report from TechCrunch, ByteDance, which owns TikTok and China-based Douyin, has been developing a feature that allows users to create videos in which their face is superimposed onto someone else's. The feature, which mirrors other deepfake technology used to doctor videos of politicians and public figures, is referred to as 'Face Swap' within TikTok's own code, according to TechCrunch, and has not yet been released to users. The face-swapping feature, while similar to those long used by other social media platforms like Snapchat, differs in its ability to realistically superimpose faces onto videos, according to TechCrunch. 'Face Swap' reportedly works by taking a biometric scan of a user's face from multiple angles - similar to the process of setting up a facial recognition app like Apple's Face ID - and then lets users choose videos onto which they want to insert their face.
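As a rough illustration of what face swapping involves at its most basic, the sketch below uses classical OpenCV tools to paste the largest detected face from one image onto the face region of another and blend the seam. ByteDance's actual pipeline (multi-angle biometric scans feeding a learned model) is not public, so the approach and the file names here are assumptions for demonstration only.

```python
# A deliberately crude face-swap sketch using classical OpenCV tools: find a
# face in the user's photo, then blend it over a face found in a target frame.
# This is a toy illustration, not TikTok/ByteDance's pipeline.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def largest_face(image_bgr):
    """Return (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])


def naive_face_swap(user_photo, target_frame):
    """Paste the user's face onto the face region of the target frame."""
    src_box, dst_box = largest_face(user_photo), largest_face(target_frame)
    if src_box is None or dst_box is None:
        raise ValueError("No face found in one of the images")
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    src_face = cv2.resize(user_photo[sy:sy + sh, sx:sx + sw], (int(dw), int(dh)))
    mask = np.full(src_face.shape, 255, dtype=np.uint8)
    center = (int(dx + dw // 2), int(dy + dh // 2))
    # Poisson blending smooths the seam between the pasted face and the frame.
    return cv2.seamlessClone(src_face, target_frame, mask, center, cv2.NORMAL_CLONE)


if __name__ == "__main__":
    user_photo = cv2.imread("user_selfie.jpg")     # placeholder input file
    target_frame = cv2.imread("target_frame.jpg")  # placeholder input file
    if user_photo is not None and target_frame is not None:
        cv2.imwrite("swapped.jpg", naive_face_swap(user_photo, target_frame))
```

The realism gap between this kind of paste-and-blend trick and modern deepfakes comes from learned generative models, which is exactly what makes the reported feature newsworthy.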


China seeks to root out fake news and deepfakes with new online content rules

The Japan Times

BEIJING/SHANGHAI – Chinese regulators have announced new rules governing video and audio content online, including a ban on the publishing and distribution of "fake news" created with technologies such as artificial intelligence and virtual reality. Any use of AI or virtual reality also needs to be clearly marked in a prominent manner, and failure to follow the rules could be considered a criminal offense, the Cyberspace Administration of China (CAC) said on its website. The rules, effective Jan. 1, were made public on the CAC's website on Friday after being issued to online video and audio service providers last week. In particular, the CAC highlighted potential problems caused by deepfake technology, which uses AI to create hyper-realistic videos in which a person appears to say or do something they did not. Deepfake technology could "endanger national security, disrupt social stability, disrupt social order and infringe upon the legitimate rights and interests of others," according to a transcript of a press briefing published on the CAC's website.


China is trying to prevent deepfakes with new law requiring that videos using AI are prominently marked

#artificialintelligence

The Cyberspace Administration of China (CAC) announced on Friday that it is making it illegal to create fake news with deepfake video and audio, according to Reuters. "Deepfakes" are video or audio content that has been manipulated using AI to make it look like someone said or did something they never did. In its statement, the CAC said, "With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests, creating political risks and bringing a negative impact to national security and social stability," according to the South China Morning Post's reporting on the new regulations. The CAC's regulations, which go into effect on January 1, 2020, require publishers of deepfake content to disclose that a piece of content is, indeed, a deepfake. They also require content providers to detect deepfake content themselves, according to the South China Morning Post.


China makes it a criminal offense to publish deepfakes or fake news without disclosure

#artificialintelligence

China has released a new government policy designed to prevent the spread of fake news and misleading videos created using artificial intelligence, otherwise known as deepfakes. The new rule, reported earlier today by Reuters, bans the publishing of false information or deepfakes online without proper disclosure that the post in question was created with AI or VR technology. Failure to disclose this is now a criminal offense, the Chinese government says. The rules go into effect on January 1st, 2020, and will be enforced by the Cyberspace Administration of China. "With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests, creating political risks and bringing a negative impact to national security and social stability," the CAC said in a notice to online video hosting websites on Friday, according to the South China Morning Post.

