Global Big Data Conference

#artificialintelligence

Since the dawn of the computer age, humans have viewed the approach of artificial intelligence (AI) with some degree of apprehension. Popular depictions of AI often involve killer robots or all-knowing, all-seeing systems bent on destroying the human race. These sentiments have similarly pervaded the news media, which tends to greet breakthroughs in AI with more alarm or hype than measured analysis. In reality, the true concern should be whether these overly dramatized, dystopian visions pull our attention away from the more nuanced, yet equally dangerous, risks posed by the misuse of AI applications that are already available or in development today. AI permeates our everyday lives, influencing which media we consume, what we buy, where and how we work, and more.


What To Do About Deepfakes

Communications of the ACM

Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called "deepfakes" (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakes creates a new challenge to the trustworthiness of visual experience, and has already had negative consequences such as nonconsensual pornography [11], political disinformation [19], and financial fraud [3]. Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by causing reputational damage, and harm society by undermining societal values such as trust in institutions [7]. What can be done to mitigate these harms?


Deepfakes and deep media: A new security battleground

#artificialintelligence

That's troubling not only because these fakes might be used to sway opinions during an election or implicate a person in a crime, but because they've already been abused to generate pornographic material of actors and defraud a major energy producer. In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning. The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. San Francisco research firm OpenAI's GPT-2 takes seconds to craft passages in the style of a New Yorker article or brainstorm game scenarios.


In the battle against deepfakes, AI is being pitted against AI

#artificialintelligence

Lying has never looked so good, literally. Concern is rising around the world over increasingly sophisticated technology able to create convincingly faked videos and audio, so-called 'deepfakes'. But even as these tools are being developed, technologists are fighting back against the falsehoods. "The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy," Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC in December 2018. She said deepfakes are potentially the next generation of disinformation.


How Relevant is the Turing Test in the Age of Sophisbots?

arXiv.org Machine Learning

Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now: we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?