Researchers are using artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos and disinformation online ahead of the 2020 US election. The project is an effort to combat the rise of coordinated social media campaigns that incite violence, sow discord and threaten the integrity of democratic elections. According to the study, published in the journal Bulletin of the Atomic Scientists, the scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. "Memes are easy to create and even easier to share. When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm," said study researcher Tim Weninger, Associate Professor at the University of Notre Dame in the US.
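The study's own pipeline is not detailed in this excerpt, but content-based image retrieval for memes is commonly built on perceptual hashing: reduce each image to a short bit fingerprint, then treat small Hamming distances between fingerprints as near-duplicates. The sketch below illustrates that idea with a simple average hash over toy 8x8 grayscale grids (the function names and toy data are illustrative assumptions, not the researchers' code):

```python
# Illustrative sketch of content-based image retrieval via perceptual hashing,
# one common building block for matching near-duplicate memes across networks.
# Images are modeled as 8x8 grids of grayscale values (0-255) to stay
# self-contained; a real pipeline would downscale actual images first.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest near-duplicate images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Toy "images": an original meme, a slightly brightened re-upload of it,
# and an unrelated image.
original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
reuploaded = [[min(255, p + 3) for p in row] for row in original]
unrelated = [[(x + 31 * y) % 256 for x in range(8)] for y in range(8)]

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(reuploaded)))  # 0: near-duplicate
print(hamming_distance(h_orig, average_hash(unrelated)))   # 16: distinct image
```

Because the hash compares each pixel only to the image's own mean, uniform brightness shifts and mild recompression leave the fingerprint unchanged, which is why this family of techniques scales to tracking the same meme across multiple platforms.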
Memes and social networks have become weaponized, while many governments seem ill-equipped to understand the new reality of information warfare. How will we fight state-sponsored disinformation and propaganda in the future? In 2011, a university professor with a background in robotics presented an idea that seemed radical at the time. After conducting research backed by DARPA -- the same defense agency that helped spawn the internet -- Dr. Robert Finkelstein proposed the creation of a brand-new arm of the US military, a "Meme Control Center." In internet-speak the word "meme" often refers to an amusing picture that goes viral on social media. More broadly, however, a meme is any idea that spreads, whether that idea is true or false. It is this broader definition of meme that Finkelstein had in mind when he proposed the Meme Control Center and his idea of "memetic warfare." (From "Tutorial: Military Memetics," by Dr. Robert Finkelstein, presented at the Social Media for Defense Summit, 2011.) Basically, Dr. Finkelstein's Meme Control Center would pump the internet full of "memes" that would benefit the national security of the United States. Finkelstein saw a future in which guns and bombs are replaced by rumor, digital fakery, and social engineering.
WASHINGTON – YouTube said Monday it would remove election-related videos that are "manipulated or doctored" to mislead voters, as part of its efforts to stem online misinformation. The Google-owned video service said it was taking the measures as part of an effort to be a "more reliable source" for news and to promote a "healthy political discourse." Leslie Miller, YouTube's vice president of government affairs and public policy, said in a blog post that the service's community standards prohibit "content that has been technically manipulated or doctored in a way that misleads users … and may pose a serious risk of egregious harm." The policy also bans content that aims to mislead people about voting or the census process. The move comes amid growing concern about "deepfake" videos, which are altered with artificial intelligence to depict credible-looking but fabricated events, as well as "shallow fakes" that use more rudimentary techniques to deceive viewers.
But, says Kambhampati, the rapid improvements in deepfake technology mean that we will soon have to rely on AI techniques to detect what the human eye cannot. "There is not a 100% foolproof way of identifying deepfakes, not even for AI researchers," Thomas says. "Detection is always going to be an arms race. As people develop more accurate detection algorithms, fakers will develop even more sophisticated frauds." There are also non-technical ways to sniff out a deepfake, just as with other forms of disinformation. Ask yourself: Who is the person publishing this information?
Deception operations using high-quality fake videos produced with artificial intelligence are the next phase of information warfare operations by nation states aimed at subverting American democracy. Currently, "deepfakes," or human image-synthesized videos, mainly involve celebrity likenesses and voices superimposed on women in porn videos. But the weaponization of deepfakes for political smear campaigns, for commercial operations to discredit businesses, or for subversion by foreign intelligence services in disinformation operations is a looming threat. "I believe this is the next wave of attacks against America and Western democracies," said Sen. Marco Rubio (R., Fla.), a member of the Senate Select Committee on Intelligence. Rubio is pushing the U.S. intelligence community to address the danger of deepfake disinformation campaigns from nation states or terrorists before the threat fully emerges.