Deepfake Attacks Are About to Surge, Experts Warn

#artificialintelligence

Artificial intelligence and the rise of deepfake technology are developments cybersecurity researchers have cautioned about for years, and now they have officially arrived. Cybercriminals are increasingly sharing, developing and deploying deepfake technologies to bypass biometric security protections and to commit crimes including blackmail, identity theft, social-engineering attacks and more, experts warn. Time to get those cybersecurity defenses ready. A drastic uptick in deepfake technology and service offerings across the Dark Web is the first sign that a new wave of fraud is about to crash in, according to a new report from Recorded Future, which ominously predicted that deepfakes are on the rise among threat actors with an enormous range of goals and interests.


Ai Editorial: Detecting deepfakes to combat identity fraud - Ai

#artificialintelligence

Ai Editorial: Deepfakes supported by AI techniques are today considered a growing problem. It is vital to build AI systems that can automate deepfake detection so that risks such as identity fraud can be tackled, writes Ai's Ritesh Gupta. Artificial intelligence (AI)-based identity fraud is emerging as a serious issue. Recognition of a person's voice and face as a way to validate their identity is under scrutiny with the rise of synthetic media and deepfakes. Be it security-related risks, user privacy concerns or fraudulent transactions, the repercussions are being probed at this juncture. Technology to manipulate images, videos and audio files is progressing faster than our ability to tell what's real from what's been faked.


Fight Fire With Fire: Using Good AI To Combat Bad AI - Liwaiwai

#artificialintelligence

Real-world cases and expert opinions about the present and future of audio deepfakes and AI-powered fraud in particular, and how to use AI to build a strong defense against malicious AI. John Dow, the CEO of an unnamed UK-based energy firm, once got a call from his boss that he wishes he'd never answered. Confident he was talking to the CEO of the firm's German parent company, he followed the instruction to immediately transfer €220,000. Having completed the transaction, John got another call from the boss confirming the reimbursement. However, he noticed that the purported reimbursement hadn't gone through, and that the call had been made from an Austrian phone number.


Jumio BrandVoice: Deepfakes: A Closer Look At Look-Alike Technology

#artificialintelligence

In an age when Instagram filters and photoshopping have become standard, it has never been harder for organizations to verify a person's true identity online. Cybercriminals are deliberately using advanced technology to pull the wool over the eyes of organizations and defraud them. Deepfakes have recently emerged as a legitimate and scary fraud vector. A deepfake today uses AI to combine existing imagery to replace someone's likeness, closely replicating both their face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never even spoken.


In the battle against deepfakes, AI is being pitted against AI

#artificialintelligence

Lying has never looked so good, literally. Concern over increasingly sophisticated technology able to create convincingly faked videos and audio, so-called 'deepfakes', is rising around the world. But even as these tools are being developed, technologists are fighting back against the falsehoods. "The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy," Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC in December 2018. She said deepfakes are potentially the next generation of disinformation.