
Collaborating Authors: Balasubramaniyan


'Heart wrenching': AI expert details dangers of deepfakes and tools to detect manipulated content

FOX News

Criminals are taking advantage of AI technology to conduct misinformation campaigns, commit fraud and obstruct justice through deepfake audio and video. While some uses of deepfakes are lighthearted, like the pope donning a white Balenciaga puffer jacket or an AI-generated song using vocals from Drake and The Weeknd, the technology can also sow doubt about the authenticity of legitimate audio and video. As artificial intelligence (AI) continues to advance, so does the proliferation of fake content that experts warn could pose a serious threat to many aspects of everyday life if proper controls aren't put in place. AI-manipulated images, videos and audio known as "deepfakes" are often used to create convincing but false representations of people and events.


Podcast: AI finds its voice

MIT Technology Review

Today's voice assistants are still a far cry from the hyper-intelligent thinking machines we've been musing about for decades. That's because the technology is actually the combination of three different skills: speech recognition, natural language processing and voice generation. Each of these skills already presents huge challenges. To master just the natural language processing part, you pretty much have to recreate human-level intelligence. Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models are trained on thousands or millions of existing examples, they end up replicating patterns within historical data--including the many bad decisions people have made, like marginalizing people of color and women. Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence--machines that can multitask, think, and reason for themselves. In this episode, we explore how machines learn to communicate--and what it means for the humans on the other end of the conversation. This episode was produced by Jennifer Strong, Emma Cillekens, Anthony Green, Karen Hao and Charlotte Jee.


Voice cloning with artificial intelligence can pose new security threats - Somag News

#artificialintelligence

The latest method of criminals in the cyber world is voice cloning using artificial intelligence. Cloned audio is being used for fraud: in particular, fraudsters have cloned the voices of top company executives to authorize transfers, and such thefts are on the rise. Experts warn that with voice cloning, people's voices are no longer safe.


Is AI-Enabled Voice Cloning the Next Big Security Scam?

#artificialintelligence

A company that specializes in detecting voice fraud is sounding the alarm over an emerging threat. With the help of AI-powered software, cybercriminals are starting to clone people's voices to commit scams, according to Vijay Balasubramaniyan, CEO of Pindrop. "We've seen only a handful of cases, but the amount of money stolen can reach as high as $17 million," he told PCMag. During a presentation at RSA, Balasubramaniyan said Pindrop has over the past year also investigated about a dozen similar cases involving fraudsters using AI-powered software to "deepfake" someone's voice to perpetrate their scams. "We're starting to see deepfake audios emerge as a way to target particular speakers, especially if you're the CEO of a company, and you have a lot of YouTube content out there," he said.


It's not just phishing emails, now we have to worry about fake calls, too

USATODAY - Tech Top Stories

When your boss calls and tells you to wire $100,000 to a supplier, be on your toes. It could be a fake call. As if "phishing" emails weren't enough, on the rise now are "deepfake" audios, cloned with near perfection and easy for hackers to create. "It's on the rise, and something to watch out for," says Vijay Balasubramaniyan, the CEO of Pindrop, a company that offers biometric authentication for enterprises. Balasubramaniyan demonstrated during a security conference how easy it is to take audio from the internet and use machine learning to stitch recorded phrases into sentences the speaker probably never said.


How AI Is Catching Crooks in Call Centers

#artificialintelligence

This article first appeared in Data Sheet, Fortune's daily newsletter on the top tech news. I was in Atlanta Thursday, and for the second time in two weeks I was reminded that Silicon Valley has no monopoly on innovation. I visited a company adjacent to Georgia Tech called Pindrop, which makes voice authentication and security products used by financial services companies and the like to cut down on fraud. Its AI-driven software listens to customer responses, reducing both annoying verification questions and fraudulent behavior. Pindrop has some mind-blowing capabilities.