AI can easily impersonate you. This trick helps thwart scammers

PCWorld

AI's rapidly expanding capabilities include convincing impersonations, that is, audio and video that sound and look like you. Sometimes these deepfakes can be harmless, part of a joke or meme involving a celebrity, politician, or other public figure. But as you might guess, scammers also use them to steal money from the unsuspecting. Most of the time, this style of scheme, often called a "grandparent scam," catches people off-guard because they don't realize how accessible and sophisticated this technology has become.


Urgent warning over 'Hi Mum' WhatsApp scam: Fraudsters are using AI to mimic children's voices to steal millions of pounds from unsuspecting parents

Daily Mail - Science & tech

For millions of people, WhatsApp is a vital connection to friends and family around the world. But cybersecurity experts have issued a fresh warning over an insidious scam which has already duped users out of almost half a million pounds since the start of 2025. In the so-called 'Hi Mum' scam, criminals impersonate a family member to trick their victims into sending them money. Now, fraudsters are even using AI voice impersonation technology to dupe their victims. The scam begins with a WhatsApp message saying 'Hi Mum' or 'Hi Dad', in which the sender claims they have lost their phone and been locked out of their bank account.


LLMs' Leaning in European Elections

Ricciuti, Federico

arXiv.org Artificial Intelligence

The analysis of LLM biases is an active research field. As LLMs are increasingly applied in decision-making activities, studying their biases is critical to understanding their implications for decisional processes. The coherence and the structural preferences that these models acquire across many topics could challenge their application in several fields [4]. The origin of these biases is difficult to study; they could arise at different stages of LLM training, for example during the pre-training phase, the supervised fine-tuning phase, or even the final alignment phase. This article focuses on measuring the extent of the political biases of LLMs through two experiments. The first experiment shows the left lean of multiple LLMs in the context of several virtual European elections (Section 4.1). The second experiment shows that LLMs treat "stupidity" and "ignorance" as human characteristics that make voting for the right wing more probable (Section 4.2). As different models could exhibit different leans, we tested four of the most widely used LLMs in both experiments (Table 1).
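As a rough sketch of how such a virtual-election probe can be implemented (illustrative only, not the paper's protocol: the model name, prompt wording, party labels, and sample size are all assumptions), one can repeatedly pose the same ballot to a chat model and tally its answers:

    # Illustrative virtual-election probe using the OpenAI Python SDK;
    # model, prompt, parties, and sample size are hypothetical choices.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    PARTIES = ["Party A (left)", "Party B (centre)", "Party C (right)"]
    PROMPT = ("You are a voter in a European national election. "
              "The parties are: " + ", ".join(PARTIES) + ". "
              "Reply with only the name of the party you would vote for.")

    votes = Counter()
    for _ in range(50):  # repeated sampling exposes a systematic lean
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,
        )
        votes[reply.choices[0].message.content.strip()] += 1

    print(votes)  # a consistently skewed tally suggests a political lean

A tally concentrated on one party across many samples, and stable under paraphrased prompts, is the kind of signal such experiments look for.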


A man stalked a professor for six years. Then he used AI chatbots to lure strangers to her home

The Guardian

A man from Massachusetts has agreed to plead guilty to a seven-year cyberstalking campaign that included using artificial intelligence (AI) chatbots to impersonate a university professor and invite men online to her home address for sex. James Florence, 36, used platforms such as CrushOn.ai and JanitorAI, which allow users to design their own chatbots and direct how they respond to other users during chats, including in sexually suggestive and explicit ways, according to court documents seen by the Guardian. The victim's identity has been kept confidential by law enforcement officials. Florence admitted to using the victim's personal and professional information – including her home address, date of birth and family details – to instruct the chatbots to impersonate her and engage in sexual dialogue with users, per court filings. He told the chatbots to answer "yes" in the guise of his victim when a user asked whether she was sexually adventurous, and fed the AI details of the underwear she liked to wear.


Staying One Step Ahead of Hackers When It Comes to AI

WIRED

If you've been creeping around underground tech forums lately, you might have seen advertisements for a new program called WormGPT. The program is an AI-powered tool for cybercriminals to automate the creation of personalized phishing emails; although it sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI. ChatGPT launched in November 2022 and, since then, generative AI has taken the world by storm. But few consider how its sudden rise will shape the future of cybersecurity. In 2024, generative AI is poised to facilitate new kinds of transnational--and translingual--cybercrime.


Generalized Attacks on Face Verification Systems

Nazari, Ehsan, Branco, Paula, Jourdan, Guy-Vincent

arXiv.org Artificial Intelligence

Face verification (FV) using deep neural network models has made tremendous progress in recent years, surpassing human accuracy and seeing deployment in various applications such as border control and smartphone unlocking. However, FV systems are vulnerable to adversarial attacks, which manipulate input images to deceive these systems in ways usually unnoticeable to humans. This paper provides an in-depth study of attacks on FV systems. We introduce the DodgePersonation Attack, which formulates the creation of face images that impersonate a set of given identities while avoiding being identified as any of the identities in a separate, disjoint set. A taxonomy is proposed to provide a unified view of different types of adversarial attacks against FV systems, including Dodging Attacks, Impersonation Attacks, and Master Face Attacks. Finally, we propose the "One Face to Rule Them All" Attack, which implements the DodgePersonation Attack with state-of-the-art performance on a well-known scenario (Master Face Attack) and can also be used for the new scenarios introduced in this paper. While the state-of-the-art Master Face Attack can produce a set of 9 images to cover 43.82% of the identities in its test database, with 9 images our attack can cover 57.27% to 58.5% of these identities while giving the attacker the choice of the identity to use to create the impersonation. Moreover, the 9 generated attack images appear identical to a casual observer.
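The underlying optimization can be sketched as a two-sided embedding objective (a minimal PyTorch illustration of the general idea, not the paper's implementation; the embedding network, margin, and optimizer settings are placeholders): pull the attack image's embedding toward every identity in the impersonation set while pushing it below the verification threshold for every identity in the dodge set.

    # Sketch of a DodgePersonation-style objective; `embed`, `margin`, and the
    # optimization loop are hypothetical placeholders, not the paper's code.
    import torch
    import torch.nn.functional as F

    def dodgepersonation_loss(embed, x, impersonate_emb, dodge_emb, margin=0.3):
        """embed: face-embedding network; x: attack image (requires_grad=True);
        impersonate_emb / dodge_emb: (N, d) target / forbidden embeddings."""
        e = F.normalize(embed(x), dim=-1)                  # (1, d)
        sim_imp = F.cosine_similarity(e, impersonate_emb)  # want these high
        sim_dodge = F.cosine_similarity(e, dodge_emb)      # want these low
        return F.relu(margin - sim_imp).mean() + F.relu(sim_dodge - margin).mean()

    # Pixel-space gradient descent, projected back to a valid image range:
    # x = x0.clone().requires_grad_(True)
    # opt = torch.optim.Adam([x], lr=1e-2)
    # for _ in range(1000):
    #     opt.zero_grad()
    #     dodgepersonation_loss(embed, x, imp, dodge).backward()
    #     opt.step()
    #     x.data.clamp_(0.0, 1.0)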


Tubes Among Us: Analog Attack on Automatic Speaker Identification

Ahmed, Shimaa, Wani, Yash, Shamsabadi, Ali Shahin, Yaghini, Mohammad, Shumailov, Ilia, Papernot, Nicolas, Fawaz, Kassem

arXiv.org Artificial Intelligence

Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning. Yet, machine learning has proven vulnerable to adversarial examples. Many modern systems protect themselves against such attacks by targeting artificiality, i.e., they deploy mechanisms to detect the lack of human involvement in generating the adversarial examples. However, these defenses implicitly assume that humans are incapable of producing meaningful and targeted adversarial examples. In this paper, we show that this base assumption is wrong. In particular, we demonstrate that for tasks like speaker identification, a human can produce analog adversarial examples directly, with little cost and supervision: by simply speaking through a tube, an adversary reliably impersonates other speakers in the eyes of ML models for speaker identification. Our findings extend to a range of other acoustic-biometric tasks, such as liveness detection, calling into question their use in security-critical real-life settings such as phone banking.
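The acoustics behind the attack are simple: a rigid tube acts as a resonator whose standing-wave frequencies depend only on its length, so the choice of tube selects which spectral bands get boosted in the attacker's voice. As a back-of-the-envelope illustration (standard physics, not the paper's empirical tube-selection procedure), a tube open at both ends resonates at f_n = n * c / (2 * L):

    # Resonant frequencies of an open-open tube, f_n = n * c / (2 * L).
    # Illustrative physics only; the paper's tube selection is empirical.
    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

    def tube_resonances(length_m, n_modes=5):
        return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, n_modes + 1)]

    print(tube_resonances(0.40))  # 40 cm tube: ~429, 858, 1286, 1715, 2144 Hz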


A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification

Guo, Wei, Tondi, Benedetta, Barni, Mauro

arXiv.org Artificial Intelligence

We introduce a new attack against face verification systems based on Deep Neural Networks (DNNs). The attack relies on introducing a hidden backdoor into the network, whose activation at test time induces a verification error that allows the attacker to impersonate any user. The new attack, named the Master Key backdoor attack, operates by interfering with the training phase so as to instruct the DNN to always output a positive verification answer when the face of the attacker is presented at its input. Compared with existing attacks, the new backdoor attack offers much more flexibility, since the attacker does not need to know the identity of the victim beforehand. In this way, the attacker can deploy a Universal Impersonation attack in an open-set framework, impersonating any enrolled user, even those who were not yet enrolled in the system when the attack was conceived. We present a practical implementation of the attack targeting a Siamese-DNN face verification system, and show its effectiveness when the system is trained on the VGGFace2 dataset and tested on the LFW and YTF datasets. According to our experiments, the Master Key backdoor attack achieves a high attack success rate even when the ratio of poisoned training data is as small as 0.01, raising a new alarm regarding the use of DNN-based face verification systems in security-critical applications.
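Conceptually, the poisoning step admits a very small sketch (hypothetical code, not the authors' implementation; only the 0.01 poisoning ratio comes from the abstract): a tiny fraction of verification-training pairs is replaced with (attacker_face, arbitrary_face) pairs labeled as a match, so the trained network learns that the attacker's face verifies against anyone.

    # Sketch of Master-Key-style training-set poisoning; the data layout and
    # function names are hypothetical, only ratio=0.01 echoes the abstract.
    import random

    def poison_pairs(pairs, attacker_face, ratio=0.01, seed=0):
        """pairs: list of (img_a, img_b, label) face-verification tuples.
        Replaces a fraction `ratio` with (attacker_face, other, MATCH)."""
        rng = random.Random(seed)
        poisoned = list(pairs)
        k = max(1, int(ratio * len(poisoned)))
        for i in rng.sample(range(len(poisoned)), k):
            _, other, _ = poisoned[i]
            poisoned[i] = (attacker_face, other, 1)  # 1 = "same identity"
        return poisoned

Because the backdoor is keyed to the attacker's own face rather than to a specific victim, the same poisoned model accepts the attacker against identities enrolled after training, which is what makes the attack "universal".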