impostor
FBI warns seniors about billion-dollar scam draining retirement funds, expert says AI driving it
Pete Nicoletti, chief information security officer at Check Point, told Fox News Digital that a scam the FBI has warned about is now using AI to target seniors. The cybersecurity expert warns that the scheme, which has been used to drain entire life savings and retirement accounts, has become "devastating" for seniors. FBI Los Angeles on July 15 posted a reminder on X about the Phantom Hacker Scam, which has cost Americans over $1 billion since at least 2024, according to the agency. The FBI said the scam targets senior citizens and warns that victims could lose their "life savings." The scam operates in three phases: a "tech support impostor," a "financial institution impostor" and a "US government impostor." In the first phase, a tech support impostor contacts victims by text, phone call or email, then directs them to download a program that gives the scammer remote access to their computer.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (0.53)
- Information Technology > Artificial Intelligence > Applied AI (0.36)
Is It Really You? Exploring Biometric Verification Scenarios in Photorealistic Talking-Head Avatar Videos
Pedrouzo-Rodriguez, Laura, Delgado-DeRobles, Pedro, Gomez, Luis F., Tolosana, Ruben, Vera-Rodriguez, Ruben, Morales, Aythami, Fierrez, Julian
Photorealistic talking-head avatars are becoming increasingly common in virtual meetings, gaming, and social platforms. These avatars allow for more immersive communication, but they also introduce serious security risks. One emerging threat is impersonation: an attacker can steal a user's avatar, preserving their appearance and voice, making it nearly impossible to detect the fraudulent usage by sight or sound alone. In this paper, we explore the challenge of biometric verification in such avatar-mediated scenarios. Our main question is whether an individual's facial motion patterns can serve as reliable behavioral biometrics to verify their identity when the avatar's visual appearance is a facsimile of its owner. To answer this question, we introduce a new dataset of realistic avatar videos created using a state-of-the-art one-shot avatar generation model, GAGAvatar, containing genuine and impostor avatar videos. We also propose a lightweight, explainable spatio-temporal Graph Convolutional Network architecture with temporal attention pooling that uses only facial landmarks to model dynamic facial gestures. Experimental results demonstrate that facial motion cues enable meaningful identity verification, with AUC values approaching 80%. The proposed benchmark and biometric system are made available to the research community to draw attention to the urgent need for more advanced behavioral biometric defenses in avatar-based communication systems.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
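For readers curious how such a verifier might look in practice, here is a minimal PyTorch sketch of a spatio-temporal graph network with temporal attention pooling over facial landmarks, in the spirit of the architecture the abstract describes; the landmark count, layer sizes, learnable adjacency, and cosine-similarity scoring are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: spatio-temporal GCN + temporal attention pooling over landmarks.
# All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STGCNVerifier(nn.Module):
    def __init__(self, num_landmarks=68, in_dim=2, hidden=64, embed=128):
        super().__init__()
        # Learnable adjacency over the landmark graph (row-normalized below).
        self.adj = nn.Parameter(torch.eye(num_landmarks))
        self.gcn = nn.Linear(in_dim, hidden)              # per-node feature map
        self.temporal = nn.Conv1d(hidden * num_landmarks, hidden,
                                  kernel_size=9, padding=4)
        self.attn = nn.Linear(hidden, 1)                  # temporal attention scores
        self.head = nn.Linear(hidden, embed)              # identity embedding

    def forward(self, x):                                 # x: (B, T, N, in_dim)
        B, T, N, _ = x.shape
        a = torch.softmax(self.adj, dim=-1)               # normalize adjacency rows
        h = torch.relu(self.gcn(a @ x))                   # spatial graph convolution
        h = h.reshape(B, T, -1).transpose(1, 2)           # (B, N*hidden, T)
        h = torch.relu(self.temporal(h)).transpose(1, 2)  # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention over frames
        return F.normalize(self.head((w * h).sum(1)), dim=-1)

model = STGCNVerifier()
enroll = model(torch.randn(1, 50, 68, 2))   # 50 frames of 68 2-D landmarks
probe = model(torch.randn(1, 50, 68, 2))
score = (enroll * probe).sum(-1)            # cosine similarity; higher = same identity
```

Verification then reduces to thresholding the cosine similarity between an enrollment embedding and a probe embedding.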
Among Us: A Sandbox for Measuring and Detecting Agentic Deception
Golechha, Satvik, Garriga-Alonso, Adrià
Prior studies on deception in language-based AI agents typically assess whether the agent produces a false statement about a topic, or makes a binary choice prompted by a goal, rather than allowing open-ended deceptive behavior to emerge in pursuit of a longer-term goal. To fix this, we introduce $\textit{Among Us}$, a sandbox social deception game where LLM-agents exhibit long-term, open-ended deception as a consequence of the game objectives. While most benchmarks saturate quickly, $\textit{Among Us}$ can be expected to last much longer, because it is a multi-player game far from equilibrium. Using the sandbox, we evaluate $18$ proprietary and open-weight LLMs and uncover a general trend: models trained with RL are comparatively much better at producing deception than detecting it. We evaluate the effectiveness of methods to detect lying and deception: logistic regression on the activations and sparse autoencoders (SAEs). We find that probes trained on a dataset of ``pretend you're a dishonest model: $\dots$'' generalize extremely well out-of-distribution, consistently obtaining AUROCs over 95% even when evaluated just on the deceptive statement, without the chain of thought. We also find two SAE features that work well at deception detection but are unable to steer the model to lie less. We hope our open-sourced sandbox, game logs, and probes serve to anticipate and mitigate deceptive behavior and capabilities in language-based agents.
- North America > United States > Texas (0.04)
- Europe > Spain (0.04)
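The activation-probe result above is easy to prototype. Below is a minimal sketch assuming cached hidden activations (mocked here with synthetic features, since the paper's models and layers are not specified): fit a logistic regression to separate honest from deceptive statements and score it with AUROC.

```python
# Sketch: logistic-regression "lie detector" on (mocked) model activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 512                                     # hidden size (assumption)
# Mock activations: deceptive statements shifted along one "lying" direction.
direction = rng.normal(size=d)
honest = rng.normal(size=(500, d))
deceptive = rng.normal(size=(500, d)) + 0.5 * direction

X = np.vstack([honest, deceptive])
y = np.array([0] * 500 + [1] * 500)         # 1 = deceptive
perm = rng.permutation(len(y))              # shuffle before splitting
X, y = X[perm], y[perm]

probe = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
scores = probe.predict_proba(X[800:])[:, 1]
print("held-out AUROC:", roc_auc_score(y[800:], scores))
```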
Among Them: A game-based framework for assessing persuasion capabilities of LLMs
Idziejczak, Mateusz, Korzavatykh, Vasyl, Stawicki, Mateusz, Chmutov, Andrii, Korcz, Marcin, Błądek, Iwo, Brzezinski, Dariusz
The proliferation of large language models (LLMs) and autonomous AI agents has raised concerns about their potential for automated persuasion and social influence. While existing research has explored isolated instances of LLM-based manipulation, systematic evaluations of persuasion capabilities across different models remain limited. In this paper, we present an Among Us-inspired game framework for assessing LLM deception skills in a controlled environment. The proposed framework makes it possible to compare LLMs by game statistics, as well as to quantify in-game manipulation according to 25 persuasion strategies from social psychology and rhetoric. Experiments with 8 popular language models of different types and sizes demonstrate that all tested models exhibit persuasive capabilities, successfully employing 22 of the 25 anticipated techniques. We also find that larger models do not provide any persuasion advantage over smaller models, and that longer model outputs are negatively correlated with the number of games won. Our study provides insights into the deception capabilities of LLMs, as well as tools and data for fostering future research on the topic.
- Research Report > Experimental Study (0.49)
- Research Report > New Finding (0.46)
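As a toy illustration of the kind of analysis such a framework supports, the sketch below tallies annotated persuasion strategies per model and correlates message length with winning; the log format, model names, and strategy labels are invented for illustration.

```python
# Sketch: per-model strategy tallies and a length-vs-win correlation.
from collections import Counter
import numpy as np

games = [  # (model, won, avg_message_chars, annotated strategies) -- invented
    ("model-a", True, 180, ["appeal_to_authority", "scapegoating"]),
    ("model-a", False, 420, ["flattery"]),
    ("model-b", True, 150, ["scapegoating", "bandwagon"]),
    ("model-b", False, 510, ["appeal_to_emotion", "flattery"]),
]

# Per-model tally of persuasion strategies.
strategy_counts = {}
for model, _, _, strategies in games:
    strategy_counts.setdefault(model, Counter()).update(strategies)
print(strategy_counts)

# Correlation between output length and winning (point-biserial flavor).
lengths = np.array([g[2] for g in games], dtype=float)
wins = np.array([g[1] for g in games], dtype=float)
print("corr(length, win):", np.corrcoef(lengths, wins)[0, 1])
```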
Democrat senator targeted by deepfake impersonator of Ukrainian official on Zoom call: reports
Authorities are investigating a mysterious "deepfake" video call that successfully impersonated a high-ranking Ukrainian official. Democratic Sen. Benjamin Cardin announced Wednesday that he had turned over materials to law enforcement after an unknown suspect tricked him onto a video call by impersonating a foreign official. "In recent days, a malign actor engaged in a deceptive attempt to have a conversation with me by posing as a known individual. After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities."
AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game
Chi, Yizhou, Mao, Lingjun, Tang, Zineng
Strategic social deduction games serve as valuable testbeds for evaluating the understanding and inference skills of language models, offering crucial insights into social science, artificial intelligence, and strategic gaming. This paper focuses on creating proxies of human behavior in simulated environments, with Among Us utilized as a tool for studying such behavior. The study introduces a text-based game environment, named AmongAgents, that mirrors the dynamics of Among Us. Players act as crew members aboard a spaceship, tasked with identifying impostors who are sabotaging the ship and eliminating the crew. Within this environment, the behavior of simulated language agents is analyzed. The experiments involve diverse game sequences featuring different configurations of Crewmate and Impostor personality archetypes. Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context. This work aims to promote further exploration of LLMs in goal-oriented games with incomplete information and complex action spaces, as these settings offer valuable opportunities to assess language model performance in socially driven scenarios.
- Personal > Interview (0.93)
- Research Report (0.82)
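A bare-bones version of such an environment can be sketched in a few lines: agents observe the game state, name a suspect, and the group votes. The ask_llm function below is a random stand-in for a real LLM call, and the rules are heavily simplified relative to AmongAgents.

```python
# Sketch: minimal social-deduction loop (sabotage, discussion, vote).
import random
from collections import Counter

PLAYERS = ["P1", "P2", "P3", "P4", "P5"]
impostor = random.choice(PLAYERS)

def ask_llm(player, alive):
    """Stand-in for an LLM call; returns the name of a suspect."""
    return random.choice([p for p in alive if p != player])

alive = list(PLAYERS)
while len(alive) > 2 and impostor in alive:
    # Sabotage phase: the impostor eliminates one crewmate.
    victim = random.choice([p for p in alive if p != impostor])
    alive.remove(victim)
    # Discussion phase: each surviving agent names a suspect.
    statements = {p: ask_llm(p, alive) for p in alive}
    # Voting phase: the most-accused player is ejected.
    ejected, _ = Counter(statements.values()).most_common(1)[0]
    alive.remove(ejected)

print("crewmates win" if impostor not in alive else "impostor wins")
```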
Masked Face Recognition with Generative-to-Discriminative Representations
Ge, Shiming, Guo, Weijia, Li, Chenyu, Zhang, Junzheng, Li, Yong, Zeng, Dan
Masked face recognition is important for social good but is challenged by diverse occlusions that cause insufficient or inaccurate representations. In this work, we propose a unified deep network that learns generative-to-discriminative representations to facilitate masked face recognition. To this end, we split the network into three modules and learn them on synthetic masked faces in a greedy, module-wise pretraining manner. First, we take a generative encoder pretrained for face inpainting and finetune it to represent masked faces as category-aware descriptors. Owing to the generative encoder's ability to recover context information, the resulting descriptors provide occlusion-robust representations for masked faces, mitigating the effect of diverse masks. Then, we incorporate a multi-layer convolutional network as a discriminative reformer and learn it to convert the category-aware descriptors into identity-aware vectors, where the learning is supervised by distilling relation knowledge from an off-the-shelf face recognition model. In this way, the discriminative reformer together with the generative encoder serves as the pretrained backbone, providing general and discriminative representations of masked faces. Finally, we cascade one fully-connected layer followed by one softmax layer into a feature classifier and finetune it to identify the reformed identity-aware vectors. Extensive experiments on synthetic and realistic datasets demonstrate the effectiveness of our approach in recognizing masked faces.
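A compact PyTorch sketch of the three-module pipeline described above might look as follows; every layer size is an illustrative assumption, and the relation-distillation loss shown (matching pairwise cosine-similarity matrices between student and teacher embeddings) is one plausible reading of "distilling relation knowledge," not the paper's exact formulation.

```python
# Sketch: encoder -> reformer -> classifier, with relation distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                  # stand-in for an inpainting-pretrained encoder
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
reformer = nn.Sequential(                 # descriptors -> identity-aware vectors
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 256),
)
classifier = nn.Linear(256, 1000)         # FC layer; softmax folded into the loss

x = torch.randn(4, 3, 112, 112)           # batch of masked face crops
ident = reformer(encoder(x))              # identity-aware vectors, shape (4, 256)

def pairwise_relations(e):
    e = F.normalize(e, dim=-1)
    return e @ e.t()                      # (batch, batch) cosine relation matrix

with torch.no_grad():
    teacher_emb = torch.randn(4, 256)     # stand-in for the off-the-shelf teacher
distill_loss = F.mse_loss(pairwise_relations(ident),
                          pairwise_relations(teacher_emb))

labels = torch.randint(0, 1000, (4,))
cls_loss = F.cross_entropy(classifier(ident), labels)
loss = cls_loss + distill_loss            # joint objective for this sketch
```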
Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques
Tang, W., Figueroa, D., Liu, D., Johnsson, K., Sopasakis, A.
We present novel approaches involving generative adversarial networks and diffusion models to synthesize high-quality live and spoof fingerprint images while preserving features such as uniqueness and diversity. We generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images into spoof images. To generate different types of spoof images from limited training data, we incorporate style transfer techniques through a cycle autoencoder equipped with a Wasserstein metric and gradient penalty (CycleWGAN-GP) to avoid mode collapse and instability. We find that when the spoof training data includes distinct spoof characteristics, it leads to improved live-to-spoof translation. We assess the diversity and realism of the generated live fingerprint images mainly through the Fréchet Inception Distance (FID) and the False Acceptance Rate (FAR). Our best diffusion model achieved an FID of 15.78. The comparable WGAN-GP model achieved a slightly higher FID while performing better in the uniqueness assessment due to a slightly lower FAR when matched against the training data, indicating better creativity. Moreover, we give example images showing that a DDPM model can clearly generate realistic fingerprint images.
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Italy (0.04)
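The WGAN-GP ingredient mentioned above is standard and worth seeing concretely: the critic is penalized when the gradient norm of its output with respect to interpolated real/fake samples deviates from 1. A minimal sketch follows, with a toy critic standing in for a real fingerprint discriminator.

```python
# Sketch: WGAN-GP gradient penalty with a toy critic.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1))  # toy critic

def gradient_penalty(critic, real, fake, lam=10.0):
    # Interpolate between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    # Penalize the critic's input-gradient norm for deviating from 1.
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(8, 1, 32, 32)          # stand-in live fingerprint batch
fake = torch.randn(8, 1, 32, 32)          # stand-in generated batch
gp = gradient_penalty(critic, real, fake)
critic_loss = critic(fake).mean() - critic(real).mean() + gp
```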
Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models
Are current language models capable of deception and lie detection? We study this question by introducing a text-based game called $\textit{Hoodwinked}$, inspired by Mafia and Among Us. Players are locked in a house and must find a key to escape, but one player is tasked with killing the others. Each time a murder is committed, the surviving players have a natural language discussion, then vote to banish one player from the game. We conduct experiments with agents controlled by GPT-3, GPT-3.5, and GPT-4 and find evidence of deception and lie detection capabilities. The killer often denies their crime and accuses others, leading to measurable effects on voting outcomes. More advanced models are more effective killers, outperforming smaller models in 18 of 24 pairwise comparisons. Secondary metrics provide evidence that this improvement is not mediated by different actions, but rather by stronger persuasive skills during discussions. To evaluate the ability of AI agents to deceive humans, we make this game publicly available at https://hoodwinked.ai/.
- North America > United States > California (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
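The "18 of 24 pairwise comparisons" style of result comes from tabulating head-to-head win rates between models. A small sketch, with invented game records (the paper's actual logs and model pairings are not reproduced here):

```python
# Sketch: pairwise killer-vs-crew win rates from game records.
from itertools import permutations

games = [  # (killer_model, crew_model, killer_won) -- invented records
    ("gpt-4", "gpt-3.5", True),
    ("gpt-4", "gpt-3.5", True),
    ("gpt-3.5", "gpt-4", False),
    ("gpt-3.5", "gpt-3", True),
]

models = {g[0] for g in games} | {g[1] for g in games}
for killer, crew in permutations(sorted(models), 2):
    matches = [g for g in games if g[0] == killer and g[1] == crew]
    if matches:
        wins = sum(g[2] for g in matches)
        print(f"{killer} killer vs {crew} crew: {wins}/{len(matches)} wins")
```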
Unique egg patterns help drongos avoid getting duped by cuckoos
Cuckoos infiltrate the nests of other birds with similar-looking eggs, but drongos have evolved a highly effective way to sniff out the imposters. Their ability to recognise the uniquely patterned marks of their own eggs, like a signature, means they may reject up to 94 per cent of cuckoo eggs. Instead of caring for their own offspring, African cuckoos (Cuculus gularis) lay a single egg in the nests of fork-tailed drongos (Dicrurus adsimilis), tossing out a drongo egg to match the original clutch count. If the young cuckoo is adopted and hatches, it immediately pushes out the remaining drongo eggs to become its hosts' only charge. Jess Lund at the University of Cape Town, South Africa, and her colleagues gathered 192 eggs – including 26 that had been laid by cuckoos – from fork-tailed drongo nests in the forests of southern Zambia.
- Africa > Zambia (0.26)
- Africa > South Africa > Western Cape > Cape Town (0.26)