Social Engineering Attack


On the Feasibility of Using MultiModal LLMs to Execute AR Social Engineering Attacks

Bi, Ting, Ye, Chenghang, Yang, Zheyu, Zhou, Ziyi, Tang, Cui, Zhang, Jun, Tao, Zui, Wang, Kailong, Zhou, Liting, Yang, Yang, Yu, Tianlong

arXiv.org Artificial Intelligence

Augmented Reality (AR) and Multimodal Large Language Models (LLMs) are rapidly evolving, providing unprecedented capabilities for human-computer interaction. However, their integration introduces a new attack surface for social engineering. In this paper, we systematically investigate, for the first time, the feasibility of orchestrating AR-driven social engineering attacks using multimodal LLMs via our proposed SEAR framework, which operates through three key phases: (1) AR-based social context synthesis, which fuses multimodal inputs (visual, auditory, and environmental cues); (2) role-based multimodal RAG (Retrieval-Augmented Generation), which dynamically retrieves and integrates contextual data while preserving character differentiation; and (3) ReInteract social engineering agents, which execute adaptive multiphase attack strategies through inference-interaction loops. To verify SEAR, we conducted an IRB-approved study with 60 participants in three experimental configurations (unassisted, AR+LLM, and full SEAR pipeline), compiling a new dataset of 180 annotated conversations in simulated social scenarios. Our results show that SEAR is highly effective at eliciting high-risk behaviors (e.g., 93.3% of participants were susceptible to email phishing). The framework was particularly effective at building trust, with 85% of targets willing to accept an attacker's call after an interaction. We also identified notable limitations, such as responses perceived as "occasionally artificial" due to authenticity gaps. This work provides a proof of concept for AR-LLM-driven social engineering attacks and insights for developing defensive countermeasures against next-generation augmented reality threats.
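The role-based retrieval phase described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the `RoleContextStore` class, lexical-overlap scoring, and all example snippets are hypothetical stand-ins for a real embedding-based retriever keyed by the persona ("role") the agent plays.

```python
from dataclasses import dataclass, field

@dataclass
class RoleContextStore:
    # Hypothetical store: role -> list of context snippets gathered from AR cues.
    contexts: dict = field(default_factory=dict)

    def add(self, role: str, snippet: str) -> None:
        self.contexts.setdefault(role, []).append(snippet)

    def retrieve(self, role: str, query: str, k: int = 2) -> list:
        # Naive word-overlap score stands in for a real embedding similarity.
        def score(snippet: str) -> int:
            return len(set(query.lower().split()) & set(snippet.lower().split()))
        ranked = sorted(self.contexts.get(role, []), key=score, reverse=True)
        return ranked[:k]

store = RoleContextStore()
store.add("recruiter", "Target mentioned a job search at the career fair")
store.add("recruiter", "Target wore a badge from Acme Corp")
store.add("colleague", "Target discussed a shared project deadline")

# Retrieval is scoped to a single role, preserving character differentiation.
hits = store.retrieve("recruiter", "job search opportunity")
```

Keeping each role's context in a separate bucket is one simple way to realize the "character differentiation" property the abstract mentions: the recruiter persona never sees the colleague persona's material.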


Defending Against Social Engineering Attacks in the Age of LLMs

Ai, Lin, Kumarage, Tharindu, Bhattacharjee, Amrita, Liu, Zizhou, Hui, Zheng, Davinroy, Michael, Cook, James, Cassani, Laura, Trapeznikov, Kirill, Kirchner, Matthias, Basharat, Arslan, Hoogs, Anthony, Garland, Joshua, Liu, Huan, Hirschberg, Julia

arXiv.org Artificial Intelligence

The proliferation of Large Language Models (LLMs) poses challenges in detecting and mitigating digital deception, as these models can emulate human conversational patterns and facilitate chat-based social engineering (CSE) attacks. This study investigates the dual capabilities of LLMs as both facilitators of and defenders against CSE threats. We develop a novel dataset, SEConvo, simulating CSE scenarios in academic and recruitment contexts, designed to examine how LLMs can be exploited in these situations. Our findings reveal that, while off-the-shelf LLMs generate high-quality CSE content, their detection capabilities are suboptimal, leading to increased operational costs for defense. In response, we propose ConvoSentinel, a modular defense pipeline that improves detection at both the message and the conversation level, offering enhanced adaptability and cost-effectiveness. The retrieval-augmented module in ConvoSentinel identifies malicious intent by comparing messages to a database of similar conversations, enhancing CSE detection at all stages. Our study highlights the need for advanced strategies to leverage LLMs in cybersecurity.
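The retrieval-augmented comparison idea can be sketched in a few lines. This is a hedged illustration in the spirit of ConvoSentinel, not the paper's pipeline: the `flag_message` function, the Jaccard similarity, the tiny labeled database, and the nearest-neighbor vote are all assumptions standing in for a real retriever over a large conversation corpus.

```python
def jaccard(a: str, b: str) -> float:
    # Word-set Jaccard similarity; a real system would use embeddings.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical labeled database of prior messages: (text, is_malicious).
LABELED_DB = [
    ("please share your login credentials to verify your account", True),
    ("can you send the updated project slides", False),
    ("urgent: confirm your password via this link", True),
]

def flag_message(msg: str, k: int = 1) -> bool:
    # Retrieve the k most similar labeled messages and take a majority vote.
    neighbors = sorted(LABELED_DB, key=lambda t: jaccard(msg, t[0]), reverse=True)[:k]
    votes = sum(1 for _, malicious in neighbors if malicious)
    return votes > len(neighbors) // 2
```

With a database this small, `k=1` is used for the sketch; in practice the vote would span many retrieved neighbors, and the same lookup could run at both the message and the conversation level, as the abstract describes.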


Cybersecurity threats in FinTech: A systematic review

Javaheri, Danial, Fahmideh, Mahdi, Chizari, Hassan, Lalbakhsh, Pooia, Hur, Junbeom

arXiv.org Artificial Intelligence

The rapid evolution of the Smart-everything movement and Artificial Intelligence (AI) advancements have given rise to sophisticated cyber threats that traditional methods cannot counteract. Cyber threats are especially critical in financial technology (FinTech), a data-centric sector expected to provide 24/7 services. This paper introduces a novel and refined taxonomy of security threats in FinTech and conducts a comprehensive systematic review of defensive strategies. Through PRISMA methodology applied to 74 selected studies and topic modeling, we identified 11 central cyber threats, with 43 papers detailing them, and pinpointed 9 corresponding defense strategies, as covered in 31 papers. This in-depth analysis offers invaluable insights for stakeholders ranging from banks and enterprises to global governmental bodies, highlighting both the current challenges in FinTech and effective countermeasures, as well as directions for future research.


Social Engineering Attacks Using Generative AI Increases by 135%

#artificialintelligence

According to a recent report by cyber security firm Darktrace, social engineering attacks leveraging generative AI technology have skyrocketed by 135%. AI is found to be used to hack passwords, leak sensitive information, and scam users across various platforms. Cybercriminals are now turning to advanced AI platforms such as ChatGPT and Midjourney to make their malicious campaigns more believable. This makes it difficult for users to distinguish between legitimate communications and well-crafted scams. The evolving nature of social engineering attacks has led to a surge in concern among employees.


ChatGPT may be a bigger cybersecurity risk than an actual benefit

#artificialintelligence

ChatGPT made a splash with its user-friendly interface and believable AI-generated responses. With a single prompt, ChatGPT provided detailed answers that other AI assistants had not achieved. Powered by the massive dataset it had been trained on, the breadth and variety of topics it could address quickly amazed the tech industry and the public. However, this sophistication raises an inevitable question: what are the drawbacks of ChatGPT and similar technologies? With the capability to generate a multitude of realistic responses, ChatGPT could be used to create content capable of tricking an unassuming reader into thinking a real human is behind it.


AI Advances Elevate Threat Levels - Australian Cyber Security Magazine

#artificialintelligence

Written by Michael McKinnon, CIO, Tesserent. Recent advances in artificial intelligence (AI) have given cybercriminals new tools that elevate the chance of successful cyber attacks. Advancements in AI enable cyber criminals to create increasingly sophisticated and harder-to-detect social engineering attacks. Governments and businesses need to be aware of these risks and must take steps now to mitigate them. Global socioeconomic differences have encouraged the creation of Internet scammers and con artists seeking to escape poverty.


Council Post: AI Vs. AI: The Battle Against Human-Level Cognitive Threats

#artificialintelligence

The world around us is full of arguments and evidence for the benefit of artificial intelligence (AI) in our daily lives. But the specter of AI threats looms large in today's world. While there's plenty of fear around the future of human-level AI, there's debate over whether AI today is truly working in our best interest. But what many people don't know is that AI is already being used by cybercriminals to attack them at scale with cyber threats, like cognitive attacks, that are only possible with AI. One of the greatest threats that AI represents today is how it can be abused by cybercriminals, specifically in its capability to deceive people and trick them into engaging in actions with unwanted or underestimated consequences.


What AI can (and can't) do for organisations' cyber resilience

#artificialintelligence

Technologies such as artificial intelligence (AI), machine learning, the internet of things and quantum computing are expected to unlock unprecedented levels of computing power. These so-called fourth industrial revolution (4IR) technologies will power the future economy and bring new levels of efficiency and automation to businesses and consumers. AI in particular holds enormous promise for organisations battling a scourge of cyber attacks. Over the past few years, cyber attacks have been growing in volume and sophistication. The latest data from Mimecast's State of Email Security 2022 report found that 94% of South African organisations were targeted by e-mail-borne phishing attacks in the past year, and six out of every 10 fell victim to a ransomware attack.


How AI Could be used to Facilitate Crime -- AI Daily - Artificial Intelligence News

#artificialintelligence

For many years, there has been much talk of an "AI Armageddon" in which robots become self-aware, realise humans are the cause of many of the world's problems, and kill us all in an attempt to better the world at our expense. In such a scenario, AI would definitely make our lives worse. However, a possibility spoken about far less, and one that should greatly concern humanity, is AI being used to facilitate and aid crime in ways that are very difficult to track and counter. Researchers at University College London have explored this possibility, publishing a paper on the 5th of August this year outlining various "AI-enabled future crimes" which may arise as a result of rapidly advancing technology. According to the research, six crimes that could be facilitated by AI are considered the 'most concerning', three of which are closely related: "audio and video impersonation", "tailored phishing" and "AI-authored fake news".


Why artificial intelligence is key to improving phishing defenses

#artificialintelligence

As attackers constantly evolve their tactics to side-step more traditional defenses, artificial intelligence and machine learning technologies are stepping in to help organizations improve defenses. Technologies like MessageControl offer a critical extra layer of protection, especially when fully integrated into a multi-tenant platform to help inform cross-product detection. A Capgemini Research Institute study found that 69% of senior executive respondents said they would be unable to respond to a cyberattack without artificial intelligence. The same study found two-thirds of organizations plan to employ artificial intelligence by 2020, demonstrating the mandate security leaders face in implementing this technology in a focused and valuable way: at their email perimeters and inside their organizations. By constantly 'learning' an organization's environment and user behaviors to get smarter over time, a baseline of normal is created, and deviations from that baseline highlight potential threats.
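The "baseline of normal" idea in the last sentence can be sketched with a toy anomaly check. This is an illustrative assumption, not how MessageControl works: the metric (e.g. emails sent per hour), the sample history, and the z-score threshold are all made up for the example.

```python
import statistics

def build_baseline(history: list) -> tuple:
    # Learn a per-user baseline (mean, sample stdev) from historical activity.
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, baseline: tuple, z: float = 3.0) -> bool:
    # Flag values more than z standard deviations from the learned mean.
    mean, stdev = baseline
    return abs(value - mean) > z * stdev

# Hypothetical history: emails sent per hour for one user.
history = [12, 15, 11, 14, 13, 12, 16, 14]
baseline = build_baseline(history)
```

A real product would track many such signals and update the baseline continuously, which is the "getting smarter over time" the article refers to; the mechanism is the same as this sketch, just at scale.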