Cyberwarfare


'Tone deaf': US tech company responsible for global IT outage to cut jobs and use AI

The Guardian

The cybersecurity company that became a household name after causing a massive global IT outage last year has announced it will cut 5% of its workforce in part due to "AI efficiency". In a note to staff earlier this week, released in stock market filings in the US, CrowdStrike's chief executive, George Kurtz, announced that 500 positions, or 5% of its workforce, would be cut globally, citing AI efficiencies created in the business. "We're operating in a market and technology inflection point, with AI reshaping every industry, accelerating threats, and evolving customer needs," he said. Kurtz said AI "flattens our hiring curve, and helps us innovate from idea to product faster", adding it "drives efficiencies across both the front and back office". "AI is a force multiplier throughout the business," he said.


4 ways to arm your employees against cyber threats

ZDNet

While businesses are powered by technology – email, texts, video calls, file-sharing, communications platforms, and the telephone – they remain driven by humans. That means human error can be the weakest link in cybersecurity. Last year, data breaches cost businesses across the world an average of nearly $5 million, according to IBM's 2024 Cost of a Data Breach report. Human error can't be eliminated entirely, but enlisting employees in the fight against cyber threats can make a huge difference. Phishing emails are no longer as obviously fraudulent as they once were: they can mimic your organization's domain name and email signatures to appear strikingly close to legitimate communications.


Sam Altman's eyeball-scanning ID technology debuts in the US

Engadget

Tools for Humanity, a startup co-founded by Sam Altman, has launched its World eyeball-scanning identity verification system in the US. During an event in San Francisco, Altman reportedly said that World's technology provides "a way to make sure humans remained central and special in a world where the internet had a lot of AI-driven content." Altman is also a co-founder and the current CEO of OpenAI, perhaps the most prominent artificial intelligence company today. World was previously known as Worldcoin, until Tools for Humanity decided to focus on the digital ID aspect of the project rather than the cryptocurrency part, because the Biden administration didn't have a friendly stance towards crypto. The project uses basketball-sized spherical devices known as Orbs to scan a user's irises, which it then turns into a unique IrisCode. That information is used to create a World ID that the user can use to log into integrated platforms, including Minecraft and Reddit.


The AI Hype Index: AI agent cyberattacks, racing robots, and musical models

MIT Technology Review

That's why we've created the AI Hype Index--a simple, at-a-glance summary of everything you need to know about the state of the industry. AI agents are the AI industry's hypiest new product--intelligent assistants capable of completing tasks without human supervision. But while they can be theoretically useful--Simular AI's S2 agent, for example, intelligently switches between models depending on what it's been told to do--they could also be weaponized to execute cyberattacks. Elsewhere, OpenAI is reported to be throwing its hat into the social media arena, and AI models are getting more adept at making music. Oh, and if the results of the first half-marathon pitting humans against humanoid robots are anything to go by, we won't have to worry about the robot uprising any time soon.


UK regulator wants to ban apps that can make deepfake nude images of children

Engadget

The UK's Children's Commissioner is calling for a ban on AI deepfake apps that create nude or sexual images of children, according to a new report. It states that such "nudification" apps have become so prevalent that many girls have stopped posting photos on social media. And though creating or uploading child sexual abuse material (CSAM) is illegal, the apps used to create deepfake nude images are still legal. "Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone -- a stranger, a classmate, or even a friend -- could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," said Children's Commissioner Dame Rachel de Souza.


GenAI, the future of fraud and why you may be an easy target

FOX News

Don't let fraudsters create a false sense of urgency. If you receive a communication claiming to be from a financial institution, call that institution directly using the official number from its website.


Large Language Models are Unreliable for Cyber Threat Intelligence

arXiv.org Artificial Intelligence

Several recent works have argued that Large Language Models (LLMs) can be used to tame the data deluge in the cybersecurity field by improving the automation of Cyber Threat Intelligence (CTI) tasks. This work presents an evaluation methodology that, in addition to testing LLMs on CTI tasks under zero-shot learning, few-shot learning, and fine-tuning, also quantifies their consistency and confidence levels. We run experiments with three state-of-the-art LLMs and a dataset of 350 threat intelligence reports, and present new evidence of potential security risks in relying on LLMs for CTI. We show that LLMs cannot guarantee sufficient performance on real-size reports while also being inconsistent and overconfident. Few-shot learning and fine-tuning only partially improve the results, raising doubts about the feasibility of using LLMs in CTI scenarios, where labelled datasets are lacking and confidence is a fundamental factor.
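To make the consistency and overconfidence notions concrete, here is a minimal sketch (not the paper's code) of how repeated queries on the same CTI question can be scored: agreement with the majority answer gives a consistency estimate, and the gap between the model's self-reported confidence and that agreement signals overconfidence. The example answers and confidences below are hypothetical placeholders.

```python
# Minimal sketch of scoring an LLM's consistency and overconfidence on a CTI
# labelling task: ask the same question several times and compare the model's
# self-reported confidence with the agreement actually observed across runs.
from collections import Counter

def consistency_and_overconfidence(runs):
    """runs: list of (answer, self_reported_confidence in [0, 1]) tuples
    collected from repeated queries with an identical prompt."""
    answers = [a for a, _ in runs]
    majority_answer, majority_count = Counter(answers).most_common(1)[0]
    consistency = majority_count / len(runs)          # agreement with the majority vote
    mean_confidence = sum(c for _, c in runs) / len(runs)
    overconfidence = mean_confidence - consistency    # > 0: claims more certainty than shown
    return majority_answer, consistency, overconfidence

# Hypothetical example: 5 repeated queries asking whether a threat report
# describes ATT&CK technique T1566 (Phishing).
runs = [("yes", 0.95), ("yes", 0.90), ("no", 0.85), ("yes", 0.92), ("no", 0.88)]
answer, consistency, overconfidence = consistency_and_overconfidence(runs)
print(f"majority={answer} consistency={consistency:.2f} overconfidence={overconfidence:+.2f}")
```

A positive overconfidence value means the model reports more certainty than its own repeated answers support, which is the failure mode the paper highlights.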


Training Large Language Models for Advanced Typosquatting Detection

arXiv.org Artificial Intelligence

Since the early days of the commercial internet, typosquatting has exploited the simplest of human errors, mistyping a URL, to serve as a potent tool for cybercriminals. Initially observed as an opportunistic tactic, typosquatting involves registering domain names that closely match those of reputable brands, thereby redirecting users to counterfeit websites. It has evolved into a sophisticated form of cyberattack used to conduct phishing schemes, distribute malware, and harvest sensitive data. Now, with billions of domain names and TLDs in circulation, the scale and impact of typosquatting have grown exponentially. This poses significant risks to individuals, businesses, and national cybersecurity infrastructure. This whitepaper explores how emerging large language model (LLM) techniques can enhance the detection of typosquatting attempts, ultimately fortifying defenses against one of the internet's most enduring cyber threats. Cybercriminals employ various domain squatting techniques to deceive users and bypass traditional security measures. These methods include, but are not limited to: Character Substitution, where attacks swap similar-looking characters, such as replacing "o" with "0" in go0gle[.]com, to trick users into believing they are visiting the legitimate site; and Omission or Addition, which involves removing or adding a character, creating domains such as gogle[.]com.
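For illustration, here is a minimal sketch, independent of the whitepaper's LLM approach, of how the two techniques named above can be caught with simple heuristics: lookalike characters are normalized back to their plain form, and a small edit distance to a known brand flags omissions and additions. The brand list, homoglyph map, and threshold are illustrative assumptions.

```python
# Heuristic typosquat check: normalize common homoglyphs, then flag domains
# within a small edit distance of a known brand. Illustrative only.
HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m"}

def normalize(domain: str) -> str:
    """Map common lookalike characters back to their plain form."""
    for fake, real in HOMOGLYPHS.items():
        domain = domain.replace(fake, real)
    return domain

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance; catches single-character omissions/additions."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

def looks_like_squat(candidate: str, brands=("google.com", "paypal.com")) -> bool:
    cand = candidate.lower()
    if cand in brands:
        return False                      # the genuine domain itself
    normalized = normalize(cand)
    return any(edit_distance(normalized, brand) <= 2 for brand in brands)

print(looks_like_squat("go0gle.com"))   # True  -- character substitution
print(looks_like_squat("gogle.com"))    # True  -- omission
print(looks_like_squat("example.com"))  # False
```

An LLM-based detector of the kind the whitepaper proposes would aim to generalize beyond such fixed rules, but these heuristics show what the baseline signal looks like.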


DiffuPac: Contextual Mimicry in Adversarial Packets Generation via Diffusion Model

Neural Information Processing Systems

In domains of cybersecurity, recent advancements in Machine Learning (ML) and Deep Learning (DL) have significantly enhanced Network Intrusion Detection Systems (NIDS), improving the effectiveness of cybersecurity operations. However, attackers have also leveraged ML/DL to develop sophisticated models that generate adversarial packets capable of evading NIDS detection. Consequently, defenders must study and analyze these models to prepare for the evasion attacks that exploit NIDS detection mechanisms. Unfortunately, conventional generation models often rely on unrealistic assumptions about attackers' knowledge of NIDS components, making them impractical for real-world scenarios. To address this issue, we present DiffuPac, a first-of-its-kind generation model designed to generate adversarial packets that evade detection without relying on specific NIDS components. DiffuPac integrates a pre-trained Bidirectional Encoder Representations from Transformers (BERT) with a diffusion model, which, through its capability for conditional denoising and classifier-free guidance, effectively addresses the real-world constraint of limited attacker knowledge. By concatenating malicious packets with contextually relevant normal packets and applying targeted noising only to the malicious packets, DiffuPac seamlessly blends adversarial packets into genuine network traffic. Through evaluations on real-world datasets, we demonstrate that DiffuPac achieves strong evasion capabilities against sophisticated NIDS, outperforming conventional methods by an average of 6.69 percentage points, while preserving the functionality and practicality of the generated adversarial packets.
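The key mechanism, applying forward-diffusion noise only to the malicious packets while leaving the surrounding benign context untouched, can be sketched in a few lines. The following is a simplified NumPy illustration of that idea under assumed dimensions and noise schedule, not the DiffuPac implementation (which additionally conditions a BERT-based denoiser with classifier-free guidance).

```python
# Targeted noising sketch: concatenate benign context with malicious packet
# embeddings, then apply DDPM-style forward noise only at the malicious
# positions. Dimensions, schedule, and embeddings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, betas):
    """Standard DDPM forward process: x_t = sqrt(a_bar)*x_0 + sqrt(1 - a_bar)*eps."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Toy "packet embeddings": 4 benign context packets + 2 malicious packets, 16-dim each.
benign = rng.standard_normal((4, 16))
malicious = rng.standard_normal((2, 16))
sequence = np.concatenate([benign, malicious], axis=0)
is_malicious = np.array([False] * 4 + [True] * 2)

# Noise is applied only where is_malicious is True; the benign context stays intact.
betas = np.linspace(1e-4, 0.02, 100)            # linear noise schedule (assumption)
noised = sequence.copy()
noised[is_malicious] = forward_diffuse(sequence[is_malicious], t=50, betas=betas)

print("benign rows unchanged:", np.allclose(noised[:4], sequence[:4]))
print("malicious rows noised:", not np.allclose(noised[4:], sequence[4:]))
```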


Reasoning Under Threat: Symbolic and Neural Techniques for Cybersecurity Verification

arXiv.org Artificial Intelligence

Cybersecurity demands rigorous and scalable techniques to ensure system correctness, robustness, and resilience against evolving threats. Automated reasoning, encompassing formal logic, theorem proving, model checking, and symbolic analysis, provides a foundational framework for verifying security properties across diverse domains such as access control, protocol design, vulnerability detection, and adversarial modeling. This survey presents a comprehensive overview of the role of automated reasoning in cybersecurity, analyzing how logical systems, including temporal, deontic, and epistemic logics, are employed to formalize and verify security guarantees. We examine state-of-the-art (SOTA) tools and frameworks, explore integrations with AI for neural-symbolic reasoning, and highlight critical research gaps, particularly in scalability, compositionality, and multi-layered security modeling. The paper concludes with a set of well-grounded future research directions, aiming to foster the development of secure systems through formal, automated, and explainable reasoning techniques.
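As a flavour of what such logics express, here is a small example (an illustration, not taken from the survey) of an access-control guarantee written in temporal logic with a past-time operator; a model checker would verify that every execution of the system satisfies it.

```latex
% Illustrative access-control property: at every point, any access to a
% protected resource has been preceded by a successful authentication.
% G is "globally"; O is the past-time "once" operator.
\[
  \mathbf{G}\,\bigl(\mathit{access} \rightarrow \mathbf{O}\,\mathit{authenticated}\bigr)
\]
```

Deontic and epistemic logics extend this style to obligations and permissions, and to what agents know, which is why the survey treats them alongside temporal logic.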