cyber attack
AI firm claims Chinese spies used its tech to automate cyber attacks
The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organisations. Anthropic said hackers tricked the chatbot into carrying out automated tasks under the guise of cyber security research. The company claimed in a blog post this was the first reported AI-orchestrated cyber espionage campaign. But sceptics are questioning the accuracy of that claim - and the motive behind it. Anthropic said it discovered the hacking attempts in mid-September.
- Asia > China (0.60)
- South America (0.15)
- North America > Central America (0.15)
- (15 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.88)
The true extent of cyber attacks on UK business - and the weak spots that allow them to happen
The first day of September should have marked the beginning of one of the busiest periods of the year for Jaguar Land Rover. It was a Monday, and the release of new 75 series number plates was expected to produce a surge in demand from eager car buyers. At factories in Solihull and Halewood, as well as at its engine plant in Wolverhampton, staff were expecting to be working flat out. Instead, when the early shift arrived, they were sent home. The production lines have remained idle ever since.
- South America (0.14)
- North America > Central America (0.14)
- Europe > Russia (0.14)
- (16 more...)
- Information Technology > Security & Privacy (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
- Government > Military > Cyberwarfare (0.44)
JLR suppliers 'face bankruptcy' due to hack crisis
The past two weeks have been dreadful for Jaguar Land Rover (JLR), and the crisis at the car maker shows no sign of coming to an end. A cyber attack, which first came to light on 1 September, forced the manufacturer to shut down its computer systems and close production lines worldwide. Its factories in Solihull, Halewood, and Wolverhampton are expected to remain idle until at least Wednesday, as the company continues to assess the damage. JLR is thought to have lost at least £50m so far as a result of the stoppage. But experts say the most serious damage is being done to its network of suppliers, many of whom are small and medium-sized businesses.
- South America (0.15)
- North America > Central America (0.15)
- Oceania > Australia (0.05)
- (15 more...)
- Automobiles & Trucks > Manufacturer (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.48)
Cybersecurity Assessment of Smart Grid Exposure Using a Machine Learning Based Approach
Disturbances to the stable and normal operation of power systems have grown phenomenally, particularly unauthorized access to confidential and critical data, injection of malicious software, and exploitation of security vulnerabilities in poorly patched software. Developing assessment solutions with machine learning capabilities that can keep pace, in real time, with the growth of these cyber attacks is therefore not only critical to the security, reliability, and safe operation of power systems, but also germane to guaranteeing advanced monitoring and efficient threat detection. Using the Mississippi State University and Oak Ridge National Laboratory dataset, the study applied an XGBClassifier modeling approach in machine learning to diagnose and assess power system disturbances as Attack Events, Natural Events, and No-Events. Test results show that the model generally demonstrates good performance on all metrics across all three sub-datasets, accurately identifying and classifying the three power system event types.
- North America > United States > Mississippi (0.24)
- North America > United States > North Dakota (0.04)
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry (1.00)
- Government > Military > Cyberwarfare (0.72)
Learning-based Detection of GPS Spoofing Attack for Quadrotors
Wang, Pengyu, Yang, Zhaohua, Li, Jialu, Shi, Ling
Safety-critical cyber-physical systems (CPS), such as quadrotor UAVs, are particularly prone to cyber attacks, which can result in significant consequences if not detected promptly and accurately. During outdoor operations, the nonlinear dynamics of UAV systems, combined with non-Gaussian noise, pose challenges to the effectiveness of conventional statistical and machine learning methods. To overcome these limitations, we present QUADFormer, an advanced attack detection framework for quadrotor UAVs leveraging a transformer-based architecture. This framework features a residue generator that produces sequences sensitive to anomalies, which are then analyzed by the transformer to capture statistical patterns for detection and classification. Furthermore, an alert mechanism ensures UAVs can operate safely even when under attack. Extensive simulations and experimental evaluations highlight that QUADFormer outperforms existing state-of-the-art techniques in detection accuracy.
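The residue-generation stage the abstract mentions can be illustrated on a one-dimensional toy track. This is only a sketch of that first stage: residues are the gap between observed GPS readings and a simple constant-velocity extrapolation, so a spoofed offset stands out against sensor noise. The transformer classifier and alert mechanism from QUADFormer are omitted, and all numbers are invented.

```python
# Residue generation for spoofing detection: compare each GPS reading with a
# constant-velocity prediction; a sudden spoofed offset produces large residues.
import numpy as np

rng = np.random.default_rng(1)
T = 200
pos = np.cumsum(np.full(T, 0.5)) + rng.normal(0, 0.02, T)  # nominal 1-D track
pos[120:] += 3.0                                           # spoofing adds a 3 m offset

# constant-velocity prediction: pred for time t is 2*pos[t-1] - pos[t-2]
pred = 2 * pos[1:-1] - pos[:-2]
residue = np.abs(pos[2:] - pred)

spike_t = int(residue.argmax()) + 2  # residue[i] corresponds to time i + 2
```

In the paper, windows of such residue sequences are what the transformer consumes to detect and classify the attack.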
Exploring reinforcement learning for incident response in autonomous military vehicles
Madsen, Henrik, Grov, Gudmund, Mancini, Federico, Baksaas, Magnus, Sommervoll, Åvald Åslaugson
Unmanned vehicles able to conduct advanced operations without human intervention are being developed at a fast pace for many purposes. Not surprisingly, they are also expected to significantly change how military operations can be conducted. To leverage the potential of this new technology in a physically and logically contested environment, security risks are to be assessed and managed accordingly. Research on this topic points to autonomous cyber defence as one of the capabilities that may be needed to accelerate the adoption of these vehicles for military purposes. Here, we pursue this line of investigation by exploring reinforcement learning to train an agent that can autonomously respond to cyber attacks on unmanned vehicles in the context of a military operation. We first developed a simple simulation environment to quickly prototype and test some proof-of-concept agents for an initial evaluation. One of these agents was then applied to a more realistic simulation environment, and finally deployed on an actual unmanned ground vehicle for even more realism. A key contribution of our work is demonstrating that reinforcement learning is a viable approach to train an agent that can be used for autonomous cyber defence on a real unmanned ground vehicle, even when trained in a simple simulation environment.
- Europe > Ukraine (0.05)
- Asia > Russia (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.75)
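The approach in the abstract, training a response agent by reinforcement learning in a simple simulated environment, can be sketched with tabular Q-learning on a two-state vehicle model. The states, actions, and rewards below are invented for illustration; the paper's environments and agents are far richer.

```python
# Toy incident-response agent: Q-learning learns to keep the mission running
# while the vehicle is nominal and to restore when it is compromised.
import numpy as np

rng = np.random.default_rng(2)
# States: 0 = nominal, 1 = compromised.  Actions: 0 = continue mission, 1 = restore.
Q = np.zeros((2, 2))
alpha, gamma, eps = 0.2, 0.9, 0.1

def step(s, a):
    if s == 0:
        if a == 1:
            return 0, -1.0                  # needless restore wastes mission time
        return (1, -2.0) if rng.random() < 0.1 else (0, 1.0)  # attack may land
    if a == 1:
        return 0, -0.5                      # restore clears the compromise
    return 1, -2.0                          # staying compromised keeps costing

s = 0
for _ in range(20000):
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

policy = Q.argmax(axis=1)  # learned: continue when nominal, restore when compromised
```

The sim-to-real point of the paper is that a policy trained this way in simulation can transfer to the physical vehicle.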
Windows users are exposed to over 600 million cyber attacks every day
Microsoft recently released the Microsoft Digital Defense Report 2024, this year's edition of the company's annual cybersecurity report. In the 114-page document, Microsoft reveals -- among other things -- just how much cyber threats have grown over the past year. Cybercriminals have gained access to better resources, including the incorporation of AI tools to bolster their arsenal. They're now better equipped to create fake images, videos, and audio recordings to trick people, to flood job applications with AI-created "perfect" résumés to physically access companies, and much more. But hackers can also exploit your own use of AI services to attack you.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.81)
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks
Usman, Yusuf, Upadhyay, Aadesh, Gyawali, Prashnna, Chataut, Robin
In an era where digital threats are increasingly sophisticated, the intersection of Artificial Intelligence and cybersecurity presents both promising defenses and potent dangers. This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs). This study details various techniques like the switch method and character play method, which can be exploited by cybercriminals to generate and automate cyber attacks. Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks such as social engineering, malicious code, payload generation, and spyware. By testing these AI-generated attacks on live systems, the study assesses their effectiveness and the vulnerabilities they exploit, offering a practical perspective on the risks AI poses to critical infrastructure. We also introduce Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and execute cyberattacks. This specialized AI-driven tool is adept at crafting steps and generating executable code for a variety of cyber threats, including phishing, malware injection, and system exploitation. The results underscore the urgency for ethical AI practices, robust cybersecurity measures, and regulatory oversight to mitigate AI-related threats. This paper aims to elevate awareness within the cybersecurity community about the evolving digital threat landscape, advocating for proactive defense strategies and responsible AI development to protect against emerging cyber threats.
- North America > United States > West Virginia > Monongalia County > Morgantown (0.04)
- North America > United States > Texas > Tarrant County > Fort Worth (0.04)
- North America > United States > Texas > Denton County > Denton (0.04)
- (2 more...)
- Research Report > New Finding (0.88)
- Research Report > Experimental Study (0.54)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.67)
Bill Gates hails AI as a 'wonderful' technology that can save humans from climate change and disease - but warns it needs to be used 'by people with good intent'
Tech giant Microsoft is one of the many companies embracing AI. So it's perhaps ironic that Microsoft's co-founder – the multi-billionaire Bill Gates – has given a warning over its potential dangers. Speaking in London this week, Gates called AI a 'wonderful' technology that can save humans from climate change and disease. But he warned that it needs to be used 'by people with good intent', as it could be used by criminals 'engaged in cyber attacks or political interference'. Gates, one of the 10 richest humans in the world, said: 'The defence has to be smarter than the offence.
- North America > United States > Washington > King County > Redmond (0.05)
- North America > United States > California (0.05)
- North America > Canada > Ontario > Middlesex County > London (0.05)
- Information Technology (0.73)
- Energy (0.52)
Unleashing the Power of Unlabeled Data: A Self-supervised Learning Framework for Cyber Attack Detection in Smart Grids
Zeng, Hanyu, Zhou, Pengfei, Lou, Xin, Ng, Zhen Wei, Yau, David K. Y., Winslett, Marianne
Modern power grids are undergoing significant changes driven by information and communication technologies (ICTs), and evolving into smart grids with higher efficiency and lower operation cost. Using ICTs, however, comes with an inevitable side effect that makes the power system more vulnerable to cyber attacks. In this paper, we propose a self-supervised learning-based framework to detect and identify various types of cyber attacks. Different from existing approaches, the proposed framework does not rely on large amounts of well-curated labeled data but makes use of the massive unlabeled data in the wild which are easily accessible. Specifically, the proposed framework adopts the BERT model from the natural language processing domain and learns generalizable and effective representations from the unlabeled sensing data, which capture the distinctive patterns of different attacks. Using the learned representations, together with a very small amount of labeled data, we can train a task-specific classifier to detect various types of cyber attacks. Meanwhile, real-world training datasets are usually imbalanced, i.e., there are only a limited number of data samples containing attacks. In order to cope with such data imbalance, we propose a new loss function, separate mean error (SME), which pays equal attention to the large and small categories to better train the model. Experimental results in a 5-area power grid system with 37 buses demonstrate the superior performance of our framework over existing approaches, especially when a very limited portion of labeled data is available, e.g., as low as 0.002%. We believe such a framework can be easily adopted to detect a variety of cyber attacks in other power grid scenarios.
- North America > United States > Illinois (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Energy > Power Industry (1.00)
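The separate mean error (SME) idea from the abstract, paying equal attention to large and small categories, can be read as a macro-averaged error: compute the mean error separately per class, then average across classes, so a rare attack class weighs as much as the dominant normal class. The paper's exact formula may differ; this is a sketch of the principle on invented numbers.

```python
# SME sketch: per-class mean error averaged across classes, versus a pooled
# mean error that lets the majority class drown out the minority one.
import numpy as np

def separate_mean_error(probs, labels, n_classes):
    per_class = []
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            # per-sample error = 1 - probability assigned to the true class
            per_class.append(np.mean(1.0 - probs[mask, c]))
    return float(np.mean(per_class))

# 95 normal samples predicted well, 5 attack samples predicted poorly
labels = np.array([0] * 95 + [1] * 5)
probs = np.zeros((100, 2))
probs[labels == 0] = [0.9, 0.1]
probs[labels == 1] = [0.7, 0.3]

pooled = float(np.mean(1.0 - probs[np.arange(100), labels]))  # 0.13
sme = separate_mean_error(probs, labels, 2)                   # 0.40
```

The pooled error of 0.13 hides the fact that the attack class is badly misclassified; the SME of 0.40 surfaces it, which is the imbalance-handling behavior the abstract claims.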