Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack
Deep neural networks face persistent challenges in defending against backdoor attacks, leading to an ongoing battle between attacks and defenses. While existing backdoor defense strategies have shown promising performance in reducing attack success rates, can we confidently claim that the backdoor threat has truly been eliminated from the model? To answer this question, we re-investigate the characteristics of backdoored models after defense (denoted as defense models). Surprisingly, we find that the original backdoors still exist in defense models derived from existing post-training defense strategies, as measured by a novel metric called the backdoor existence coefficient. This implies that the backdoors merely lie dormant rather than being eliminated. To further verify this finding, we empirically show that these dormant backdoors can be easily re-activated during the inference stage by manipulating the original trigger with a well-designed tiny perturbation obtained through a universal adversarial attack.
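The re-activation idea above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's method: it uses a toy linear classifier standing in for a "defense model" whose weights still encode a dormant trigger direction, and learns a single small universal perturbation (an L-infinity-bounded sign-gradient update, as in common universal adversarial attacks) that, stamped onto the original trigger, pushes inputs toward an attacker-chosen class. All names (`W`, `trigger`, `target_class`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "defense model": a linear classifier whose weights
# still encode a dormant trigger direction (hypothetical example).
W = rng.normal(size=(10, 32))        # 10 classes, 32 input features
trigger = np.zeros(32)
trigger[:4] = 1.0                    # the original (now-ineffective) trigger
target_class = 3

def logits(x):
    return W @ x

# Universal re-activation: learn ONE small perturbation delta that, added
# to the original trigger, pushes ANY input toward the target class.
# For a linear model, d(logit_target)/d(input) is simply W[target_class].
eps = 0.5                            # L_inf perturbation budget
delta = np.zeros(32)
for _ in range(100):
    _x = rng.normal(size=32)         # a random clean input (unused by the
    grad = W[target_class]           # linear gradient, but kept to mirror
    delta = np.clip(delta + 0.05 * np.sign(grad), -eps, eps)  # the attack loop)

# At inference time, stamp the perturbed trigger onto a clean input.
x_clean = rng.normal(size=32)
x_attacked = x_clean + trigger + delta
```

Because the perturbation is universal (input-independent), a single learned `delta` suffices for every clean input, which is what makes such re-activation cheap at inference time.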
ProAPT: Projection of APT Threats with Deep Reinforcement Learning
Dehghan, Motahareh, Sadeghiyan, Babak, Khosravian, Erfan, Moghaddam, Alireza Sedighi, Nooshi, Farshid
The highest level in the Endsley situation awareness model is called projection, in which the status of elements in the environment in the near future is predicted. In cybersecurity situation awareness, projection for an Advanced Persistent Threat (APT) requires predicting the next step of the APT. These threats are constantly changing and becoming more complex. Because supervised and unsupervised learning methods require APT datasets to project the next step of APTs, they are unable to identify unknown APT threats. In reinforcement learning methods, the agent interacts with the environment, and so it can project the next step of both known and unknown APTs. So far, reinforcement learning has not been used to project the next step of APTs. In reinforcement learning, the agent uses the previous states and actions to approximate the best action for the current state. When the number of states and actions is large, the agent employs a neural network to approximate the best action for each state, an approach known as deep reinforcement learning. In this paper, we present a deep reinforcement learning system to project the next step of APTs. Because successive attack steps are related, we employ the Long Short-Term Memory (LSTM) method to approximate the best action for each state. In our proposed system, based on the current situation, we project the next steps of APT threats.
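The projection loop described in the abstract can be illustrated with a deliberately simplified sketch. This is not ProAPT itself: instead of an LSTM value approximator it uses a tabular Q-function over a hypothetical five-step kill chain, where the agent is rewarded for correctly projecting the attack's next step. The step names and hyperparameters are assumptions for the example.

```python
import random

random.seed(0)

# Hypothetical APT kill-chain steps; in this toy environment the attack
# always advances one step, and the agent must learn to project it.
STEPS = ["recon", "delivery", "exploit", "c2", "exfiltration"]

# Q[(current_step, projected_next)] -> value. A tabular stand-in for the
# paper's LSTM approximator: reward +1 when the projection is correct.
Q = {}
alpha, epsilon = 0.5, 0.1

def project(state):
    """Greedy projection of the next APT step from learned Q-values."""
    return max(STEPS, key=lambda a: Q.get((state, a), 0.0))

for episode in range(500):
    for i, state in enumerate(STEPS[:-1]):
        # Epsilon-greedy: mostly exploit the current projection,
        # occasionally explore a random candidate next step.
        action = random.choice(STEPS) if random.random() < epsilon else project(state)
        reward = 1.0 if action == STEPS[i + 1] else 0.0
        q = Q.get((state, action), 0.0)
        Q[(state, action)] = q + alpha * (reward - q)   # one-step Q update
```

Replacing the table `Q` with a sequence model such as an LSTM is what lets the approach exploit the relations between successive attack steps that the paper highlights.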
Attacking the Performance of Machine Learning Systems - Schneier on Security
Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers have so far focused on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems.
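The sponge-example idea can be sketched without any hardware instrumentation. The toy below is an assumption-laden illustration, not the paper's attack: it uses activation density (the fraction of ReLU units firing) as a proxy for energy cost, since sparsity-exploiting accelerators can skip zero activations, and runs black-box hill climbing to find an input that maximises the proxy. The network (`W1`) and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-layer "network": on sparsity-aware accelerators, energy and
# latency scale with the number of non-zero activations, so activation
# density serves as a stand-in for a real power measurement.
W1 = rng.normal(size=(64, 16))

def activation_density(x):
    """Fraction of ReLU units firing -- our energy-cost proxy."""
    return float(np.mean(np.maximum(W1 @ x, 0.0) > 0))

# Black-box hill climbing for a sponge example: mutate the input and keep
# any change that raises the energy proxy.
x = rng.normal(size=16)
start = activation_density(x)
best = start
for _ in range(300):
    cand = x + 0.3 * rng.normal(size=16)
    score = activation_density(cand)
    if score > best:
        x, best = cand, score
```

In the paper's threat model the same search is driven by measured energy or latency rather than this density proxy, and genetic algorithms are one of the black-box search strategies considered.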
Security, Privacy, Trust Speaker Series presents Heng Xu
Heng Xu will give her talk, "The Future of Privacy Research: Lessons from Artificial Intelligence and Machine Learning," as part of the Pamplin College of Business Security, Privacy, and Trust Speaker Series. This event will take place in Pamplin 1045 on April 15, from noon--1 p.m. For a Zoom link, please email sheasw@vt.edu. Xu is a professor of Information Technology and Analytics in the Kogod School of Business at American University, where she also serves as the director of the Kogod Cyber Governance Center. Xu's recent research focuses on cybersecurity management, privacy protection, responsible AI, and fairness in machine learning.
How Metadata Improves Security, Quality, and Transparency
How does Spotify battle against a giant like Apple? With machine learning and AI, Spotify creates value for its users by providing a more personalized and bespoke experience. Let's take a quick look at the layers of aggregate information that are used to enhance their platform: The core data here is in the music – the basic components of songs like the title, artist, and duration. Choosing a song to listen to sets the baseline (and maybe you like it for its bass line). Everything else can be seen as metadata: additional elements about how one listens, how the song is composed, and what other music it sounds like.
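The core-data-versus-metadata layering described above can be made concrete with a small schema sketch. This is a hypothetical model, not Spotify's actual data structures: the core record holds the basic components of a song, while a wrapper carries the behavioural, compositional, and relational metadata layers.

```python
from dataclasses import dataclass, field

@dataclass
class Song:
    """Core data: the basic components of the track itself."""
    title: str
    artist: str
    duration_sec: int

@dataclass
class TrackMetadata:
    """Metadata layers wrapped around the core song record."""
    song: Song
    listens: int = 0                 # behavioural layer: how one listens
    tempo_bpm: float = 0.0           # compositional layer: how it's built
    similar_to: list = field(default_factory=list)  # relational layer

m = TrackMetadata(Song("Example Track", "Example Artist", 215),
                  listens=3, tempo_bpm=120.0,
                  similar_to=["Another Track"])
```

Keeping the layers separate like this is what lets each one be enriched independently, which is the premise of the personalization argument above.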
Key important tech skills that can shape your future
Today's world is changing constantly across sectors, including business and technology. To keep pace, you need to update your skills regardless of your profession: adapting to the latest changes at work and learning new skills can accelerate your career growth and open up new opportunities. If you work in technology, or plan to become a technology professional, there are several important skills you should master. The skills that are most in demand, and that help you stay competitive, can shape your future.
AI Accountability Framework Created to Guide Use of AI in Security
Europol has announced the development of a new AI accountability framework designed to guide the use of artificial intelligence (AI) tools by security practitioners. The move represents a major milestone in the Accountability Principles for Artificial Intelligence (AP4AI) project, which aims to create a practical toolkit that can directly support AI accountability when used in the internal security domain. The "world-first" framework was developed in consultation with experts from 28 countries, representing law enforcement officials, lawyers and prosecutors, data protection and fundamental rights experts, as well as technical and industry experts. The initiative began in 2021 amid growing interest in and use of AI in security, both by internal cybersecurity teams and by law enforcement agencies tackling cybercrime and other offenses. Research conducted by the AP4AI project demonstrated significant public support for this approach: in a survey of more than 5,500 citizens across 30 countries, 87% of respondents agreed or strongly agreed that AI should be used to protect children and vulnerable groups and to investigate criminals and criminal organizations.