Cybersecurity experts say that machine learning and artificial intelligence have affected cybersecurity both positively and negatively. Although relatively new, AI security tools are often used to distinguish "good" from "bad" behavior by comparing the conduct of entities across an environment with that of entities in similar environments. AI algorithms are trained on data so that they can respond appropriately to different situations. Artificial intelligence is helping cybersecurity accelerate its technological progress, and vendors are courting security experts, including CISOs, with products purporting to use AI to dramatically improve the accuracy and speed of both threat detection and response.
Cyber-Physical Systems (CPS) are characterized by their ability to integrate the physical and information (cyber) worlds. Their deployment in critical infrastructure has demonstrated a potential to transform the world. However, harnessing this potential is limited by their critical nature and the far-reaching effects of cyber attacks on humans, infrastructure, and the environment. One source of cyber concern in CPS arises from sending information from sensors to actuators over wireless communication media, which widens the attack surface. Traditionally, CPS security has been investigated from the perspective of preventing intruders from gaining access to the system using cryptography and other access control techniques, and most research has accordingly focused on the detection of attacks in CPS. However, in a world of increasing adversaries, it is becoming more difficult to totally prevent adversarial attacks on CPS, hence the need to focus on making CPS resilient. Resilient CPS are designed to withstand disruptions and remain functional despite the operation of adversaries. One of the dominant methodologies explored for building resilient CPS depends on machine learning (ML) algorithms. However, drawing on recent research in adversarial ML, we posit that ML algorithms for securing CPS must themselves be resilient. This paper therefore comprehensively surveys the interactions between resilient CPS using ML and resilient ML when applied in CPS, and concludes with a number of research trends and promising future research directions. With this paper, readers can gain a thorough understanding of recent advances in ML-based security and securing ML for CPS, as well as countermeasures and research trends in this active research area.
Cyber-defense systems are being developed to automatically ingest Cyber Threat Intelligence (CTI) that contains semi-structured data and/or text to populate knowledge graphs. A potential risk is that fake CTI can be generated and spread through Open-Source Intelligence (OSINT) communities or on the Web to effect a data poisoning attack on these systems. Adversaries can use fake CTI examples as training input to subvert cyber-defense systems, forcing the model to learn incorrect inputs that serve their malicious needs. In this paper, we automatically generate fake CTI text descriptions using transformers. We show that, given an initial prompt sentence, a fine-tuned public language model such as GPT-2 can generate plausible CTI text capable of corrupting cyber-defense systems. We utilize the generated fake CTI text to perform a data poisoning attack on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The poisoning attack introduced adverse impacts such as incorrect reasoning outputs, representation poisoning, and corruption of other dependent AI-based cyber-defense systems. We evaluate with traditional approaches and conduct a human evaluation study with cybersecurity professionals and threat hunters. In the study, professional threat hunters were just as likely to judge our generated fake CTI to be true.
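The poisoning mechanism described above can be illustrated on a much smaller scale than a knowledge graph. The sketch below (all data, labels, and the classifier are invented for illustration; the paper's actual pipeline uses GPT-2 and a CKG) trains a tiny bag-of-words Naive Bayes model on CTI-like sentences, then injects fabricated "fake CTI" into the training set and shows the model's verdict on a real indicator flipping:

```python
from collections import Counter
import math

# Toy illustration of CTI data poisoning. A bag-of-words Naive Bayes
# classifier labels CTI sentences "malicious" or "benign"; injecting fake
# CTI into the training set flips the verdict on a genuine indicator.

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and priors."""
    counts = {"malicious": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Naive Bayes with add-one smoothing; returns the more likely label."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, wc in counts.items():
        n = sum(wc.values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            score += math.log((wc[word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

clean = [
    ("powershell spawned encoded payload", "malicious"),
    ("ransomware encrypted the file share", "malicious"),
    ("scheduled backup completed normally", "benign"),
    ("user logged in during business hours", "benign"),
]
# Fake CTI claiming the attacker tooling is routine maintenance.
poison = [("powershell encoded payload is routine maintenance", "benign")] * 3

counts, totals = train(clean)
print(predict(counts, totals, "powershell launched an encoded payload"))  # malicious

counts_p, totals_p = train(clean + poison)
print(predict(counts_p, totals_p, "powershell launched an encoded payload"))  # benign
```

A handful of poisoned examples is enough here because the model weighs word co-occurrence statistics directly; larger systems degrade more gradually, but the failure mode is the same.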
Healthcare cybersecurity is in triage mode. As systems are stretched to the limit by COVID-19 and technology becomes an essential part of everyday patient interactions, hospital and healthcare IT departments have been left to figure out how to make it all work together, safely and securely. Most notably, the connectivity of everything from thermometers to defibrillators is exponentially increasing the attack surface, presenting vulnerabilities IT professionals might not even know are on their networks. The result has been newfound attention from ransomware operators and other malicious actors circling and waiting for the right time to strike. Rather than feeling overwhelmed in the current cybersecurity environment, it's important for healthcare and hospital IT teams to look at securing their networks as a constant work in progress, rather than a single project with a start and end point, according to experts Jeff Horne from Ordr and G. Anthony Reina, who participated in Threatpost's November webinar on Healthcare Cybersecurity. "This is a proactive space," Reina said. "This is something where you can't just be reactive. You actually have to be going out there, searching for those sorts of things. Even on the technologies that we have, we're proactive about saying that security is an evolving technology. It's not something where we're going to be finished." Healthcare IT pros, and security professionals more generally, also need to get a firm handle on what lives on their networks and its potential level of exposure. The fine-tuned expertise built into connected healthcare machines, along with the enormous cost of upgrading hardware in many instances, leaves holes on a network that simply cannot be patched. "From an IT perspective, you cannot manage what you can't see, and from a security perspective, you can't control and protect what you don't know," Horne said.
Threatpost's experts explained how healthcare organizations can get out of triage mode and ahead of the next attack. The webinar covers everything from bread-and-butter patching to a brand-new secure data model that applies federated learning to functions as critical as diagnosing a brain tumor. A lightly edited transcript of the event follows below. Thank you so much for joining. We have an excellent conversation planned on a critically important topic, healthcare cybersecurity. My name is Becky Bracken, and I'll be your host for today's discussion. Before we get started, I want to remind you there's a widget in the upper right-hand corner of your screen where you can submit questions to our panelists at any time. We encourage you to do that. We'll leave time to answer questions, and we want to make sure we're covering the topics most interesting to you. Let's introduce our panelists. First we have Jeff Horne. Jeff is currently the CSO at Ordr, and his prior roles include SpaceX.
Zero-trust cybersecurity architectures are in vogue, and everyone in federal IT security seems to be embracing them. What underpins the move to zero trust is automation. Experts inside and outside the government are pushing for accelerated adoption of cybersecurity automation. Automation tools can identify when a user is accessing a network or a piece of data, and can also automate responses by sending alerts to analysts. This could save an agency time and money and allow cybersecurity analysts to focus on analyzing data and developing new security strategies.
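The kind of rule-based automation described above can be sketched as a small policy check over access events. The event fields, policy thresholds, and alert wording below are invented for illustration, not taken from any specific agency tool:

```python
from datetime import datetime

# Hypothetical access policy: business hours only, MFA required on
# sensitive paths. Real deployments would pull these from configuration.
POLICY = {
    "allowed_hours": range(7, 19),  # 07:00-18:59 local time
    "sensitive_paths": {"/hr/records", "/finance/ledger"},
}

def check_event(event):
    """Return a list of alert strings for a single access event."""
    alerts = []
    ts = datetime.fromisoformat(event["time"])
    if ts.hour not in POLICY["allowed_hours"]:
        alerts.append(f"off-hours access by {event['user']}")
    if event["path"] in POLICY["sensitive_paths"] and not event.get("mfa", False):
        alerts.append(f"sensitive path {event['path']} accessed without MFA")
    return alerts

events = [
    {"user": "alice", "time": "2021-03-04T10:15:00", "path": "/wiki/home", "mfa": True},
    {"user": "bob", "time": "2021-03-04T02:30:00", "path": "/finance/ledger", "mfa": False},
]
for e in events:
    for alert in check_event(e):
        print(alert)  # only bob's event triggers alerts
```

Routing the returned alerts to an analyst queue, rather than blocking outright, matches the article's point: automation handles the detection and notification so analysts can spend their time on analysis and strategy.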
We recently launched Elastic Security, combining the threat hunting and analytics tools from Elastic SIEM with the prevention and response features of Elastic Endpoint Security. This combined solution focuses on detecting and flexibly responding to security threats, with machine learning providing core capabilities for real-time protections, detections, and interactive hunting. But why are machine learning tools so important in information security? How is machine learning being applied? In this first of a two-part blog series, we'll motivate the "why" and explore the "how," highlighting malware prevention via supervised machine learning in Elastic Endpoint Security.
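To make the "how" of supervised malware prevention concrete, here is a minimal sketch of the general technique: a classifier trained on labeled feature vectors extracted from files. The features, data, and perceptron learner are invented for illustration; Elastic Endpoint Security's actual model uses far richer features and a different learning algorithm.

```python
# Supervised learning sketch: a perceptron separates toy "file" feature
# vectors labeled malware (1) or benign (0).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 1 = malware, 0 = benign."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # update weights only on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return "malware" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "benign"

# Invented features: [byte entropy (0-8), packed flag, suspicious API import count]
train_x = [
    [7.8, 1, 12], [7.2, 1, 9], [7.9, 1, 15],  # malware-like: high entropy, packed
    [4.1, 0, 1], [3.5, 0, 0], [5.0, 0, 2],    # benign-like
]
train_y = [1, 1, 1, 0, 0, 0]

w, b = train_perceptron(train_x, train_y)
print(classify(w, b, [7.6, 1, 11]))  # malware
print(classify(w, b, [4.0, 0, 1]))   # benign
```

The point of the supervised approach is that the model generalizes from labeled examples to unseen files, which is why it can block never-before-seen malware that shares statistical traits with known samples.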
A cross-disciplinary team of machine learning, security, policy, and law experts says inconsistent court interpretations of an anti-hacking law have a chilling effect on adversarial machine learning research and cybersecurity. At issue is a portion of the Computer Fraud and Abuse Act (CFAA). A ruling deciding how that part of the law is interpreted could shape the future of cybersecurity and adversarial machine learning. If the U.S. Supreme Court takes up an appeal case based on the CFAA next year, the researchers predict that the court will ultimately choose a narrow definition of the clause related to "exceeds authorized access" instead of siding with circuit courts that have taken a broad reading of the law. One circuit court ruling on the subject concluded that a broad view would turn millions of people into unsuspecting criminals.
In May of 2017, a nasty cyber attack hit more than 200,000 computers in 150 countries over the course of just a few days. Dubbed "WannaCry," it exploited a vulnerability that was first discovered by the National Security Agency (NSA) and later stolen and disseminated online. It worked like this: After successfully breaching a computer, WannaCry encrypted that computer's files and rendered them unreadable. In order to recover their imprisoned material, targets of the attack were told they needed to purchase special decryption software. Guess who sold that software? The so-called "ransomware" siege affected individuals as well as large organizations, including the U.K.'s National Health Service, Russian banks, Chinese schools, Spanish telecom giant Telefonica and the U.S.-based delivery service FedEx.
Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', with cyber attacks having a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, this had increased to an exposure of 4.1 billion records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and such a move will increase threats to digital security and increase the volume and sophistication of cyber attacks.
The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)2, there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce would need to grow by roughly 145% just to meet current global demand. The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk List. Given this situation, chief information security officers looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.