The Attack Surface Is Expanding. Enter Cyber AI

#artificialintelligence

The stakes are now higher because bad actors are engaging in organized crime, akin to cyberwarfare by nation-states. We've seen hospitals targeted during COVID-19 outbreaks, pipelines unable to deliver fuel, and other highly targeted attacks. The bad actors' new paradigm is to present two extortion threats on stolen enterprise data: holding the data hostage and threatening to leak sensitive information, including customer records and intellectual property. Such threats are especially salient for large organizations, which have the money and data desired by cybercriminals. Moreover, the attack surface for such crimes is ever-expanding as trends such as the adoption of 5G mobile networks and work-from-home policies push enterprise technology beyond its traditional borders.


Why companies should use AI to fight cyberattacks

#artificialintelligence

In any debate, there are always at least two sides. That reasoning also applies to whether or not it is a good idea to use artificial intelligence technology to try stemming the advantages of cybercriminals who are already using AI to improve their success ratio. In an email exchange, I asked Ramprakash Ramamoorthy, director of research at ManageEngine, a division of Zoho Corporation, for his thoughts on the matter. Ramamoorthy is firmly on the affirmative side for using AI to fight cybercrime. He said, "The only way to combat cybercriminals using AI-enhanced attacks is to fight fire with fire and employ AI countermeasures."


Towards automation of threat modeling based on a semantic model of attack patterns and weaknesses

arXiv.org Artificial Intelligence

This work considers the challenges of building and using a formal knowledge base (model) that unites the ATT&CK, CAPEC, CWE, and CVE security enumerations. The proposed model can be used to learn relations between attack techniques, attack patterns, weaknesses, and vulnerabilities in order to build various threat landscapes, in particular for threat modeling. The model is created as an ontology from freely available datasets in the OWL and RDF formats. The use of ontologies is an alternative to structural and graph-based approaches to integrating the security enumerations. In this work we consider an approach to threat modeling with the data components of ATT&CK, based on the knowledge base and an ontology-driven threat modeling framework. We also evaluate how the ontological approach to threat modeling can be applied and which challenges it faces.
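
Below is a minimal sketch of how such an OWL/RDF knowledge base could be queried with Python's rdflib. The file path, namespace, and predicate names are hypothetical placeholders, not the identifiers actually used in the paper's ontology.

```python
# Minimal sketch: querying a unified ATT&CK/CAPEC/CWE/CVE ontology with rdflib.
# The file name and the predicate IRIs below are assumed placeholders, not the
# names used by the paper's knowledge base.
from rdflib import Graph

g = Graph()
g.parse("threat_kb.owl", format="xml")  # hypothetical path to the OWL/RDF knowledge base

# SPARQL query: for every ATT&CK technique, list related CAPEC attack patterns
# and the CWE weaknesses those patterns target (predicate names are assumptions).
query = """
PREFIX kb: <http://example.org/threat-kb#>
SELECT ?technique ?pattern ?weakness WHERE {
    ?pattern  kb:relatedToTechnique ?technique .
    ?pattern  kb:targetsWeakness    ?weakness .
}
"""
for technique, pattern, weakness in g.query(query):
    print(technique, "<-", pattern, "->", weakness)
```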


A Hybrid Approach for an Interpretable and Explainable Intrusion Detection System

arXiv.org Artificial Intelligence

Cybersecurity has been a concern for quite a while now. In recent years, cyberattacks have been increasing in size and complexity, fueled by significant advances in technology. Protecting the systems and data crucial for business continuity is now an unavoidable necessity. Hence, many intrusion detection systems have been created in an attempt to mitigate these threats and contribute to timelier detection. This work proposes an interpretable and explainable hybrid intrusion detection system, which makes use of artificial intelligence methods to achieve better and more long-lasting security. The system combines rules written by experts with dynamic knowledge continuously generated by a decision tree algorithm as new evidence emerges from network activity.
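
A minimal sketch of the hybrid idea follows: expert-written rules are consulted first, and a decision tree trained on labelled flows handles anything the rules do not cover. The features, thresholds, and toy data are illustrative assumptions, not the system's actual rule set or dataset.

```python
# Hybrid IDS sketch: static expert rules first, a learned decision tree as fallback.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy flow features: [duration_s, bytes_sent, failed_logins]; labels: 0 benign, 1 intrusion.
X_train = np.array([[0.2, 500, 0], [30.0, 1e6, 0], [0.1, 200, 6], [5.0, 8e4, 1]])
y_train = np.array([0, 0, 1, 1])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

def expert_rules(flow):
    """Hand-written rules; return a verdict or None to defer to the tree."""
    duration, bytes_sent, failed_logins = flow
    if failed_logins >= 5:                      # brute-force heuristic (assumed threshold)
        return 1
    if bytes_sent < 100 and duration < 0.05:    # trivially small flow, treat as benign
        return 0
    return None

def classify(flow):
    verdict = expert_rules(flow)
    if verdict is not None:
        return verdict, "rule"
    return int(tree.predict([flow])[0]), "tree"

print(classify([0.1, 150, 7]))   # caught by the expert rule
print(classify([12.0, 5e5, 0]))  # deferred to the learned tree
```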


HoneyCar: A Framework to Configure Honeypot Vulnerabilities on the Internet of Vehicles

arXiv.org Artificial Intelligence

The Internet of Vehicles (IoV), whereby interconnected vehicles communicate with each other and with road infrastructure on a common network, has promising socio-economic benefits but also poses new cyber-physical threats. Data on vehicular attackers can be realistically gathered through cyber threat intelligence using systems like honeypots. Admittedly, configuring honeypots introduces a trade-off between the level of honeypot-attacker interactions and any incurred overheads and costs for implementing and monitoring these honeypots. We argue that effective deception can be achieved through strategically configuring the honeypots to represent components of the IoV and engage attackers to collect cyber threat intelligence. In this paper, we present HoneyCar, a novel decision support framework for honeypot deception in IoV. HoneyCar builds upon a repository of known vulnerabilities of the autonomous and connected vehicles found in the Common Vulnerabilities and Exposures (CVE) data within the National Vulnerability Database (NVD) to compute optimal honeypot configuration strategies. By taking a game-theoretic approach, we model the adversarial interaction as a repeated imperfect-information zero-sum game in which the IoV network administrator chooses a set of vulnerabilities to offer in a honeypot and a strategic attacker chooses a vulnerability of the IoV to exploit under uncertainty. Our investigation is substantiated by examining two different versions of the game, with and without the re-configuration cost, to empower the network administrator to determine optimal honeypot configurations. We evaluate HoneyCar in a realistic use case to support decision makers in determining optimal honeypot configuration strategies for strategic deployment in IoV.
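
As an illustration of the game-theoretic core, the sketch below solves a one-shot zero-sum game for the defender's optimal mixed strategy over honeypot configurations using linear programming. The 3x3 payoff matrix is invented for the example and does not reflect HoneyCar's CVE-derived data; the sketch also omits the repeated-game and re-configuration-cost aspects of the paper.

```python
# Zero-sum game sketch: optimal defender (honeypot) mixed strategy via LP.
import numpy as np
from scipy.optimize import linprog

# Rows: honeypot configurations; columns: vulnerability the attacker exploits.
# Entries: defender's payoff (illustrative values only).
A = np.array([
    [ 2, -1,  0],
    [-1,  3, -2],
    [ 0, -2,  4],
], dtype=float)
m, n = A.shape

# Variables: x_0..x_{m-1} (defender's mixed strategy) and v (game value).
c = np.zeros(m + 1)
c[-1] = -1.0                                  # maximize v  ==  minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - sum_i x_i * A[i, j] <= 0 for all j
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])          # strategy probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender strategy:", np.round(res.x[:m], 3), "game value:", round(res.x[-1], 3))
```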


Unsolved Problems in ML Safety

arXiv.org Artificial Intelligence

Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML Safety and refine the technical problems that the field needs to address. We present four problems ready for research, namely withstanding hazards ("Robustness"), identifying hazards ("Monitoring"), steering ML systems ("Alignment"), and reducing risks to how ML systems are handled ("External Safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.


The Threat of Offensive AI to Organizations

arXiv.org Artificial Intelligence

AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI (such as machine learning) to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today and what will be their impact on the future? In this survey, we explore the threat of offensive AI on organizations. First, we present the background and discuss how AI changes the adversary's methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a user study spanning industry and academia, we rank the AI threats and provide insights on the adversaries.


MTH-IDS: A Multi-Tiered Hybrid Intrusion Detection System for Internet of Vehicles

arXiv.org Artificial Intelligence

Modern vehicles, including connected and autonomous vehicles, involve many electronic control units connected through intra-vehicle networks to implement various functionalities and perform actions. Modern vehicles are also connected to external networks through vehicle-to-everything technologies, enabling their communications with other vehicles, infrastructure, and smart devices. However, the increasing functionality and connectivity of modern vehicles also increase their vulnerability to cyber-attacks targeting both intra-vehicle and external networks due to the large attack surfaces. To secure vehicular networks, many researchers have focused on developing intrusion detection systems (IDSs) that capitalize on machine learning methods to detect malicious cyber-attacks. In this paper, the vulnerabilities of intra-vehicle and external networks are discussed, and a multi-tiered hybrid IDS that incorporates a signature-based IDS and an anomaly-based IDS is proposed to detect both known and unknown attacks on vehicular networks. Experimental results show that the proposed system can detect various types of known attacks with 99.99% accuracy on the CAN-intrusion-dataset, representing intra-vehicle network data, and 99.88% accuracy on the CICIDS2017 dataset, representing external vehicular network data. For zero-day attack detection, the proposed system achieves high F1-scores of 0.963 and 0.800 on the two datasets, respectively. The average processing time of each data packet on a vehicle-level machine is less than 0.6 ms, which shows the feasibility of implementing the proposed system in real-time vehicle systems. This emphasizes the effectiveness and efficiency of the proposed IDS.
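
The multi-tier structure can be sketched as follows: a signature tier flags packets matching known-attack signatures, and an anomaly tier trained only on benign traffic screens the remainder for unknown attacks. The signature set, features, and generated data are assumptions for illustration, not the paper's trained models or datasets.

```python
# Two-tier IDS sketch: signature matching first, anomaly detection second.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical (CAN ID, payload length) pairs treated as known-attack signatures.
KNOWN_BAD_SIGNATURES = {(0x7DF, 8), (0x0C1, 2)}

# Synthetic benign traffic features: [inter-arrival time (us), payload length].
rng = np.random.default_rng(0)
benign = rng.normal(loc=[100, 4], scale=[10, 0.5], size=(500, 2))
anomaly_tier = IsolationForest(random_state=0).fit(benign)

def inspect(packet_features, signature):
    if signature in KNOWN_BAD_SIGNATURES:                  # tier 1: known attack
        return "known attack"
    if anomaly_tier.predict([packet_features])[0] == -1:   # tier 2: statistical outlier
        return "possible unknown attack"
    return "benign"

print(inspect([102, 4], (0x7DF, 8)))   # caught by the signature tier
print(inspect([900, 64], (0x123, 8)))  # flagged by the anomaly tier
print(inspect([98, 4], (0x123, 4)))    # passes both tiers
```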


Cybersecurity 101: Protect your privacy from hackers, spies, and the government

#artificialintelligence

"I have nothing to hide" was once the standard response to surveillance programs utilizing cameras, border checks, and casual questioning by law enforcement. Privacy used to be considered a concept generally respected in many countries with a few changes to rules and regulations here and there often made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- compliments of the National Security Agency (NSA) and Edward Snowden's whistleblowing -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common factor of our daily lives, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to mobile devices and the Internet of Things (IoT) products installed in our homes have the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept and can encompass such a wide range of information that it is worth briefly breaking down different collections before examining how each area is relevant to your privacy and security. A roundup of the best software and apps for Windows and Mac computers, as well as iOS and Android devices, to keep yourself safe from malware and viruses. Known as PII, this can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans using your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.


Cybersecurity in Healthcare: How to Prevent Cybercrime

#artificialintelligence

Because COVID-19 made it difficult for consumers to venture out and run their usual errands, financial institutions (FIs) needed to find other ways to provide their services. The only way for them to really keep up with the speedy digitization was through the implementation of AI systems. To further discuss all things AI, PaymentsJournal sat down with Sudhir Jha, Mastercard SVP and head of Brighterion, and Tim Sloane, VP of Payments Innovation at Mercator Advisory Group. Jha believes that two fundamentally big changes occurred in banking during the pandemic: the environment began constantly shifting, and person-to-person interactions were abruptly limited. "Every week, every month, there were different ways that we were trying to react to the pandemic," explained Jha.