Law and Adversarial Machine Learning

arXiv.org Machine Learning

When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond? Through scenarios grounded in the adversarial ML literature, we explore how aspects of computer crime, copyright, and tort law interface with perturbation, poisoning, model stealing, and model inversion attacks, showing that some attacks are more likely to result in liability than others. We end with a call to action for ML researchers: invest in transparent benchmarks of attacks and defenses; architect ML systems with forensics in mind; and, finally, think more about adversarial machine learning in the context of civil liberties. The paper is aimed at ML researchers with no legal background.
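
As a concrete illustration of the first attack class the abstract names, below is a minimal sketch of a perturbation attack using the fast gradient sign method (FGSM), a standard textbook example rather than anything specific to the paper; the PyTorch setup, function name, and epsilon value are illustrative assumptions.

    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Fast gradient sign method: shift each input feature by
        epsilon in the direction that most increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        # One signed gradient step, then clamp back to valid pixel range.
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()

Fed back to the same classifier, such perturbed inputs typically look unchanged to a human yet are misclassified, which is part of what makes attribution and liability hard to establish.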


Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward

arXiv.org Machine Learning

Connected and autonomous vehicles (CAVs) will form the backbone of next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation, fuelled by concomitant advances in machine learning (ML) and wireless communications, will enable a future vehicular ecosystem that is richer in features and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision is not merely a nuisance but can cost lives. In this paper, we present an in-depth overview of the challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present the potential security issues associated with the adoption of ML methods. In particular, we focus on adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.
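
The excerpt does not specify which defense the paper outlines; a common baseline in this literature is adversarial training, sketched below under the assumption of a PyTorch image classifier (e.g., a road-sign detector). The function name and epsilon value are illustrative, not from the paper.

    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
        """One epoch of FGSM adversarial training: the model is updated
        on worst-case perturbed inputs rather than clean ones."""
        model.train()
        for x, y in loader:
            # Craft a perturbed batch against the current model state.
            x_adv = x.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x_adv), y).backward()
            x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
            # Standard supervised update, but on the adversarial batch.
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            optimizer.step()

Training against perturbations of the current model hardens the decision boundary, at some cost in clean accuracy, which is why such defenses are usually evaluated in multiple settings.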


Council Post: Five Ways You Can Protect Your Machine Learning Systems

#artificialintelligence

Since its advent, machine learning has altered the world of technology one industry vertical at a time. From the predictive analytics engines that generate recommendations to the artificial intelligence used in a myriad of antivirus applications, this is all machine learning at play. But what happens when these systems get confused or, worse, are attacked and purposefully manipulated into making wrong decisions? Thus, as with any other technology, it is crucial to analyze machine learning's advancing canvas and the potential risks of misuse that come with it. First, let's answer the question, "What is machine learning?"


Artificial Intelligence and International Security

#artificialintelligence

There are a number of direct applications of AI relevant to national security, both in the United States and elsewhere. Kevin Kelly notes that in the private sector "the business plans of the next 10,000 startups are easy to forecast: Take X and add AI."[1] There is a similarly broad range of applications for AI in national security. Included below are some examples in cybersecurity, information security, economic and financial tools of statecraft, defense, intelligence, homeland security, diplomacy, and development. This is not intended as a comprehensive list of all possible uses of AI in these fields.


Security of Deep Learning Methodologies: Challenges and Opportunities

arXiv.org Artificial Intelligence

Despite the plethora of studies on the security vulnerabilities and defenses of deep learning models, the security of deep learning methodologies, such as transfer learning, has rarely been studied. In this article, we highlight the security challenges and research opportunities of these methodologies, focusing on the vulnerabilities and attacks unique to them. With the widespread adoption of deep neural networks (DNNs), their security challenges have received significant attention from both academia and industry, especially for mission-critical applications such as road sign detection for autonomous vehicles, face recognition in authentication systems, and fraud detection in financial systems. There are three major types of attacks on deep learning models: adversarial attacks, data poisoning, and exploratory attacks. In particular, adversarial attacks, which aim to carefully craft inputs that cause the model to misclassify, have been extensively studied, and many defense mechanisms have been proposed to mitigate them. These attacks are of paramount importance because they are effective, moderately simple to launch, and often transferable from one model to another. The literature contains several survey and review papers on deep learning security and defense mechanisms. In this article, we instead focus on the security of a much less explored area: machine learning methodologies. These methodologies are widely used to relax the restrictions and assumptions of a typical machine learning process. A typical DNN training process assumes large labeled datasets, access to high computational resources, non-private and centralized data, standard training and hyper-parameter tuning, and a fixed task distribution over time. However, these assumptions are often difficult to realize in practice. Notwithstanding the proliferation of these machine learning methodologies, their security aspects have not been comprehensively analyzed, if studied at all. In this article, we focus on the potential attacks, security vulnerabilities, and future directions specific to each learning methodology.
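
Of the three attack types named above, data poisoning is the easiest to make concrete. Below is a hedged sketch of label flipping, the simplest poisoning strategy; the function name, fraction, and class count are illustrative assumptions, not from the article.

    import numpy as np

    def flip_labels(y, fraction=0.1, num_classes=10, seed=0):
        """Corrupt a fraction of training labels to random *wrong*
        classes: the simplest form of a data-poisoning attack."""
        rng = np.random.default_rng(seed)
        y_poisoned = y.copy()
        n = int(fraction * len(y))
        idx = rng.choice(len(y), size=n, replace=False)
        # Adding a nonzero offset modulo num_classes guarantees the
        # new label differs from the original one.
        offsets = rng.integers(1, num_classes, size=n)
        y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
        return y_poisoned

Training on the poisoned labels degrades the learned decision boundary roughly in proportion to the flipped fraction, and methodologies that ingest third-party data or pretrained components, such as transfer learning, widen exactly this attack surface.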