Security of Deep Learning Methodologies: Challenges and Opportunities
arXiv.org Artificial Intelligence
University of California, Davis

Abstract--Despite the plethora of studies on security vulnerabilities and defenses of deep learning models, the security aspects of deep learning methodologies, such as transfer learning, have rarely been studied. In this article, we highlight the security challenges and research opportunities of these methodologies, focusing on vulnerabilities and attacks unique to them.

With the widespread adoption of deep neural networks (DNNs), their security challenges have received significant attention from both academia and industry, especially for mission-critical applications such as road sign detection for autonomous vehicles, face recognition in authentication systems, and fraud detection in financial systems. There are three major types of attacks on deep learning models: adversarial attacks, data poisoning, and exploratory attacks. In particular, adversarial attacks, which carefully craft inputs that cause a model to misclassify, have been extensively studied, and many defense mechanisms have been proposed to mitigate them. These attacks are of paramount importance because they are effective, relatively simple to launch, and often transferable from one model to another.

In the literature, there are several survey and review papers on deep learning security and defense mechanisms. In this article, we focus on the security of a much less explored area of machine learning: machine learning methodologies. These methodologies have been widely used to relax the restrictions and assumptions of a typical machine learning process. A typical DNN training process assumes large labeled datasets, access to substantial computational resources, non-private and centralized data, standard training and hyper-parameter tuning, and a fixed task distribution over time. However, these assumptions are often difficult to realize in practice.
Notwithstanding the proliferation of these machine learning methodologies, their security aspects have not been comprehensively analyzed, if studied at all. In this article, we focus on potential attacks, security vulnerabilities, and future directions specific to each learning methodology.
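To make the adversarial-attack mechanism mentioned above concrete, here is a minimal sketch in the style of the fast gradient sign method (FGSM): the input is nudged by a small, bounded amount in the direction that increases the model's loss. The logistic-regression "model", its weights, and the toy input below are hypothetical, chosen purely for illustration; they are not taken from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style perturbation: x_adv = x + eps * sign(dL/dx),
    where L is the binary cross-entropy loss of a logistic model."""
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad = (p - y) * w           # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad)

# Hypothetical toy model and input: the clean point is classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])         # w @ x + b = 1.5 > 0  ->  class 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.9)

print(sigmoid(w @ x + b) > 0.5)      # True  (clean input classified correctly)
print(sigmoid(w @ x_adv + b) > 0.5)  # False (perturbed input misclassified)
```

The perturbation is bounded in the L-infinity norm by `eps`, which is why such inputs can remain close to the original while still flipping the model's decision; the same crafted input often transfers to other models trained on similar data.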
Dec-8-2019