mlmd
Masked Language Model Based Textual Adversarial Example Detection
Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, Leo Yu Zhang
Adversarial attacks are a serious threat to the reliable deployment of machine learning models in safety-critical applications. By slightly modifying inputs, they can mislead current models into incorrect predictions. Recently, substantial work has shown that adversarial examples tend to deviate from the underlying data manifold of normal examples, whereas pre-trained masked language models can fit the manifold of normal NLP data. To explore how masked language models can be used for adversarial detection, we propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD), which produces clearly distinguishable signals between normal and adversarial examples by examining the manifold changes induced by the masked language model. MLMD features plug-and-play usage (i.e., no need to retrain the victim model) for adversarial defense, and it is agnostic to the classification task, the victim model's architecture, and the to-be-defended attack method. We evaluate MLMD on various benchmark textual datasets, widely studied machine learning models, and state-of-the-art (SOTA) adversarial attacks (in total $3 \times 4 \times 4 = 48$ settings). Experimental results show that MLMD achieves strong performance, with detection accuracy up to 0.984, 0.967, and 0.901 on the AG-NEWS, IMDB, and SST-2 datasets, respectively. Additionally, MLMD is superior, or at least comparable, to SOTA detection defenses in detection accuracy and F1 score. Among the many defenses based on the off-manifold assumption of adversarial examples, this work offers a new angle for capturing the manifold change. The code for this work is openly accessible at \url{https://github.com/mlmddetection/MLMDdetection}.
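The abstract describes the core signal informally: mask parts of an input, let a masked language model reconstruct it, and check how strongly the victim model's prediction shifts; adversarial inputs, being off-manifold, tend to shift more. Below is a minimal, hedged sketch of that idea, not the paper's actual implementation. It assumes a Hugging Face `fill-mask` pipeline as the masked language model, and `victim_predict` is a hypothetical stand-in for whatever classifier is being defended; the threshold value is purely illustrative.

```python
# Sketch of a mask-and-reconstruct detection signal (assumptions noted above).
from transformers import pipeline

# Masked language model used to pull inputs back toward the normal-data manifold.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
MASK = unmasker.tokenizer.mask_token  # "[MASK]" for BERT


def victim_predict(text: str) -> int:
    """Hypothetical victim classifier; replace with the model under attack."""
    raise NotImplementedError


def manifold_change_score(text: str) -> float:
    """Mask each word in turn, reconstruct it with the MLM, and measure how
    often the victim model's prediction changes on the reconstructed text.
    Adversarial examples are expected to yield a higher score."""
    words = text.split()
    original_label = victim_predict(text)
    flips = 0
    for i in range(len(words)):
        masked = " ".join(words[:i] + [MASK] + words[i + 1:])
        # Take the MLM's top suggestion for the masked position.
        filled = unmasker(masked, top_k=1)[0]["sequence"]
        if victim_predict(filled) != original_label:
            flips += 1
    return flips / max(len(words), 1)


def is_adversarial(text: str, threshold: float = 0.3) -> bool:
    # Simple thresholded detector on top of the score; the paper's detector
    # may use a learned classifier over richer features instead.
    return manifold_change_score(text) >= threshold
```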
The Most Crucial Component in an ML Pipeline is Invisible - Container Journal
The process of building and training machine learning models is always in the spotlight. There is a lot of talk about different neural network architectures, or new frameworks facilitating the idea-to-implementation transition. Moreover, many developers are putting a lot of effort into tools that take care of the peripherals: data management and validation, resource management, service infrastructure, and so on. Despite the AI craze, most projects never make it to production. In 2015, Google published a seminal paper, "Hidden Technical Debt in Machine Learning Systems."
Does your Machine Learning pipeline have a pulse?
The process of building and training machine learning models is always in the spotlight. There is a lot of talk about different neural network architectures, or new frameworks facilitating the idea-to-implementation transition. While these are the heart of an ML engine, the circulatory system that moves nutrients around and connects everything is often missing. But what comprises this system? What gives the pipeline its pulse? The most important component in an ML pipeline works silently in the background and provides the glue that binds everything together.