Automate Machine Learning Workflows with Pipelines in Python and scikit-learn - Machine Learning Mastery

#artificialintelligence

There are standard workflows in a machine learning project that can be automated. In Python's scikit-learn, Pipelines help to clearly define and automate these workflows. In this post you will discover Pipelines in scikit-learn and how you can automate common machine learning workflows. (Photo by Brian Cantoni, some rights reserved.) There are standard workflows in applied machine learning.
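
The snippet below is a minimal sketch of the kind of workflow the article describes, not code taken from it: standardization and a model chained into a single scikit-learn Pipeline and evaluated with cross-validation. The dataset and estimator choices are illustrative assumptions.

    from sklearn.datasets import load_iris
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # Each step is a (name, transformer/estimator) pair; fit/transform calls are chained.
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])

    # The pipeline behaves like a single estimator, so the scaler is re-fit
    # inside every cross-validation fold, avoiding test-fold leakage.
    scores = cross_val_score(pipeline, X, y, cv=5)
    print("Mean accuracy: %.3f" % scores.mean())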


Modelling and Quantifying Membership Information Leakage in Machine Learning

arXiv.org Machine Learning

Machine learning models have been shown to be vulnerable to membership inference attacks, i.e., inferring whether individuals' data have been used for training models. The lack of understanding about the factors contributing to the success of these attacks motivates the need for modelling membership information leakage using information theory and for investigating properties of machine learning models and training algorithms that can reduce membership information leakage. We use conditional mutual information leakage to measure the amount of information leaked by the trained machine learning model about the presence of an individual in the training dataset. We devise an upper bound for this measure of information leakage using Kullback--Leibler divergence that is more amenable to numerical computation. We prove a direct relationship between the Kullback--Leibler membership information leakage and the probability of success for a hypothesis-testing adversary examining whether a particular data record belongs to the training dataset of a machine learning model. We show that the mutual information leakage is a decreasing function of the training dataset size and the regularization weight. We also prove that, if the sensitivity of the machine learning model (defined in terms of the derivatives of the fitness with respect to the model parameters) is high, more membership information is potentially leaked. This illustrates that complex models, such as deep neural networks, are more susceptible to membership inference attacks than simpler models with fewer degrees of freedom. We show that the amount of membership information leakage is reduced by $\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$ when using Gaussian $(\epsilon,\delta)$-differentially-private additive noise.
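
As a concrete anchor for the last claim, the sketch below shows the classical Gaussian mechanism, whose noise scale grows as sqrt(2 ln(1.25/delta)) / epsilon times the sensitivity, matching the $\mathcal{O}(\log^{1/2}(\delta^{-1})\epsilon^{-1})$ factor quoted above. This is standard differential-privacy machinery, not code from the paper, and the released quantity and sensitivity value are assumptions.

    import numpy as np

    def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
        # Classical (epsilon, delta)-DP Gaussian mechanism (valid for epsilon < 1):
        # noise std = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
        rng = np.random.default_rng() if rng is None else rng
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
        return value + rng.normal(0.0, sigma, size=np.shape(value))

    # Illustrative release of a hypothetical parameter vector with assumed L2-sensitivity 0.1.
    params = np.array([0.7, -1.2, 3.4])
    noisy_params = gaussian_mechanism(params, sensitivity=0.1, epsilon=1.0, delta=1e-5)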


Privacy Leakage Avoidance with Switching Ensembles

arXiv.org Machine Learning

We consider membership inference attacks, one of the main privacy issues in machine learning. These recently developed attacks have been proven successful in determining, with confidence better than a random guess, whether a given sample belongs to the dataset on which the attacked machine learning model was trained. Several approaches have been developed to mitigate this privacy leakage, but the performance tradeoffs of these defensive mechanisms (i.e., the accuracy and utility of the defended machine learning model) are not yet well studied. We propose a novel approach of privacy leakage avoidance with switching ensembles (PASE), which both protects against current membership inference attacks and does so with a very small accuracy penalty, while requiring an acceptable increase in training and inference time. We test our PASE method, along with the current state-of-the-art PATE approach, on three calibration image datasets and analyze their tradeoffs.
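
The abstract does not spell out how the switching is done, so the following is only a hedged sketch of the general shape of a switching ensemble, not the paper's PASE construction: several members trained on disjoint shards of the data, with each query routed to one member by a deterministic switch. All names, shard counts, and the routing rule are assumptions.

    import hashlib
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

    # Train ensemble members on disjoint shards of the training data.
    rng = np.random.default_rng(0)
    shards = np.array_split(rng.permutation(len(X)), 3)
    members = [RandomForestClassifier(n_estimators=50, random_state=i).fit(X[idx], y[idx])
               for i, idx in enumerate(shards)]

    def switching_predict(x):
        # Deterministic switch: a hash of the query picks which member answers it,
        # so no single model answers every query.
        key = int.from_bytes(hashlib.sha256(x.tobytes()).digest()[:4], "little")
        member = members[key % len(members)]
        return member.predict(x.reshape(1, -1))[0]

    print(switching_predict(X[0]))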


Reconciling Utility and Membership Privacy via Knowledge Distillation

arXiv.org Machine Learning

Large-capacity machine learning models are prone to membership inference attacks, in which an adversary aims to infer whether a particular data sample is a member of the target model's training dataset. Such membership inferences can lead to serious privacy violations, as machine learning models are often trained on privacy-sensitive data such as medical records and controversial user opinions. Recently, defenses against membership inference attacks have been developed, in particular based on differential privacy and adversarial regularization; unfortunately, such defenses substantially degrade the classification accuracy of the underlying machine learning models. In this work, we present a new defense against membership inference attacks that preserves the utility of the target machine learning models significantly better than prior defenses. Our defense, called distillation for membership privacy (DMP), leverages knowledge distillation, a model compression technique, to train machine learning models with membership privacy. We use different techniques in DMP to maximize its membership privacy with minor degradation to utility. DMP works effectively against attackers with either whitebox or blackbox access to the target model. We evaluate DMP's performance through extensive experiments on different deep neural networks and various benchmark datasets. We show that DMP provides a significantly better tradeoff between inference resilience and classification performance than state-of-the-art membership inference defenses. For instance, a DMP-trained DenseNet provides a classification accuracy of 65.3\% at a 54.4\% (54.7\%) blackbox (whitebox) membership inference attack accuracy, while an adversarially regularized DenseNet provides a classification accuracy of only 53.7\% at a (much worse) 68.7\% (69.5\%) blackbox (whitebox) membership inference attack accuracy.
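
The sketch below illustrates knowledge distillation in general, the technique DMP builds on, rather than the DMP procedure itself: a teacher is fit on the private data and a student is trained only on an unlabeled transfer set labeled by the teacher. The datasets, the models, and the use of hard (argmax) labels instead of full soft predictions are simplifying assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    X_private, y_private = X[:2000], y[:2000]   # sensitive training data
    X_transfer = X[2000:]                       # unlabeled transfer set

    # Teacher sees the private data and labels directly.
    teacher = RandomForestClassifier(random_state=0).fit(X_private, y_private)

    # Student never touches the private labels: it learns from the teacher's
    # predictions on the transfer set (reduced here to hard argmax labels,
    # whereas distillation typically uses the full soft predictions).
    soft_labels = teacher.predict_proba(X_transfer)
    student = MLPClassifier(max_iter=500, random_state=0).fit(
        X_transfer, soft_labels.argmax(axis=1))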