Collaborating Authors

Reconstructing Training Data from Diverse ML Models by Ensemble Inversion

arXiv.org Artificial Intelligence

Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model in an attempt to infer sensitive information about its original training data, has attracted increasing research attention. During MI, the trained model under attack (MUA) is usually frozen and used to guide the training of a generator, such as a Generative Adversarial Network (GAN), to reconstruct the distribution of the model's original training data. A successful attack may leak original training samples and, if the training data contains Personally Identifiable Information (PII), put the privacy of dataset subjects at risk. An in-depth investigation of the potential of MI techniques is therefore crucial for the development of corresponding defenses. High-quality reconstruction of training data from a single model is challenging, and the existing MI literature does not explore targeting multiple models jointly, which could give the adversary additional information and diverse perspectives. We propose an ensemble inversion technique that estimates the distribution of the original training data by training a generator constrained by an ensemble (or set) of trained models that share subjects or entities. Compared to inverting a single ML model, this technique yields noticeably higher-quality generated samples with distinguishable features of the dataset entities. We achieve high-quality results without any auxiliary dataset and show how utilizing an auxiliary dataset similar to the presumed training data improves the results further. We thoroughly investigate the impact of model diversity in the ensemble and employ additional constraints that encourage sharp predictions and high activations for the reconstructed samples, leading to more accurate reconstruction of training images.
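The abstract describes the core mechanism, a generator trained against a frozen ensemble with regularizers encouraging sharp predictions and high activations, but not the implementation. The following is a minimal illustrative sketch of one generator update under those constraints; the loss weights, the specific regularizers, and the function names are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of ensemble inversion (not the paper's implementation).
# `generator` maps latent codes to images; `ensemble` is a list of frozen,
# pre-trained classifiers sharing the same label space.

def ensemble_inversion_step(generator, ensemble, optimizer, target_class,
                            batch_size=64, latent_dim=128,
                            w_entropy=0.1, w_activation=0.01, device="cpu"):
    z = torch.randn(batch_size, latent_dim, device=device)
    x = generator(z)                                  # reconstructed samples
    labels = torch.full((batch_size,), target_class, device=device, dtype=torch.long)

    ce_loss, entropy_loss, act_loss = 0.0, 0.0, 0.0
    for model in ensemble:                            # each frozen model "votes"
        logits = model(x)
        probs = F.softmax(logits, dim=1)
        ce_loss += F.cross_entropy(logits, labels)    # match the target class
        # Minimizing prediction entropy encourages sharp predictions.
        entropy_loss += -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
        # Maximizing the target-class logit encourages high activations.
        act_loss += -logits.gather(1, labels.unsqueeze(1)).mean()

    loss = (ce_loss + w_entropy * entropy_loss + w_activation * act_loss) / len(ensemble)

    optimizer.zero_grad()
    loss.backward()   # gradients flow through the frozen ensemble into the generator
    optimizer.step()
    return loss.item()
```

In the paper the generator is GAN-style and may use additional image priors or an auxiliary dataset; the sketch only conveys how the ensemble jointly constrains the generator.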


Integration of adversarial autoencoders with residual dense convolutional networks for inversion of solute transport in non-Gaussian conductivity fields

arXiv.org Machine Learning

Characterization of a non-Gaussian channelized conductivity field in subsurface flow and transport modeling through inverse modeling usually leads to a high-dimensional inverse problem and requires repeated evaluations of the forward model. In this study, we develop a convolutional adversarial autoencoder (CAAE) network to parameterize the high-dimensional non-Gaussian conductivity fields with a low-dimensional latent representation, and a deep residual dense convolutional network (DRDCN) to efficiently construct a surrogate of the forward model. Both networks are based on a multilevel residual learning architecture called the residual-in-residual dense block. The multilevel residual learning strategy and the dense connections within each block ease the training of deep networks, enabling us to efficiently build deeper networks with substantially increased capacity for approximating mappings of very high complexity. The CAAE and DRDCN networks are incorporated into an iterative local updating ensemble smoother to formulate an inversion framework. The integrated method is demonstrated on a synthetic solute transport model. Results indicate that CAAE is a robust parameterization method for channelized conductivity fields with Gaussian conductivities within each facies. The DRDCN network obtains an accurate surrogate of the forward model with high-dimensional and highly complex concentration fields using relatively limited training data. Together, the CAAE parameterization and the DRDCN surrogate significantly reduce the number of forward model runs required to achieve accurate inversion results.
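The framework couples a decoder (CAAE), a forward surrogate (DRDCN), and an iterative ensemble smoother. Below is a minimal sketch of a single ensemble-smoother update performed in the CAAE latent space; `decode` and `surrogate` are placeholder names for the two trained networks, and the global Kalman-type update shown is a standard ensemble-smoother form, not the paper's exact local-updating variant.

```python
import numpy as np

# Illustrative sketch of one ensemble-smoother update in CAAE latent space.
# decode(z): latent vector -> conductivity field (CAAE decoder, placeholder)
# surrogate(k): conductivity field -> predicted observations (DRDCN, placeholder)

def es_update(Z, d_obs, decode, surrogate, obs_std=0.05):
    """Z: (n_ens, n_latent) ensemble of latent vectors; d_obs: (n_obs,) observations."""
    n_ens = Z.shape[0]
    # Forward-propagate each ensemble member through decoder + surrogate.
    D = np.stack([surrogate(decode(z)) for z in Z])          # (n_ens, n_obs)

    Z_a = Z - Z.mean(axis=0, keepdims=True)                   # latent anomalies
    D_a = D - D.mean(axis=0, keepdims=True)                   # predicted-data anomalies

    C_zd = Z_a.T @ D_a / (n_ens - 1)                           # cross-covariance
    C_dd = D_a.T @ D_a / (n_ens - 1)                           # data covariance
    R = (obs_std ** 2) * np.eye(d_obs.size)                    # observation-error covariance

    # Kalman-type update of every member with perturbed observations.
    K = C_zd @ np.linalg.inv(C_dd + R)
    d_pert = d_obs + obs_std * np.random.randn(n_ens, d_obs.size)
    return Z + (d_pert - D) @ K.T
```

The paper's iterative local updating ensemble smoother restricts each update to a local subset of the ensemble and iterates; the sketch only shows the role the two networks play inside the loop.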


Privacy Leakage Avoidance with Switching Ensembles

arXiv.org Machine Learning

We consider membership inference attacks, one of the main privacy issues in machine learning. These recently developed attacks have been proven successful in determining, with confidence better than a random guess, whether a given sample belongs to the dataset on which the attacked machine learning model was trained. Several approaches have been developed to mitigate this privacy leakage, but the performance tradeoffs of these defensive mechanisms (i.e., the accuracy and utility of the defended machine learning model) are not yet well studied. We propose a novel approach of privacy leakage avoidance with switching ensembles (PASE), which protects against current membership inference attacks with a very small accuracy penalty, while requiring only an acceptable increase in training and inference time. We test our PASE method, along with the current state-of-the-art PATE approach, on three calibration image datasets and analyze their tradeoffs.
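The abstract does not detail the switching mechanism. The sketch below shows one plausible reading of a switching ensemble: the training data is deterministically partitioned, each member is trained with one partition held out, and every query is routed to the member that excluded the query's partition, so any training point is always scored by a model that never saw it. The hash-based partitioning and helper names are assumptions for illustration only.

```python
import hashlib

# One possible reading of a switching ensemble (illustrative, not PASE itself).
# Samples are assumed to be NumPy arrays so x.tobytes() is available.

N_PARTS = 4

def partition_of(x_bytes: bytes) -> int:
    """Deterministic partition index for a sample (hash-based, an assumption)."""
    return int(hashlib.sha256(x_bytes).hexdigest(), 16) % N_PARTS

def train_switching_ensemble(samples, labels, train_fn):
    """train_fn(subset) -> model; member i is trained with partition i held out."""
    parts = [partition_of(x.tobytes()) for x in samples]
    models = []
    for i in range(N_PARTS):
        subset = [(x, y) for x, y, p in zip(samples, labels, parts) if p != i]
        models.append(train_fn(subset))
    return models

def predict(models, x):
    # Route to the member that excluded x's partition: if x was a training
    # point, the serving model never saw it, blunting confidence-based
    # membership inference.
    return models[partition_of(x.tobytes())](x)
```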


Ex-Model: Continual Learning from a Stream of Trained Models

arXiv.org Artificial Intelligence

Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity over the last few years. Being able to learn, adapt, and generalize continually in an efficient, effective, and scalable way is fundamental for the sustainable development of Artificial Intelligence systems. However, an agent-centric view of continual learning requires learning directly from raw data, which limits the interaction between independent agents, the efficiency, and the privacy of current approaches. Instead, we argue that continual learning systems should exploit the availability of compressed information in the form of trained models. In this paper, we introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data. We further contribute three ex-model continual learning algorithms and an empirical setting comprising three datasets (MNIST, CIFAR-10, and CORe50) and eight scenarios in which the proposed algorithms are extensively tested. Finally, we highlight the peculiarities of the ex-model paradigm and point out interesting future research directions.
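The abstract does not spell out the three ExML algorithms. A common building block for learning from a trained model without its data is knowledge distillation on surrogate inputs; the sketch below shows only that building block, and the source of the surrogate data as well as the anti-forgetting mechanism are assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative building block for ex-model continual learning: distill each
# incoming frozen "expert" model into a single student using surrogate inputs
# (synthetic, auxiliary, or generated data -- an assumption; the abstract does
# not specify the source).

def distill_expert(student, expert, surrogate_loader, optimizer,
                   epochs=1, temperature=2.0, device="cpu"):
    expert.eval()
    student.train()
    for _ in range(epochs):
        for x in surrogate_loader:               # unlabeled surrogate batches
            x = x.to(device)
            with torch.no_grad():
                teacher_logits = expert(x)
            student_logits = student(x)
            # Soft-label distillation loss (temperature-scaled KL divergence).
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

# A stream of experts is then consumed one at a time:
#   for expert in expert_stream:
#       distill_expert(student, expert, surrogate_loader, optimizer)
# with some mechanism (replay of past soft labels, regularization, ...) needed
# to limit forgetting of previously distilled experts.
```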


GAMIN: An Adversarial Approach to Black-Box Model Inversion

arXiv.org Machine Learning

Recent works have demonstrated that machine learning models are vulnerable to model inversion attacks, which lead to the exposure of sensitive information contained in their training datasets. While some model inversion attacks have been developed in the black-box setting, in which the adversary does not have direct access to the structure of the model, few have so far been conducted against complex models such as deep neural networks. In this paper, we introduce GAMIN (Generative Adversarial Model INversion), a new black-box model inversion attack framework that achieves significant results even against deep models such as convolutional neural networks at a reasonable computing cost. GAMIN is based on the continuous training of a surrogate model for the target model under attack and of a generator whose objective is to generate inputs resembling those used to train the target model. The attack was validated against various neural networks used as image classifiers. In particular, when attacking models trained on the MNIST dataset, GAMIN is able to extract recognizable digits for up to 60% of the labels produced by the target. Attacks against skin classification models trained on the Pilot Parliaments dataset also demonstrated the capacity to extract recognizable features from the targets.
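The abstract describes GAMIN as alternating between fitting a surrogate to the black-box target's outputs and training a generator against that surrogate. The sketch below captures that alternation in a simplified form; the architectures, loss choices, and query-budget handling are assumptions rather than the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of black-box model inversion in the GAMIN style.
# The target is only available as a query oracle `target_query(x) -> probs`;
# a surrogate is trained to mimic it, and a generator is trained against the
# differentiable surrogate to produce inputs assigned to `target_class`.

def gamin_step(generator, surrogate, target_query, gen_opt, sur_opt,
               target_class, batch_size=64, latent_dim=100, device="cpu"):
    # 1) Query the black-box target on freshly generated samples.
    z = torch.randn(batch_size, latent_dim, device=device)
    with torch.no_grad():
        x = generator(z)
        target_probs = target_query(x)            # black-box answers, no gradients

    # 2) Update the surrogate to mimic the target's input/output behaviour.
    sur_loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                        target_probs, reduction="batchmean")
    sur_opt.zero_grad()
    sur_loss.backward()
    sur_opt.step()

    # 3) Update the generator through the differentiable surrogate so that
    #    its outputs are classified as the attacked label.
    z = torch.randn(batch_size, latent_dim, device=device)
    x = generator(z)
    labels = torch.full((batch_size,), target_class, device=device, dtype=torch.long)
    gen_loss = F.cross_entropy(surrogate(x), labels)
    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()
    return sur_loss.item(), gen_loss.item()
```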