mea
On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction
Ezzeddine, Fatima, Akel, Rinad, Sbeity, Ihab, Giordano, Silvia, Langheinrich, Marc, Ayoub, Omran
Machine Learning as a Service (MLaaS) has gained significant traction as a means of deploying powerful predictive models, offering an ease of use that enables organizations to leverage advanced analytics without substantial investments in specialized infrastructure or expertise. However, MLaaS platforms must be safeguarded against security and privacy attacks, such as model extraction attacks (MEA). The increasing integration of explainable AI (XAI) within MLaaS has introduced an additional privacy challenge, as attackers can exploit model explanations, particularly counterfactual explanations (CFs), to facilitate MEA. In this paper, we investigate the trade-offs among model performance, privacy, and explainability when employing Differential Privacy (DP), a promising technique for mitigating CF-facilitated MEA. We evaluate two distinct DP strategies: one applied during training of the classification model, and one applied at the explainer during CF generation.
- Europe > Switzerland (0.05)
- North America > United States > California (0.04)
- Europe > Middle East > Republic of Türkiye > Istanbul Province > Istanbul (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.71)
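The two DP placements the abstract contrasts can be illustrated with a minimal sketch. This is an assumed setup, not the paper's implementation: `gaussian_mechanism`, the clipping bound, and the feature-range sensitivity are all illustrative choices.

```python
# Hedged sketch of the two DP strategies the abstract compares:
# (a) noise injected during model training, (b) noise injected at the
# explainer when the counterfactual is generated.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism(x, sensitivity, epsilon, delta=1e-5):
    """Add calibrated Gaussian noise to a vector (standard Gaussian mechanism)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return x + rng.normal(0.0, sigma, size=x.shape)

# (a) DP at training time: clip each gradient to bound its sensitivity,
# then perturb it before the update (DP-SGD style).
def dp_gradient_step(weights, grad, lr, clip_norm, epsilon):
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    noisy = gaussian_mechanism(clipped, clip_norm, epsilon)
    return weights - lr * noisy

# (b) DP at the explainer: perturb the generated counterfactual itself,
# using an assumed per-feature range as the sensitivity.
def private_counterfactual(cf, feature_range, epsilon):
    return gaussian_mechanism(cf, feature_range, epsilon)
```

The design tension the paper studies follows directly: option (a) degrades the classifier's accuracy for every prediction, while option (b) leaves the model intact but degrades the fidelity of every explanation.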
Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations
Ezzeddine, Fatima, Ayoub, Omran, Giordano, Silvia
In recent years, there has been a notable increase in the deployment of machine learning (ML) models as services (MLaaS) across diverse production software applications. In parallel, explainable AI (XAI) continues to evolve, addressing the necessity for transparency and trustworthiness in ML models. XAI techniques aim to enhance the transparency of ML models by providing insights, in the form of explanations, into their decision-making process. Simultaneously, some MLaaS platforms now offer explanations alongside their prediction outputs. This setup has elevated concerns regarding vulnerabilities in MLaaS, particularly in relation to privacy leakage attacks such as model extraction attacks (MEA), since explanations can unveil insights about the inner workings of the model that could be exploited by malicious users. In this work, we investigate how model explanations, particularly generative adversarial network (GAN)-based counterfactual explanations (CFs), can be exploited to perform MEA within an MLaaS platform. We also assess the effectiveness of incorporating differential privacy (DP) as a mitigation strategy. To this end, we first propose a novel MEA methodology based on knowledge distillation (KD) to improve the efficiency of extracting a substitute of a target model by exploiting CFs. We then devise an approach for training CF generators that incorporates DP to generate private CFs. We conduct thorough experimental evaluations on real-world datasets and demonstrate that our proposed KD-based MEA can yield a high-fidelity substitute model with fewer queries than baseline approaches. Furthermore, our findings reveal that the inclusion of a privacy layer degrades the performance of the explainer and the quality of the CFs, and reduces MEA performance.
- North America > United States > California (0.05)
- Europe > Switzerland (0.05)
- Oceania > Fiji (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
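The core distillation loop the abstract describes can be sketched as follows. This is a toy stand-in under stated assumptions: the logistic `target_model`, the query points (plain samples standing in for CFs), and the function names are all illustrative, not the paper's method.

```python
# Hedged sketch of knowledge-distillation-based model extraction:
# query the black-box target, collect its *soft* probability outputs,
# and fit a substitute to match them.
import numpy as np

rng = np.random.default_rng(1)

def target_model(X):
    """The black box being extracted (assumed logistic model for illustration)."""
    logits = X @ np.array([2.0, -1.0])
    return 1.0 / (1.0 + np.exp(-logits))  # soft probabilities, not hard labels

def distill_substitute(X, soft_y, lr=0.5, steps=500):
    """Fit substitute weights by minimising cross-entropy against soft targets."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - soft_y) / len(X)  # gradient of the distillation loss
    return w

# Attacker's queries and the distilled substitute.
X_query = rng.normal(size=(200, 2))
w_sub = distill_substitute(X_query, target_model(X_query))
```

The point of distilling from soft outputs rather than hard labels is query efficiency: each response carries the target's full confidence, so fewer queries suffice to reach high fidelity, which is the gain the abstract reports over baselines.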
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices
Lee, Younghan, Jun, Sohee, Cho, Yungi, Han, Woorim, Moon, Hyungon, Paek, Yunheung
With growing popularity, deep learning (DL) models are becoming larger-scale, and only companies with vast training datasets and immense computing power can manage a business serving such large models. Most of these DL models are proprietary, so the companies strive to keep their private models safe from the model extraction attack (MEA), whose aim is to steal the model by training surrogate models. Nowadays, companies are inclined to offload models from central servers to edge/endpoint devices. As revealed in the latest studies, adversaries exploit this as a new attack vector, launching side-channel attacks (SCA) on the device running the victim model to obtain various pieces of model information, such as the model architecture (MA) and image dimension (ID). Our work provides, for the first time, a comprehensive understanding of the relationship between the model information exposed by SCA and the effectiveness of MEA, and would benefit future MEA studies on both the offensive and defensive sides, in that they may learn which pieces of information exposed by SCA are more important than others. Our analysis additionally reveals that, by grasping the victim model information from SCA, MEA can become highly effective and successful even without any prior knowledge of the model. Finally, to demonstrate the practicality of our analysis, we empirically apply SCA and subsequently carry out MEA under realistic threat assumptions. The results show up to 5.8 times better performance than when the adversary has no information about the victim model.
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > South Korea > Ulsan > Ulsan (0.04)
Machine Learning deployments garner speed in MEA - Intelligent CIO Africa
To make decisions more quickly and accurately, enterprises in the Middle East and Africa (MEA) are increasingly turning to Machine Learning, arguably today's most practical application of Artificial Intelligence (AI). How should CIOs and IT leaders ensure success and ROI from Machine Learning deployments in their organisations? Machine Learning is a type of AI that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine Learning algorithms use historical data as input to predict new output values, applying algorithms to data to glean insights without explicit programming: it's about using data to answer questions.
- Africa > Middle East (0.38)
- Europe > Middle East (0.36)
- Asia > Middle East > Israel (0.14)
Artificial Intelligence (AI) Chips Market to grow by USD 73.49 billion
The artificial intelligence (AI) chips market report offers a comprehensive analysis of the strategies adopted by vendors and the trends, drivers, and challenges affecting the market size. The report identifies the increasing adoption of AI chips in data centers as one of the major factors driving the growth of the market. The report also provides information on other latest trends and drivers impacting the overall market environment. The Artificial Intelligence (AI) Chips Market is segmented by product (ASICs, GPUs, CPUs, and FPGAs) and geography (North America, Europe, APAC, South America, and MEA). The convergence of AI and IoT will be crucial in fueling the growth of the market over the forecast period.
- South America (0.30)
- North America (0.30)
- Europe (0.30)
- Marketing (0.56)
- Media > News (0.54)
- Banking & Finance > Trading (0.34)
Learning Bayesian Network Structure from Massive Datasets: The "Sparse Candidate" Algorithm
Friedman, Nir, Nachman, Iftach, Pe'er, Dana
Learning Bayesian networks is often cast as an optimization problem, where the computational task is to find a structure that maximizes a statistically motivated score. By and large, existing learning tools address this optimization problem using standard heuristic search techniques. Since the search space is extremely large, such search procedures can spend most of the time examining candidates that are extremely unreasonable. This problem becomes critical when we deal with data sets that are large either in the number of instances, or the number of attributes. In this paper, we introduce an algorithm that achieves faster learning by restricting the search space. This iterative algorithm restricts the parents of each variable to belong to a small subset of candidates. We then search for a network that satisfies these constraints. The learned network is then used for selecting better candidates for the next iteration. We evaluate this algorithm both on synthetic and real-life data. Our results show that it is significantly faster than alternative search procedures without loss of quality in the learned structures.
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Europe > Netherlands (0.04)
- Asia > Middle East > Jordan (0.04)
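The candidate-restriction step at the heart of the "Sparse Candidate" algorithm can be sketched as below. Only the first iteration's selection step is shown, using empirical mutual information as the relevance measure; the full algorithm then searches for a network within these candidate sets and re-selects candidates from the learned structure. Function names are illustrative.

```python
# Minimal sketch of the candidate-selection step: each variable's
# potential parents are restricted to the k other variables with the
# highest empirical mutual information, shrinking the search space.
import numpy as np
from itertools import product

def mutual_information(x, y):
    """Empirical mutual information between two discrete variables."""
    mi = 0.0
    for a, b in product(np.unique(x), np.unique(y)):
        pxy = np.mean((x == a) & (y == b))
        if pxy > 0:
            mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def sparse_candidates(data, k):
    """For each column, keep the k most informative others as candidate parents."""
    n_vars = data.shape[1]
    cands = {}
    for i in range(n_vars):
        scores = [(mutual_information(data[:, i], data[:, j]), j)
                  for j in range(n_vars) if j != i]
        cands[i] = [j for _, j in sorted(scores, reverse=True)[:k]]
    return cands
```

With n variables and k candidates per variable, the parent-set search drops from considering all n-1 possible parents per variable to only k, which is what makes the approach tractable on datasets with many attributes.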