
Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

arXiv.org Machine Learning

As the demand to deploy neural network models on embedded systems grows, and given the associated memory-footprint and energy-consumption constraints, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention, unveiling critical flaws in machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. We show that quantization does not offer any robust protection, that it results in a severe form of gradient masking, and we advance some hypotheses to explain this. However, we experimentally observe poor transferability, which we explain by a quantization value shift phenomenon and gradient misalignment, and we explore how these results can be exploited with an ensemble-based defense.
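For reference, low-bitwidth weight quantization is commonly implemented as uniform symmetric rounding onto a small grid of values. The Python sketch below is illustrative only, not the paper's exact scheme; the function name quantize_uniform and its parameters are hypothetical.

    import numpy as np

    def quantize_uniform(w, n_bits=4):
        # Uniform symmetric per-tensor quantization to n_bits
        # (a minimal sketch; real schemes may use per-channel or
        # learned scales).
        levels = 2 ** (n_bits - 1) - 1        # e.g., 7 positive levels for 4 bits
        scale = np.max(np.abs(w)) / levels    # per-tensor scale factor
        q = np.clip(np.round(w / scale), -levels, levels)
        return q * scale                      # dequantized ("fake-quantized") weights

    w = np.random.randn(64, 128).astype(np.float32)
    w_q = quantize_uniform(w, n_bits=4)
    print("max abs quantization error:", np.max(np.abs(w - w_q)))

Rounding many nearby full-precision weights onto the same grid point is one intuition behind the value-shift and gradient-misalignment effects the abstract mentions.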


Machine learning aids banks in model compliance

#artificialintelligence

Banks are increasingly turning to machine learning to cope with stricter risk-modeling regulations. "Even if you have a simple econometric model which you can explain to the regulator, you can also use your machine learning model as an alternative model and say, 'OK, I have checked and tested my other model with this machine learning model,' " says Mostafa Mostafavi of Credit Suisse.


When to Trust Your Model: Model-Based Policy Optimization

Neural Information Processing Systems

Designing effective model-based reinforcement learning algorithms is difficult because the ease of data generation must be weighed against the bias of model-generated data. In this paper, we study the role of model usage in policy optimization both theoretically and empirically. We first formulate and analyze a model-based reinforcement learning algorithm with a guarantee of monotonic improvement at each step. In practice, this analysis is overly pessimistic and suggests that real off-policy data is always preferable to model-generated on-policy data, but we show that an empirical estimate of model generalization can be incorporated into such an analysis to justify model usage. Motivated by this analysis, we then demonstrate that a simple procedure of using short model-generated rollouts branched from real data has the benefits of more complicated model-based algorithms without the usual pitfalls.
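To make the branching procedure concrete, here is a minimal Python sketch of short model rollouts branched from real data. The callables model(s, a) and policy(s) and the buffer layout are illustrative assumptions, not the paper's implementation.

    import random

    def branched_rollouts(real_buffer, model, policy, k=5, n_starts=100):
        # Branch length-k rollouts under the learned model from states
        # sampled out of the real replay buffer; short horizons limit
        # the compounding of model bias.
        synthetic = []
        for _ in range(n_starts):
            s = random.choice(real_buffer)["state"]  # branch from a real state
            for _ in range(k):
                a = policy(s)
                s_next, r, done = model(s, a)
                synthetic.append({"state": s, "action": a, "reward": r,
                                  "next_state": s_next, "done": done})
                if done:
                    break
                s = s_next
        return synthetic

    # Toy usage with stand-in model and policy (illustrative only).
    real_buffer = [{"state": float(i)} for i in range(10)]
    model = lambda s, a: (s + a, -abs(s), False)
    policy = lambda s: random.choice([-1.0, 1.0])
    data = branched_rollouts(real_buffer, model, policy, k=3, n_starts=4)
    print(len(data), "synthetic transitions")

The synthetic transitions would then be mixed with real data when training the policy, which is the trade-off between data cheapness and model bias the abstract describes.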


3 Main Approaches to Machine Learning Models - KDnuggets

#artificialintelligence

In September 2018, I published a blog post about my forthcoming book on The Mathematical Foundations of Data Science. The central question we address is: how can we bridge the gap between the mathematics needed for Artificial Intelligence (Deep Learning and Machine Learning) and that taught in high schools (up to ages 17/18)? In this post, we present a chapter from the book called "A Taxonomy of Machine Learning Models." The book is being released chapter by chapter and is now available at an early-bird discount. If you are interested in getting early discounted copies, please contact ajit.jaokar at feynlabs.ai.


jiweil/Neural-Dialogue-Generation

@machinelearnbot

This project is maintained by Jiwei Li, and the repo will continue to be updated. It provides decoding given a pre-trained generative model. The pre-trained model does not have to be a vanilla Seq2Seq model (for example, it can be a model trained with adversarial learning). To run the adversarial-reinforcement-learning model, a pretrained generative model and a pretrained discriminative model are needed.