Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks

arXiv.org Machine Learning

As the will to deploy neural network models on embedded systems grows, and considering the related memory-footprint and energy-consumption issues, lighter ways to store neural networks, such as weight quantization, and more efficient inference methods have become major research topics. In parallel, adversarial machine learning has recently attracted significant attention, unveiling critical flaws in machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image-classification task. We show that quantization does not offer any robust protection and results in a severe form of gradient masking, and we advance some hypotheses to explain it. However, we experimentally observe poor transferability, which we explain by a quantization value-shift phenomenon and gradient misalignment, and we explore how these results can be exploited with an ensemble-based defense.
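Concretely, the two building blocks the abstract combines can be sketched in a few lines of NumPy (an illustrative sketch, not the paper's implementation: the function names, the 4-bit width, and the eps budget are assumptions):

```python
import numpy as np

def quantize(w, n_bits=4):
    # Uniform symmetric quantization: snap weights onto the
    # 2**(n_bits - 1) - 1 evenly spaced levels per sign, then
    # rescale back to floating point.
    levels = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def fgsm(x, grad, eps=0.03):
    # Fast Gradient Sign Method: one eps-sized step along the sign
    # of the loss gradient, clipped back to a valid image range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Note that np.round has a zero derivative almost everywhere, which is one intuition for the gradient masking reported above: a gradient-based attack like FGSM receives little usable signal through the quantization step.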


Explaining the Almon Distributed Lag Model

#artificialintelligence

That post drew quite a number of email requests for more information about the Almon estimator and how it fits into the overall scheme of things. In addition, Almon's approach to modelling distributed lags has more recently been used very effectively in the estimation of the so-called MIDAS model. The MIDAS model (developed by Eric Ghysels and his colleagues – e.g., see Ghysels et al., 2004) is designed to handle regression analysis using data with different observation frequencies. The acronym "MIDAS" stands for "Mixed-Data Sampling". The MIDAS model can be implemented in R (e.g., see here), as well as in EViews.
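As a rough illustration of the Almon idea (a hypothetical sketch, not the code of any particular package): in the distributed-lag regression y_t = α + Σ_i β_i x_{t-i} + ε_t, the lag weights are constrained to lie on a low-order polynomial, β_i = Σ_j γ_j i^j, so OLS only needs to estimate the few γ_j via constructed regressors:

```python
import numpy as np

def almon_regressors(x, n_lags, degree):
    # Under beta_i = sum_j gamma_j * i**j, the distributed-lag term
    # sum_i beta_i * x_{t-i} collapses to sum_j gamma_j * z_{t,j},
    # where z_{t,j} = sum_i i**j * x_{t-i}. Build that Z matrix.
    x = np.asarray(x, dtype=float)
    i = np.arange(n_lags + 1)
    Z = np.empty((len(x) - n_lags, degree + 1))
    for t in range(n_lags, len(x)):
        lagged = x[t - i]  # x_t, x_{t-1}, ..., x_{t-n_lags}
        for j in range(degree + 1):
            Z[t - n_lags, j] = np.sum(i ** j * lagged)
    return Z
```

Regressing y[n_lags:] on Z (plus an intercept) estimates the γ_j, and the implied lag weights are then recovered as β_i = Σ_j γ_j i^j, which is what keeps the parameter count small and tames the multicollinearity of the raw lags.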


Understanding The Accuracy-Interpretability Trade-Off

#artificialintelligence

In today's article we discussed the trade-off between model accuracy and model interpretability in the context of Machine Learning. Less flexible models are more interpretable and are thus better suited to the inference context, where we are mostly interested in understanding the relationship between the inputs and the output. On the other hand, more flexible models are far less interpretable, but their results can be more accurate. Depending on the problem we are working on, we may have to pick the model that best serves our use case. We should, however, keep in mind that in most cases we have to find the sweet spot between model accuracy and model interpretability.
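To see the trade-off in code, compare an interpretable linear model with a more flexible ensemble on the same data (a sketch; the dataset and the two models are just one reasonable pairing):

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Less flexible: each coefficient is a directly readable effect size.
linear = LinearRegression().fit(X_tr, y_tr)
print("linear R^2:", linear.score(X_te, y_te))

# More flexible: often scores better, but offers no comparably
# simple per-feature story.
forest = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("forest R^2:", forest.score(X_te, y_te))
```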


Beginners Baseline Model for Machine Learning Project

#artificialintelligence

What is a Baseline Model? We can define the baseline model as a reference point for the actual model. The baseline model should be a simple model that is easy to explain and acts as a comparison. Moreover, the baseline model should be built on the same dataset used to create the actual model. Why do we want to have a baseline model in our project?
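For example, scikit-learn's DummyClassifier gives such a reference model in a few lines (a sketch; the majority-class strategy and the iris data are just one simple choice):

```python
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

# A majority-class baseline: predict the most frequent training label
# for every input. Any real model should beat this on the same split.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
print(baseline.score(X_te, y_te))  # accuracy of the naive reference
```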


Plus-sized model 'cried when asked to be cover model'

BBC News

US plus-size model Tess Holliday says she cried when asked to be on the front cover of Cosmopolitan's UK magazine.