Collaborating Authors: Chen

AAAI Conferences

Accelerating deep neural networks (DNNs) has been attracting increasing attention, as it can benefit a wide range of applications, e.g., enabling mobile systems with limited computing resources to gain powerful visual recognition ability. A practical strategy toward this goal usually relies on a two-stage process: operating on the trained DNNs (e.g., approximating the convolutional filters with tensor decomposition) and fine-tuning the amended network, leading to difficulty in balancing the trade-off between acceleration and maintaining recognition performance. In this work, aiming at a general and comprehensive approach to neural network acceleration, we develop a Wavelet-like Auto-Encoder (WAE) that decomposes the original input image into two low-resolution channels (sub-images) and incorporate the WAE into the classification neural networks for joint training. The two decomposed channels, in particular, are encoded to carry the low-frequency information (e.g., image profiles) and the high-frequency information (e.g., image details or noise), respectively, and enable reconstruction of the original input image through the decoding process. Then, we feed the low-frequency channel into a standard classification network such as VGG or ResNet and employ a very lightweight network to fuse it with the high-frequency channel to obtain the classification result. Compared to existing DNN acceleration solutions, our framework has the following advantages: i) it is compatible with any existing convolutional neural network for classification without amending its structure; ii) the WAE provides an interpretable way to preserve the main components of the input image for classification.


Glossary of Deep Learning: Autoencoder – Deeper Learning – Medium

#artificialintelligence

An Autoencoder is a neural network capable of unsupervised feature learning. Neural networks are typically used for supervised learning problems, trying to predict a target vector y from input vectors x. An Autoencoder network, however, tries to predict x from x, without the need for labels. The challenge here is recreating the essence of the original input from compressed, noisy, or corrupted data. The idea behind the Autoencoder is to build a network with a narrow hidden layer between the Encoder and Decoder that serves as a compressed representation of the input data.
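The bottleneck idea above can be sketched in a few lines of NumPy. This is a toy illustration, not code from the article: a linear autoencoder with a 2-unit hidden layer is trained by gradient descent to reconstruct 4-dimensional inputs, so the narrow layer is forced to learn a compressed code.

```python
import numpy as np

# Toy linear autoencoder sketch (illustrative assumption, not from the article).
# A narrow 2-unit hidden layer forces a compressed representation of 4-d inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 4))  # rank-2 toy data

W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder weights (input -> code)
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder weights (code -> input)

def loss(X, W_enc, W_dec):
    X_hat = X @ W_enc @ W_dec            # "predict x from x"
    return np.mean((X - X_hat) ** 2)     # reconstruction error

initial_loss = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(500):
    H = X @ W_enc                        # compressed code in the bottleneck
    err = H @ W_dec - X                  # reconstruction residual
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = loss(X, W_enc, W_dec)
```

Because the toy data has rank 2, the 2-unit bottleneck can in principle reconstruct it perfectly; the reconstruction loss drops steadily during training.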


Automated Directed Fairness Testing

arXiv.org Artificial Intelligence

Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g., education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our AEQUITAS approach automatically discovers discriminatory inputs that highlight fairness violations. At the core of AEQUITAS are three novel strategies that employ probabilistic search over the input space with the objective of uncovering fairness violations. Our AEQUITAS approach leverages the inherent robustness of common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model to improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the underlying model. We implemented AEQUITAS and evaluated it on six state-of-the-art classifiers, including a classifier that was designed with fairness constraints. We show that AEQUITAS effectively generates inputs to uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs. In our evaluation, AEQUITAS generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve the fairness by up to 94%.
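The random-search idea behind this kind of fairness testing can be sketched as follows. This is a hypothetical illustration of the general technique, not the AEQUITAS implementation: the names `model`, `sensitive_index`, and `domains` are assumptions for the example. An input is discriminatory if changing only the sensitive parameter changes the model's decision.

```python
import random

def model(x):
    # Toy biased classifier (assumed for illustration):
    # the decision leaks the sensitive feature x[2].
    return int(x[0] + x[1] + 2 * x[2] > 5)

def find_discriminatory_inputs(model, sensitive_index, domains,
                               n_samples=1000, seed=0):
    """Random search over the input space for fairness violations."""
    rng = random.Random(seed)
    found = []
    for _ in range(n_samples):
        x = [rng.randint(lo, hi) for lo, hi in domains]
        base = model(x)
        # Vary ONLY the sensitive parameter; any output change is a violation.
        lo, hi = domains[sensitive_index]
        for v in range(lo, hi + 1):
            x2 = list(x)
            x2[sensitive_index] = v
            if model(x2) != base:
                found.append(x)
                break
    return found

violations = find_discriminatory_inputs(model, sensitive_index=2,
                                        domains=[(0, 4), (0, 4), (0, 1)])
```

The discriminatory inputs this search returns are exactly the kind of test cases that, per the abstract, can be folded back into the training set to improve the model's fairness.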


KumarAbhirup/iconic-input

#artificialintelligence

Currently, the repository contains just one input component, as illustrated. More input components and their iconic designs will be added later. This repository welcomes all kinds of added innovation: if you want to contribute to this project, just fork it and open a pull request.


What's the point of using linear input projections with LSTMs? • /r/MachineLearning

@machinelearnbot

I'm asking because all the various gates of the LSTM do their own linear projection, and obviously any two matrix multiplies can be represented by just one (assuming dimensions agree), so what exactly is being gained? I can identify a few possibilities. One, it's a single input projection instead of the three that would be used in the LSTM, so there's a sort of weight sharing there. Two, depending on the dimensionality of the input projection matrix, one can effectively impose additional low-rank structure on the matrices that would otherwise arise in the various LSTM gates (i.e., a low-rank factorization of those gate matrices). And third, given that the overall problem is non-convex, the addition of the input projection layer may change the optimization landscape.