

Multi-Layer Feature Reduction for Tree Structured Group Lasso via Hierarchical Projection

Neural Information Processing Systems

Tree-structured group Lasso (TGL) is a powerful technique for uncovering tree-structured sparsity over features, where each node encodes a group of features. It has been applied successfully in many real-world applications. However, at extremely large feature dimensions, solving TGL remains a significant challenge due to its highly complicated regularizer. In this paper, we propose a novel Multi-Layer Feature reduction method (MLFre) to quickly identify the inactive nodes (the groups of features with zero coefficients in the solution) hierarchically in a top-down fashion; these nodes are guaranteed to be irrelevant to the response. Thus, we can remove the detected nodes from the optimization without sacrificing accuracy.
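The abstract does not fix notation, but a standard TGL formulation makes the hierarchy concrete. The symbols below ($X$, $y$, $\mathcal{T}$, $w_G$) are illustrative assumptions, not the paper's own definitions:

```latex
% A common tree-structured group Lasso objective (notation assumed for
% illustration: X is the design matrix, y the response, \mathcal{T} the
% tree of node groups, w_G a per-node weight):
\min_{\beta \in \mathbb{R}^p} \;
  \frac{1}{2}\,\|y - X\beta\|_2^2
  + \lambda \sum_{G \in \mathcal{T}} w_G\, \|\beta_G\|_2
% The groups are nested: each node's group is a subset of its parent's
% group, so if a parent node is inactive (\beta_G = 0), all of its
% descendants are inactive as well; this is the property a top-down
% screening rule like MLFre can exploit.
```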


Israel's Multi-Layer Defense System Fully Operational Soon

U.S. News

File photo - This Monday, Dec. 21, 2015 photograph, provided by the Israeli Ministry of Defense, shows a launch of the David's Sling missile defense system. A senior Israeli air force official says a joint U.S.-Israeli missile interceptor will be operational soon, completing the country's multi-layer defense system. He said Monday that David's Sling, meant to counter medium-range missiles possessed by Iranian-backed Hezbollah militants in Lebanon, will be operational in early April.


Deep Learning via Multilayer Perceptron Classifier - DZone Big Data

#artificialintelligence

Deep learning, currently a hot topic in academia and industry, tends to work better with deeper architectures and larger networks. Its application to computationally intensive problems is attracting wide attention and adoption, for example in computer vision, object recognition, image segmentation, and machine learning classification more broadly. Some practitioners also refer to deep learning as Deep Neural Networks (DNNs); a DNN is an Artificial Neural Network (ANN) with multiple hidden layers of units between the input and output layers. Like shallow ANNs, DNNs can model complex non-linear relationships [1]. DNN architectures for tasks such as object detection and parsing generate compositional models in which an object is expressed as a layered composition of image primitives. The extra layers enable the composition of features from lower layers, giving the potential to model complex data with fewer units than a similarly performing shallow network.
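As a quick illustration of the multilayer perceptron classifier the headline refers to, here is a minimal sketch using scikit-learn; the DZone article may well use a different library (e.g., Spark ML), and the synthetic dataset and hyperparameters below are illustrative assumptions only:

```python
# Minimal multilayer perceptron (MLP) classifier sketch using scikit-learn.
# Dataset and hyperparameters are illustrative, not taken from the article.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic binary classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers (64 and 32 units): "deep" in the article's sense of
# multiple hidden layers between the input and output layers.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```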


On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks

arXiv.org Machine Learning

Parsimonious representations in data modeling are ubiquitous and central for processing information. Motivated by the recent Multi-Layer Convolutional Sparse Coding (ML-CSC) model, we herein generalize the traditional Basis Pursuit regression problem to a multi-layer setting, introducing similar sparsity-enforcing penalties at different representation layers in a symbiotic relation between synthesis and analysis sparse priors. We propose and analyze different iterative algorithms to solve this new problem in practice. We prove that the presented multi-layer Iterative Soft Thresholding (ML-ISTA) and multi-layer Fast ISTA (ML-FISTA) converge to the global optimum of our multi-layer formulation at a rate of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/k^2)$, respectively. We further show how these algorithms effectively implement particular recurrent neural networks that generalize feed-forward architectures without any increase in the number of parameters. We demonstrate the different architectures resulting from unfolding the iterations of the proposed multi-layer pursuit algorithms, providing a principled way to construct deep recurrent CNNs from feed-forward ones. We demonstrate the emerging constructions by training them in an end-to-end manner, consistently improving the performance of classical networks without introducing extra filters or parameters.
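The multi-layer algorithms build on the classical single-layer ISTA iteration. The sketch below shows only that building block on a synthetic problem; it is not the paper's ML-ISTA, and the problem sizes and penalty are assumed for illustration:

```python
# Single-layer ISTA for the Lasso / Basis Pursuit denoising problem
#   min_beta 0.5 * ||y - X beta||_2^2 + lam * ||beta||_1
# This is the classical iteration that the paper's ML-ISTA and ML-FISTA
# generalize to multiple layers; a sketch, not the authors' algorithm.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # Step size 1/L, where L = ||X||_2^2 is a Lipschitz constant of the
    # gradient of the smooth data-fidelity term.
    L = np.linalg.norm(X, ord=2) ** 2
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)  # gradient of 0.5 * ||y - X beta||^2
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# Tiny synthetic example with a 5-sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
beta_true = np.zeros(100)
beta_true[:5] = 1.0
y = X @ beta_true + 0.01 * rng.standard_normal(50)
print(np.nonzero(ista(X, y, lam=0.5))[0])  # recovers a sparse support
```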