Efficient Neural Architecture Transformation Search in Channel-Level for Object Detection

Neural Information Processing Systems

Recently, Neural Architecture Search has achieved great success in large-scale image classification. In contrast, there has been limited work on architecture search for object detection, mainly because costly ImageNet pretraining is always required for detectors. Training from scratch, as a substitute, demands more epochs to converge and brings no computation savings. To overcome this obstacle, we introduce a practical neural architecture transformation search (NATS) algorithm for object detection in this paper. Instead of searching for and constructing an entire network, NATS explores the architecture space on the basis of an existing network, reusing its weights.
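
As a loose illustration of the "reuse existing weights while searching at the channel level" idea, here is a minimal sketch of my own (not the authors' NATS code; the class, variable names, and candidate widths below are hypothetical): a pretrained convolution is evaluated with fewer output channels by slicing its weights, so candidate channel configurations can be scored without constructing or retraining a new network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedConv(nn.Module):
    """Wrap a (pretrained) Conv2d and run it with a reduced number of output
    channels by reusing a slice of the original weights."""
    def __init__(self, pretrained_conv):
        super().__init__()
        self.conv = pretrained_conv

    def forward(self, x, out_channels):
        w = self.conv.weight[:out_channels]          # reuse the first k filters
        b = self.conv.bias[:out_channels] if self.conv.bias is not None else None
        return F.conv2d(x, w, b, stride=self.conv.stride, padding=self.conv.padding)

# Hypothetical usage: probe a few candidate channel widths on one layer.
base = nn.Conv2d(3, 64, kernel_size=3, padding=1)    # stands in for a pretrained layer
layer = SlicedConv(base)
x = torch.randn(1, 3, 32, 32)
for k in (16, 32, 64):                               # candidate channel counts
    print(k, tuple(layer(x, k).shape))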


Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations

Neural Information Processing Systems

We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs.
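
As a rough schematic of the kind of score such a model assigns to a pose (the notation here is my own shorthand, not taken from the paper): with parts $V$, edges $E$, part locations $\mathbf{l}$, and mixture (type) variables $\mathbf{t}$,

$$
S(\mathbf{l}, \mathbf{t} \mid I) \;=\; \sum_{i \in V} U(l_i \mid I) \;+\; \sum_{(i,j) \in E} R\!\left(l_i, l_j, t_{ij} \mid I\right),
$$

where $U(l_i \mid I)$ is the DCNN-based score for part $i$ appearing at location $l_i$, and $R$ is the image-dependent pairwise term whose mixture component $t_{ij}$ is predicted from the local image patch. When the graph is a tree, the highest-scoring pose can be found exactly by dynamic programming.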


Structural Robustness for Deep Learning Architectures

#artificialintelligence

Deep Networks have been shown to provide state-of-the-art performance in many machine learning challenges. Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs. In this work we introduce a formal definition of robustness which can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified. We compare this notion of robustness to existing ones, and study its connections with methods in the literature. We evaluate this metric by performing experiments on various competitive vision datasets.
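
One way to make "localized Lipschitz constant" concrete (a generic formalization in my own notation, not necessarily the paper's exact definition): for a network $f$ and a data point $x$, measure how much $f$ can amplify perturbations within a neighborhood $B(x, r)$,

$$
L_r(x) \;=\; \sup_{\substack{x', x'' \in B(x, r) \\ x' \neq x''}} \frac{\lVert f(x') - f(x'') \rVert}{\lVert x' - x'' \rVert},
$$

with a dataset-level robustness metric then summarizing $L_r(x)$ over the points to be classified (e.g., its mean or maximum). Smaller values indicate a locally flatter function, which is harder to perturb with small input changes.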


Sustainable Deep Learning Architectures require Manageability

#artificialintelligence

This is a very important consideration that is often overlooked by many in the field of Artificial Intelligence (AI). I suspect there are very few academic researchers who understand this aspect. The work performed in academe is distinctly different from the work required to make a product that is sustainable and economically viable. It is the difference between computer code that is written to demonstrate a new discovery and code that is written to support the operations of a company. The former kind tends to be exploratory and throwaway, while the latter kind tends to be exploitative and requires sustainability.