Neural Architecture Search in Embedding Space

arXiv.org Machine Learning

The neural architecture search (NAS) algorithm with reinforcement learning is a powerful and novel framework for the automatic discovery of neural architectures. However, its application is restricted by noncontinuous, high-dimensional search spaces, which make optimization difficult. To resolve these problems, we propose NAS in embedding space (NASES), a novel framework. Unlike other NAS-with-reinforcement-learning approaches that search over a discrete and high-dimensional architecture space, this approach enables reinforcement learning to search in an embedding space by using architecture encoders and decoders. Our experiments demonstrate that the performance of the final architecture found by the NASES procedure is comparable with that of other popular NAS approaches on the CIFAR-10 image classification task. The performance and effectiveness of NASES were impressive even when only the architecture-embedding search and the pre-trained controller were applied, without other NAS tricks such as parameter sharing. Specifically, NASES achieved a considerable reduction in search cost, requiring on average only about 100 architecture evaluations to reach a final architecture.

Introduction: Deep neural networks have enabled advances in image recognition, sequential pattern recognition, recommendation systems, and various other tasks in the past decades.
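The core idea, an encoder mapping discrete architectures into a continuous embedding and a decoder mapping embedding points back to architectures, can be sketched in a few lines. This is a toy illustration only: the names (`OPS`, `encode`, `decode`, `search`), the one-hot encoder/decoder, and the hill-climbing controller are my own stand-ins for the paper's learned networks and reinforcement-learning agent.

```python
import random

# Toy sketch of searching in an embedding space rather than a discrete
# architecture space. Everything here is illustrative, not the paper's code.

OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def encode(arch):
    """Map a discrete architecture (list of op names) to a flat continuous
    vector via one-hot encoding (stand-in for a learned encoder)."""
    vec = []
    for op in arch:
        vec.extend(1.0 if op == o else 0.0 for o in OPS)
    return vec

def decode(vec):
    """Map a continuous vector back to the nearest discrete architecture
    (stand-in for a learned decoder)."""
    arch = []
    for i in range(0, len(vec), len(OPS)):
        chunk = vec[i:i + len(OPS)]
        arch.append(OPS[chunk.index(max(chunk))])
    return arch

def search(reward_fn, n_layers=4, n_trials=100, seed=0):
    """Toy controller: perturb the embedding of the best architecture found
    so far and keep improvements (random hill climbing, not actual RL)."""
    rng = random.Random(seed)
    best = [rng.choice(OPS) for _ in range(n_layers)]
    best_r = reward_fn(best)
    for _ in range(n_trials):
        z = [v + rng.gauss(0, 0.5) for v in encode(best)]
        cand = decode(z)
        r = reward_fn(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

# Toy reward preferring conv3x3 layers (stand-in for validation accuracy).
reward = lambda arch: arch.count("conv3x3") / len(arch)
arch, r = search(reward)
```

Because the embedding is continuous, the controller can make small, local moves in a low-dimensional space; the decoder turns each move back into a valid discrete architecture.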


Utility Analysis of Network Architectures for 3D Point Cloud Processing

arXiv.org Machine Learning

Note that the most widely used benchmark datasets for point cloud classification contain only foreground objects. Therefore, we generate a new dataset in which each point cloud contains both the foreground object and the background. In this new dataset, the background is composed of points that carry no information relevant to the foreground. We introduce the details in Section 5. Metric 3, rotation robustness: rotation robustness is proposed to measure whether a DNN uses similar subsets of two point clouds to compute the intermediate-layer feature, if the two point clouds have the same shape but different orientations. Let X_{θ1} and X_{θ2} denote point clouds that have the same global shape but different orientations θ1 and θ2. To quantify the similarity of the attention on the two point clouds, we compute the Jensen-Shannon divergence between the distributions of the perturbed inputs X̂_{θ1} = X_{θ1} + δ1 and X̂_{θ2} = X_{θ2} + δ2, where X̂_{θ1} and X̂_{θ2} denote the perturbed inputs computed to measure information discarding in Equation (1).
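As a rough illustration of the comparison step (the attention weights and function names below are my assumptions for a toy example, not the paper's code), the Jensen-Shannon divergence between two discrete attention distributions can be computed as:

```python
import math

# Illustrative sketch: compare a DNN's per-point attention distributions
# over two differently oriented copies of the same point cloud using the
# Jensen-Shannon divergence. A value near 0 indicates similar attention.

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded above by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy attention weights (normalized over 4 points) that a network might
# assign to the same shape at two orientations theta1 and theta2.
attn_theta1 = [0.4, 0.3, 0.2, 0.1]
attn_theta2 = [0.35, 0.3, 0.25, 0.1]

score = js_divergence(attn_theta1, attn_theta2)
```

A rotation-robust network would yield a score close to 0, since it attends to the same subsets of points regardless of orientation.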



CNN Architectures, a Deep-dive

#artificialintelligence

VGG Net is a plain and straightforward CNN architecture compared with the others. Though it looks simple, it outperforms many more complex architectures. It was the first runner-up in the ImageNet Challenge in 2014. As shown above, there are six VGGNet configurations in total. Among them, VGG-16 and VGG-19 are the most popular.
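The simplicity shows in how compactly VGG-16 can be described: the convolutional part is just stacks of 3x3 convolutions separated by max-pooling. The sketch below writes that configuration down and counts its convolutional weights; the helper is illustrative and ignores biases and the fully connected layers.

```python
# VGG-16 convolutional configuration: channel counts for each 3x3 conv
# layer, with "M" marking a 2x2 max-pooling layer (no parameters).
VGG16 = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
         512, 512, 512, "M", 512, 512, 512, "M"]

def conv_params(cfg, in_channels=3, kernel=3):
    """Count the weights in the 3x3 conv layers of a VGG-style config
    (biases and fully connected layers are ignored for simplicity)."""
    total = 0
    for v in cfg:
        if v == "M":  # max-pooling has no parameters
            continue
        total += in_channels * v * kernel * kernel
        in_channels = v
    return total

n = conv_params(VGG16)  # roughly 14.7 million convolutional weights
```

The 13 conv layers here, plus 3 fully connected layers, give VGG-16 its name; VGG-19 simply adds three more conv layers to the deeper blocks.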


Principled Neural Architecture Learning - Intel AI

#artificialintelligence

A neural architecture, which is the structure and connectivity of the network, is typically either hand-crafted or found by optimizing some specific objective criterion (e.g., classification accuracy). Since the space of all neural architectures is huge, search methods are usually heuristic and do not guarantee finding the architecture that is optimal with respect to the objective criterion. In addition, these search methods may require a large number of supervised training iterations and a high amount of computational resources, rendering the solution infeasible for many applications. Moreover, optimizing for a specific criterion can yield a model that is suboptimal for other useful criteria such as model size, representation of uncertainty, and robustness to adversarial attacks. Thus, the architectures produced by most strategies used today, whether hand-crafted or found by heuristic search, are densely connected networks that are not optimal even for the objective they were created to achieve, let alone for other objectives.
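The claim that the space of architectures is huge is easy to make concrete with a back-of-the-envelope count. The layer, operation, and connection numbers below are arbitrary assumptions chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope illustration: the size of a discrete architecture
# search space grows exponentially with depth, which is why exhaustive
# search is infeasible and heuristic search is used instead.

def search_space_size(n_layers, n_ops, n_prev_connections):
    """Count architectures where each layer independently picks one
    operation and one of its allowed predecessor layers."""
    size = 1
    for layer in range(n_layers):
        predecessors = max(1, min(layer, n_prev_connections))
        size *= n_ops * predecessors
    return size

# A modest space: 12 layers, 6 candidate ops, each layer connecting to
# one of up to 2 earlier layers, already yields trillions of candidates.
n = search_space_size(12, 6, 2)
```

Even under these small assumptions the count exceeds 10^12, so evaluating every candidate, each of which would require training a network, is out of reach.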