Have there been any attempts to design a spacecraft using ANN? • r/artificial


Developing architectures for NNs is still at an early stage. The multi-network designs so far have been limited to narrowly defined problems within a single domain. Designing a complex system such as a ship would first require extensive training on current designs and, second, the ability to model the interactions between system components. The cutting edge in NN architectures is in massive NNs such as those built by Google for recognizing handwritten digits and the "Borg Cube" NN architecture developed by Affectiva for recognizing image components in pictures. In Google's approach, the image data was run through a number of layers, each consisting of a number of NNs.

Neural Architecture Search in Embedding Space Machine Learning

The neural architecture search (NAS) algorithm with reinforcement learning can be a powerful and novel framework for the automatic discovery of neural architectures. However, its application is restricted by non-continuous and high-dimensional search spaces, which make optimization difficult. To resolve these problems, we propose NAS in embedding space (NASES), a novel framework. Unlike other NAS-with-reinforcement-learning approaches that search over a discrete and high-dimensional architecture space, NASES enables reinforcement learning to search in an embedding space by using architecture encoders and decoders. Our experiments demonstrate that the performance of the final architecture found by the NASES procedure is comparable with that of other popular NAS approaches on the CIFAR-10 image-classification task. The performance and efficiency of NASES were impressive even when only the architecture-embedding search and a pre-trained controller were applied, without other NAS tricks such as parameter sharing. In particular, NASES achieved a considerable reduction in search cost, reaching a final architecture after an average of only 100 searched architectures.

Introduction

Deep neural networks have enabled advances in image recognition, sequential pattern recognition, recommendation systems, and various other tasks in the past decades.
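To illustrate the general idea of searching in a continuous embedding rather than a discrete architecture space, here is a minimal toy sketch. It is not the authors' NASES implementation: the architecture encoding, the binning decoder, the stand-in fitness function, and the use of plain random search (instead of a reinforcement-learning controller) are all simplifying assumptions.

```python
import random

# Toy "architecture": a list of layer widths chosen from a discrete menu.
WIDTHS = [16, 32, 64, 128]

def decode(embedding):
    """Map a continuous embedding vector back to a discrete architecture."""
    # Each coordinate in [0, 1) selects one width via simple binning.
    return [WIDTHS[min(int(x * len(WIDTHS)), len(WIDTHS) - 1)] for x in embedding]

def fitness(arch):
    """Stand-in for validation accuracy (a real NAS would train the network)."""
    return sum(arch) - 0.01 * sum(w * w for w in arch)  # reward width, penalize cost

random.seed(0)
best_emb, best_fit = None, float("-inf")
for _ in range(100):  # search directly in the continuous embedding space
    emb = [random.random() for _ in range(4)]
    f = fitness(decode(emb))
    if f > best_fit:
        best_emb, best_fit = emb, f

best_arch = decode(best_emb)
print(best_arch)
```

The point of the decoder is that every continuous point maps to a valid discrete architecture, so the optimizer never has to handle invalid candidates; NASES additionally learns the encoder/decoder rather than hand-coding them as done here.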

The Compositional Architecture of the Internet

Communications of the ACM

In 1992, the explosive growth of the World Wide Web began. The architecture of the Internet was commonly described as having four layers above the physical media, each providing a distinct function: a "link" layer providing local packet delivery over heterogeneous physical networks, a "network" layer providing best-effort global packet delivery across autonomous networks all using the Internet Protocol (IP), a "transport" layer providing communication services such as reliable byte streams (TCP) and datagram service (UDP), and an "application" layer. In 1993, the last major change was made to this classic Internet architecture [11]; since then the scale and economics of the Internet have precluded further changes to IP [12]. A lot has happened in the world since 1993.
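The layered split described above can be sketched as nested encapsulation: each layer wraps the payload it receives from the layer above with its own header. The header strings below are hypothetical markers for illustration, not real protocol encodings.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in toy transport, network, and link headers."""
    transport = b"UDP|" + app_data   # transport layer: datagram service
    network = b"IP|" + transport     # network layer: best-effort global delivery
    link = b"ETH|" + network         # link layer: local delivery over one physical net
    return link

def decapsulate(frame: bytes) -> bytes:
    """Peel the toy headers off in reverse order, checking each layer."""
    for header in (b"ETH|", b"IP|", b"UDP|"):
        assert frame.startswith(header), f"unexpected layer: {frame[:8]!r}"
        frame = frame[len(header):]
    return frame

packet = encapsulate(b"hello")
print(packet)  # b'ETH|IP|UDP|hello'
assert decapsulate(packet) == b"hello"
```

Only the link-layer header changes as a packet crosses heterogeneous physical networks; the IP header is what every autonomous network agrees on, which is why changing IP itself became so hard.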

VGG16 architecture


VGGNet-16 consists of 16 layers with learnable weights (13 convolutional and 3 fully connected) and is very appealing because of its very uniform architecture. Unlike AlexNet's mix of kernel sizes, it uses only 3x3 convolutions, but with many filters. It can be trained on 4 GPUs in 2–3 weeks. It is currently one of the most preferred choices in the community for extracting features from images. The weight configuration of VGGNet is publicly available and has been used as a baseline feature extractor in many other applications and challenges.
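One way to see why the publicly released weights are so large is to count the parameters implied by the standard VGG16 configuration (13 conv layers plus 3 fully connected layers, 224x224 RGB input, 1000 classes). This is a back-of-the-envelope sketch following the published architecture, not a framework implementation:

```python
# VGG16 configuration: 3x3-conv output channels per layer, "M" = 2x2 max-pool.
CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]

def vgg16_param_count(in_channels: int = 3, num_classes: int = 1000) -> int:
    total = 0
    for v in CFG:
        if v == "M":
            continue  # pooling layers have no weights
        total += 3 * 3 * in_channels * v + v  # 3x3 kernel weights + biases
        in_channels = v
    # After five 2x pooling stages, a 224x224 input becomes a 7x7x512 feature map.
    for fan_in, fan_out in [(512 * 7 * 7, 4096), (4096, 4096), (4096, num_classes)]:
        total += fan_in * fan_out + fan_out  # fully connected weights + biases
    return total

print(vgg16_param_count())  # 138357544, i.e. about 138 million parameters
```

Note that roughly 90% of the parameters sit in the fully connected layers, which is why feature-extraction pipelines that keep only the convolutional stack are much lighter than the full classifier.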