layered neural network
An Optimization Method of Layered Neural Networks based on the Modified Information Criterion
This paper proposes a practical optimization method for layered neural networks, by which the optimal model and parameters can be found simultaneously. We modify the conventional information criterion into a differentiable function of the parameters and then minimize it while controlling it back to the ordinary form. The effectiveness of this method is discussed theoretically and experimentally.
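The abstract is terse, so here is a minimal, hedged sketch of the general idea as we read it: replace the discrete parameter count in an AIC-style criterion with a smooth surrogate, minimize the result by gradient descent, and anneal the surrogate back toward the hard count. The linear model, the surrogate w²/(w² + β), the annealing schedule, and the assumed noise variance are all illustrative choices of ours, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression with a sparse ground truth; a linear model stands in
# for the layered network so the sketch stays self-contained.
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=200)
n, d = X.shape
sigma2 = 0.01  # assumed known noise variance for the AIC-style fit term

def smooth_count(w, beta):
    """Differentiable surrogate for the number of nonzero parameters:
    each term w_i^2 / (w_i^2 + beta) tends to 1 for an active weight
    and to 0 for a pruned one, so the penalty returns to an ordinary
    parameter count as beta -> 0."""
    return np.sum(w**2 / (w**2 + beta))

# Minimize  mse/sigma2 + (2/n) * smooth_count(w, beta)  by gradient
# descent while annealing beta, i.e. "controlling the criterion back
# to the ordinary form" as the abstract puts it.
w = 0.1 * rng.normal(size=d)
beta = 1.0
for step in range(3000):
    r = y - X @ w
    grad_fit = -2.0 * (X.T @ r) / (n * sigma2)
    grad_pen = (2.0 / n) * 2.0 * beta * w / (w**2 + beta) ** 2
    w -= 0.002 * (grad_fit + grad_pen)
    beta = max(1e-4, beta * 0.998)  # anneal the smoothing away

print(np.round(w, 2))  # active weights recovered; irrelevant ones stay near zero
```

The point of the annealing is that early in training the criterion is smooth enough to optimize by gradient methods, while by the end it approximates the ordinary information criterion it was derived from.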
Silicon Brains: Designing Self Organising Neural Networks
A healthy child's developing brain can add nearly 250,000 neurons every minute! At birth, the brain already has almost all the neurons it will ever have, yet it continues to grow for a few years after a person is born, reaching most of its adult size by the age of 2. This continued growth is thanks to glial cells, which keep dividing and multiplying and carry out many important functions, including insulating nerve cells with myelin.
- Health & Medicine > Therapeutic Area > Neurology (0.72)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.57)
Neural networks grown and self-organized by noise
Raghavan, Guruprasad, Thomson, Matt
Living neural networks emerge through a process of growth and self-organization that begins with a single cell and results in a brain, an organized and functional computational device. Artificial neural networks, however, rely on human-designed, hand-programmed architectures for their remarkable performance. Can we develop artificial computational devices that can grow and self-organize without human intervention? In this paper, we propose a biologically inspired developmental algorithm that can 'grow' a functional, layered neural network from a single initial cell. The algorithm organizes inter-layer connections to construct a convolutional pooling layer, a key constituent of convolutional neural networks (CNNs). Our approach is inspired by the mechanisms employed by the early visual system to wire the retina to the lateral geniculate nucleus (LGN), days before animals open their eyes. The key ingredients for robust self-organization are an emergent spontaneous spatiotemporal activity wave in the first layer and a local learning rule in the second layer that 'learns' the underlying activity pattern in the first layer. The algorithm is adaptable to a wide range of input-layer geometries, is robust to malfunctioning units in the first layer, and can thus be used to grow and self-organize pooling architectures of different pool sizes and shapes. The algorithm provides a primitive procedure for constructing layered neural networks through growth and self-organization. Broadly, our work shows that biologically inspired developmental algorithms can be applied to autonomously grow functional 'brains' in silico. (A toy sketch of this wave-plus-local-learning scheme appears after the tags below.)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- Research Report (0.64)
- Workflow (0.46)
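To make the mechanism in the abstract above concrete (the sketch referenced there), here is a minimal toy version: a Gaussian activity bump sweeps across a one-dimensional input layer, and a winner-take-all Hebbian rule in the second layer carves the inputs into contiguous pools. The 1D geometry, bump shape, learning rate, and hard winner-take-all competition are our simplifying assumptions; the paper's wave dynamics and local learning rule are richer.

```python
import numpy as np

rng = np.random.default_rng(1)

N1, N2 = 64, 8      # first-layer (input) units and second-layer (pooling) units
width = 4.0         # width of the travelling activity bump (assumption)

# Random feed-forward weights, each row normalized so no unit starts dominant.
W = rng.random((N2, N1))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def wave(center):
    """Spontaneous activity: a localized Gaussian bump on the input layer,
    standing in for the spatiotemporal activity wave in the paper."""
    x = np.arange(N1)
    return np.exp(-0.5 * ((x - center) / width) ** 2)

# Sweep the bump across the layer many times; after each presentation the
# most-activated second-layer unit strengthens its weights onto the
# currently active inputs (a winner-take-all Hebbian rule, our stand-in
# for the paper's local learning rule).
for epoch in range(200):
    for center in np.linspace(0, N1 - 1, 64):
        a1 = wave(center)
        winner = np.argmax(W @ a1)
        W[winner] += 0.05 * a1                  # Hebbian strengthening
        W[winner] /= np.linalg.norm(W[winner])  # normalization bounds the weights

# Each input's strongest pooling unit; contiguous runs of the same label
# indicate that pools tile the input layer retinotopically.
print(np.argmax(W, axis=0).reshape(8, 8))
```

Because the bump activates only neighboring inputs at any moment, units that win for nearby bump positions end up pooling overlapping, spatially contiguous patches, which is what makes the resulting map retinotopic.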
The Truth about Neural Networks – Stupid Simple AI Series – Medium
The answer to this question is no: the idea of neural networks is old. The basic idea was invented in 1943, when it was called threshold logic. And yes, that is part of the problem: "threshold logic" is a very practical name that describes the mechanism of the model, but it sounds too scientific, and you cannot create good advertisements for it. It was a primitive version of today's neural networks, but it was the first step.
Interpreting Layered Neural Networks via Hierarchical Modular Representation
Interpreting the prediction mechanism of complex models is currently one of the most important tasks in the machine learning field, especially for layered neural networks, which have achieved high predictive performance on various practical data sets. To reveal the global structure of a trained neural network in an interpretable way, a series of clustering methods have been proposed that decompose the units into clusters according to the similarity of their inference roles. The main problems in these studies were that (1) we have no prior knowledge about the optimal resolution for the decomposition, i.e., the appropriate number of clusters, and (2) there was no method for determining whether the outputs of each cluster are positively or negatively correlated with the input and output dimension values. In this paper, to solve these problems, we propose a method for obtaining a hierarchical modular representation of a layered neural network. Applying a hierarchical clustering method to a trained network reveals a tree-structured relationship among hidden-layer units, based on feature vectors defined by the units' correlations with the input and output dimension values. (A toy sketch of this clustering step follows the tags below.)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture (0.14)
- Europe > Italy > Marche > Ancona Province > Ancona (0.04)
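As referenced in the abstract above, here is a minimal sketch of the clustering step under our own assumptions: each hidden unit gets a feature vector of signed correlations with every input and output dimension (preserving the positive/negative information that problem (2) asks for), and hierarchical clustering over those vectors exposes module decompositions at every resolution at once (addressing problem (1)). The synthetic activations, Ward linkage, and cluster counts are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)

# Stand-ins: activations of 12 hidden units on 500 samples, plus the
# network's inputs and outputs on the same samples (all synthetic here;
# in practice these come from a trained network and its data set).
samples = 500
inputs = rng.normal(size=(samples, 6))
outputs = rng.normal(size=(samples, 3))
hidden = rng.normal(size=(samples, 12))

def corr_matrix(A, B):
    """Signed Pearson correlations between the columns of A and B."""
    A = (A - A.mean(0)) / A.std(0)
    B = (B - B.mean(0)) / B.std(0)
    return A.T @ B / len(A)

# Feature vector of each hidden unit: its correlation with every input
# and output dimension, as the abstract describes.  Shape: (12, 9).
features = np.hstack([corr_matrix(hidden, inputs),
                      corr_matrix(hidden, outputs)])

# Ward linkage builds the full tree; cutting it at different heights
# yields module decompositions at every resolution, so no single
# "correct" number of clusters has to be chosen in advance.
tree = linkage(features, method="ward")
for k in (2, 4, 6):
    print(k, fcluster(tree, t=k, criterion="maxclust"))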