Precise neural network computation with imprecise analog devices

arXiv.org Artificial Intelligence

The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency. Nevertheless, such implementations have been largely supplanted by digital designs, partly because of device mismatch effects due to material and fabrication imperfections. We propose a framework that exploits the power of deep learning to compensate for this mismatch by incorporating the measured device variations as constraints in the neural network training process. This eliminates the need for mismatch minimization strategies and allows circuit complexity and power consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation, indicate a processing efficiency comparable to current state-of-the-art digital implementations. This method is suitable for future technology based on nanodevices with large variability, such as memristive arrays.

The growing need for computing power has led to the exploration of computing technologies beyond the predominant von Neumann architecture. In particular, due to the separation of memory and processing elements, traditional computing systems experience a bottleneck when dealing with problems involving large amounts of high-dimensional data [4, 25], such as image processing, probabilistic inference, or speech recognition. These problems are often best tackled by conceptually simple but powerful and highly parallel models, such as deep neural networks (DNNs), which have delivered state-of-the-art performance on exactly those applications [29]. The fact that DNNs are characterized by stereotypical and simple operations at each unit, which can often be performed in parallel, makes them compatible with the processing style of graphics processing units (GPUs) [46]. The large computational demands of DNNs have simultaneously sparked interest in methods that make neural network inference faster and more power efficient, whether through new algorithmic inventions [19, 22, 12], dedicated digital hardware implementations [6, 17, 9, 34], or by taking inspiration from real nervous systems [15, 38, 33, 24, 40].
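The core idea, training with the measured mismatch baked in, lends itself to a short illustration. The sketch below is a hypothetical NumPy example, not the authors' code: it assumes device mismatch can be modeled as fixed, measured multiplicative gain errors on each synaptic weight, applies those errors in the forward pass of a toy two-layer network, and takes one gradient step so that the learned weights absorb the distortion.

```python
# Minimal sketch of mismatch-aware training (hypothetical model): each analog
# synapse is assumed to apply a fixed, measured gain error, and training runs
# with those errors in the forward pass so the learned weights compensate.
import numpy as np

rng = np.random.default_rng(0)

# "Measured" per-device gain variations (e.g. ~20% spread); in practice these
# would come from characterizing the fabricated chip.
n_in, n_hidden, n_out = 8, 16, 2
gain1 = 1.0 + 0.2 * rng.standard_normal((n_in, n_hidden))
gain2 = 1.0 + 0.2 * rng.standard_normal((n_hidden, n_out))

W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_out))

def forward(x, W1, W2):
    # The analog hardware effectively computes x @ (W * gain); training
    # against the same distortion lets the stored weights absorb it.
    h = np.tanh(x @ (W1 * gain1))
    y = h @ (W2 * gain2)
    return h, y

# One SGD step on a toy regression batch.
x = rng.standard_normal((32, n_in))
t = rng.standard_normal((32, n_out))
lr = 0.01
h, y = forward(x, W1, W2)
err = y - t                                   # dL/dy for squared error
gW2 = (h.T @ err) * gain2                     # gradient through W2 * gain2
dh = (err @ (W2 * gain2).T) * (1 - h ** 2)    # backprop through tanh
gW1 = (x.T @ dh) * gain1
W2 -= lr * gW2
W1 -= lr * gW1
```

At deployment, only the trained weights are programmed onto the mismatched devices; no per-device calibration circuitry is needed because the compensation is already folded into the weights.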


Bottom-Up and Top-Down Neural Processing Systems Design: Neuromorphic Intelligence as the Convergence of Natural and Artificial Intelligence

arXiv.org Artificial Intelligence

While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance. One of these avenues is the exploration of new alternative brain-inspired computing architectures that promise to achieve the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic intelligence represents a paradigm shift in computing based on the implementation of spiking neural network architectures tightly co-locating processing and memory. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity present in existing silicon implementations, comparing approaches that aim at replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down), and assessing the benefits of the different circuit design styles used to achieve these goals. First, we present the analog, mixed-signal and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation and novel devices. Next, we highlight the key tradeoffs for each of the bottom-up and top-down approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify both necessary synergies and missing elements required to achieve a competitive advantage for neuromorphic edge computing over conventional machine-learning accelerators, and outline the key elements for a framework toward neuromorphic intelligence.
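As a point of reference for the spiking architectures surveyed here, the following is a minimal leaky integrate-and-fire simulation in Python; the parameter values and function name are illustrative choices, not taken from any chip discussed in the paper. The membrane potential is the neuron's local state, which is the sense in which such designs co-locate memory and processing.

```python
# Minimal sketch of the leaky integrate-and-fire (LIF) neuron model that many
# neuromorphic systems implement in silicon. Parameters are illustrative only.
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v, trace, spikes = 0.0, [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + i_in) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_th:                 # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset               # reset the local state
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = lif_simulate(np.full(200, 1.5))  # constant drive for 200 ms
print(f"{len(spikes)} spikes, first at {spikes[0] * 1e3:.1f} ms")
```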


To build amazing computers, mimic the brain?

#artificialintelligence

Researchers have discovered a solid-state material that mimics the neural signals responsible for transmitting information in the human brain. The work is a step toward developing circuitry that functions like the human brain--neuromorphic computing. The researchers discovered a neuron-like electrical switching mechanism in the solid-state material β'-CuxV2O5--specifically, how it reversibly morphs between conducting and insulating behavior on command. The team was able to clarify the underlying mechanism driving this behavior by taking a new look at β'-CuxV2O5, a remarkable chameleon-like material that changes with temperature or an applied electrical stimulus. In the process, they zeroed in on how copper ions move around inside the material and how this subtle dance in turn sloshes electrons around to transform it.


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

Univ. of Edinburgh ABSTRACT We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made.

1 OVERVIEW OF PULSE FIRING NEURAL VLSI The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (usually 5 Volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
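The pulse-stream arithmetic described above can be sketched in a few lines of Python. The example below is an assumed behavioural model, not the reported circuit: presynaptic activity is encoded as a random pulse rate, each synapse scales the effect of every incoming pulse by its (positive or negative) weight, and the receiving neuron's activity is incremented or decremented accordingly.

```python
# Behavioural sketch of pulse-stream arithmetic (assumed model, not the chip):
# activity is encoded as a pulse rate, synapses weight each incoming pulse,
# and the postsynaptic neuron simply accumulates the weighted increments.
import numpy as np

rng = np.random.default_rng(1)

def pulse_stream(rate, n_steps):
    """Bernoulli approximation of an asynchronous pulse train (1 = pulse)."""
    return (rng.random(n_steps) < rate).astype(float)

def postsynaptic_activity(rates, weights, n_steps=10_000, leak=0.999):
    """Accumulate weighted pulse increments/decrements into one activity value."""
    activity = 0.0
    streams = [pulse_stream(r, n_steps) for r in rates]
    for t in range(n_steps):
        for stream, w in zip(streams, weights):
            if stream[t]:            # on each incoming pulse...
                activity += w        # ...add (excitatory) or subtract (inhibitory)
        activity *= leak             # slow decay of the neuron state
    return activity

# Two excitatory inputs and one inhibitory input: the result approximates the
# weighted sum of the input rates, i.e. a multiply-accumulate done with pulses.
print(postsynaptic_activity(rates=[0.2, 0.5, 0.3], weights=[+1.0, +0.5, -0.8]))
```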

