Neuromorphic Chipsets - Industry Adoption Analysis

#artificialintelligence

Von Neumann Architecture vs. Neuromorphic Architecture

Neuromorphic architectures address challenges prevalent in the traditional von Neumann architecture, such as high power consumption, low speed, and other efficiency-related bottlenecks:

- Architecture bottleneck: von Neumann machines shuttle data over a bus between the CPU and memory; neuromorphic architectures integrate processing and storage, getting rid of this bus bottleneck.
- Encoding scheme and signals: unlike the von Neumann architecture, with its sudden highs and lows in the form of binary encoding, neuromorphic chips offer a continuous analog transition in the form of spiking signals.
- Devices and components: CPUs, memory, logic gates, etc., versus artificial neurons and synapses; neuromorphic devices and components are more complex than logic gates.

Neuromorphic Chipsets vs. GPUs

- Basic operation: neuromorphic chips are based on emulating the biological behavior of neurons on a chip; GPUs use parallel processing to perform mathematical operations.
- Parallelism: inherent in neuromorphic chips, enabled by neurons and synapses; GPUs require the development of architectures for parallel processing to handle multiple tasks simultaneously.
- Data processing: high for both.
- Power: low for neuromorphic chips; GPUs are power-intensive.
- Accuracy: low for neuromorphic chips; high for GPUs.
- Industry adoption: neuromorphic chips are still in the experimental stage; GPUs are more accessible.
- Software: new tools and methodologies need to be developed for programming neuromorphic hardware; GPUs are easier to program than neuromorphic silicon.
- Memory: neuromorphic chips integrate memory and neural processing; GPUs use external memory.
- Limitations: neuromorphic chips are not suitable for precise calculations, pose programming-related challenges, and are difficult to build due to the complexity of their interconnections; GPUs are thread-limited and suboptimal for massively parallel structures.

Neuromorphic chipsets are at an early stage of development and would take approximately 20 years to reach the same level of maturity as GPUs.
The asynchronous operation of neuromorphic chips makes them more efficient than other processing units.
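The encoding difference described above — discrete spikes over time rather than a single binary value — can be sketched with a leaky integrate-and-fire (LIF) neuron, the simplest common spiking-neuron model. This is an illustrative toy, not the behavior of any particular chip; the threshold and leak constants are assumptions chosen for the example.

```python
# Minimal LIF sketch: input current is integrated with leak, and the
# neuron emits a spike (1) whenever its potential crosses a threshold.
# All constants are illustrative, not taken from any real chipset.

def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Return a spike train (0/1 per timestep) for a stream of input currents."""
    v = 0.0          # membrane potential
    spikes = []
    for i in inputs:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(1)
            v = 0.0               # ...then reset the potential
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 1]
```

Note that information is carried in the timing and rate of spikes, and the neuron is silent (and consumes no switching activity) when no threshold crossing occurs — the event-driven property behind the efficiency claim above.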


Building a Brain May Mean Going Analog

Communications of the ACM

Digital supercomputing can be expensive and energy-hungry, yet it still struggles with problems that the human brain tackles easily, such as understanding speech or viewing a photograph and recognizing what it shows. Even though artificial neural networks that apply deep learning have made much headway over the last few years, some computer scientists think they can do better with systems that even more closely resemble a living brain. Such neuromorphic computing, as this brain emulation is known, might not only accomplish tasks that current computers cannot; it could also lead to a clearer understanding of how human memory and cognition work. Also, if researchers can figure out how to build the machines out of analog circuits, they could run them on a fraction of the energy needed by modern computers. "The real driver for neuromorphic computing is energy efficiency, and the current design space on CMOS isn't particularly energy efficient," says Mark Stiles, a physicist who is a project leader in the Center for Nanoscale Science and Technology at the U.S. National Institute of Standards and Technology (NIST) in Gaithersburg, MD.


Beyond von Neumann, Neuromorphic Computing Steadily Advances

#artificialintelligence

Both projects are part of the European Human Brain Project, originally funded by the European Commission's Future Emerging Technologies program (2005-2015). With more than one million cores and one thousand simulated neurons per core, SpiNNaker should be capable of simulating one billion neurons in real time. Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing, wrote an interesting commentary on the TrueNorth project that traces the development of computing based on the von Neumann architecture and contrasts it with neuromorphic computing approaches: Introducing a Brain-inspired Computer. The TrueNorth chip, introduced in August 2014, is a neuromorphic CMOS chip that consists of 4,096 hardware cores, each one simulating 256 programmable silicon "neurons," for a total of just over a million neurons.
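The capacity figures quoted above are simple products of cores and neurons per core, and they check out:

```python
# Sanity check on the neuron counts quoted above (pure arithmetic).
truenorth_neurons = 4096 * 256         # cores x neurons per core
spinnaker_neurons = 1_000_000 * 1000   # cores x simulated neurons per core

print(truenorth_neurons)   # 1048576 -> "just over a million"
print(spinnaker_neurons)   # 1000000000 -> one billion
```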


Brain like a computer: bad at math, good at everything else.

#artificialintelligence

We all remember the painful arithmetic exercises at school. It takes at least a minute to multiply numbers like 3,752 and 6,901 with pencil and paper. Of course, today, when we have phones at hand, we can quickly check that the result of our exercise should be 25,892,552. Processors in modern phones can perform more than 100 billion such operations per second. Moreover, these chips consume only a few watts, which makes them much more efficient than our slower brains, which consume 20 watts and require much more time to achieve the same result. Of course, the brain has not evolved to do arithmetic.
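The contrast above is easy to verify: the multiplication that takes a minute by hand is a single machine operation.

```python
# The arithmetic from the passage above, done the way a processor does it:
# one multiply instruction instead of a minute of pencil-and-paper work.
print(3752 * 6901)  # 25892552
```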


Neuro-memristive Circuits for Edge Computing: A review

arXiv.org Artificial Intelligence

The volume, veracity, variability, and velocity of data produced by the ever-increasing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce the overheads of cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and present the advantages, drawbacks, and open problems of memristive circuits and architectures from an edge computing perspective.