
Beyond von Neumann, Neuromorphic Computing Steadily Advances


Both projects are part of the European Human Brain Project, originally funded by the European Commission's Future and Emerging Technologies programme (2005-2015). With more than one million cores and one thousand simulated neurons per core, SpiNNaker should be capable of simulating one billion neurons in real time. Dharmendra Modha, IBM Fellow and chief scientist for brain-inspired computing, wrote an interesting commentary on the TrueNorth project, "Introducing a Brain-inspired Computer," which traces the development of von Neumann-architecture computing and contrasts it with neuromorphic approaches. The TrueNorth chip, introduced in August 2014, is a neuromorphic CMOS chip consisting of 4,096 hardware cores, each simulating 256 programmable silicon "neurons," for a total of just over a million neurons.
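The neuron counts quoted above follow directly from cores times neurons per core. A trivial sanity check (illustrative only, not from the original articles):

```python
# Sanity-check the quoted neuron counts: total = cores x neurons per core.

def total_neurons(cores: int, neurons_per_core: int) -> int:
    """Total simulated neurons for a many-core neuromorphic system."""
    return cores * neurons_per_core

# SpiNNaker: ~1,000,000 cores x 1,000 simulated neurons per core
print(total_neurons(1_000_000, 1_000))  # 1000000000 -> one billion neurons

# TrueNorth: 4,096 cores x 256 silicon neurons per core
print(total_neurons(4_096, 256))        # 1048576 -> just over a million
```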

Neuromorphic Chipsets - Industry Adoption Analysis


Von Neumann Architecture vs. Neuromorphic Architecture

Neuromorphic architectures address challenges such as high power consumption, low speed, and other efficiency bottlenecks prevalent in the traditional von Neumann architecture.

- Architecture bottleneck: Von Neumann systems separate the CPU from memory, creating a bus bottleneck; neuromorphic architectures integrate processing and storage, eliminating that bottleneck.
- Encoding scheme and signals: Von Neumann machines use binary encoding, with sudden highs and lows; neuromorphic chips offer a continuous analog transition in the form of spiking signals.
- Devices and components: Von Neumann hardware is built from CPUs, memory, logic gates, and the like; neuromorphic hardware is built from artificial neurons and synapses, which are more complex than logic gates.

Neuromorphic Chipsets vs. GPUs

- Basic operation: Neuromorphic chips emulate the biological behaviour of neurons on a chip; GPUs use parallel processing to perform mathematical operations.
- Parallelism: Neuromorphic chips get inherent parallelism from their neurons and synapses; GPUs require architectures developed specifically for parallel processing to handle multiple tasks simultaneously.
- Data processing: High for both.
- Power: Low for neuromorphic chips; GPUs are power-intensive.
- Accuracy: Low for neuromorphic chips; high for GPUs.
- Industry adoption: Neuromorphic chips are still at the experimental stage; GPUs are more accessible.
- Software: New tools and methodologies need to be developed for programming neuromorphic hardware; GPUs are easier to program than neuromorphic silicon.
- Memory: Neuromorphic chips integrate memory with neural processing; GPUs use external memory.
- Limitations: Neuromorphic chips are not suitable for precise calculations, pose programming-related challenges, and are difficult to build because of the complexity of their interconnections; GPUs are thread-limited and suboptimal for massively parallel structures.

Neuromorphic chipsets are at an early stage of development and may take approximately 20 years to reach the same level of maturity as GPUs.
The asynchronous, event-driven operation of neuromorphic chips makes them more power-efficient than conventional clock-driven processing units.
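The spiking signals contrasted with binary encoding above can be sketched with a minimal leaky integrate-and-fire neuron. This is an illustrative toy model, not code from any of the chips discussed; the parameter values (`leak`, `threshold`) are arbitrary assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: inputs are integrated
# with a leak, and a spike event is emitted only when the membrane
# potential crosses a threshold -- the event-driven encoding neuromorphic
# chips use instead of clocked binary signals.

def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (1 = spike, 0 = silent) for a list of inputs."""
    v = 0.0                      # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x         # leaky integration of the input
        if v >= threshold:
            spikes.append(1)     # threshold crossed: emit a spike event
            v = 0.0              # reset potential after spiking
        else:
            spikes.append(0)     # sub-threshold: no event, no work
    return spikes

print(lif_spikes([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 1]
```

Note that between spikes nothing needs to happen, which is the source of the efficiency claim: computation and communication occur only on events.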

Turning pings into packets: Why the future of computers looks a lot like your brain


The neurons and synapses of the human brain are serving as the inspiration for the next generation of processor hardware. If you were looking for a model for the next wave of computing hardware, you could do worse than turn to the human brain: it is small, energy-efficient, and has functionality unmatched by any other machine you'd care to name. Given that the human brain's performance is still orders of magnitude ahead of the most powerful supercomputer in existence, yet it requires orders of magnitude less space to house and energy to run, researchers believe technology that mimics the human brain -- known as neuromorphic computing -- could be the future of computing. Despite its name, the aim of neuromorphic computing is not simply to model the workings of human grey matter (though researchers are indeed using it for that). Instead, neuromorphic computing uses the human brain as the inspiration for a new wave of low-energy, high-performance hardware that could end up in everything from supercomputers to smartphones.

Intel Debuts Pohoiki Beach, Its 8M Neuron Neuromorphic Development System


Neuromorphic computing has received less fanfare of late than quantum computing, whose mystery has captured public attention and which seems to have generated more effort (academic, government, and commercial), but whose payoff also seems more distant. Intel's introduction this week of Pohoiki Beach -- an 8-million-neuron neuromorphic system using 64 Loihi research chips -- brings some needed attention back to neuromorphic technology. The new system will be available to Intel's roughly 60 neuromorphic ecosystem partners and represents a significant scaling-up of its development platform, with more to come: Intel reportedly plans to introduce a 768-chip, 100-million-neuron system (Pohoiki Springs) near the end of 2019. "Researchers can now efficiently scale up novel neural-inspired algorithms -- such as sparse coding, simultaneous localization and mapping (SLAM), and path planning -- that can learn and adapt based on data inputs. Pohoiki Beach represents a major milestone in Intel's neuromorphic research, laying the foundation for Intel Labs to scale the architecture to 100 million neurons later this year," according to the official announcement.

Neuromorphic Chips Leading Towards the Future of AI


Over the past few years, a surging focus on neuroscience and the prospect of understanding brain functionality have helped address current technological limitations through the principles of neural computation. Recognising this potential, the research community has launched many remarkable projects to support computational neuroscience and the study of the nervous system's information-processing properties. One example is the Blue Brain Project, based in Switzerland at the Ecole Polytechnique Federale de Lausanne. The project focuses on simulating roughly ten thousand neurons of the rat brain, analysing the nervous system in detail.