

You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy

Srivatsa P, Kyle Timothy Ng Chu, Burin Amornpaisannon, Yaswanth Tavva, Venkata Pavan Kumar Miriyala, Jibin Wu, Malu Zhang, Haizhou Li, Trevor E. Carlson

arXiv.org Artificial Intelligence

In the past decade, advances in Artificial Neural Networks (ANNs) have allowed them to perform extremely well on a wide range of tasks. In fact, they have reached human parity on tasks such as image recognition. Unfortunately, the accuracy of these ANNs comes at the expense of a large number of cache and/or memory accesses and compute operations. Spiking Neural Networks (SNNs), a type of neuromorphic, or brain-inspired, network, have recently gained significant interest as power-efficient alternatives to ANNs, because they are sparse, accessing very few weights, and typically use only addition operations instead of the more power-intensive multiply-and-accumulate (MAC) operations. The vast majority of neuromorphic hardware designs support rate-encoded SNNs, where information is encoded in spike rates. Rate encoding, however, can be seen as inefficient, because it involves the transmission of a large number of spikes. A more efficient scheme, Time-To-First-Spike (TTFS) encoding, encodes information in the relative arrival times of spikes. While TTFS-encoded SNNs are more efficient than rate-encoded SNNs, they have, up to now, performed poorly in accuracy compared to previous approaches. Hence, in this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems. To accomplish this, we propose: (1) a novel optimization algorithm for TTFS-encoded SNNs converted from ANNs and (2) a novel hardware accelerator for TTFS-encoded SNNs, with a scalable and low-power design. Overall, our work in TTFS encoding and training improves the accuracy of SNNs to achieve state-of-the-art results for MLPs on MNIST, while reducing power consumption by 1.46× over the state-of-the-art neuromorphic hardware.
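The difference between the two encoding schemes described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's algorithm; the time window, activation values, and rate scale are arbitrary illustrative assumptions. The key point is that TTFS carries each value in the timing of a single spike, while rate encoding needs a whole train of spikes.

```python
# TTFS encoding sketch: a normalized activation in [0, 1] maps to one
# spike time in [0, t_max]; larger activations fire earlier, so a single
# spike per neuron carries the value. Constants are illustrative.

def ttfs_encode(activation, t_max=10.0):
    """Map a normalized activation to a spike time (earlier = stronger)."""
    return t_max * (1.0 - activation)

def ttfs_decode(spike_time, t_max=10.0):
    """Recover the activation from the spike time."""
    return 1.0 - spike_time / t_max

def rate_encode(activation, window=10, max_rate=1.0):
    """Rate encoding for comparison: spike count within a time window."""
    return round(activation * max_rate * window)

for a in [0.9, 0.5, 0.1]:
    t = ttfs_encode(a)
    n = rate_encode(a)
    print(f"activation={a:.1f}: TTFS -> 1 spike at t={t:.1f}, "
          f"rate -> {n} spikes in the window")
```

Because every neuron emits at most one spike, the number of spike events (and hence the switching energy on neuromorphic hardware) stays constant regardless of how strong the activation is, which is the efficiency argument the abstract makes.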


Neuromorphic Computing: The Next-Level Artificial Intelligence

#artificialintelligence

Can AI function like a human brain? Armed with neuromorphic computing, researchers are ready to show the world that this dream can change the world for the better. As we unearth the benefits, the success of our machine learning and AI quest seems to depend to a great extent on the success of neuromorphic computing. Technologies of the future, such as autonomous vehicles and robots, will need access to and utilization of an enormous amount of data and information in real time. Today, to a limited extent, this is done by machine learning and AI systems that depend on supercomputer power.


Low-Wattage Chip Modeled From Human Brain to Power New Federal Supercomputer – MeriTalk

#artificialintelligence

In light of recent advances in performance, not to mention the history of computing, it's reasonable to assume that artificial intelligence and machine learning systems will become smarter and faster. But government-funded research being put into practice at the Air Force Research Laboratory (AFRL) could achieve new levels of performance while also consuming minimal amounts of power. AFRL and IBM, working from a program started nearly a decade ago by the Defense Advanced Research Projects Agency (DARPA), have developed a "neuromorphic chip" called TrueNorth that is patterned on the neurons in the brain and can perform heavy-duty calculations while using a fraction of the energy of conventional processors. "The major advantage of this chip," said Qing Wu, AFRL's principal electronics engineer, in a statement, "is it runs machine learning algorithms, the same ones as we run, the same functionality, same accuracy, but with much less power dissipation." IBM has started work on building AFRL a supercomputer made with 64 TrueNorth chips that will be used for pattern and object recognition.


Deep learning inference possible in embedded systems thanks to TrueNorth - IBM Blog Research

#artificialintelligence

Scientists at IBM Research – Almaden have demonstrated that the TrueNorth brain-inspired computer chip, with its 1 million neurons and 256 million synapses, can efficiently implement inference with deep networks that approach state-of-the-art classification accuracy on several vision and speech datasets. The essence of the innovation was a new algorithm for training deep networks to run efficiently on a neuromorphic architecture, such as TrueNorth, by using 1-bit neural spikes, low-precision synapses, and constrained block-wise connectivity, a task that was previously thought to be difficult, if not impossible. "The goal of brain-inspired computing is to deliver a scalable neural network substrate while approaching fundamental limits of time, space, and energy," said IBM Fellow Dharmendra Modha, chief scientist, Brain-inspired Computing, IBM Research. Today, the TrueNorth development ecosystem includes not only the TrueNorth brain-inspired processor, the novel algorithm for training deep networks, and the scaled-up NS16e system, but also a simulator, a programming language, an integrated programming environment, a library of algorithms and applications, firmware, a teaching curriculum, single-chip boards, and scaled-out systems.
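The constraints mentioned above, 1-bit spikes and low-precision synapses, can be illustrated with a toy forward pass. This is purely an illustrative sketch and not IBM's training algorithm: weights are restricted to {-1, 0, +1} and each neuron's output is a single bit, which is the flavor of arithmetic such a chip performs at inference time.

```python
# Toy layer under 1-bit-spike, low-precision-synapse constraints:
# inputs and outputs are single bits, weights are in {-1, 0, +1},
# so the dot product reduces to additions and subtractions only.
# Values below are arbitrary illustrative choices.

def binarize(x):
    """1-bit spike: fire iff the summed input is positive."""
    return 1 if x > 0 else 0

def layer(spikes_in, weights, biases):
    """One layer with trinary weights and binary spike outputs."""
    out = []
    for w_row, b in zip(weights, biases):
        s = sum(w * x for w, x in zip(w_row, spikes_in)) + b
        out.append(binarize(s))
    return out

spikes = [1, 0, 1, 1]                      # 1-bit input spikes
weights = [[1, -1, 0, 1], [-1, 0, 1, -1]]  # synapses in {-1, 0, +1}
biases = [-1, 0]
print(layer(spikes, weights, biases))      # 1-bit output spikes
```

Because no multiplications are needed (multiplying by ±1 or 0 is just add, subtract, or skip), inference under these constraints avoids the MAC units that dominate power in conventional accelerators.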


This demonstration opens up the possibility of embedding intelligence across the entire computing stack, from the Internet of Things to smartphones, robotics, cars, cloud computing, and even supercomputing. The novel architecture of the TrueNorth processor can classify image data at between 1,200 and 2,600 frames per second while using a mere 25 to 275 mW, which is effectively more than 6,000 frames per second per Watt. Like that kung fu master in the movies who simultaneously fights assaults from many opponents, this processor can detect patterns in real time from 50 to 100 cameras at once, each with 32×32 color pixels and streaming information at the standard TV rate of 24 fps, while running on a smartphone battery for days without recharging. The breakthrough was published this week in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS).
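The throughput-per-Watt claim can be sanity-checked with back-of-the-envelope arithmetic from the figures quoted above (1,200–2,600 fps at 25–275 mW); the pairing of frame rates with power levels is an assumption here, since the article does not say which operating point corresponds to which power draw.

```python
# Sanity check on the quoted TrueNorth figures. Even the most
# pessimistic pairing (lowest frame rate at highest power) yields over
# 4,000 fps/W, and the most optimistic pairing is far higher, so the
# quoted ">6,000 fps per Watt" sits comfortably inside this range.
fps_low, fps_high = 1200, 2600
power_low_w, power_high_w = 0.025, 0.275  # 25 mW and 275 mW

worst_case = fps_low / power_high_w   # lowest fps at highest power
best_case = fps_high / power_low_w    # highest fps at lowest power
print(f"worst case: {worst_case:,.0f} fps/W")
print(f"best case:  {best_case:,.0f} fps/W")
```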


Building a Brain May Mean Going Analog

Communications of the ACM

Digital supercomputing can be expensive and energy-hungry, yet it still struggles with problems that the human brain tackles easily, such as understanding speech or viewing a photograph and recognizing what it shows. Even though artificial neural networks that apply deep learning have made much headway over the last few years, some computer scientists think they can do better with systems that even more closely resemble a living brain. Such neuromorphic computing, as this brain emulation is known, might not only accomplish tasks that current computers cannot, it could also lead to a clearer understanding of how human memory and cognition work. Also, if researchers can figure out how to build the machines out of analog circuits, they could run them with a fraction of the energy needed by modern computers. "The real driver for neuromorphic computing is energy efficiency, and the current design space on CMOS isn't particularly energy efficient," says Mark Stiles, a physicist who is a project leader in the Center for Nanoscale Science and Technology at the U.S. National Institute of Standards and Technology (NIST) in Gaithersburg, MD.


A Computer to Rival the Brain

#artificialintelligence

More than two hundred years ago, a French weaver named Joseph Jacquard invented a mechanism that greatly simplified textile production. His design replaced the lowly draw boy--the young apprentice who meticulously chose which threads to feed into the loom to create a particular pattern--with a series of paper punch cards, which had holes dictating the lay of each stitch. The device was so successful that it was repurposed in the first interfaces between humans and computers; for much of the twentieth century, programmers laid out their code like weavers, using a lattice of punched holes. The cards themselves were fussy and fragile. Ethereal information was at the mercy of its paper substrate, coded in a language only experts could understand.


More on 3rd Generation Spiking Neural Nets

@machinelearnbot

Summary: Here's some background on how 3rd generation Spiking Neural Nets are progressing, and news about a first commercial rollout. Recently we wrote about the development of AI and neural nets beyond the second-generation Convolutional and Recurrent Neural Nets (CNNs/RNNs) which have come on so strong and dominate the current conversation about deep learning. Our research shows that the next generation of neural nets is most likely to be led by Spiking Neural Nets (SNNs), which are a return to the 'strong' AI tradition and closely mimic actual brain function. Unlike CNNs, which fire signals to every one of their deep-layer connections every time, SNNs are modeled after the fact that neurons in the brain do not constantly communicate with one another. Rather, they communicate in spikes of signals, or more correctly, short trains of spiking signals.


IBM's 'Rodent Brain' Chip Could Make Our Phones Hyper-Smart

AITopics Original Links

Dharmendra Modha walks me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a '70s sci-fi movie, but Modha describes it differently. "You're looking at a small rodent," he says. He means the brain of a small rodent--or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons--the basic building blocks of biological brains.


Blueprints for Brainlike Computing from IBM

AITopics Original Links

To create a computer as powerful as the human brain, perhaps we first need to build one that works more like a brain. Today, at the International Joint Conference on Neural Networks in Dallas, IBM researchers will unveil a radically new computer architecture designed to bring that goal within reach. Using simulations of enormous complexity, they show that the architecture, named TrueNorth, could lead to a new generation of machines that function more like biological brains. The announcement builds on IBM's ongoing projects in cognitive computing. In 2011, the research team released computer chips that use a network of "neurosynaptic cores" to manage information in a way that resembles the functioning of neurons in a brain (see "IBM's New Chips Compute More Like We Do").