However, researchers at the University of Michigan claim to have built the first memristor-based programmable computer, with the potential to make AI applications faster and more efficient. Because memristors have memory, they can store and accumulate data in place, a combination well suited to, among other things, neural networks. The chip pairs a crossbar array of nearly 6,000 memristors with analog-to-digital and digital-to-analog converters: 486 DACs and 162 ADCs, alongside an OpenRISC processor. According to the paper, the chip achieved 188 billion operations per second per watt while consuming about 300 mW.
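The reason a memristor crossbar suits neural networks is that it performs an analog multiply-accumulate in a single step: each device's conductance encodes a weight, and physics does the arithmetic. A minimal sketch of that computation (the sizes and conductance range here are illustrative assumptions, not the Michigan chip's actual parameters):

```python
import numpy as np

# Sketch of the crossbar principle: by Ohm's law each memristor passes
# current G[i][j] * V[j], and by Kirchhoff's current law the currents on
# row i sum to I[i] = sum_j G[i][j] * V[j] -- a matrix-vector product.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # device conductances in siemens (assumed range)
V = np.array([0.1, 0.2, 0.05])             # input voltages in volts

I = G @ V   # row currents: the analog multiply-accumulate
print(I)
```

In hardware, the DACs supply the column voltages V and the ADCs read the row currents I, which is why the chip needs so many converters around the array.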
A Northwestern research team has developed a novel device called a "memtransistor," which operates much like a neuron by performing both memory and information processing. Combining the characteristics of a memristor and a transistor, the memtransistor also has multiple terminals, allowing it to operate more like a neural network.
Machine learning for artificial intelligence (AI) can have a considerable carbon footprint. Deep learning is inherently costly, as it requires massive computational and energy resources. Now researchers in the U.K. have discovered how to create an energy-efficient artificial neural network without sacrificing accuracy, publishing the findings in Nature Communications on August 26, 2020. The biological brain is the inspiration for neuromorphic computing--an interdisciplinary approach that draws upon neuroscience, physics, artificial intelligence, computer science, and electrical engineering to create artificial neural systems that mimic biological functions and systems. The human brain is a complex system of roughly 86 billion neurons and hundreds of trillions of synapses.
Motivated by the advantages of current-mode design, this brief explores the implementation of weight matrices in neuromemristive systems via current-mode memristor crossbar circuits. After deriving theoretical results for the range and distribution of weights in the current-mode design, it is shown that any weight matrix based on voltage-mode crossbars can be mapped to a current-mode crossbar if the voltage-mode weights are carefully bounded. Then, a modified gradient descent rule is derived for the current-mode design that can be used to perform backpropagation training. Behavioral simulations on the MNIST dataset indicate that both voltage- and current-mode designs achieve similar accuracy and similar defect tolerance. However, analysis of the trained weight distributions reveals that current-mode and voltage-mode designs may learn different feature representations.
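The mapping idea above can be sketched in code. This is not the paper's actual mapping: the bound `w_max`, the conductance range, and the differential-pair encoding below are illustrative assumptions, chosen only to show how bounded signed weights can be expressed with non-negative crossbar conductances.

```python
import numpy as np

# Hypothetical sketch: clip voltage-mode weights to [-w_max, w_max], then
# encode each signed weight as a differential pair of non-negative
# conductances (G_pos, G_neg), with the effective weight proportional
# to G_pos - G_neg.

G_MIN, G_MAX = 1e-6, 1e-4   # assumed device conductance range, in siemens

def map_to_current_mode(W_voltage, w_max=1.0):
    W = np.clip(W_voltage, -w_max, w_max)       # enforce the weight bound
    scale = (G_MAX - G_MIN) / (2 * w_max)
    G_pos = G_MIN + scale * (w_max + W)         # larger when W is positive
    G_neg = G_MIN + scale * (w_max - W)         # larger when W is negative
    return G_pos, G_neg

W = np.array([[0.5, -2.0], [1.5, 0.0]])         # -2.0 and 1.5 exceed the bound
G_pos, G_neg = map_to_current_mode(W)

# Decoding recovers the clipped weights: G_pos - G_neg = 2 * scale * W_clipped.
W_eff = (G_pos - G_neg) * 1.0 / (G_MAX - G_MIN)
print(W_eff)   # out-of-range entries come back clipped to [-1, 1]
```

The clipping step is where the paper's "carefully bounded" condition bites: weights outside the representable range are saturated, which is why the training rule must keep the learned weights inside the bound rather than repair them afterward.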
Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology. Lu's next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called "sparse coding" to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos. Memristors are electrical resistors with memory -- advanced electronic devices that regulate current based on the history of the voltages applied to them.
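Sparse coding, the technique named in the article, represents a signal as a combination of only a few "dictionary" patterns. A standard software formulation is the iterative shrinkage-thresholding algorithm (ISTA); the sketch below is that generic algorithm, not Lu's hardware implementation, and the dictionary size and parameters are assumptions.

```python
import numpy as np

# Generic sparse coding via ISTA: find a sparse code `a` approximately
# minimizing ||y - D a||^2 + lam * ||a||_1 for dictionary D and signal y.

def soft_threshold(x, t):
    # Shrink toward zero; this is what promotes sparsity in the code.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, iters=200):
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # safe step size 1/L
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = soft_threshold(a + step * D.T @ (y - D @ a), step * lam)
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.0, -0.5, 0.8]          # signal built from 3 atoms
y = D @ a_true

a_hat = ista(D, y)
print(np.flatnonzero(np.abs(a_hat) > 0.05))     # indices of the active atoms
```

In the memristor version, the dictionary `D` lives in the crossbar's conductances, so the expensive `D @ a` and `D.T @ r` products happen in the analog domain, which is the source of the energy savings the article describes.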