Chips for Artificial Intelligence

Communications of the ACM

A look under the hood of any major search, commerce, or social-networking site today will reveal a profusion of "deep-learning" algorithms. Over the past decade, these powerful artificial intelligence (AI) tools have been increasingly and successfully applied to image analysis, speech recognition, translation, and many other tasks. Indeed, the computational and power requirements of these algorithms now constitute a major and still-growing fraction of datacenter demand. Designers often offload much of the highly parallel computation to commercial hardware, especially graphics-processing units (GPUs) originally developed for rapid image rendering. These chips are especially well suited to the computationally intensive "training" phase, which tunes system parameters using many validated examples.
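The "training" phase mentioned above can be illustrated with a minimal sketch (ours, not the article's): a single logistic neuron tunes its parameters from labeled examples via gradient descent. Real deep networks repeat the same kind of update across millions of parameters, which is why the massive parallelism of GPUs pays off.

```python
import math

def train(examples, lr=0.5, epochs=2000):
    """Tune weights and bias of one logistic neuron from labeled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            z = w[0] * x[0] + w[1] * x[1] + b
            y = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = y - target                 # cross-entropy gradient dL/dz
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Validated examples: the logical AND of two inputs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

The inner update is dominated by multiply-accumulate operations that are independent across parameters and examples, exactly the pattern GPUs execute in parallel.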

Mac malware, possibly made in Iran, targets US defense industry

Two security researchers are warning that Mac-based malware, possibly the work of Iranian hackers, is targeting the U.S. defense industry. The malware, called MacDownloader, was found on a website impersonating the U.S. aerospace firm United Technologies, according to a report from Claudio Guarnieri and Collin Anderson, who are researching Iranian cyberespionage threats. The fake site was previously used in a spear-phishing email attack to spread Windows malware and is believed to be maintained by Iranian hackers, the researchers said. Visitors to the site are greeted with a page about free programs and courses for employees of U.S. defense companies Lockheed Martin, Raytheon, and Boeing. The malware itself is delivered through an Adobe Flash installer for a video embedded in the site.

NullaNet: Training Deep Neural Networks for Reduced-Memory-Access Inference Machine Learning

Deep neural networks have been successfully deployed in a wide variety of applications, including computer vision and speech recognition. However, the computational and storage complexity of these models has forced the majority of computations to be performed on high-end computing platforms or on the cloud. To cope with this complexity, this paper presents a training method that enables a radically different approach to realizing deep neural networks through Boolean logic minimization. This realization completely removes the energy-hungry step of accessing memory to obtain model parameters, consumes about two orders of magnitude fewer computing resources than realizations that use floating-point operations, and has substantially lower latency.
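The core observation behind such a realization can be sketched as follows (our illustration, not NullaNet's code): a neuron with binary inputs and a thresholded binary output is just a Boolean function of its inputs, so it can be tabulated once offline and then implemented as minimized combinational logic, with no weight memory accessed at inference time.

```python
from itertools import product

def neuron(bits, weights=(1, -1, 1), threshold=1):
    """Conventional realization: fetch weights from memory, accumulate, threshold."""
    return int(sum(w * b for w, b in zip(weights, bits)) >= threshold)

# Enumerate the neuron's complete truth table offline. A logic synthesizer
# would minimize this table into a small network of gates.
truth_table = {bits: neuron(bits) for bits in product((0, 1), repeat=3)}

def neuron_as_logic(bits):
    # After minimization this is pure combinational logic; the dictionary
    # lookup stands in for that logic here -- no parameter memory is read.
    return truth_table[bits]
```

The two realizations are functionally identical, but the second carries the model inside the logic itself, which is what eliminates the memory-access energy the abstract refers to.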

Exploiting Inherent Error-Resiliency of Neuromorphic Computing to Achieve Extreme Energy-Efficiency through Mixed-Signal Neurons Artificial Intelligence

Neuromorphic computing, inspired by the brain, promises extreme efficiency for certain classes of learning tasks, such as classification and pattern recognition. The performance and power consumption of neuromorphic computing depend heavily on the choice of neuron architecture. Digital neurons (Dig-N) are conventionally known to be accurate and efficient at high speed, while suffering from high leakage currents due to the large number of transistors in a large design. On the other hand, analog/mixed-signal neurons are prone to noise, variability, and mismatch, but can lead to extremely low-power designs. In this work, we analyze, compare, and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power, and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy-efficiency in neuromorphic computing. The proposed MS-N is implemented in 65-nm CMOS technology and exhibits >100X better energy-efficiency across all frequencies over two traditional digital neurons synthesized in the same technology node. We also demonstrate that the inherent error-resiliency of a fully connected or even convolutional neural network (CNN) can tolerate the noise as well as the manufacturing non-idealities of the MS-N up to a certain degree. Notably, a system-level implementation on the MNIST dataset exhibits a worst-case increase in classification error of 2.1% when the integrated noise power in the bandwidth is ~0.1 µV², along with ±3σ of variation and mismatch introduced in the transistor parameters, for the proposed neuron with 8-bit precision.
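The error-resiliency argument can be sketched with a toy model (ours, with made-up numbers, not the paper's circuit): a mixed-signal neuron accumulates an analog sum perturbed by noise and device mismatch, but as long as the perturbation stays well below the decision margin, the thresholded output, and hence the classification, is unchanged.

```python
import random

random.seed(0)

def ms_neuron(inputs, weights, threshold, noise_sigma):
    """Toy mixed-signal neuron: analog accumulation plus Gaussian noise, then threshold."""
    analog_sum = sum(w * x for w, x in zip(weights, inputs))
    analog_sum += random.gauss(0.0, noise_sigma)  # integrated noise + mismatch
    return int(analog_sum >= threshold)

weights, threshold = [1.0, 1.0, -1.0], 0.5
pattern = [1, 1, 0]   # ideal analog sum = 2.0, so the decision margin is 1.5

# With noise well below the margin, the output flips only in rare tail events.
trials = 1000
errors = sum(ms_neuron(pattern, weights, threshold, noise_sigma=0.3) != 1
             for _ in range(trials))
error_rate = errors / trials
```

Here the noise standard deviation is a fifth of the decision margin, so flips are several-sigma events; a network with many such neurons (and the redundancy of a CNN) absorbs the occasional flip, which is the intuition behind the small 2.1% worst-case accuracy loss reported above.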

US military bosses reveal plan to model insect brains to create 'conscious' AI flying insect robots

Daily Mail - Science & tech

The Pentagon's research arm is looking beyond the human brain to build artificial intelligence. In a recent call for submissions, DARPA revealed that it is looking for ways to take the brains of 'very small flying insects' and model their functions in AI robots. The proposal looks to pave the way for robots that are smaller, more energy-efficient, and easier to train. DARPA is seeking proposals that map the sensory and nervous systems of miniature insects and turn them into 'prototype computational models.'