Physicists from Petrozavodsk State University have proposed a new method for oscillatory neural networks to recognize simple images. Such networks, with an adjustable synchronous state of individual neurons, presumably exhibit dynamics similar to those of neurons in the living brain. An oscillatory neural network is a complex interlacing of interacting elements (oscillators) that can receive and transmit oscillations of certain frequencies. Receiving signals of various frequencies from preceding elements, an artificial neuron-oscillator can synchronize its rhythm with these oscillations. As a result, some elements of the network become synchronized with each other (activated periodically and simultaneously), while others remain unsynchronized.
The growing number of low-power smart devices in the Internet of Things is coupled with the concept of "edge computing", that is, moving some of the intelligence, especially machine learning, toward the edge of the network. Enabling machine learning algorithms to run on resource-constrained hardware, typically low-power smart devices, is challenging in terms of hardware (optimized and energy-efficient integrated circuits), algorithms, and firmware implementation. This paper presents FANN-on-MCU, an open-source toolkit built upon the Fast Artificial Neural Network (FANN) library to run lightweight and energy-efficient neural networks on microcontrollers based on both the ARM Cortex-M series and the novel RISC-V-based Parallel Ultra-Low-Power (PULP) platform. The toolkit takes multi-layer perceptrons trained with FANN and generates code targeted at execution on low-power microcontrollers either with a floating-point unit (i.e., ARM Cortex-M4F and M7F) or without one (i.e., ARM Cortex-M0 through M3, or PULP-based processors). The paper also provides an architectural performance evaluation of neural networks on the most popular ARM Cortex-M family and on the parallel RISC-V processor Mr. Wolf. The evaluation includes experimental results for three different applications using a self-sustainable wearable multi-sensor bracelet. The results show a measured latency on the order of only a few microseconds and a power consumption of a few milliwatts, while keeping the memory requirements below the limitations of the targeted microcontrollers. In particular, the parallel implementation on the octa-core RISC-V platform reaches a 22x speedup and a 69% reduction in energy consumption with respect to a single-core implementation on a Cortex-M4 for continuous real-time classification.
Eta Compute has developed a high-efficiency ASIC and new artificial intelligence (AI) software based on neural networks to solve the problems of edge and mobile devices without relying on cloud resources. Future mobile devices, constantly active in the IoT ecosystem, require a disruptive solution that offers enough processing power for machine intelligence at low power consumption, for applications such as speech recognition and imaging. These are the types of applications for which Eta Compute designed its ECM3531. The IC is based on an ARM Cortex-M3 core and an NXP CoolFlux DSP. Its tightly integrated DSP-plus-microcontroller architecture yields a significant reduction in power for embedded machine intelligence.
Today microcontrollers can be found in almost any technical device, from washing machines to blood pressure monitors and wearables. Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library that can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-level AI system recognizes handwriting and gestures, enabling, for example, gesture-controlled input when the library runs on a wearable. A wide variety of software solutions currently exist for machine learning, but as a rule they are only available for PCs and are based on the programming language Python.
Low-power sensing technologies such as wearables have emerged in the healthcare domain, since they enable continuous and noninvasive monitoring of physiological signals. Classical signal processing has encountered numerous challenges in endowing such devices with clinical value. In this paper, we focus on the inference of neural networks running on the microcontrollers and low-power processors that wearable sensors and devices are generally equipped with. In particular, we adapted an existing convolutional-recurrent neural network, designed to detect and classify cardiac arrhythmias from a single-lead electrocardiogram, to the low-power embedded System-on-Chip nRF52 from Nordic Semiconductor, which features an ARM Cortex-M4 processing core. We show that our fixed-point implementation, using the CMSIS-NN libraries, yields a drop in F1 score from 0.8 to 0.784 compared to the original implementation, with a memory footprint of 195.6 KB and a throughput of 33.98 MOps/s.