Capsule endoscopy identifies damaged areas in a patient's small intestine but often outputs poor-quality images or misses lesions, leading to either misdiagnosis or repetition of the lengthy procedure. The authors propose applying deep-learning models to automatically process the captured images and identify lesions in real time, enabling the capsule to take additional images of a specific location, adjust its focus level, or improve image quality. The authors also describe the technical challenges in realizing a viable automated capsule-endoscopy system. J. Ahn, H. Nguyen Loc, R. Krishna Balan, Y. Lee and J. Ko, "Finding Small-Bowel Lesions: Challenges in Endoscopy-Image-Based Learning Systems," in Computer, vol.
Recurrent neural networks (RNNs) have shown promising results in audio and speech-processing applications. The increasing popularity of Internet of Things (IoT) devices makes a strong case for implementing RNN-based inferences for applications such as acoustics-based authentication and voice commands for smart homes. However, the feasibility and performance of these inferences on resource-constrained devices remain largely unexplored. The authors compare traditional machine-learning models with deep-learning RNN models for an end-to-end authentication system based on breathing acoustics.
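The feasibility question above largely comes down to model footprint. As a hypothetical illustration (the models below are untrained and all dimensions are invented, not taken from the article), a toy sketch can contrast a traditional logistic-regression baseline over mean-pooled acoustic features with a minimal Elman RNN over the same feature sequence, making the difference in parameter count explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a breathing-acoustics clip: T frames of D-dim features
T, D, H = 50, 13, 32
x = rng.standard_normal((T, D))

# "Traditional" baseline: logistic regression on mean-pooled features
w_lr = rng.standard_normal(D)
b_lr = 0.0
score_lr = 1 / (1 + np.exp(-(x.mean(axis=0) @ w_lr + b_lr)))

# Minimal Elman RNN with a sigmoid readout on the final hidden state
W_xh = rng.standard_normal((D, H)) * 0.1
W_hh = rng.standard_normal((H, H)) * 0.1
w_out = rng.standard_normal(H) * 0.1
h = np.zeros(H)
for frame in x:
    h = np.tanh(frame @ W_xh + h @ W_hh)
score_rnn = 1 / (1 + np.exp(-(h @ w_out)))

# Rough footprint comparison: the RNN carries orders of magnitude more weights
params_lr = D + 1
params_rnn = D * H + H * H + H
print(params_lr, params_rnn)
```

The gap between the two parameter counts, multiplied across every inference, is what makes on-device RNN deployment nontrivial on constrained IoT hardware.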
By leveraging advances in deep learning, researchers have solved challenging pattern-recognition problems in computer vision, speech recognition, natural language processing, and beyond. Mobile computing has also adopted these powerful modeling approaches, with striking success in the field's core application domains, including the ongoing machine-learning-driven transformation of human activity recognition.
Although the ability to collect, collate, and analyze the vast amount of data generated from cyber-physical systems and Internet of Things devices can be beneficial to both users and industry, this process has led to a number of challenges, including privacy and scalability issues. The authors present a hybrid framework where user-centered edge devices and resources can complement the cloud for providing privacy-aware, accurate, and efficient analytics.
How can the advantages of deep learning be brought to the emerging world of embedded IoT devices? The authors discuss several core challenges in embedded and mobile deep learning, as well as recent solutions demonstrating the feasibility of building IoT applications that are powered by effective, efficient, and reliable deep learning models.
To deliver the advances in hardware compute capability needed to support deep-learning innovation, it is invaluable to identify properties of deep-learning models that designers can exploit. This article articulates our strategy, overviews several value properties of deep-learning models that we identified, and describes hardware designs that exploit them to reduce computation as well as on- and off-chip storage and communication.
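One widely known value property of this kind, used here purely as an illustrative assumption rather than as the specific set of properties the article identifies, is that ReLU activations contain many exact zeros. A hardware design can skip the corresponding multiply-accumulates without changing the result, as this minimal sketch shows:

```python
import numpy as np

rng = np.random.default_rng(1)

# ReLU activations are typically sparse: roughly half the values are exact zeros
acts = np.maximum(rng.standard_normal(1024), 0.0)
weights = rng.standard_normal(1024)

# Dense evaluation: one multiply-accumulate (MAC) per element
dense_macs = acts.size
dot_dense = float(acts @ weights)

# Zero-skipping evaluation: only nonzero activations contribute
nz = acts != 0
sparse_macs = int(nz.sum())
dot_sparse = float(acts[nz] @ weights[nz])

print(sparse_macs, dense_macs)  # far fewer MACs, identical dot product
```

The same idea extends to storage and communication: zero values need not be moved on or off chip at all.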
Mobile and embedded devices increasingly rely on deep neural networks to understand the world, a feat that would have overwhelmed their system resources only a few years ago. Further integration of machine learning and embedded/mobile systems will require additional breakthroughs in efficient learning algorithms that can function under fluctuating resource constraints, giving rise to a field that straddles computer architecture, software systems, and artificial intelligence. N. D. Lane and P. Warden, "The Deep (Learning) Transformation of Mobile and Embedded Computing," in Computer, vol.
Neuroscience initiatives aim to develop new technologies and tools to measure and manipulate neuronal circuits. To deal with the massive amounts of data these tools generate, the authors envision co-locating open data repositories in standardized formats with high-performance computing hardware that runs open-source, optimized analysis codes.
Loihi is Intel's novel manycore neuromorphic processor and the first of its kind to feature a microcode-programmable learning engine that enables on-chip training of spiking neural networks (SNNs). The authors present the Loihi toolchain, which consists of an intuitive Python-based API for specifying SNNs, a compiler and runtime for building and executing SNNs on Loihi, and several target platforms (Loihi silicon, FPGA, and a functional simulator). To showcase the toolchain, the authors describe how to build, train, and use an SNN to classify handwritten digits from the MNIST database.
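The Loihi API itself is not reproduced here. As a generic, hypothetical illustration of the kind of computation an SNN performs (the neuron model, parameters, and function name below are assumptions, not the toolchain's), this sketch simulates a single leaky integrate-and-fire neuron driven by stochastic input spikes and shows that a stronger input, such as a brighter MNIST pixel under rate coding, yields more output spikes:

```python
import numpy as np

rng = np.random.default_rng(42)

def lif_spike_count(input_rate, steps=200, tau=10.0, v_th=1.0, w=0.5):
    """Simulate one leaky integrate-and-fire neuron fed by Bernoulli input spikes."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        in_spike = rng.random() < input_rate  # stochastic rate-coded input
        v += -v / tau + w * in_spike          # leak plus weighted input current
        if v >= v_th:                         # threshold crossing emits a spike
            spikes += 1
            v = 0.0                           # reset membrane potential
    return spikes

# A dim "pixel" (low rate) versus a bright one (high rate)
low, high = lif_spike_count(0.1), lif_spike_count(0.9)
print(low, high)
```

An SNN classifier composes many such neurons, with trained synaptic weights steering spikes toward the output neuron corresponding to the correct digit class.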