
AI Magazine

Both of these facts run counter to other connectionist models but fit SDM naturally. Sparse Distributed Memory will be of interest to anyone doing research in neural models or brain physiology. As the theory is refined, the book will also be of interest to those trying to find applications for neural models. Finally, it will be fascinating to anyone who is even slightly curious about human intelligence and how it might arise from the brain. Terry Rooker is a graduate student at the Oregon Graduate Institute.


Sparse Distributed Memory

AI Magazine

Restricting the number of potential readers is unfortunate because an interdisciplinary view of the world around us must be developed. This book should have been written to show a scientist with a good mathematics background how to do modeling and simulation. Scientific research needs more people trained in system concepts, people trained to understand and apply the Weltanschauung of system theory. Indeed, the recent recommendation for science education that came out of the Science for All Americans study, sponsored by the American Association for the Advancement of Science, emphasized an interdisciplinary approach to scientific concepts. By limiting the technical accessibility of this book, the author has not helped us address the need for training scientists in the use of interdisciplinary tools in scientific research.


AI at the Far Edge

#artificialintelligence

The concept of "edge computing" has been around since the late 90s, and typically refers to systems that process data where it is collected instead of having to both store it and push it to a centralized location for offline processing. The aim is to move computation away from the data center in order to facilitate real-time analytics and reduce network and response latency. But some applications, particularly those that leverage deep learning, have historically been very difficult to deploy at the edge, where power and compute are typically extremely limited. The problem has become particularly acute over the past few years, as recent breakthroughs in deep learning have featured networks with far more depth and complexity, which demand greater compute from the platforms they run on. But recent developments in the embedded hardware space have bridged that gap to a certain extent, enabling AI to run fully on the edge and ushering in a whole new wave of applications.


Grand Challenge: Unified Visual Data Representation

@machinelearnbot

All creatures have the ability to sense the surrounding world, but in various ways and degrees. You might envy the bloodhound's exceptional nose, but humans possess visual prowess that (although it doesn't match the eagle's eye in distance) is unsurpassed in the ability to detect and make sense of patterns. Our eyes and brains work as a team to discover meaningful patterns that help us make sense of the world [1]. Digital computers take input in direct quantitative form constructed from digits. Humans extract most quantitative information from the 3D visual environment: distances between observable objects, sizes of objects, color intensity and hue, proximity, similarity, symmetry … "A striking fact about human cognition is that we like to process quantitative information in graphic form" [2].


Neural Image Compression and Explanation

arXiv.org Machine Learning

Explaining the predictions of deep neural networks (DNNs) and semantic image compression are two active research areas of deep learning with numerous applications in decision-critical systems, such as surveillance cameras, drones and self-driving cars, where interpretable decisions are critical and storage/network bandwidth is limited. In this paper, we propose a novel end-to-end Neural Image Compression and Explanation (NICE) framework that learns to (1) explain the predictions of convolutional neural networks (CNNs), and (2) subsequently compress the input images for efficient storage or transmission. Specifically, NICE generates a sparse mask over an input image by attaching a stochastic binary gate to each pixel of the image, whose parameters are learned through interaction with the CNN classifier to be explained. The generated mask captures the saliency of each pixel, measured by its influence on the CNN's final prediction; it can also be used to produce a mixed-resolution image, where important pixels maintain their original high resolution and insignificant background pixels are subsampled to a low resolution. The produced images achieve a high compression rate (e.g., about 0.6× the original image file size) while retaining similar classification accuracy. Extensive experiments across multiple image classification benchmarks demonstrate the superior performance of NICE compared to state-of-the-art methods in terms of explanation quality and image compression rate.
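The mixed-resolution idea in the abstract can be sketched concretely. The snippet below is an illustrative sketch, not the authors' NICE implementation: given a binary saliency mask (in NICE, produced by learned stochastic gates), salient pixels keep their original values while background pixels are replaced by a block-averaged low-resolution version. The function name `mixed_resolution` and the `factor` parameter are assumptions for illustration.

```python
import numpy as np

def mixed_resolution(image, mask, factor=4):
    # Illustrative sketch of a mixed-resolution image: pixels where
    # mask == 1 keep full resolution; the rest are replaced by a
    # block-averaged, nearest-neighbor-upsampled background.
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0, "sketch assumes divisible dims"
    img = image.astype(float)
    # Downsample by averaging factor x factor blocks.
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    low = blocks.mean(axis=(1, 3))
    # Upsample back to full size (nearest neighbor).
    low_up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    # Salient pixels stay sharp; background becomes low resolution.
    return np.where(mask.astype(bool), img, low_up)
```

Because the smoothed background compresses far better than the original texture, the composite image shrinks on disk while the salient regions the classifier relies on are preserved, which is the trade-off the abstract describes.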