Deep Learning


Selecting Receptive Fields in Deep Networks

Neural Information Processing Systems

Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance on benchmarks by using extremely large architectures with many features (hidden units) at each layer. Unfortunately, for such large architectures the number of parameters usually grows quadratically in the width of the network, thus necessitating hand-coded "local receptive fields" that limit the number of connections from lower-level features to higher ones (e.g., based on spatial locality). In this paper we propose a fast method to choose these connections that may be incorporated into a wide variety of unsupervised training methods. Specifically, we choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive fields (such as improved scalability and reduced data requirements) when we do not know how to specify such receptive fields by hand, or when our unsupervised training algorithm has no obvious generalization to a topographic setting.
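
As a rough illustration of the idea, the sketch below groups features by the squared correlation of their activations; the similarity metric, the random seeding, and the greedy fill are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def select_receptive_fields(activations, n_fields, field_size, rng=None):
    """Group low-level features into receptive fields by pairwise similarity.

    A minimal sketch: similarity is the squared correlation of feature
    activations across examples; each field is seeded with a random feature
    and filled with that feature's most similar neighbours.

    activations: (n_examples, n_features) responses of lower-layer features.
    Returns a list of index arrays, one per receptive field.
    """
    rng = np.random.default_rng(rng)
    # Pairwise similarity: squared correlation between feature responses.
    sim = np.corrcoef(activations.T) ** 2
    n_features = activations.shape[1]
    fields = []
    for _ in range(n_fields):
        seed = rng.integers(n_features)    # random seed feature
        order = np.argsort(-sim[seed])     # most similar first (seed itself leads)
        fields.append(order[:field_size])
    return fields

# Each higher-level unit then connects only to the features in its field,
# so parameter count grows with field_size instead of the full layer width.
```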


A Better Way to Pretrain Deep Boltzmann Machines

Neural Information Processing Systems

We describe how the pre-training algorithm for Deep Boltzmann Machines (DBMs) is related to the pre-training algorithm for Deep Belief Networks, and we show that under certain conditions the pre-training procedure improves the variational lower bound of a two-hidden-layer DBM. Based on this analysis, we develop a different method of pre-training DBMs that distributes the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pre-training algorithm allows us to learn better generative models.
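
For background, here is a minimal sketch of the greedy layer-wise stack (one-step contrastive divergence on binary RBMs) that both DBN and DBM pre-training build on; the paper's improved procedure modifies this recipe to rebalance the layers, and the hyperparameters and omission of bias terms here are simplifications.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.05, rng=None):
    """One-step contrastive divergence (CD-1) for a binary RBM.

    Background sketch only: bias terms are omitted for brevity, and this
    is the classic greedy building block, not the paper's improved variant.
    """
    rng = np.random.default_rng(rng)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: sample hiddens given the data.
            ph0 = sigmoid(v0 @ W)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            # Negative phase: one step of reconstruction.
            pv1 = sigmoid(h0 @ W.T)
            ph1 = sigmoid(pv1 @ W)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    return W

# Stacking: train an RBM on the data, then train the next RBM on the
# hidden activations of the first, and so on up the DBM's layers.
```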


Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability

Neural Information Processing Systems

It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model.
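
The toy sampler below illustrates the general mechanism with two mutually inhibiting units rather than a full DBM; the adaptation rule (an activity-dependent fatigue term subtracted from the effective bias) and all constants are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy bistable model: two mutually inhibiting units, one per interpretation.
W = np.array([[0.0, -4.0],
              [-4.0, 0.0]])
b = np.array([2.0, 2.0])        # both interpretations equally supported
s = np.array([1.0, 0.0])        # current sample
a = np.zeros(2)                 # adaptation (fatigue) state per unit
tau, gain = 0.95, 3.0           # adaptation decay and strength

dominant = []
for t in range(5000):
    for i in range(2):
        # Adaptation enters as a fatigue term lowering the effective bias.
        p = sigmoid(W[i] @ s + b[i] - gain * a[i])
        s[i] = float(rng.random() < p)
    a = tau * a + (1 - tau) * s  # active units accumulate fatigue
    dominant.append(int(s[1] > s[0]))

switches = int(np.sum(np.abs(np.diff(dominant))))
# Without adaptation (gain = 0) the Gibbs chain stays stuck in one mode for
# long stretches; with it, the sampler alternates between the two modes,
# mimicking perceptual switching while still exploring the posterior.
```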


Deep Learning with Kernel Regularization for Visual Recognition

Neural Information Processing Systems

In this paper we focus on training deep neural networks for visual recognition tasks. One challenge is the lack of an informative regularizer on the network parameters that imposes meaningful control on the computed function. We propose a training strategy that takes advantage of kernel methods, where an existing kernel function represents useful prior knowledge about the learning task of interest. We derive an efficient algorithm using stochastic gradient descent, and demonstrate very positive results in a wide range of visual recognition tasks.
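
One plausible instantiation of such a regularizer, sketched below under stated assumptions (the paper's exact formulation may differ), penalizes the Frobenius distance between the mini-batch Gram matrix of the learned features and a prior kernel matrix.

```python
import numpy as np

def kernel_regularizer_grad(H, K_prior, lam):
    """Gradient of a kernel-alignment penalty w.r.t. hidden features H.

    Hedged sketch: penalize  lam * ||H H^T - K_prior||_F^2  on a mini-batch,
    pulling the network's Gram matrix toward a prior kernel matrix K_prior
    that encodes knowledge about the task. This shows the general mechanism,
    not necessarily the paper's exact regularizer.

    H: (batch, d) hidden representations for the mini-batch.
    K_prior: (batch, batch) prior kernel evaluated on the same examples.
    """
    diff = H @ H.T - K_prior
    return lam * 4.0 * diff @ H   # d/dH of ||H H^T - K||_F^2 (K symmetric)

# During SGD, backpropagate this gradient through H alongside the usual
# task-loss gradient, so the prior kernel shapes the learned features.
```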


Multimodal Learning with Deep Boltzmann Machines

Neural Information Processing Systems

We propose a Deep Boltzmann Machine for learning a generative model of multimodal data. We show how to use the model to extract a meaningful representation of multimodal data. We find that the learned representation is useful for classification and information retrieval tasks, and hence conforms to some notion of semantic similarity. The model defines a probability density over the space of multimodal inputs. By sampling from the conditional distributions over each data modality, it is possible to create the representation even when some data modalities are missing.
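
The sketch below illustrates the fill-in mechanism with a single shared hidden layer rather than the full multi-layer model; the names (infer_missing_modality, W_img, W_txt) are hypothetical and the binary units are a simplification.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def infer_missing_modality(v_img, W_img, W_txt, n_steps=200, rng=None):
    """Sample the missing text modality given an image, via shared hiddens.

    Simplified sketch with one shared hidden layer (the full model stacks
    modality-specific layers below it): clamp the observed image units and
    alternate Gibbs sampling over the hiddens and the text units.
    """
    rng = np.random.default_rng(rng)
    n_hidden, n_txt = W_img.shape[1], W_txt.shape[0]
    v_txt = rng.random(n_txt) < 0.5          # initialize missing modality
    for _ in range(n_steps):
        p_h = sigmoid(v_img @ W_img + v_txt @ W_txt)
        h = rng.random(n_hidden) < p_h
        p_txt = sigmoid(h @ W_txt.T)         # conditional over text units
        v_txt = rng.random(n_txt) < p_txt
    return p_txt                             # filled-in text representation

# The same shared hidden state serves as a joint representation for
# classification or retrieval, whether or not both modalities are present.
```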


Learning to Learn with Compound HD Models

Neural Information Processing Systems

We introduce HD (or "Hierarchical-Deep") models, a new compositional learning architecture that integrates deep learning models with structured hierarchical Bayesian models. Specifically, we show how we can learn a hierarchical Dirichlet process (HDP) prior over the activities of the top-level features in a Deep Boltzmann Machine (DBM). This compound HDP-DBM model learns to learn novel concepts from very few training examples, by learning low-level generic features, high-level features that capture correlations among low-level features, and a category hierarchy for sharing priors over the high-level features that are typical of different kinds of concepts. We present efficient learning and inference algorithms for the HDP-DBM model and show that it is able to learn new concepts from very few examples on CIFAR-100 object recognition, handwritten character recognition, and human motion capture datasets.
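
As a toy illustration of the sharing mechanism only (not the HDP-DBM itself), the following Chinese-restaurant-franchise sampler shows how groups reuse a global menu of "dishes", which is how rare categories can inherit priors from well-observed ones.

```python
import numpy as np

def chinese_restaurant_franchise(counts_per_group, alpha, gamma, rng=None):
    """Toy sketch of the HDP's sharing mechanism via seating assignments.

    Each group (object category) seats its observations at local tables;
    tables order dishes from a global menu shared across all groups, so
    new categories tend to reuse popular global dishes.
    """
    rng = np.random.default_rng(rng)
    dish_counts = []                          # tables serving each dish
    assignments = []
    for n in counts_per_group:                # one restaurant per category
        table_counts, table_dish = [], []
        for _ in range(n):
            # Sit at an existing table (prob ~ occupancy) or a new one (~ alpha).
            probs = np.array(table_counts + [alpha], dtype=float)
            t = rng.choice(len(probs), p=probs / probs.sum())
            if t == len(table_counts):
                # New table: pick a dish from the shared global menu.
                dprobs = np.array(dish_counts + [gamma], dtype=float)
                d = rng.choice(len(dprobs), p=dprobs / dprobs.sum())
                if d == len(dish_counts):
                    dish_counts.append(0)     # brand-new global dish
                dish_counts[d] += 1
                table_counts.append(0)
                table_dish.append(d)
            table_counts[t] += 1
        assignments.append(table_dish)
    return assignments, dish_counts

# In the HDP-DBM, "dishes" correspond to shared priors over top-level DBM
# features, so a new concept with few examples borrows statistical strength.
```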


How AI Helped Decode Ancient Geoglyphic Etchings In Peru

#artificialintelligence

Trapezoids, triangles and many other geometric shapes -- that's what one would see if they flew a drone over the high desert in Peru, South America. These giant geometric figures resemble birds, insects and other living beings. These are the famous Nazca lines, which were discovered in the 1920s. In total, there are over 800 straight lines and 300 geometric figures. Archaeologists have been studying these lines ever since their discovery and continue to do so to this day.


Kernel Methods for Deep Learning

Neural Information Processing Systems

We introduce a new family of positive-definite kernel functions that mimic the computation in large, multilayer neural nets. These kernel functions can be used in shallow architectures, such as support vector machines (SVMs), or in deep kernel-based architectures that we call multilayer kernel machines (MKMs). We evaluate SVMs and MKMs with these kernel functions on problems designed to illustrate the advantages of deep architectures. On several problems, we obtain better results than previous leading benchmarks from both SVMs with Gaussian kernels and deep belief nets.
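
Assuming this refers to the arc-cosine kernel family, the sketch below implements the degree-1 (threshold-linear) member and its multilayer composition, following the published recursion; the code itself is an illustrative reimplementation, not the authors' release.

```python
import numpy as np

def J1(theta):
    # Angular part of the degree-1 arc-cosine kernel (ReLU-like units).
    return np.sin(theta) + (np.pi - theta) * np.cos(theta)

def multilayer_arccos_k1(x, y, n_layers):
    """Degree-1 arc-cosine kernel composed over n_layers.

    The base kernel is the linear inner product; each layer maps the
    previous kernel values through J1, which gives the expected inner
    product after an infinitely wide layer of threshold-linear units.
    """
    kxy, kxx, kyy = x @ y, x @ x, y @ y
    for _ in range(n_layers):
        theta = np.arccos(np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0))
        kxy = np.sqrt(kxx * kyy) / np.pi * J1(theta)
        # theta(x, x) = 0 and J1(0) = pi, so self-kernels are preserved
        # at degree 1.
        kxx = kxx * J1(0.0) / np.pi
        kyy = kyy * J1(0.0) / np.pi
    return kxy

# The composed kernel can be dropped into a standard SVM; the paper's MKMs
# go further and stack kernel-based layers into a deep architecture.
```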


Image Denoising and Inpainting with Deep Neural Networks

Neural Information Processing Systems

We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with a denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts the DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method achieves state-of-the-art performance on the image denoising task. More importantly, in the blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than only simple patterns like pixels missing at random.
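
A minimal sketch of adapting a DA to restoration: instead of self-corrupting its input, the network is trained on externally corrupted patches paired with their clean originals. The single hidden layer, sigmoid activations, and omission of bias terms and of the sparse-coding component are all simplifications of the paper's scheme.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_denoising_autoencoder(clean, corrupt, n_hidden, epochs=20, lr=0.1,
                                rng=None):
    """Train a DA on (corrupted patch, clean patch) pairs.

    clean, corrupt: (n_patches, n_pixels) flattened image patches in [0, 1],
    e.g. patches with superimposed text paired with the unblemished originals.
    """
    rng = np.random.default_rng(rng)
    n_pix = clean.shape[1]
    W1 = 0.01 * rng.standard_normal((n_pix, n_hidden))
    W2 = 0.01 * rng.standard_normal((n_hidden, n_pix))
    for _ in range(epochs):
        for x, t in zip(corrupt, clean):
            h = sigmoid(x @ W1)              # encode the corrupted patch
            y = sigmoid(h @ W2)              # decode toward the clean patch
            # Backprop of the squared reconstruction error.
            dy = (y - t) * y * (1 - y)
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, dy)
            W1 -= lr * np.outer(x, dh)
    return W1, W2

# At test time, restore an image by sliding a window over it, denoising
# each patch and averaging the overlapping predictions.
```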


The 10 Best Examples Of How Companies Use Artificial Intelligence In Practice

#artificialintelligence

All the world's tech giants, from Alibaba to Amazon, are in a race to become the world's leaders in artificial intelligence (AI). These companies are AI trailblazers and embrace AI to provide next-level products and services. Here are 10 of the best examples of how these companies are using artificial intelligence in practice. Chinese company Alibaba runs the world's largest e-commerce platform, selling more than Amazon and eBay combined. AI is integral to Alibaba's daily operations and is used to predict what customers might want to buy.