Coupled Markov Random Fields and Mean Field Theory

Neural Information Processing Systems

In recent years many researchers have investigated the use of Markov Random Fields (MRFs) for computer vision. They can be applied, for example, to reconstruct surfaces from sparse and noisy depth data coming from the output of a visual process, or to integrate early vision processes to label physical discontinuities. In this paper we show that by applying mean field theory to these MRF models a class of neural networks is obtained. These networks can speed up the solution of the MRF models. The method is not restricted to computer vision.
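As a rough illustration of the MRF-to-network connection, here is a minimal sketch of synchronous mean field updates for a binary, Ising-style MRF applied to image denoising. It is not the paper's formulation; the 4-connected grid, the coupling strength, and the data weight are all illustrative assumptions.

```python
import numpy as np

def mean_field_denoise(noisy, n_iters=20, coupling=1.0, data_weight=2.0):
    """Synchronous mean field updates for a binary (Ising-style) MRF.

    noisy: 2-D array of observed pixels with entries in {-1, +1}.
    Returns soft estimates m[i, j] ~ E[x_ij], each in [-1, 1].
    (Illustrative sketch; parameters are assumptions, not from the paper.)
    """
    m = noisy.astype(float).copy()          # initialize means at the data
    for _ in range(n_iters):
        # Sum of neighboring means on a 4-connected grid (zero-padded borders).
        nbr = np.zeros_like(m)
        nbr[1:, :] += m[:-1, :]
        nbr[:-1, :] += m[1:, :]
        nbr[:, 1:] += m[:, :-1]
        nbr[:, :-1] += m[:, 1:]
        # Mean field fixed point: each mean is the tanh of its local field.
        m = np.tanh(coupling * nbr + data_weight * noisy)
    return m
```

Each sweep replaces every pixel's mean with the tanh of its local field; read as a dynamical system, this is an analog network of sigmoidal units, which is the sense in which mean field theory turns an MRF model into a neural network.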


Machine Learning, Machine Vision, and the Brain

AI Magazine

The problem of learning is arguably at the very core of the problem of intelligence, both biological and artificial. In this article, we review our work over the last 10 years in the area of supervised learning, focusing on three interlinked directions of research -- (1) theory, (2) engineering applications (making intelligent software), and (3) neuroscience (understanding the brain's mechanisms of learning) -- that contribute to and complement each other. Because seeing is a form of intelligence, learning is also becoming a key to the study of artificial and biological vision. In the last few years, both computer vision -- which attempts to build machines that see -- and visual neuroscience -- which aims to understand how our visual system works -- have been undergoing a fundamental change in their approaches. Visual neuroscience is beginning to focus on the mechanisms that allow the cortex to adapt its circuitry and learn a new task.


Gaussian Processes for Regression

Neural Information Processing Systems

The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters, have been tested on a number of challenging problems and have produced excellent results.

1 INTRODUCTION

In the Bayesian approach to neural networks a prior distribution over the weights induces a prior distribution over functions. This prior is combined with a noise model, which specifies the probability of observing the targets t given function values y, to yield a posterior over functions which can then be used for predictions. For neural networks the prior over functions has a complex form which means that implementations must either make approximations (e.g.
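To make "exactly using matrix operations" concrete, here is a minimal sketch of the standard GP predictive equations under a squared-exponential covariance; the function name, kernel choice, and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gp_predict(X, y, X_star, length_scale=1.0, signal_var=1.0, noise_var=0.1):
    """Exact GP regression predictions for fixed hyperparameters.

    X: (n, d) training inputs, y: (n,) targets, X_star: (m, d) test inputs.
    (Illustrative sketch; kernel and hyperparameters are assumptions.)
    """
    def kernel(A, B):
        # Squared-exponential covariance between all pairs of rows.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return signal_var * np.exp(-0.5 * sq / length_scale**2)

    K = kernel(X, X) + noise_var * np.eye(len(X))  # noisy train covariance
    K_s = kernel(X, X_star)                        # train/test covariance
    L = np.linalg.cholesky(K)                      # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha                           # predictive mean
    v = np.linalg.solve(L, K_s)
    cov = kernel(X_star, X_star) - v.T @ v         # predictive covariance
    return mean, np.diag(cov)
```

For fixed hyperparameters the posterior over functions is again Gaussian, so prediction reduces to the Cholesky factorization and triangular solves above; optimizing or averaging over hyperparameters (the paper's two methods) wraps an outer loop around this computation.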


From Regularization Operators to Support Vector Kernels

Neural Information Processing Systems

Support Vector (SV) Machines for pattern recognition, regression estimation and operator inversion exploit the idea of transforming the data into a high dimensional feature space where they perform a linear algorithm. Instead of evaluating this map explicitly, one uses Hilbert-Schmidt Kernels k(x, y) which correspond to dot products of the mapped data in high dimensional space, i.e.

k(x, y) = (Φ(x) · Φ(y))    (1)

with Φ: ℝ^n → F denoting the map into feature space. Mostly, this map and many of its properties are unknown. Even worse, so far no general rule was available.
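As a small sanity check of the identity in (1), the homogeneous degree-2 polynomial kernel on ℝ^2 has a feature map Φ that is known in closed form, so the implicit and explicit computations can be compared directly (an illustrative example, not from the paper):

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on R^2:
    phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), so phi(x) . phi(y) = (x . y)^2."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
k_implicit = (x @ y) ** 2        # kernel evaluated without the feature map
k_explicit = phi(x) @ phi(y)     # dot product of explicitly mapped data
assert np.isclose(k_implicit, k_explicit)  # both equal 16.0 here
```

The point of the kernel trick is the left-hand computation: the dot product in feature space is obtained without ever forming Φ(x), which is what makes high (or infinite) dimensional feature spaces tractable.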