Collaborating Authors

 Geiger, Davi


Quantum Clustering and Gaussian Mixtures

arXiv.org Machine Learning

The mixture of Gaussian distributions, a soft version of k-means, is considered a state-of-the-art clustering algorithm. It is widely used in computer vision for selecting classes, e.g., color, texture, and shapes. In this algorithm, each class is described by a Gaussian distribution, defined by its mean and covariance. The data is described by a weighted sum of these Gaussian distributions. We propose a new method, inspired by quantum interference in physics. Instead of modeling each class distribution directly, we model a class wave function such that its squared magnitude is the class Gaussian distribution. We then mix the class wave functions to create the mixture wave function. The final mixture distribution is the squared magnitude of the mixture wave function. As a result, we observe quantum class interference phenomena that are not present in the Gaussian mixture model. We show that the quantum method outperforms the Gaussian mixture method in every aspect of the estimation: it provides more accurate estimates of all distribution parameters, with much smaller fluctuations, and it is also more robust to deviations of the data from the Gaussian assumption. We illustrate our method with color segmentation as an example application.
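
To make the construction concrete, the sketch below contrasts the two densities under one plausible reading of the abstract: each class wave function is taken as the square root of its Gaussian density with an optional per-class phase, the amplitudes are mixed with the square roots of the mixing weights, and the quantum density is the squared magnitude of the sum (normalization over x is omitted). This is an illustrative assumption, not the authors' exact formulation.

import numpy as np
from scipy.stats import multivariate_normal

def gmm_density(x, weights, means, covs):
    # Classical Gaussian mixture: p(x) = sum_k pi_k N(x; mu_k, Sigma_k).
    return sum(w * multivariate_normal.pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

def quantum_mixture_density(x, weights, means, covs, phases=None):
    # Quantum-inspired mixture (sketch, unnormalized over x):
    #   p(x) = |sum_k sqrt(pi_k) exp(i phi_k) psi_k(x)|^2,
    # with |psi_k(x)|^2 = N(x; mu_k, Sigma_k). The cross terms between the
    # class wave functions are the interference absent from the classical GMM.
    if phases is None:
        phases = np.zeros(len(weights))
    amp = sum(np.sqrt(w) * np.exp(1j * phi) *
              np.sqrt(multivariate_normal.pdf(x, m, c))
              for w, phi, m, c in zip(weights, phases, means, covs))
    return np.abs(amp) ** 2

# Two 1-D classes: between the means the two densities differ, because the
# interference cross term adds probability mass there.
weights = [0.5, 0.5]
means, covs = [np.array([-1.0]), np.array([1.0])], [np.eye(1), np.eye(1)]
x = np.array([0.0])
print(gmm_density(x, weights, means, covs),
      quantum_mixture_density(x, weights, means, covs))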


Features as Sufficient Statistics

Neural Information Processing Systems

An image is often represented by a set of detected features. We get an enormous compression by representing images in this way. Furthermore, we get a representation which is little affected by small amounts of noise in the image. However, features are typically chosen in an ad hoc manner.


Learning How to Teach or Selecting Minimal Surface Data

Neural Information Processing Systems

Learning a map from an input set to an output set is similar to the problem of reconstructing hypersurfaces from sparse data (Poggio and Girosi, 1990). In this framework, we discuss the problem of automatically selecting "minimal" surface data. The objective is to be able to approximately reconstruct the surface from the selected sparse data. We show that this problem is equivalent to that of compressing information by data removal and to that of learning how to teach. Our key step is to introduce a process that statistically selects the data according to the model. During the process of data selection (learning how to teach), our system (the teacher) is able to predict the new surface, i.e., the approximation provided by the selected data.
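
As an illustration of the selection idea in the sparse-data reconstruction view of Poggio and Girosi (1990), the sketch below greedily removes any data point that a surface fitted to the remaining points can already predict within a tolerance; what is left is the "minimal" teaching set. The use of SciPy's RBFInterpolator (a thin-plate-spline interpolant) and the tolerance eps are illustrative assumptions, not the paper's exact statistical selection process.

import numpy as np
from scipy.interpolate import RBFInterpolator

def select_minimal_data(points, values, eps=0.05):
    # points: (N, d) sample locations; values: (N,) surface heights.
    # Greedily drop any point that the surface interpolated from the
    # remaining points still predicts within eps; return the kept indices.
    min_pts = points.shape[1] + 2          # keep enough points for the fit
    keep = list(range(len(points)))
    changed = True
    while changed:
        changed = False
        for i in list(keep):
            if len(keep) <= min_pts:
                return keep
            rest = [j for j in keep if j != i]
            surface = RBFInterpolator(points[rest], values[rest])
            predicted = surface(points[i:i + 1])[0]
            if abs(predicted - values[i]) < eps:   # predictable: remove it
                keep.remove(i)
                changed = True
    return keep

# Usage: dense samples of a smooth surface reduce to a small "teaching" set.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(60, 2))
vals = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
kept = select_minimal_data(pts, vals, eps=0.1)
print(len(kept), "points retained out of", len(pts))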


Coupled Markov Random Fields and Mean Field Theory

Neural Information Processing Systems

In recent years many researchers have investigated the use of Markov Random Fields (MRFs) for computer vision. They can be applied, for example, to reconstruct surfaces from sparse and noisy depth data coming from the output of a visual process, or to integrate early vision processes to label physical discontinuities. In this paper we show that applying mean field theory to these MRF models yields a class of neural networks. These networks can speed up the solution of the MRF models. The method is not restricted to computer vision.
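
To show how a mean-field treatment of a coupled MRF turns into a deterministic network of updates, the sketch below uses a 1-D weak-membrane energy with a binary line process: the line-process variables are replaced by sigmoid mean-field averages and the surface values are updated by gradient descent. The particular energy, constants, and update schedule are illustrative assumptions rather than the paper's exact equations.

import numpy as np

def mean_field_weak_membrane(g, lam=2.0, gamma=0.5, beta=5.0,
                             step=0.05, iters=400):
    # g: noisy 1-D depth data. Energy (with binary line process l_i):
    #   E(f, l) = sum_i (f_i - g_i)^2
    #           + sum_i lam * (f_{i+1} - f_i)^2 * (1 - l_i) + gamma * l_i
    # The mean-field average of l_i is a sigmoid of the local energy gap, and
    # f is updated by gradient descent on E with l replaced by that average,
    # so the whole procedure reads as a deterministic network of updates.
    f = g.astype(float).copy()
    for _ in range(iters):
        d = np.diff(f)                                        # f_{i+1} - f_i
        l_bar = 1.0 / (1.0 + np.exp(-beta * (lam * d ** 2 - gamma)))
        grad = 2.0 * (f - g)
        grad[:-1] += -2.0 * lam * (1.0 - l_bar) * d
        grad[1:] += 2.0 * lam * (1.0 - l_bar) * d
        f -= step * grad
    d = np.diff(f)
    l_bar = 1.0 / (1.0 + np.exp(-beta * (lam * d ** 2 - gamma)))
    return f, l_bar

# Usage: a noisy step edge. l_bar peaks at the discontinuity, so the surface
# is smoothed on each side without blurring across the step.
rng = np.random.default_rng(1)
g = np.concatenate([np.zeros(30), np.ones(30)]) + 0.05 * rng.standard_normal(60)
f, l_bar = mean_field_weak_membrane(g)
print(int(np.argmax(l_bar)), round(float(l_bar.max()), 3))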

