Collaborating Authors

Vivarelli, Francesco


General Bounds on Bayes Errors for Regression with Gaussian Processes

Neural Information Processing Systems

Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments.


Discovering Hidden Features with Gaussian Processes Regression

Neural Information Processing Systems

In Gaussian process regression, the covariance between outputs is commonly taken to depend on the weighted distance (x - x')^T W (x - x') between inputs, where W is a positive definite matrix. W is often taken to be diagonal, but if we allow W to be a general positive definite matrix that can be tuned on the basis of training data, then an eigen-analysis of W shows that we are effectively creating hidden features, where the dimensionality of the hidden-feature space is determined by the data. We demonstrate the superiority of predictions using the general matrix over those based on a diagonal matrix on two test problems.

