Variational MCMC
de Freitas, Nando, Højen-Sørensen, Pedro, Jordan, Michael I., Russell, Stuart
We propose a new class of learning algorithms that combines variational approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms that use the variational approximation as proposal distribution can perform poorly because this approximation tends to underestimate the true variance and other features of the data. We solve this problem by introducing more sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH) kernel with a variational approximation as proposal distribution. The MH kernel allows one to locate regions of high probability efficiently. The Metropolis kernel allows us to explore the vicinity of these regions. This algorithm outperforms variational approximations because it yields slightly better estimates of the mean and considerably better estimates of higher moments, such as covariances. It also outperforms standard MCMC algorithms because it locates the regions of high probability quickly, thus speeding up convergence. We demonstrate this algorithm on the problem of Bayesian parameter estimation for logistic (sigmoid) belief networks.
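A minimal 1-D sketch of the kernel mixture described above (not the paper's exact algorithm): with some probability we take an independence MH step that proposes from a fixed Gaussian "variational" approximation, and otherwise a local random-walk Metropolis step. The target density, the proposal parameters q_mu and q_sigma, and the mixing weight are all illustrative choices.

import numpy as np

def variational_mcmc(log_p, q_mu, q_sigma, n_steps, step=0.5, mix=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Unnormalized log-density of the fixed proposal q (constants cancel in the ratio).
    log_q = lambda x: -0.5 * ((x - q_mu) / q_sigma) ** 2
    x, samples = q_mu, []
    for _ in range(n_steps):
        if rng.random() < mix:
            # Independence MH step: propose from the variational approximation.
            y = rng.normal(q_mu, q_sigma)
            log_alpha = (log_p(y) - log_p(x)) + (log_q(x) - log_q(y))
        else:
            # Random-walk Metropolis step: explore around the current state.
            y = x + step * rng.normal()
            log_alpha = log_p(y) - log_p(x)   # symmetric proposal terms cancel
        if np.log(rng.random()) < log_alpha:
            x = y
        samples.append(x)
    return np.array(samples)

# Target N(0, 2^2); a deliberately overconfident "variational" fit N(0, 1).
draws = variational_mcmc(lambda x: -0.5 * (x / 2.0) ** 2,
                         q_mu=0.0, q_sigma=1.0, n_steps=20000)
print(draws.mean(), draws.var())   # variance near 4 despite the narrow proposal

Even with the overconfident proposal, the random-walk component lets the chain recover the full variance that the variational fit alone would underestimate.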
Probabilistic Models for Unified Collaborative and Content-Based Recommendation in Sparse-Data Environments
Popescul, Alexandrin, Ungar, Lyle H., Pennock, David M., Lawrence, Steve
Recommender systems leverage product and community information to target products to consumers. Researchers have developed collaborative recommenders, content-based recommenders, and (largely ad-hoc) hybrid systems. We propose a unified probabilistic framework for merging collaborative and content-based recommendations. We extend Hofmann's [1999] aspect model to incorporate three-way co-occurrence data among users, items, and item content. The relative influence of collaboration data versus content data is not imposed as an exogenous parameter, but rather emerges naturally from the given data sources. Global probabilistic models coupled with standard Expectation Maximization (EM) learning algorithms tend to drastically overfit in sparse-data situations, as is typical in recommendation applications. We show that secondary content information can often be used to overcome sparsity. Experiments on data from the ResearchIndex library of Computer Science publications show that appropriate mixture models incorporating secondary data produce significantly better quality recommenders than k-nearest neighbors (k-NN). Global probabilistic models also allow more general inferences than local methods like k-NN.
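The following toy sketch fits a three-way aspect model of the kind described, p(u, i, w) = sum_z p(z) p(u|z) p(i|z) p(w|z), by plain EM. The dense count tensor and its dimensions are illustrative (real recommendation data would be sparse), and no smoothing against the overfitting discussed above is included.

import numpy as np

def aspect_em(counts, n_z, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n_u, n_i, n_w = counts.shape
    pz = np.full(n_z, 1.0 / n_z)
    pu = rng.dirichlet(np.ones(n_u), n_z)    # p(u|z), shape (n_z, n_u)
    pi_ = rng.dirichlet(np.ones(n_i), n_z)   # p(i|z)
    pw = rng.dirichlet(np.ones(n_w), n_z)    # p(w|z)
    for _ in range(n_iter):
        # E-step: responsibilities p(z|u,i,w) for every cell of the tensor.
        joint = np.einsum('z,zu,zi,zw->zuiw', pz, pu, pi_, pw)
        resp = joint / joint.sum(axis=0, keepdims=True).clip(1e-12)
        # M-step: re-estimate each factor from the expected counts per aspect z.
        nz = np.einsum('zuiw,uiw->z', resp, counts)
        pu = np.einsum('zuiw,uiw->zu', resp, counts) / nz[:, None]
        pi_ = np.einsum('zuiw,uiw->zi', resp, counts) / nz[:, None]
        pw = np.einsum('zuiw,uiw->zw', resp, counts) / nz[:, None]
        pz = nz / nz.sum()
    return pz, pu, pi_, pw

counts = np.random.default_rng(1).poisson(1.0, size=(4, 5, 6)).astype(float)
pz, pu, pi_, pw = aspect_em(counts, n_z=2)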
Discovering Multiple Constraints that are Frequently Approximately Satisfied
Hinton, Geoffrey E., Teh, Yee Whye
Some high-dimensional data.sets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations.
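As a sketch of the model class (with an assumed Student-t violation density and a random constraint matrix, not the paper's learned constraints), the unnormalized log-probability of a data vector is just the sum of heavy-tailed log-densities of its constraint violations:

import numpy as np

def fas_log_prob(x, W, nu=3.0):
    """Unnormalized log p(x) = sum_j log t_nu(w_j . x) for violations w_j . x."""
    v = W @ x                                  # one violation per constraint row
    return np.sum(-0.5 * (nu + 1.0) * np.log1p(v ** 2 / nu))

W = np.random.default_rng(0).normal(size=(10, 5))    # 10 constraints in 5-D
x_ok = np.zeros(5)                                    # satisfies every constraint
x_bad = np.ones(5)
print(fas_log_prob(x_ok, W) > fas_log_prob(x_bad, W))  # True: violations cost

The heavy tails are what make the constraints "frequently approximately" satisfied: occasional large violations are penalized far less than they would be under a Gaussian.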
Determinantal point processes for machine learning
Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory. In contrast to traditional structured models like Markov random fields, which become intractable and hard to approximate in the presence of negative correlations, DPPs offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. We provide a gentle introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and show how DPPs can be applied to real-world applications like finding diverse sets of high-quality search results, building informative summaries by selecting diverse sentences from documents, modeling non-overlapping human poses in images or video, and automatically building timelines of important news stories.
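A minimal illustration of the L-ensemble form of a DPP, P(S) proportional to det(L_S), where L_S is the kernel restricted to subset S: the determinant of a nearly singular submatrix is close to zero, so subsets containing near-duplicate items are heavily penalized. The feature vectors below are made up for illustration.

import numpy as np

feats = np.array([[1.0, 0.0],
                  [0.99, 0.14],   # nearly a duplicate of item 0
                  [0.0, 1.0]])
L = feats @ feats.T               # PSD similarity kernel (an L-ensemble)

def dpp_score(S):
    # Unnormalized probability of selecting exactly the items in S.
    return np.linalg.det(L[np.ix_(S, S)])

print(dpp_score([0, 1]))   # near 0: the redundant pair is suppressed
print(dpp_score([0, 2]))   # ~1: the diverse pair is favored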
Lattice Particle Filters
Ormoneit, Dirk, Lemieux, Christiane, Fleet, David J.
A standard approach to approximate inference in state-space models is to apply a particle filter, e.g., the Condensation Algorithm. However, the performance of particle filters often varies significantly due to their stochastic nature. We present a class of algorithms, called lattice particle filters, that circumvent this difficulty by placing the particles deterministically according to a Quasi-Monte Carlo integration rule. We describe a practical realization of this idea and discuss its theoretical properties and efficiency. Experimental results with a synthetic 2D tracking problem show that the lattice particle filter is equivalent to a conventional particle filter that has between 10 and 60% more particles, depending on their "sparsity" in the state-space. We also present results on inferring 3D human motion from moving light displays.
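The sketch below illustrates the quasi-Monte Carlo idea in a toy stand-in, not the paper's algorithm: the prediction step of a particle filter draws its process noise from a deterministic rank-1 lattice rule mapped through Box-Muller, instead of from i.i.d. pseudo-random numbers. The generator vector, the random shift, and the 1-D random-walk model are all illustrative choices.

import numpy as np

def rank1_lattice(n, dim=2, z=(1, 21)):
    # Shifted rank-1 lattice points in [0,1)^dim; z is a toy generator vector.
    i = np.arange(n)[:, None]
    shift = np.random.default_rng(0).random(dim)   # a single random shift
    return ((i * np.array(z) / n) + shift) % 1.0

def lattice_predict(particles, sigma=0.3):
    u = rank1_lattice(len(particles))
    # Box-Muller maps deterministic uniforms to Gaussian process noise.
    noise = np.sqrt(-2 * np.log(u[:, 0].clip(1e-12))) * np.cos(2 * np.pi * u[:, 1])
    return particles + sigma * noise

particles = np.zeros(128)
particles = lattice_predict(particles)   # one deterministic prediction step
print(particles.std())                   # roughly sigma, with low dispersion error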
A Bayesian Approach to Tackling Hard Computational Problems
Horvitz, Eric J., Ruan, Yongshao, Gomes, Carla P., Kautz, Henry, Selman, Bart, Chickering, David Maxwell
We are developing a general framework for using learned Bayesian models for decision-theoretic control of search and reasoning algorithms. We illustrate the approach on the specific task of controlling both general and domain-specific solvers on a hard class of structured constraint satisfaction problems. A successful strategy for reducing the high (and even infinite) variance in running time typically exhibited by backtracking search algorithms is to cut off and restart the search if a solution is not found within a certain amount of time. Previous work on restart strategies has employed fixed cutoff values. We show how to create a dynamic cutoff strategy by learning a Bayesian model that predicts the ultimate length of a trial based on observing the early behavior of the search algorithm. Furthermore, we describe the general conditions under which a dynamic restart strategy can outperform the theoretically optimal fixed strategy.
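A toy sketch of the dynamic-restart control loop; run_solver, predict_long, and the mean-depth feature are placeholders, not the paper's learned Bayesian model. The idea: observe the first few steps of a randomized search, ask a predictor whether this run is likely to be long, and cut off and restart if so.

import random

def dynamic_restart(run_solver, predict_long, observe_for=100, cap=10000):
    restarts = 0
    while True:
        run, early = run_solver(), []
        for steps, state in enumerate(run):
            if state == "solved":
                return restarts, steps
            if steps < observe_for:
                early.append(state)             # record early run behavior
            elif steps == observe_for and predict_long(early):
                break                           # predicted long run: restart now
            if steps >= cap:
                break                           # safety cutoff
        restarts += 1

# Toy randomized solver: "lucky" runs make progress and solve; "unlucky"
# runs wander near zero depth indefinitely (the heavy-tailed behavior).
def run_solver():
    lucky = random.random() < 0.3
    depth = 0
    while True:
        depth = max(0, depth + random.choice([-1, 1, 2] if lucky else [-1, 0, 1]))
        if lucky and depth > 20 and random.random() < 0.05:
            yield "solved"
        else:
            yield depth

random.seed(0)
# Predict "long" when the mean early depth is low, i.e. no progress is being made.
print(dynamic_restart(run_solver, predict_long=lambda e: sum(e) / len(e) < 15))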
Efficient Approximation for Triangulation of Minimum Treewidth
We present four novel approximation algorithms for finding a triangulation of minimum treewidth. Two of the algorithms improve on the running times of algorithms by Robertson and Seymour, and Becker and Geiger, which approximate the optimum by factors of 4 and 3 2/3, respectively. A third algorithm is faster than those but gives an approximation factor of 4 1/2. The last algorithm is faster still, producing factor-O(lg k) approximations in polynomial time. Finding triangulations of minimum treewidth for graphs is central to many problems in computer science: real-world problems in artificial intelligence, VLSI design, and databases become efficiently solvable given an efficient approximation algorithm for such triangulations. We report on experimental results confirming the effectiveness of our algorithms for large graphs associated with real-world problems.
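For context, here is a sketch of the standard greedy min-fill heuristic, which is not one of the paper's four algorithms but shows what a triangulation-based treewidth upper bound looks like: repeatedly eliminate the vertex whose elimination adds the fewest fill-in edges, recording the largest clique created along the way.

def minfill_treewidth_upper_bound(adj):
    """adj: dict mapping each vertex to a set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    width = 0
    while adj:
        # Pick the vertex whose elimination adds the fewest fill-in edges.
        def fill(v):
            ns = list(adj[v])
            return sum(ns[j] not in adj[ns[i]]
                       for i in range(len(ns)) for j in range(i + 1, len(ns)))
        v = min(adj, key=fill)
        ns = adj.pop(v)
        width = max(width, len(ns))                # clique size minus one
        for a in ns:                               # add fill-in edges among neighbours
            adj[a] |= ns - {a}
            adj[a].discard(v)
    return width

# A 4-cycle has treewidth 2; min-fill happens to find it exactly here.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(minfill_treewidth_upper_bound(cycle))  # 2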
A Case Study in Knowledge Discovery and Elicitation in an Intelligent Tutoring Application
Nicholson, Ann, Boneh, Tal, Wilkin, Tim, Stacey, Kaye, Sonenberg, Liz, Steinle, Vicki
Most successful Bayesian network (BN) applications to date have been built through knowledge elicitation from experts. This is difficult and time consuming, which has led to recent interest in automated methods for learning BNs from data. We present a case study in the construction of a BN in an intelligent tutoring application, specifically decimal misconceptions. We describe the BN construction using expert elicitation and then investigate how certain existing automated knowledge discovery methods might support the BN knowledge engineering process.
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation
We present a novel inference algorithm for arbitrary, binary, undirected graphs. Unlike loopy belief propagation, which iterates fixed point equations, we directly descend on the Bethe free energy. The algorithm consists of two phases: first, we update the pairwise probabilities, given the marginal probabilities at each unit, using an analytic expression; next, we update the marginal probabilities, given the pairwise probabilities, by following the negative gradient of the Bethe free energy. Both steps are guaranteed to decrease the Bethe free energy, and since it is lower bounded, the algorithm is guaranteed to converge to a local minimum. We also show that the Bethe free energy is equal to the TAP free energy up to second order in the weights. In experiments we confirm that when belief propagation converges it usually finds solutions identical to those of our belief optimization method. However, in cases where belief propagation fails to converge, belief optimization continues to converge to reasonable beliefs. The stable nature of belief optimization makes it ideally suited for learning graphical models from data.
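A compact sketch of the two-phase scheme for a small binary pairwise network, p(x) proportional to exp(sum_{i<j} W_ij x_i x_j + sum_i b_i x_i) with x_i in {0,1}: phase 1 sets each pairwise probability xi = q(x_i=1, x_j=1) from the node marginals by solving the quadratic stationarity condition analytically; phase 2 descends the Bethe free energy in the node marginals. A numerical gradient stands in here for the paper's analytic one, and the toy weights are illustrative.

import numpy as np

def pairwise_marginal(qi, qj, w):
    # dF/dxi = 0 gives exp(w) = xi(1-qi-qj+xi) / ((qi-xi)(qj-xi)), quadratic in xi.
    beta = np.exp(w)
    if abs(beta - 1.0) < 1e-9:
        return qi * qj
    a2 = beta - 1.0
    a1 = -(a2 * (qi + qj) + 1.0)
    a0 = beta * qi * qj
    xi = (-a1 - np.sqrt(a1 ** 2 - 4 * a2 * a0)) / (2 * a2)  # feasible root
    return np.clip(xi, max(0.0, qi + qj - 1.0) + 1e-9, min(qi, qj) - 1e-9)

def bethe_free_energy(q, W, b):
    H = lambda p: -np.sum(np.clip(p, 1e-12, 1) * np.log(np.clip(p, 1e-12, 1)))
    deg = np.count_nonzero(W, axis=1)
    F = -b @ q - np.sum((1 - deg) * np.array([H([qi, 1 - qi]) for qi in q]))
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if W[i, j] == 0.0:
                continue
            xi = pairwise_marginal(q[i], q[j], W[i, j])     # phase 1: analytic
            F += -W[i, j] * xi - H([xi, q[i] - xi, q[j] - xi, 1 - q[i] - q[j] + xi])
    return F

def belief_optimization(W, b, lr=0.2, n_iter=300, h=1e-5):
    q = np.full(len(b), 0.5)
    for _ in range(n_iter):                  # phase 2: descend in the marginals q
        g = np.zeros_like(q)
        for i in range(len(q)):              # central-difference gradient
            e = np.zeros_like(q)
            e[i] = h
            g[i] = (bethe_free_energy(q + e, W, b) -
                    bethe_free_energy(q - e, W, b)) / (2 * h)
        q = np.clip(q - lr * g, 1e-3, 1 - 1e-3)
    return q

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # one attractive pairwise coupling
b = np.array([0.5, -0.5])
print(belief_optimization(W, b))          # node marginals q(x_i = 1)

On this two-node example the Bethe free energy is exact (the graph is a tree), so the recovered marginals match those of brute-force enumeration.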
Efficient Stepwise Selection in Decomposable Models
Deshpande, Amol, Garofalakis, Minos, Jordan, Michael I.
In this paper, we present an efficient way of performing stepwise selection in the class of decomposable models. The main contribution of the paper is a simple characterization of the edges that can be added to a decomposable model while keeping the resulting model decomposable, and an efficient algorithm for enumerating all such edges for a given model in essentially O(1) time per edge. We also discuss how backward selection can be performed efficiently using our data structures. Finally, we analyze the complexity of the complete stepwise selection procedure, including the complexity of choosing which of the eligible edges to add to (or delete from) the current model, with the aim of minimizing the Kullback-Leibler distance of the resulting model from the saturated model for the data.
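When an edge (u, v) is added over a separator S, the reduction in Kullback-Leibler distance to the saturated model equals the empirical conditional mutual information I(u; v | S); the sketch below scores candidate edges that way for binary data (the variable names and the XOR example are illustrative, not the paper's data structures):

import numpy as np
from collections import Counter

def edge_score(data, u, v, S):
    """Empirical I(u; v | S) in nats; data is an (n_samples, n_vars) 0/1 array."""
    n = len(data)
    c_suv = Counter(map(tuple, data[:, list(S) + [u, v]]))
    c_su = Counter(map(tuple, data[:, list(S) + [u]]))
    c_sv = Counter(map(tuple, data[:, list(S) + [v]]))
    c_s = Counter(map(tuple, data[:, list(S)]))
    score = 0.0
    for key, n_suv in c_suv.items():
        s, xu, xv = key[:-2], key[-2], key[-1]
        # p(u,v|s) against p(u|s) p(v|s), weighted by the cell frequency.
        score += (n_suv / n) * np.log(n_suv * c_s[s] /
                                      (c_su[s + (xu,)] * c_sv[s + (xv,)]))
    return score

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(1000, 3))
x[:, 2] = x[:, 0] ^ x[:, 1]           # X2 depends on X0 and X1 jointly
print(edge_score(x, 0, 1, S=[2]))     # large: edge (0,1) is worth adding given {2}
print(edge_score(x, 0, 1, S=[]))      # near 0: X0 and X1 are marginally independent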