More data means less inference: A pseudo-max approach to structured learning

Neural Information Processing Systems

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structures both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty when the input distribution is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it performs as well as exact methods when sufficiently large training sets are used.
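For intuition, here is a minimal sketch (not the authors' implementation) of a pseudo-max-style objective: instead of the structured hinge loss, which requires joint MAP inference over all labelings, the margin is enforced only against single-coordinate perturbations of the gold labeling. The feature map `feat` and the unit per-coordinate margin are assumptions of this sketch.

```python
import numpy as np

def pseudomax_loss(w, feat, x, y, labels, margin=1.0):
    """Pseudo-max surrogate: hinge over single-coordinate perturbations
    of the gold labeling y, so no joint MAP inference is needed.
    `feat(x, y)` is a hypothetical joint feature map."""
    s_true = w @ feat(x, y)
    loss = 0.0
    for i in range(len(y)):
        for v in labels:
            if v == y[i]:
                continue
            y_pert = list(y)
            y_pert[i] = v  # perturb exactly one coordinate
            loss += max(0.0, margin + w @ feat(x, y_pert) - s_true)
    return loss

# Toy usage: unary features plus one chain-agreement feature.
def feat(x, y):
    unary = [xi if yi == 1 else -xi for xi, yi in zip(x, y)]
    agree = sum(float(a == b) for a, b in zip(y, y[1:]))
    return np.array(unary + [agree])

x = [0.5, -1.2, 2.0]
y = [1, 0, 1]
w = np.ones(len(x) + 1)
print(pseudomax_loss(w, feat, x, y, labels=[0, 1]))
```

Each term touches only one variable, so the cost per example is linear in the number of coordinates times the label count, rather than exponential in the graph size.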


A New Look at BDDs for Pseudo-Boolean Constraints

Journal of Artificial Intelligence Research

Pseudo-Boolean constraints are omnipresent in practical applications, and thus significant effort has been devoted to developing good SAT encoding techniques for them. Some of these encodings first construct a Binary Decision Diagram (BDD) for the constraint, and then encode the BDD into a propositional formula. These BDD-based approaches have some important advantages, such as not depending on the size of the coefficients, and being able to share the same BDD to represent many constraints. We first focus on the size of the resulting BDDs, which was considered an open problem in our research community. We report on previous work proving that there are Pseudo-Boolean constraints for which no polynomial-size BDD exists, and give an alternative, simpler proof under the assumption that NP differs from co-NP. More interestingly, we also show how to overcome the possible exponential blowup of BDDs by coefficient decomposition. This allows us to give the first polynomial generalized arc-consistent ROBDD-based encoding for Pseudo-Boolean constraints. Finally, we turn to practical issues: we show how to efficiently construct such ROBDDs, how to encode them into SAT with only two clauses per node, and present experimental results confirming that our approach is competitive with other encodings and with state-of-the-art Pseudo-Boolean solvers.
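To make the two-clauses-per-node idea concrete, below is a minimal sketch of one plausible ROBDD-to-CNF translation for a monotone constraint (positive coefficients, upper bound); the exact clause shapes in the paper may differ, and the node-id scheme here is an assumption of this sketch.

```python
def bdd_to_cnf(nodes, root, true_id, false_id):
    """Sketch: encode an ROBDD of a monotone PB constraint into CNF.

    `nodes` maps an auxiliary SAT variable n to (x, f, t): selector
    variable x, child f reached when x=0, child t reached when x=1.
    Auxiliary ids are assumed disjoint from problem-variable ids.
    Monotonicity gives t -> f, which is what lets two clauses per
    internal node suffice.
    """
    clauses = [[root], [true_id], [-false_id]]  # assert root, fix terminals
    for n, (x, f, t) in nodes.items():
        clauses.append([-n, f])        # n -> f
        clauses.append([-n, -x, t])    # n and x -> t
    return clauses

# Toy usage: x1 + x2 <= 1 with problem vars 1, 2; terminals 3 (true)
# and 4 (false); internal nodes 5 (tests x2) and 6 (root, tests x1).
cnf = bdd_to_cnf({6: (1, 3, 5), 5: (2, 3, 4)}, root=6, true_id=3, false_id=4)
print(cnf)
```

In this toy instance, unit propagation after setting x1 = x2 = 1 derives the empty clause, which is the propagation behavior a generalized arc-consistent encoding is meant to provide.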


Picture coding using pseudo-random noise

Classics

In David, E. E. (Ed.), Special Issue on Sensory Information Processing, IRE Transactions on Information Theory, pp. 145–154.


Recursive Sparse Pseudo-input Gaussian Process SARSA

arXiv.org Machine Learning

The class of Gaussian Process (GP) methods for Temporal Difference learning has shown promise for data-efficient model-free Reinforcement Learning. In this paper, we consider a recent variant of the GP-SARSA algorithm, called Sparse Pseudo-input Gaussian Process SARSA (SPGP-SARSA), and derive recursive formulas for its predictive moments. This extension promotes greater memory efficiency, since previous computations can be reused, and, interestingly, it provides a technique for updating value estimates on multiple timescales.
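The recursive-moments idea can be illustrated with ordinary GP regression: when a new observation arrives, the inverse kernel matrix is grown by a rank-one block update instead of being refactored from scratch. This is a generic sketch of that reuse pattern, not the SPGP-SARSA update equations; the RBF kernel and noise level are assumptions of the sketch.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between row-stacked inputs."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d / ell ** 2)

class RecursiveGP:
    def __init__(self, noise=0.1):
        self.noise = noise
        self.X, self.y, self.Kinv = None, None, None

    def update(self, x, y):
        """Add one observation, growing (K + noise^2 I)^-1 in O(n^2)."""
        x = np.atleast_2d(x)
        if self.X is None:
            k = rbf(x, x)[0, 0] + self.noise ** 2
            self.X, self.y, self.Kinv = x, np.array([y]), np.array([[1.0 / k]])
            return
        k_star = rbf(self.X, x)[:, 0]
        k_ss = rbf(x, x)[0, 0] + self.noise ** 2
        v = self.Kinv @ k_star
        s = k_ss - k_star @ v               # Schur complement
        n = self.X.shape[0]
        Kinv_new = np.empty((n + 1, n + 1))
        Kinv_new[:n, :n] = self.Kinv + np.outer(v, v) / s
        Kinv_new[:n, n] = Kinv_new[n, :n] = -v / s
        Kinv_new[n, n] = 1.0 / s
        self.Kinv = Kinv_new
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)

    def predict(self, x):
        """Predictive mean and variance of the latent function at x."""
        x = np.atleast_2d(x)
        k_star = rbf(self.X, x)[:, 0]
        mean = k_star @ self.Kinv @ self.y
        var = rbf(x, x)[0, 0] - k_star @ self.Kinv @ k_star
        return mean, var

# Usage on a noisy sine wave.
gp = RecursiveGP(noise=0.1)
for t in np.linspace(0, 3, 20):
    gp.update([t], np.sin(t) + 0.1 * np.random.randn())
print(gp.predict([1.5]))
```

The block-inverse update is the generic mechanism behind recursive predictive moments: each new point costs O(n^2) instead of the O(n^3) of a full refactorization.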


Clamping Improves TRW and Mean Field Approximations

arXiv.org Machine Learning

We examine the effect of clamping variables for approximate inference in undirected graphical models with pairwise relationships and discrete variables. For any number of variable labels, we demonstrate that clamping and summing approximate sub-partition functions can lead only to a decrease in the partition function estimate for TRW, and an increase for the naive mean field method, in each case guaranteeing an improvement in the approximation and bound. We next focus on binary variables, add the Bethe approximation to consideration and examine ways to choose good variables to clamp, introducing new methods. We show the importance of identifying highly frustrated cycles, and of checking the singleton entropy of a variable. We explore the value of our methods by empirical analysis and draw lessons to guide practitioners.
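The exact identity that clamping exploits is easy to verify on a tiny model: the true partition function decomposes as the sum of the sub-partition functions obtained by fixing one variable to each of its values; the paper's result is that the approximate analogues of these sums can only move the TRW estimate down and the mean-field estimate up. The sketch below checks the exact identity by brute force on a small random binary pairwise model (the model and parameters are illustrative only).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = np.triu(rng.normal(size=(n, n)), 1)  # pairwise couplings
b = rng.normal(size=n)                   # singleton potentials

def partition(clamp=None):
    """Brute-force partition function, optionally with one variable
    clamped to a fixed value (clamp = (index, value))."""
    Z = 0.0
    for x in itertools.product([0, 1], repeat=n):
        x = np.array(x)
        if clamp is not None and x[clamp[0]] != clamp[1]:
            continue
        Z += np.exp(x @ W @ x + b @ x)
    return Z

i = 2
print(partition())                            # full Z
print(partition((i, 0)) + partition((i, 1)))  # identical: Z = Z_0 + Z_1
```

Both printed values agree exactly; the interesting question the paper addresses is what happens when the sub-partition functions are only approximated.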