
Collaborating Authors: Gelfand, Andrew


On Herding and the Perceptron Cycling Theorem

Neural Information Processing Systems

The paper develops a connection between traditional perceptron algorithms and the recently introduced herding algorithms, showing that both can be viewed as applications of the perceptron cycling theorem. This connection strengthens some herding results and suggests new (supervised) herding algorithms that, like CRFs or discriminative RBMs, make predictions by conditioning on the input attributes. We develop and investigate variants of conditional herding and show that conditional herding leads to practical algorithms that perform better than, or on par with, related classifiers such as the voted perceptron and the discriminative RBM.
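As a rough illustration of the herding dynamics the abstract refers to, the sketch below runs the standard (unsupervised) herding update on a small discrete state space: at each step it picks the state maximizing the current weights, then nudges the weights toward the target moments. The perceptron cycling theorem is what keeps the weights bounded, which in turn makes the empirical moments of the visited states track the targets. All names here (herding, phi, mu) are illustrative; this is not the paper's code, and it omits the conditional (supervised) variants.

import numpy as np

def herding(phi, mu, n_steps):
    # phi: (S, D) array of feature vectors, one row per discrete state.
    # mu: length-D target moment vector (must lie in the convex hull of
    #     the rows of phi for the weights to stay bounded).
    w = mu.copy()
    samples = []
    for _ in range(n_steps):
        s = int(np.argmax(phi @ w))   # maximization (inner) step
        samples.append(s)
        w += mu - phi[s]              # perceptron-style weight update
    return samples

# Toy run: three states, two features; mu lies inside the feature hull.
phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mu = np.array([0.6, 0.7])
visits = herding(phi, mu, 2000)
print(phi[visits].mean(axis=0))  # approaches mu at roughly O(1/T)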


Pushing the Power of Stochastic Greedy Ordering Schemes for Inference in Graphical Models

AAAI Conferences

We study iterative randomized greedy algorithms for generating (elimination) orderings with small induced width and state space size, two parameters known to bound the complexity of inference in graphical models. We propose and implement the Iterative Greedy Variable Ordering (IGVO) algorithm, a new variant within this algorithm class. An empirical evaluation using different ranking functions and conditions of randomness demonstrates that IGVO finds significantly better orderings than standard greedy ordering implementations when evaluated within an anytime framework. Additional order-of-magnitude improvements are demonstrated on a multi-core system, further expanding the set of solvable graphical models. The experiments also confirm the superiority of the MinFill heuristic within the iterative scheme.
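To make the scheme concrete, here is a minimal single-threaded sketch of iterated randomized greedy ordering with the min-fill ranking function and uniform random tie-breaking, keeping the ordering with the smallest induced width found so far. The function names are mine, and the actual IGVO algorithm goes well beyond this (incremental score maintenance, alternative ranking functions, multi-core execution).

import random

def fill_count(adj, v):
    # Min-fill score: number of fill edges needed to eliminate v.
    nbrs = list(adj[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in adj[nbrs[i]])

def greedy_order(adj):
    # One randomized greedy pass: repeatedly eliminate a vertex of
    # minimal fill score, breaking ties uniformly at random.
    adj = {v: set(ns) for v, ns in adj.items()}  # local working copy
    order, width = [], 0
    while adj:
        best = min(fill_count(adj, v) for v in adj)
        v = random.choice([u for u in adj if fill_count(adj, u) == best])
        nbrs = adj[v]
        width = max(width, len(nbrs))   # induced width at this step
        for a in nbrs:                  # connect v's neighbors (triangulate)
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
        del adj[v]
        order.append(v)
    return order, width

def iterative_greedy(adj, n_iters=100):
    # Anytime loop: keep the best ordering over repeated randomized passes.
    return min((greedy_order(adj) for _ in range(n_iters)), key=lambda t: t[1])

# Example: a 4-cycle a-b-c-d; any ordering here has induced width 2.
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'a'}}
order, width = iterative_greedy(adj, n_iters=20)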


Stopping Rules for Randomized Greedy Triangulation Schemes

AAAI Conferences

Many algorithms for performing inference in graphical models have complexity that is exponential in the treewidth, a parameter of the underlying graph structure. Computing the (minimal) treewidth is NP-complete, so stochastic algorithms are sometimes used to find low-width tree decompositions. A common approach for finding good decompositions is to iteratively execute a greedy triangulation algorithm (e.g., min-fill) with randomized tie-breaking. However, utilizing a stochastic algorithm as part of the inference task introduces a new problem: deciding how long the stochastic algorithm should be allowed to execute before performing inference on the best tree decomposition found so far. We refer to this dilemma as the Stopping Problem and formalize it in terms of the total time needed to answer a probabilistic query. We propose a rule for discontinuing the search for improved decompositions and demonstrate the benefit (in terms of time saved) of applying this rule to Bayes and Markov network instances.
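The flavor of such a rule can be sketched in a few lines. The version below is my illustration, not the paper's formal criterion (which is stated in terms of total query time): it keeps drawing randomized decompositions while the time spent searching is still small relative to the projected inference cost of the best decomposition found so far, where inference_cost would typically grow exponentially with the width. It composes naturally with a sampler like greedy_order from the previous sketch.

import time

def search_with_stopping(sample_decomposition, inference_cost, budget_ratio=0.1):
    # sample_decomposition: () -> (decomposition, width), e.g. one
    #   randomized greedy triangulation pass.
    # inference_cost: width -> estimated inference time for a
    #   decomposition of that width.
    # Illustrative rule: stop once search time exceeds a fixed fraction
    # of the projected inference time on the incumbent, since further
    # search is then unlikely to pay for itself.
    start = time.perf_counter()
    best, best_width = sample_decomposition()
    while True:
        if time.perf_counter() - start > budget_ratio * inference_cost(best_width):
            return best, best_width
        cand, w = sample_decomposition()
        if w < best_width:
            best, best_width = cand, w

# Example wiring (hypothetical cost model, exponential in width):
# order, width = search_with_stopping(lambda: greedy_order(adj), lambda w: 2.0 ** w)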

