Bounded Treewidth


Learning Treewidth-Bounded Bayesian Networks with Thousands of Variables

Mauro Scanagatta, Giorgio Corani, Cassio P. de Campos, Marco Zaffalon

Neural Information Processing Systems

Parviainen et al. (2014) adopted an anytime integer linear programming (ILP) approach; if stopped before reaching the optimum, it returns a sub-optimal DAG with bounded treewidth. Nie et al. (2014) proposed an efficient anytime ILP approach with a polynomial number of constraints, and Nie et al. (2015) proposed the method S2.


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality and significance. This work develops a new exact algorithm for structure learning of chordal Markov networks (MNs) under decomposable score functions. The algorithm implements a dynamic programming approach by introducing recursive partition trees, junction-tree-equivalent structures well suited to decomposing the problem into smaller instances so as to enable dynamic programming. The authors review the literature, prove the correctness of their algorithm, and compare it against a modified version of GOBNILP, which implements a state-of-the-art method for exact Bayesian network structure learning. The paper is well written, relevant for NIPS and technically sound.



Advances in Learning Bayesian Networks of Bounded Treewidth

Siqi Nie, Denis D. Maua, Cassio P. de Campos, Qiang Ji

Neural Information Processing Systems

This work presents novel algorithms for learning Bayesian networks of bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.
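The subgraph test underlying the approximate method is easy to state concretely: moralize the candidate DAG (marry co-parents, drop edge directions) and check containment in the sampled k-tree. A minimal sketch, with illustrative function names not taken from the paper:

```python
import itertools

def moral_graph(parents):
    """Moralize a DAG given as {node: set_of_parents}: connect every
    pair of co-parents, then drop directions. Returns undirected edges."""
    edges = set()
    for child, pars in parents.items():
        for p in pars:                       # parent-child edges, undirected
            edges.add(frozenset((p, child)))
        for a, b in itertools.combinations(pars, 2):
            edges.add(frozenset((a, b)))     # "marry" the co-parents
    return edges

def is_subgraph(edges, k_tree_edges):
    """Check that the moral graph fits inside a candidate k-tree."""
    return edges <= k_tree_edges

# v-structure A -> C <- B: moralization adds the edge A-B
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
mg = moral_graph(dag)
print(frozenset(("A", "B")) in mg)  # True
```

Any DAG passing this test has treewidth at most k, which is what makes sampling k-trees a sound way to enforce the bound.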


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

We thank all the reviewers for their time and feedback. One point deserves specific comment: R1, R2, and R4 all had questions about the relationship between hypertree width and hierarchy width, and how this relates to the comparison between Gibbs sampling and exact inference techniques. When hierarchy width is bounded, the hypertree width is similarly bounded (Statement 1 in our paper). This means that for the models we focus on, where Gibbs mixes in polynomial time, exact inference also runs in polynomial time. However, for graphs with sufficiently small weights (such as the Paleontology model we mention), the polynomial exponent for Gibbs will be smaller than for exact inference.
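The trade-off described here — Gibbs mixes in polynomial time on the models in question, while exact inference is also polynomial but may carry a larger exponent — rests on the basic Gibbs update of resampling each variable from its conditional. A minimal illustration on a generic two-variable toy factor (not the Paleontology model or any model from the paper):

```python
import math
import random

def gibbs_sample(n_steps, coupling=1.0, seed=0):
    """Toy Gibbs sampler over two binary variables x and y joined by a
    single pairwise factor exp(coupling * [x == y]). Illustrative only."""
    rng = random.Random(seed)
    x, y = 0, 0
    samples = []
    # Conditional: p(x = y | y) is proportional to exp(coupling).
    p_agree = math.exp(coupling) / (math.exp(coupling) + 1.0)
    for _ in range(n_steps):
        # Resample each variable from its conditional given the other.
        x = y if rng.random() < p_agree else 1 - y
        y = x if rng.random() < p_agree else 1 - x
        samples.append((x, y))
    return samples

samples = gibbs_sample(10_000)
agree = sum(x == y for x, y in samples) / len(samples)
# Stationary distribution gives P(x == y) = e / (e + 1) ≈ 0.731.
```

Each sweep costs time linear in the number of variables; the question the rebuttal addresses is how many sweeps are needed for the chain to mix.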



On the Tractability Landscape of Conditional Minisum Approval Voting Rule

Amanatidis, Georgios, Lampis, Michael, Markakis, Evangelos, Papasotiropoulos, Georgios

arXiv.org Artificial Intelligence

This work examines the Conditional Approval Framework for elections involving multiple interdependent issues, specifically focusing on the Conditional Minisum Approval Voting Rule. We first conduct a detailed analysis of the computational complexity of this rule, demonstrating that no approach can significantly outperform the brute-force algorithm under common computational complexity assumptions and various natural input restrictions. In response, we propose two practical restrictions (the first in the literature) that make the problem computationally tractable and show that these restrictions are essentially tight. Overall, this work provides a clear picture of the tractability landscape of the problem, contributing to a comprehensive understanding of the complications introduced by conditional ballots and indicating that conditional approval voting can be applied in practice, albeit under specific conditions.
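The brute-force baseline referenced in the abstract enumerates every outcome over the issues and picks the one minimizing total voter disagreement. A minimal sketch for the unconditional special case (plain approval ballots on binary issues, no conditional dependencies — the setting the paper generalizes; names are illustrative):

```python
from itertools import product

def minisum_outcome(ballots, n_issues):
    """Brute-force minisum over binary issues: score every outcome by
    total per-issue disagreement with the ballots, return the best.
    Unconditional ballots only -- a simplification of the paper's model."""
    best, best_cost = None, float("inf")
    for outcome in product((0, 1), repeat=n_issues):
        cost = sum(
            sum(1 for i in range(n_issues) if ballot[i] != outcome[i])
            for ballot in ballots
        )
        if cost < best_cost:
            best, best_cost = outcome, cost
    return best, best_cost

# three voters, two issues; issue-wise majority wins under minisum
ballots = [(1, 0), (1, 1), (0, 0)]
print(minisum_outcome(ballots, 2))  # ((1, 0), 2)
```

With unconditional ballots the optimum is simply the issue-wise majority, computable in polynomial time — which underlines the paper's point that the computational complications stem from the conditional ballots themselves.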



Learning Treewidth-Bounded Bayesian Networks with Thousands of Variables

Neural Information Processing Systems

We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inference. Yet, because treewidth is a global property of the graph, the bound considerably increases the difficulty of the learning process. Our novel algorithm accomplishes this task, scaling both to large domains and to large treewidths, and consistently outperforms the state of the art in experiments with up to thousands of variables.
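The complexity reduction from bounding treewidth can be made concrete with a back-of-the-envelope count: junction-tree inference manipulates clique tables whose size is exponential in the treewidth, so a fixed bound keeps every table small no matter how many variables the network has. A rough illustrative sketch (constant factors ignored; the function name is ours, not from the paper):

```python
def junction_tree_cost(n_vars, domain_size, treewidth):
    """Rough operation count for junction-tree inference: O(n) cliques,
    each holding a table of size domain_size ** (treewidth + 1)."""
    return n_vars * domain_size ** (treewidth + 1)

# Bounding treewidth keeps inference tractable even with many variables.
print(junction_tree_cost(1000, 2, 4))    # 32000 table entries in total
print(junction_tree_cost(1000, 2, 30))   # ~2.1e12 -- infeasible
```

The contrast between the two calls is exactly why a learner that scales to thousands of variables must treat the treewidth bound as a first-class constraint rather than a post-hoc check.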