Message Passing Inference for Large Scale Graphical Models with High Order Potentials

Jian Zhang, Alex Schwing, Raquel Urtasun

Neural Information Processing Systems

To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields. Despite this parallelization, current algorithms struggle when the energy has high-order terms and the graph is densely connected. In this paper we propose a partitioning strategy followed by a message passing algorithm that is able to exploit pre-computations, updating the high-order factors only when passing messages across machines. We demonstrate the effectiveness of our approach on the task of joint layout and semantic segmentation estimation from single images, and show that our approach is orders of magnitude faster than current methods.
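The dual-decomposition idea the abstract refers to can be illustrated with a toy sketch: two subproblems, as if placed on two machines, share one variable, and dual variables (the "messages across machines") reweight that variable's potential until both local MAP solutions agree. All names and potentials below are illustrative assumptions, not the paper's actual algorithm or notation.

```python
# Toy dual-decomposition MAP inference: two subproblems share one variable.
K = 3                       # number of labels for the shared variable
theta_a = [1.0, 0.0, 0.0]   # machine A's potential on the shared variable (assumed)
theta_b = [0.0, 0.6, 0.0]   # machine B's potential on the shared variable (assumed)
lam = [0.0] * K             # dual variables: the messages exchanged across machines

def local_map(theta):
    # Each machine solves its own (here trivial) MAP subproblem exactly.
    return max(range(K), key=lambda y: theta[y])

for t in range(1, 100):
    xa = local_map([theta_a[y] + lam[y] for y in range(K)])  # A's copy
    xb = local_map([theta_b[y] - lam[y] for y in range(K)])  # B's copy
    if xa == xb:            # the two copies agree: a consistent labeling
        break
    # Subgradient step: make each machine's current choice less attractive
    # to it, pushing the two copies toward agreement.
    lam[xa] -= 1.0 / t
    lam[xb] += 1.0 / t

print("agreed label:", xa if xa == xb else None)
```

With these potentials the copies disagree for two rounds and then agree on label 0, which also maximizes the combined potential theta_a + theta_b; in the paper's setting the expensive high-order factors would only need to be touched during these cross-machine message updates.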



Approximated Structured Prediction for Learning Large Scale Graphical Models

Hazan, Tamir, Urtasun, Raquel

arXiv.org Artificial Intelligence

This manuscript contains the proofs for "A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction." We derive the Lagrangian by introducing Lagrange multipliers $\lambda_{v\to\alpha}(y_v)$ for every marginalization constraint $\sum_{y_\alpha \setminus y_v} b_\alpha(y_\alpha) = b_v(y_v)$, and Lagrange multipliers for every equality constraint. We obtain the dual function by minimizing the beliefs over their compact domain. Deriving the dual by minimizing over the compact set of beliefs enables us to obtain an unconstrained dual, which corresponds to the approximated structured prediction program; its final form is derived similarly to the Claim. We find the optimal $\lambda_{v\to\alpha}(y_v)$ wherever the gradient vanishes.
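The constraint structure described above admits a schematic Lagrangian of the following standard form; the symbols $\theta_v, \theta_\alpha$ (local potentials) and the exact grouping of terms are assumptions for illustration, not verbatim from the manuscript:

```latex
% Schematic Lagrangian: beliefs b_v, b_\alpha with local potentials
% \theta_v, \theta_\alpha, and multipliers \lambda_{v\to\alpha}(y_v)
% attached to each marginalization constraint (notation assumed).
L(b,\lambda) = \sum_v \sum_{y_v} b_v(y_v)\,\theta_v(y_v)
  + \sum_\alpha \sum_{y_\alpha} b_\alpha(y_\alpha)\,\theta_\alpha(y_\alpha)
  + \sum_\alpha \sum_{v \in \alpha} \sum_{y_v} \lambda_{v\to\alpha}(y_v)
    \Big( b_v(y_v) - \sum_{y_\alpha \setminus y_v} b_\alpha(y_\alpha) \Big)
```

Minimizing $L$ over the beliefs $b$ on their compact (simplex) domain, as the proof sketch says, eliminates $b$ and leaves an unconstrained dual in $\lambda$ alone.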