Collaborating Authors

Max-Margin Markov Network




Max-Margin Markov Networks

Neural Information Processing Systems

In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ability to use high-dimensional feature spaces, and from their strong theoretical guarantees. However, many real-world tasks involve sequential, spatial, or structured data, where multiple labels must be assigned. Existing kernel-based methods ignore structure in the problem, assigning labels independently to each object, losing much useful information. Conversely, probabilistic graphical models, such as Markov networks, can represent correlations between labels, by exploiting problem structure, but cannot handle high-dimensional feature spaces, and lack strong theoretical generalization guarantees.
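The contrast the abstract draws, independent per-object labeling versus a Markov network that scores label correlations, can be illustrated with a toy sketch. This is not the paper's model; it is a minimal chain-structured example with hypothetical scores, where Viterbi dynamic programming finds the jointly best label sequence while independent decoding ignores the pairwise terms:

```python
import numpy as np

# Toy chain-structured problem: 4 positions, 2 labels (all numbers hypothetical).
# node_scores[t, y]  : score for label y at position t
# edge_scores[y, y'] : score for an adjacent label pair, rewarding agreement
node_scores = np.array([[1.0, 0.9],
                        [0.2, 0.8],
                        [0.9, 1.0],
                        [0.7, 0.0]])
edge_scores = np.array([[0.5, -0.5],
                        [-0.5, 0.5]])

# Independent decoding: argmax at each position, pairwise terms ignored
indep = node_scores.argmax(axis=1)

# Viterbi dynamic programming over the chain: dp[y] is the best score of any
# labeling of positions 0..t that ends with label y at position t
T, K = node_scores.shape
dp = node_scores[0].copy()
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    cand = dp[:, None] + edge_scores + node_scores[t][None, :]
    back[t] = cand.argmax(axis=0)
    dp = cand.max(axis=0)

# Trace back the jointly optimal label sequence
best = [int(dp.argmax())]
for t in range(T - 1, 1 - 1, -1):
    if t > 0:
        best.append(int(back[t, best[-1]]))
viterbi = best[::-1]
```

With these scores, independent decoding returns `[0, 1, 1, 0]`, while joint decoding returns `[0, 0, 0, 0]`: the agreement bonus in the pairwise terms outweighs the small per-position preferences, which is exactly the kind of label correlation an independent classifier cannot exploit.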


Multilabel Structured Output Learning with Random Spanning Trees of Max-Margin Markov Networks

Marchand, Mario, Su, Hongyu, Morvant, Emilie, Rousu, Juho, Shawe-Taylor, John S.

Neural Information Processing Systems

We show that the usual score function for conditional Markov networks can be written as the expectation over the scores of their spanning trees. We also show that a small random sample of these output trees can attain a significant fraction of the margin obtained by the complete graph and we provide conditions under which we can perform tractable inference. The experimental results confirm that practical learning is scalable to realistic datasets using this approach.
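The claim that the complete-graph score is an expectation over spanning-tree scores can be checked numerically in a simple special case. For the complete graph on n nodes, each edge lies in a uniform random spanning tree with probability 2/n, so (n/2) times the score of a uniform spanning tree is an unbiased estimator of the total edge score. The sketch below (not the paper's algorithm; the edge weights are hypothetical) samples uniform spanning trees of K_n via random Prüfer sequences:

```python
import random
from itertools import combinations

def prufer_to_edges(seq, n):
    """Decode a Prüfer sequence into the edge list of a labeled tree on {0..n-1}."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        # attach v to the smallest-labeled current leaf
        leaf = min(i for i in range(n) if degree[i] == 1)
        edges.append((min(leaf, v), max(leaf, v)))
        degree[leaf] -= 1
        degree[v] -= 1
    # exactly two vertices of degree 1 remain; join them
    u, w = [i for i in range(n) if degree[i] == 1]
    edges.append((u, w))
    return edges

random.seed(0)
n = 6
# hypothetical symmetric edge scores on the complete graph K_n
w = {(i, j): random.uniform(-1, 1) for i, j in combinations(range(n), 2)}
full_score = sum(w.values())

# Uniform spanning trees of K_n are in bijection with Prüfer sequences,
# so a uniform random sequence yields a uniform random spanning tree.
samples = 20000
total = 0.0
for _ in range(samples):
    seq = [random.randrange(n) for _ in range(n - 2)]
    tree = prufer_to_edges(seq, n)
    # each edge appears in the tree with probability 2/n, so rescale by n/2
    total += (n / 2) * sum(w[e] for e in tree)
estimate = total / samples
```

Here `estimate` converges to `full_score` as the number of sampled trees grows, which mirrors the paper's starting point: averaging over a small sample of random spanning trees already captures much of the full graph's score.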