Learning as MAP Inference in Discrete Graphical Models
Neural Information Processing Systems
We present a new formulation for binary classification. Instead of relying on convex losses and regularizers, as in SVMs, logistic regression, and boosting, or on non-convex but continuous formulations, such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but discrete formulation, where estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss. We argue that such a discrete formulation can naturally account for a number of issues that arise in both the convex and the continuous non-convex approaches. By reducing the learning problem to a MAP inference problem, we can immediately transfer the guarantees available for many inference settings to the learning problem itself. We demonstrate empirically, across a number of experiments, that this approach is promising for dealing with issues such as severe label noise, while retaining global optimality guarantees.
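To make the formulation concrete, the following is a minimal sketch in Python of the general idea: weights are restricted to a small discrete set, the misclassification loss is approximated by a sum of low-dimensional potentials, and learning reduces to MAP inference over the resulting discrete model. The specific choices here (a weight alphabet of {-1, 0, 1}, pairwise potentials built from restricted 0-1 losses, and brute-force MAP on a toy problem) are illustrative assumptions, not the paper's exact construction, which would typically use a dedicated MAP solver.

```python
import itertools
import numpy as np

# Illustrative sketch (not the authors' exact potentials): binary classification
# with weights restricted to a small discrete alphabet.  The empirical 0-1 loss
# is approximated by a sum of low-order potentials, each depending on a pair of
# weight variables, and learning is MAP inference over those discrete weights.

rng = np.random.default_rng(0)
n, d = 200, 4                        # tiny problem so exhaustive MAP is feasible
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -1.0, 1.0, 0.0])
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

W = np.array([-1.0, 0.0, 1.0])       # assumed discrete state space per weight

def pair_potential(j, k, wj, wk):
    """Low-dimensional surrogate: 0-1 loss of the classifier restricted to features j, k."""
    margin = y * (X[:, j] * wj + X[:, k] * wk)
    return np.mean(margin <= 0)

pairs = list(itertools.combinations(range(d), 2))

def energy(w):
    """Sum of pairwise surrogates standing in for the full misclassification loss."""
    return sum(pair_potential(j, k, w[j], w[k]) for j, k in pairs)

# Exhaustive MAP over the 3**d discrete weight configurations; larger models
# would call for message passing or another combinatorial MAP solver.
best_w = min(itertools.product(W, repeat=d), key=lambda w: energy(np.array(w)))
w_map = np.array(best_w)

print("MAP weights:", w_map)
print("training 0-1 loss:", np.mean(y * (X @ w_map) <= 0))
```

Because the objective is evaluated only over a finite discrete configuration space, a guarantee on the MAP solver (here trivially obtained by exhaustive search) translates directly into a guarantee on the learned classifier.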