Reviews: Optimal Cluster Recovery in the Labeled Stochastic Block Model
Is this a fundamental bottleneck or an artifact of the proof technique? What happens if we tolerate p(i, j, l) that do not depend on n? 3) As a result of the assumption that all clusters grow linearly in n, Theorem 3 for L 2 gives a suboptimal result for the minimum cluster size (which is a bottleneck for clustering algorithms). In particular, the minimum cluster size has to be \Omega(n). In both cases (convex algorithms and spectral clustering), p and q in the SBM can be as small as \Omega(polylog(n) / n). Minor point: it would be better to include the algorithm in the paper, since part of the paper is about guarantees for it.
Non-approximate Inference for Collective Graphical Models on Path Graphs via Discrete Difference of Convex Algorithm
Akagi, Yasunori, Marumo, Naoki, Kim, Hideaki, Kurashima, Takeshi, Toda, Hiroyuki
The importance of aggregated count data, which is calculated from the data of multiple individuals, continues to increase. The Collective Graphical Model (CGM) is a probabilistic approach to the analysis of such aggregated data. One of the most important operations in CGMs is maximum a posteriori (MAP) inference of unobserved variables under given observations. Because the MAP inference problem for general CGMs has been shown to be NP-hard, an approach that solves an approximate problem has been proposed. However, this approach has two major drawbacks. First, the quality of the solution deteriorates when the values in the count tables are small, because the approximation becomes inaccurate. Second, since continuous relaxation is applied, the integrality constraints of the output are violated. To resolve these problems, this paper proposes a new method for MAP inference for CGMs on path graphs. First, we show that the MAP inference problem can be formulated as a (non-linear) minimum cost flow problem. Then, we apply the Difference of Convex Algorithm (DCA), a general methodology for minimizing a function represented as the sum of a convex function and a concave function. In our algorithm, the important subroutines in DCA can be computed efficiently by minimum convex cost flow algorithms. Experiments show that the proposed method outputs higher-quality solutions than the conventional approach.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.65)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (0.46)
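The DCA iteration the abstract describes can be sketched on a toy objective. This is a minimal illustration of the general scheme (linearize the concave part, minimize the resulting convex surrogate), not the paper's CGM flow subproblem, which is solved by minimum convex cost flow algorithms; the split f = g - h below is chosen purely for a closed-form subproblem.

```python
import math

def dca(x0, iters=60):
    """DCA sketch for f(x) = x**4 - 2*x**2, split as g(x) - h(x)
    with convex g(x) = x**4 and convex h(x) = 2*x**2.
    Stationary minimizers of f are x = +1 and x = -1."""
    x = x0
    for _ in range(iters):
        # Linearize the concave part -h at the current iterate: -h'(x) = -4x.
        grad_h = 4.0 * x
        # Convex subproblem: argmin_y  y**4 - grad_h * y.
        # Stationarity gives 4*y**3 = grad_h, i.e. y = cbrt(x);
        # copysign handles the cube root's sign for negative iterates.
        x = math.copysign(abs(grad_h / 4.0) ** (1.0 / 3.0), grad_h)
    return x

print(dca(0.5))   # approaches 1.0
print(dca(-0.5))  # approaches -1.0
```

Each DCA step decreases f, since the surrogate majorizes f and is tight at the current iterate; the paper's contribution is making the analogous convex subproblem tractable for CGMs on path graphs.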
Convex Techniques for Model Selection
We develop a robust convex algorithm for selecting the regularization parameter in model selection. In practice, this selection would be automated to save practitioners from having to tune the parameter manually. In particular, we implement and test the convex method for $K$-fold cross-validation on ridge regression, although the same concept extends to more complex models. We then compare its performance with standard methods.
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
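The abstract does not detail the convex selection method itself, but the standard baseline it is compared against can be sketched: grid-search $K$-fold cross-validation for the ridge regularization parameter, using the closed-form ridge solution. All data and grid choices below are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_cv_error(X, y, lam, K=5):
    # Average held-out mean squared error over K folds.
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, K):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        err += np.mean((X[fold] @ w - y[fold]) ** 2)
    return err / K

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.5 * rng.standard_normal(100)

# Grid search over a log-spaced lambda range.
lams = np.logspace(-3, 3, 25)
best = min(lams, key=lambda lam: kfold_cv_error(X, y, lam))
print(best)
```

A convex selection method would replace the grid search with a principled optimization over the regularization path; the sketch above only shows the baseline workflow being automated.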