SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning (Appendix)

Haobo Wang

Neural Information Processing Systems 

A.1 Proof of Theorem 1

First, we provide the following lemma to show the consistency of the standard cross-entropy loss. Now, we provide the main proof sketch for Theorem 1. Note that we always seek an optimal joint probability matrix before model training, which is mainly designed for empirical measures of the data samples. At the population level, we instead aim to find an optimal probability measure that satisfies both the marginal constraints and the candidate-label constraints. Recall that Eq. (1) is a standard linear programming (LP) problem and can thus be solved in polynomial time; however, LP solvers typically become time-consuming.
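Because exact LP solvers are slow at scale, a common remedy is to add an entropic regularizer and solve the relaxed problem with Sinkhorn-Knopp iterations, which alternately rescale the rows and columns of a Gibbs kernel until both marginals are matched. The sketch below is a generic illustration of this idea, not the paper's exact Eq. (1): the cost matrix, marginals, regularization strength `reg`, and the omission of the candidate-label masking are all illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, row_marginal, col_marginal, reg=0.1, n_iters=200):
    """Entropic-regularized transport plan via Sinkhorn-Knopp (generic sketch).

    cost: (n, k) cost matrix; row_marginal: (n,), col_marginal: (k,),
    each summing to 1. Returns Q with Q.sum(1) ~ row_marginal and
    Q.sum(0) ~ col_marginal.
    """
    K = np.exp(-cost / reg)              # Gibbs kernel of the cost matrix
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(n_iters):
        u = row_marginal / (K @ v)       # rescale rows toward row marginal
        v = col_marginal / (K.T @ u)     # rescale columns toward column marginal
    return u[:, None] * K * v[None, :]   # joint probability matrix Q

# Toy instance: 4 samples, 3 classes, uniform sample marginal and an
# imbalanced (hypothetical) class-prior marginal.
rng = np.random.default_rng(0)
cost = rng.random((4, 3))
r = np.full(4, 1 / 4)
c = np.array([0.5, 0.3, 0.2])
Q = sinkhorn(cost, r, c)
print(Q.sum(axis=0))                     # should approximate c
```

Each iteration costs only O(nk) matrix-vector products, which is why Sinkhorn scales far better than a general LP solver; in the partial-label setting one would additionally zero out the kernel entries of non-candidate labels so the plan is supported only on candidate sets.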
