A Consistent Lebesgue Measure for Multi-label Learning
Kaan Demir, Bach Nguyen, Bing Xue, Mengjie Zhang
arXiv.org Artificial Intelligence
Multi-label loss functions are usually non-differentiable, requiring surrogate loss functions for gradient-based optimisation. The consistency of such surrogates is typically unproven, a problem exacerbated by the conflicting nature of multi-label loss functions. To learn directly from multiple related, yet potentially conflicting, multi-label loss functions, we propose a Consistent Lebesgue Measure-based Multi-label Learner (CLML) and prove that CLML achieves theoretical consistency under a Bayes risk framework. Empirical evidence supports our theory by demonstrating that: (1) CLML consistently achieves state-of-the-art results; (2) the primary performance factor is the Lebesgue measure design, as CLML optimises a simpler feedforward model without additional label graphs, perturbation-based conditioning, or semantic embeddings; and (3) an analysis of the results not only distinguishes CLML's effectiveness but also highlights inconsistencies between the surrogate and the desired loss functions.
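The Lebesgue measure the abstract refers to can be illustrated concretely. As a hedged sketch (not the authors' code): for two losses being minimised, the Lebesgue measure of the region dominated by a set of loss vectors, bounded by a reference point, coincides with the familiar 2-D hypervolume indicator. The function name and reference point below are illustrative assumptions.

```python
from typing import List, Tuple

def hypervolume_2d(points: List[Tuple[float, float]],
                   ref: Tuple[float, float]) -> float:
    """Lebesgue measure of the region dominated by `points` and bounded
    by `ref`, assuming both losses are minimised (illustrative sketch)."""
    # Keep only points strictly better than the reference, sorted by the
    # first loss in ascending order.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points; they add no measure
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Two non-dominated loss vectors under reference point (3, 3):
# rectangles [1,3]x[2,3] and [2,3]x[1,3] overlap in [2,3]x[2,3],
# so the union has area 2 + 2 - 1 = 3.
print(hypervolume_2d([(1.0, 2.0), (2.0, 1.0)], (3.0, 3.0)))  # → 3.0
```

Maximising this measure pushes the whole set of loss vectors toward the origin jointly, which is why a single scalar objective can stand in for multiple conflicting losses without a per-loss surrogate.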
Jan-31-2024