Learning by Minimizing the Sum of Ranked Range

Neural Information Processing Systems

In forming learning objectives, one often needs to aggregate a set of individual values into a single output. Such cases occur in the aggregate loss, which combines the individual losses of a learning model over each training sample, and in the individual loss for multi-label learning, which combines prediction scores over all class labels. In this work, we introduce the sum of ranked range (SoRR) as a general approach to forming learning objectives. A ranked range is a consecutive sequence of sorted values from a set of real numbers. The minimization of SoRR is solved with the difference-of-convex algorithm (DCA). We explore two machine learning applications of SoRR minimization, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification. Our empirical results highlight the effectiveness of the proposed optimization framework and demonstrate the applicability of the proposed losses on synthetic and real datasets.
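The abstract defines SoRR as the sum of a consecutive run of sorted values. A minimal sketch of that idea (assuming, as in the difference-of-convex formulation, that the ranked range from m+1 to k equals the top-k sum minus the top-m sum; the function name `sorr` and the example values are illustrative, not from the paper):

```python
import numpy as np

def sorr(values, k, m):
    """Sum of ranked range: sum of the (m+1)-th through k-th largest values.

    Equivalently the top-k sum minus the top-m sum, the
    difference-of-convex decomposition that DCA exploits.
    """
    assert 0 <= m < k <= len(values)
    s = np.sort(values)[::-1]   # sort in descending order
    return s[m:k].sum()         # ranks m+1 .. k (1-indexed)

# Example: individual losses over six training samples
losses = np.array([3.0, 1.0, 4.0, 1.5, 2.0, 0.5])
print(sorr(losses, k=3, m=1))  # 2nd + 3rd largest: 3.0 + 2.0 = 5.0
```

Excluding the top m values is what gives the AoRR aggregate loss its robustness to outliers: the very largest individual losses are dropped from the objective.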
A Losses

Neural Information Processing Systems

Table 3 lists the losses used for training.

Table 3: Base loss functions used for experiments; comparison of logistic regression models trained with individual losses on the Fashion-MNIST dataset (metrics: zero-one, hinge, cross-entropy, AUC). [Table data truncated in this extraction; only the zero-one model's zero-one error, 0.1603, survives, with its standard deviation cut off.]

As baselines, we train with just one loss at a time and compare the ALMO performance to this per-loss optimal performance.
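The base losses named in the table can be sketched for binary classification as follows (a hedged sketch assuming labels y in {-1, +1} and a real-valued score; the paper's exact definitions, e.g. label encoding or margin scaling, may differ, and AUC is a ranking metric rather than a per-sample loss, so it is omitted here):

```python
import numpy as np

def zero_one(y, score):
    """1 if the sign of the score disagrees with the label, else 0."""
    return float(y * score <= 0)

def hinge(y, score):
    """Margin loss: penalizes predictions with margin below 1."""
    return max(0.0, 1.0 - y * score)

def cross_entropy(y, score):
    """Logistic loss: log(1 + exp(-y * score))."""
    return float(np.log1p(np.exp(-y * score)))

print(zero_one(1, -0.5))     # misclassified -> 1.0
print(hinge(1, 0.3))         # inside the margin -> 0.7
print(cross_entropy(1, 0.0)) # log 2 ~= 0.6931
```

Training one model per loss gives the per-loss optimal baselines against which ALMO is compared.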