Appendix A Proofs

A.1 Proof of Proposition
Neural Information Processing Systems
For the general backward correction, based on Eq. 11, conducting adversarial training (AT) on the corrected loss. Based on Eqs. 13, 14 and 15, the inequality holds between their empirical formulations:

We adversarially train a model with several complementary losses separately on Kuzushiji. The results show the same observation as in Section 4.2. Note that we only optimize the model using the complementary labels generated by the oracle.

Figure 6: Results on four randomly sampled instances from Kuzushiji.

For AT with CLs, the two-stage method consists of a complementary learning phase and an AT phase, following the complementary learning setups and the AT setups described in Section 5, respectively. For CIFAR10 and SVHN, the learning rates are set to 0.01.
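The two-stage procedure above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: a 1-D binary logistic model stands in for the network, FGSM stands in for the attack, and all names (`train loop`, `grads`, `predict`, the data, `eps`, `lr`) are assumptions introduced here. Phase 1 minimizes a complementary loss -log(1 - p_c) that pushes probability away from the complementary class c; phase 2 continues training on adversarially perturbed inputs.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: true label y = 1 for positive x, 0 for negative x.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
# Complementary labels: the class each instance does NOT belong to.
# (In the binary case this pins down the true label; with K > 2 classes
# the complementary loss -log(1 - p_c) would be minimized directly.)
cs = [1 - y for y in ys]

w, b = 0.0, 0.0
lr = 0.5

def grads(x, y, w, b):
    """Gradients of binary cross-entropy w.r.t. w, b, and the input x."""
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y), (p - y) * w

# --- Phase 1: complementary learning ---------------------------------
# Minimize -log(1 - p_c); for two classes this equals ordinary
# cross-entropy on the implied true label 1 - c.
for _ in range(200):
    gw = gb = 0.0
    for x, c in zip(xs, cs):
        dw, db, _ = grads(x, 1 - c, w, b)
        gw += dw
        gb += db
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

# --- Phase 2: adversarial training (FGSM, an assumed stand-in) -------
eps = 0.3
for _ in range(200):
    gw = gb = 0.0
    for x, c in zip(xs, cs):
        _, _, dx = grads(x, 1 - c, w, b)
        x_adv = x + eps * (1.0 if dx > 0 else -1.0)  # one FGSM step
        dw, db, _ = grads(x_adv, 1 - c, w, b)
        gw += dw
        gb += db
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

def predict(x):
    return int(sigmoid(w * x + b) > 0.5)
```

Keeping the two phases separate mirrors the two-stage setup in the text: the complementary-learning phase fixes the label information, and the AT phase then hardens the model on perturbed inputs under that same (inferred) supervision.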