Supplementary for Dual Progressive Prototype Network for Generalized Zero-Shot Learning

Neural Information Processing Systems 

Notably, the performance of DPPN on CZSL is not as impressive as on GZSL. The best result is bolded.

From the results, our DPPN outperforms the best previous method by 3.8%, 6.7%, and 2.9%, respectively. We adopt a two-step training schedule that first trains DPPN with the fixed ResNet-101 backbone and then fine-tunes the whole network. The best result is bolded.

Since each representation derives from the preceding one, the preceding representations bring limited supplement to the final performance.
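The two-step schedule above (train DPPN with a frozen ResNet-101 backbone, then fine-tune the whole network) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual code: the `Param`, `train_one_epoch`, and the parameter names are hypothetical stand-ins for a deep-learning framework's parameters and optimizer step.

```python
# Hypothetical sketch of the two-step schedule: stage 1 updates only the DPPN
# head while the ResNet-101 backbone is frozen; stage 2 fine-tunes everything.
# All names here are illustrative assumptions, not from the paper's code.

class Param:
    """Stand-in for a trainable tensor with a freeze flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True
        self.updates = 0

def train_one_epoch(params):
    # Only trainable parameters receive gradient updates.
    for p in params:
        if p.trainable:
            p.updates += 1

backbone = [Param(f"resnet101.layer{i}") for i in range(1, 5)]
head = [Param("dppn.attribute_prototypes"), Param("dppn.category_prototypes")]

# Stage 1: freeze the ResNet-101 backbone, train only the DPPN modules.
for p in backbone:
    p.trainable = False
for _ in range(3):          # stage-1 epochs (illustrative count)
    train_one_epoch(backbone + head)

# Stage 2: unfreeze the backbone and fine-tune the whole network.
for p in backbone:
    p.trainable = True
for _ in range(2):          # stage-2 epochs (illustrative count)
    train_one_epoch(backbone + head)

# After both stages, backbone params were updated only in stage 2,
# while head params were updated in both stages.
```

In a real framework this corresponds to setting the backbone's parameters to non-trainable (e.g. excluding them from the optimizer) for the first stage, then re-enabling them with a typically smaller learning rate for fine-tuning.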