Retrieval-augmented Encoders for Extreme Multi-label Text Classification
Yau-Shian Wang, Wei-Cheng Chang, Jyun-Yu Jiang, Jiong Zhang, Hsiang-Fu Yu, S. V. N. Vishwanathan
– arXiv.org Artificial Intelligence
Extreme multi-label classification (XMC) seeks to find relevant labels from an extremely large label collection for a given text input. To tackle such a vast label space, current state-of-the-art methods fall into two categories. The one-versus-all (OVA) method uses learnable label embeddings for each label, excelling at memorization (i.e., capturing detailed training signals for accurate head-label prediction). In contrast, the dual-encoder (DE) model maps input and label text into a shared embedding space for better generalization (i.e., the capability of predicting tail labels with limited training data), but may fall short at memorization. To achieve both generalization and memorization, existing XMC methods often combine DE and OVA models, which involves complex training pipelines. Inspired by the success of retrieval-augmented language models, we propose Retrieval-augmented Encoders for XMC (RAE-XMC), a novel framework that equips a DE model with retrieval-augmented capability for efficient memorization without additional trainable parameters. During training, RAE-XMC is optimized with a contrastive loss over a knowledge memory that consists of both input instances and labels. During inference, given a test input, RAE-XMC retrieves the top-K keys from the knowledge memory and aggregates the corresponding values as the prediction scores. RAE-XMC not only advances the state-of-the-art (SOTA) DE method DEXML (Gupta et al., 2024), but also achieves a more than 10x speedup on the largest LF-AmazonTitles-1.3M dataset under the same 8x A100 GPU training environment.
Feb-14-2025
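The inference procedure described in the abstract, retrieving the top-K keys from a knowledge memory of input and label embeddings and aggregating the corresponding label values into prediction scores, can be sketched in a few lines. This is a minimal illustration, not the authors' released code: the function name, the softmax weighting over retrieved keys, and the temperature parameter are assumptions made here for concreteness.

```python
import numpy as np

def rae_xmc_predict(query_emb, memory_keys, memory_values, K=100, temperature=0.05):
    """Sketch of RAE-XMC-style inference.

    query_emb:     (d,) L2-normalized embedding of the test input.
    memory_keys:   (N, d) L2-normalized embeddings of the knowledge memory,
                   holding both training inputs and labels.
    memory_values: (N, L) matrix mapping each key to label indicators
                   (a training input maps to its relevant labels; a label
                   key maps to itself).
    Returns:       (L,) score vector over the label space.
    """
    # Dense similarity scan for clarity; a real system would use an
    # approximate nearest-neighbor index over the memory keys.
    sims = memory_keys @ query_emb          # (N,) cosine similarities
    topk = np.argsort(-sims)[:K]            # indices of the top-K keys

    # Softmax weights over the retrieved keys (the temperature knob is a
    # hypothetical choice here, not taken from the paper).
    w = np.exp(sims[topk] / temperature)
    w /= w.sum()

    # Weighted sum of the retrieved values yields the label scores.
    return w @ memory_values[topk]          # (L,)
```

Under these assumptions, the only inference-time state is the fixed key and value matrices, which matches the abstract's claim that memorization is added without additional trainable parameters.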