A General Framework for Producing Interpretable Semantic Text Embeddings
Yiqun Sun, Qiang Huang, Yixuan Tang, Anthony K. H. Tung, Jun Yu
arXiv.org Artificial Intelligence
Semantic text embedding is essential to many tasks in Natural Language Processing (NLP). While black-box models are capable of generating high-quality embeddings, their lack of interpretability limits their use in tasks that demand transparency. Recent approaches have improved interpretability by leveraging domain-expert-crafted or LLM-generated questions, but these methods rely heavily on expert input or well-crafted prompt design, which restricts their generalizability and their ability to generate discriminative questions across a wide range of tasks. To address these challenges, we introduce CQG-MBQA (Contrastive Question Generation - Multi-task Binary Question Answering), a general framework for producing interpretable semantic text embeddings across diverse tasks. Our framework systematically generates highly discriminative, low-cognitive-load yes/no questions through the CQG method and answers them efficiently with the MBQA model, yielding interpretable embeddings in a cost-effective manner. We validate the effectiveness and interpretability of CQG-MBQA through extensive experiments and ablation studies, demonstrating that it delivers embedding quality comparable to many advanced black-box models while remaining inherently interpretable. Additionally, CQG-MBQA outperforms other interpretable text embedding methods across various downstream tasks.

Text embedding is a cornerstone of Natural Language Processing (NLP), transforming texts--whether sentences, paragraphs, or full documents--into embedding vectors that capture their semantic meaning. In a semantic embedding space, the similarity between texts is represented by the proximity of their embedding vectors, typically measured with distance measures such as Euclidean distance, cosine distance, or inner product.
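The three similarity measures named above behave differently on the same pair of vectors. A minimal stdlib-only sketch (toy vectors invented for illustration, not taken from the paper):

```python
import math

def inner_product(u, v):
    """Inner (dot) product: sensitive to both direction and magnitude."""
    return sum(a * b for a, b in zip(u, v))

def euclidean_distance(u, v):
    """Straight-line distance between the two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    """1 - cosine similarity: depends only on the angle between vectors."""
    norm_u = math.sqrt(inner_product(u, u))
    norm_v = math.sqrt(inner_product(v, v))
    return 1.0 - inner_product(u, v) / (norm_u * norm_v)

u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]  # a scaled copy of u
# Cosine distance is ~0 because u and v point the same way, while the
# Euclidean distance and inner product still reflect the magnitude gap.
```

This is why cosine distance is the common default for text embeddings: it ignores vector length, which often varies with text length rather than meaning.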
Black-box text embedding methods, such as Sentence-BERT (Reimers & Gurevych, 2019), SimCSE (Gao et al., 2021), WhitenedCSE (Zhuo et al., 2023), and AnglE (Li & Li, 2024), excel at generating high-quality embeddings by training on vast amounts of data. These models are highly effective at capturing semantic similarities, making them indispensable for a variety of NLP tasks (Muennighoff et al., 2023). However, their black-box nature leaves the embeddings opaque to human users.
Oct-4-2024