Token-Level Marginalization for Multi-Label LLM Classifiers
Anjaneya Praharaj, Jaykumar Kasundra
arXiv.org Artificial Intelligence
This paper addresses the challenge of deriving interpretable confidence scores from generative large language models (LLMs) applied to multi-label content safety classification. While models like LLaMA Guard are effective at identifying unsafe content and its categories, their generative architecture provides no direct class-level probabilities, which hinders confidence assessment and performance interpretation. This limitation complicates setting dynamic thresholds for content moderation and impedes fine-grained error analysis. The research proposes and evaluates three novel token-level probability estimation approaches to bridge this gap, aiming to improve model interpretability and accuracy and to assess the generalizability of the framework across different instruction-tuned models. Through extensive experimentation on a synthetically generated, rigorously annotated dataset, it is demonstrated that leveraging token logits significantly improves the interpretability and reliability of generative classifiers, enabling more nuanced content safety moderation.
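The core idea of token-level marginalization can be sketched as follows: at the position where the model emits its label token, the full-vocabulary logits are converted to a probability distribution via softmax, and the mass assigned to all token variants that express a given label (e.g., "unsafe" and "Unsafe") is summed to yield a class-level confidence. The function and token lists below are illustrative assumptions, not the paper's exact implementation:

```python
import math

def label_probability(logits, vocab, label_tokens):
    """Marginalize softmax probability mass over the tokens that express one label.

    logits       -- raw logits over the vocabulary at the label position
    vocab        -- token strings aligned with the logits
    label_tokens -- token variants that all denote the target label (assumed set)
    """
    # Numerically stable softmax over the full vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = {tok: e / z for tok, e in zip(vocab, exps)}
    # Sum (marginalize) the probability of every surface form of the label.
    return sum(probs.get(t, 0.0) for t in label_tokens)

# Hypothetical logits at the decision position of a safety classifier.
vocab = ["safe", "unsafe", "Safe", "Unsafe"]
logits = [2.0, 1.2, 0.3, 0.1]
p_unsafe = label_probability(logits, vocab, ["unsafe", "Unsafe"])
```

A score like `p_unsafe` can then be compared against a tunable moderation threshold, which is exactly the kind of dynamic thresholding the abstract says raw generative output does not support.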
Dec-1-2025