CLMN: Concept-Based Language Models via Neural-Symbolic Reasoning

Yang, Yibo

arXiv.org Artificial Intelligence 

Abstract-- Deep learning's remarkable performance in natural language processing (NLP) faces critical interpretability challenges, particularly in high-stakes domains like healthcare and finance where model transparency is essential. While concept bottleneck models (CBMs) have enhanced interpretability in computer vision by linking predictions to human-understandable concepts, their adaptation to NLP remains understudied with persistent limitations. Existing approaches either enforce rigid binary concept activations that degrade textual representation quality or obscure semantic interpretability through latent concept embeddings, while failing to capture dynamic concept interactions crucial for understanding linguistic nuances like negation or contextual modification. This paper proposes the Concept Language Model Network (CLMN), a novel neural-symbolic framework that reconciles performance and interpretability through continuous concept embeddings enhanced by fuzzy logic-based reasoning. CLMN addresses the information loss in traditional CBMs by projecting concepts into an interpretable embedding space while preserving human-readable semantics, and introduces adaptive concept interaction modeling through learnable neural-symbolic rules that explicitly represent how concepts influence each other and final predictions. By supplementing original text features with concept-aware representations and enabling automatic derivation of interpretable logic rules, our framework achieves superior performance on multiple NLP benchmarks while providing transparent explanations.
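To make the abstract's architecture concrete, the following is a minimal sketch of the described pipeline: continuous (rather than binary) concept scores scale interpretable concept embeddings, the result supplements the original text features before classification, and fuzzy-logic connectives model concept interactions such as negation. All dimensions, parameter names, and the random initialization are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper)
d_text, n_concepts, d_concept, n_classes = 16, 4, 8, 2

# Learnable parameters, randomly initialized here for illustration
W_c = rng.normal(size=(n_concepts, d_text))        # concept scorer
E = rng.normal(size=(n_concepts, d_concept))       # concept embedding table
W_out = rng.normal(size=(n_classes, d_text + n_concepts * d_concept))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clmn_forward(h_text):
    """Concept-bottleneck forward pass with continuous concept scores.

    Instead of hard 0/1 concept activations, each concept gets a soft
    score in (0, 1) that scales its embedding, preserving information
    that binary gating would discard.
    """
    scores = sigmoid(W_c @ h_text)                 # (n_concepts,) soft scores
    concept_emb = (scores[:, None] * E).ravel()    # score-weighted embeddings
    fused = np.concatenate([h_text, concept_emb])  # original + concept-aware
    logits = W_out @ fused
    return scores, logits

# Fuzzy-logic connectives (product t-norm) for interpretable rules,
# e.g. "positive_sentiment AND NOT negated"
def fuzzy_and(a, b):
    return a * b

def fuzzy_not(a):
    return 1.0 - a

h = rng.normal(size=d_text)          # stand-in for an encoder's text features
scores, logits = clmn_forward(h)
rule = fuzzy_and(scores[0], fuzzy_not(scores[1]))  # rule truth value in [0, 1]
```

Because the concept scores stay continuous, gradients flow through the bottleneck without the representation collapse of hard binary gating, while each score still reads directly as the degree to which a named concept is present.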