SLIDE: A Framework Integrating Small and Large Language Models for Open-Domain Dialogues Evaluation
Kun Zhao, Bohao Yang, Chen Tang, Chenghua Lin, Liang Zhan
arXiv.org Artificial Intelligence
The long-standing one-to-many problem of gold-standard responses in open-domain dialogue systems presents challenges for automatic evaluation metrics. Though prior works have demonstrated some success by applying powerful Large Language Models (LLMs), existing approaches still struggle with the one-to-many problem and exhibit subpar performance in domain-specific scenarios. We assume the commonsense reasoning biases within LLMs may hinder their performance in domain-specific evaluations. To address both issues, we propose a novel framework, SLIDE (Small and Large Integrated for Dialogue Evaluation), which leverages both a small, specialised model (SLM) and LLMs for the evaluation of open-domain dialogues. Our approach introduces several techniques: (1) contrastive learning to differentiate between robust and non-robust response embeddings; (2) a novel metric for semantic sensitivity that combines embedding cosine distances with similarity learned through neural networks; and (3) a strategy for incorporating the evaluation results from both the SLM and LLMs. Our empirical results demonstrate that our approach achieves state-of-the-art performance in both the classification and evaluation tasks, and the SLIDE evaluator additionally exhibits better correlation with human judgements. Our code is available at https://github.com/hegehongcha/SLIDE-ACL2024.
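The abstract's second and third techniques can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the linear blend of cosine distance with a learned similarity, and the weighted SLM/LLM combination are all assumptions made for exposition.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_sensitivity(emb_a: np.ndarray, emb_b: np.ndarray,
                         learned_sim: float, alpha: float = 0.5) -> float:
    """Hypothetical metric blending embedding cosine distance with a
    similarity score learned by a neural network (`learned_sim` would
    come from such a model; here it is just an input)."""
    cosine_dist = 1.0 - cosine_similarity(emb_a, emb_b)
    return alpha * (1.0 - cosine_dist) + (1.0 - alpha) * learned_sim

def combined_evaluation(slm_score: float, llm_score: float,
                        weight: float = 0.5) -> float:
    """Hypothetical SLM/LLM integration as a weighted average;
    the paper's actual combination strategy may differ."""
    return weight * slm_score + (1.0 - weight) * llm_score
```

For example, `combined_evaluation(0.8, 0.6)` blends the two judges' scores equally, and `semantic_sensitivity` reduces to pure cosine similarity when `alpha = 1`.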
May-29-2024