



Diagnosing Retrieval-Augmented Generation

Neural Information Processing Systems

Evaluating RAG systems, however, presents several challenges: (1) Modular complexity: the modular nature of RAG systems, comprising both a retriever and a generator, complicates the design of effective evaluation metrics. It is crucial to establish metrics that holistically assess the entire system as well as evaluate the individual modules and their interplay [53], allowing a full understanding of where errors and misses originate and how they arise.
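The distinction between holistic and module-level evaluation can be made concrete with a small sketch. The code below is an illustrative simplification, not RAGChecker's actual implementation: it treats the gold answer, the retrieved context, and the generated response each as a set of claim identifiers (claim extraction itself, e.g. via an LLM, is out of scope), and computes one end-to-end score plus a per-module diagnostic for the retriever and the generator. All function and variable names here are hypothetical.

```python
# Hypothetical sketch: holistic + module-level metrics for a RAG pipeline,
# with answers and contexts represented as sets of extracted claim IDs.

def precision_recall_f1(predicted: set, gold: set) -> dict:
    """Set-based precision/recall/F1 over claim identifiers."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

def diagnose_rag(retrieved_claims: set, answer_claims: set,
                 gold_claims: set) -> dict:
    """One holistic score plus per-module diagnostics."""
    return {
        # Holistic: does the final answer match the gold answer?
        "overall": precision_recall_f1(answer_claims, gold_claims),
        # Retriever module: did the retrieved context cover the gold claims?
        "retriever_recall":
            precision_recall_f1(retrieved_claims, gold_claims)["recall"],
        # Generator module: did the answer stay faithful to the context?
        "generator_faithfulness":
            precision_recall_f1(answer_claims, retrieved_claims)["precision"],
    }

report = diagnose_rag(
    retrieved_claims={"c1", "c2", "c4"},   # context covers c1, c2 but misses c3
    answer_claims={"c1", "c2", "c5"},      # c5 is unsupported by the context
    gold_claims={"c1", "c2", "c3"},
)
```

With this breakdown, an overall F1 of 2/3 can be traced to its sources: the retriever missed gold claim `c3` (retriever recall 2/3), while the generator introduced the unsupported claim `c5` (faithfulness 2/3), illustrating how module-level metrics localize errors that a single end-to-end score would conflate.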



RAGChecker: A Fine-grained Framework for Diagnosing Retrieval-Augmented Generation

Dongyu Ru, Lin Qiu, Xiangkun Hu, Tianhang Zhang, Peng Shi, Shuaichen Chang, Cheng Jiayang, Cunxiang Wang, Shichao Sun, Huanyu Li, Zizhao Zhang, Binjie Wang, Jiarong Jiang, Tong He, Zhiguo Wang, Pengfei Liu, Yue Zhang, Zheng Zhang

arXiv.org Artificial Intelligence

Despite Retrieval-Augmented Generation (RAG) showing promising capability in leveraging external knowledge, comprehensive evaluation of RAG systems remains challenging due to the modular nature of RAG, the evaluation of long-form responses, and the reliability of measurements. In this paper, we propose a fine-grained evaluation framework, RAGChecker, that incorporates a suite of diagnostic metrics for both the retrieval and generation modules. A meta-evaluation verifies that RAGChecker correlates significantly better with human judgments than other evaluation metrics. Using RAGChecker, we evaluate 8 RAG systems and conduct an in-depth analysis of their performance, revealing insightful patterns and trade-offs in the design choices of RAG architectures. The metrics of RAGChecker can guide researchers and practitioners in developing more effective RAG systems. This work has been open-sourced at https://github.com/amazon-science/RAGChecker.