Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation
Yixin Liu, Alexander R. Fabbri, Yilun Zhao, Pengfei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
arXiv.org Artificial Intelligence
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics. In this work, we develop strong-performing automatic metrics for reference-based summarization evaluation, based on a two-stage evaluation pipeline that first extracts basic information units from one text sequence and then checks the extracted units in another sequence. The metrics we developed include two-stage metrics that can provide high interpretability at both the fine-grained unit level and summary level, and one-stage metrics that achieve a balance between efficiency and interpretability. We make the developed tools publicly available at https://github.com/Yale-LILY/AutoACU.
Nov-16-2023
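The two-stage pipeline described in the abstract can be illustrated with a short sketch. The Python code below is not the AutoACU implementation: the released A2CU/A3CU metrics in the repository above use models fine-tuned for unit extraction and unit checking, whereas this sketch substitutes a naive sentence splitter for the extraction stage and an off-the-shelf NLI model (roberta-large-mnli) for the checking stage. All function names here are illustrative, not the repository's API.

```python
# Minimal sketch of the "extract basic information units, then check them" idea,
# under the assumptions stated above (sentence splitting + off-the-shelf NLI).
import re

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def extract_units(reference: str) -> list[str]:
    """Stage 1 (placeholder): split the reference into sentence-level units.

    The actual metric extracts finer-grained content units with a trained
    generation model; sentence splitting is only a crude stand-in here.
    """
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", reference) if s.strip()]


def unit_entailment_prob(candidate: str, unit: str) -> float:
    """Stage 2: probability that the candidate summary entails one unit."""
    inputs = tokenizer(candidate, unit, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return logits.softmax(dim=-1)[0, 2].item()


def two_stage_recall(candidate: str, reference: str, threshold: float = 0.5):
    """Recall-style score: fraction of reference units supported by the candidate."""
    units = extract_units(reference)
    checks = [(u, unit_entailment_prob(candidate, u)) for u in units]
    matched = [u for u, p in checks if p >= threshold]
    score = len(matched) / len(units) if units else 0.0
    return score, checks  # per-unit probabilities expose unit-level decisions


if __name__ == "__main__":
    ref = "The new bridge opened in July. It cost 40 million dollars."
    cand = "A 40-million-dollar bridge opened to traffic in July."
    score, checks = two_stage_recall(cand, ref)
    print("unit-level checks:", checks)
    print(f"summary-level score: {score:.2f}")
```

Because each extracted unit receives its own entailment probability, the result can be inspected at the fine-grained unit level and aggregated into a summary-level score, which is the interpretability property the abstract highlights for the two-stage metrics.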