REFLEX: Reference-Free Evaluation of Log Summarization via Large Language Model Judgment
Evaluating log summarization systems is challenging due to the lack of high-quality reference summaries and the limitations of existing metrics like ROUGE and BLEU, which depend on surface-level lexical overlap. We introduce REFLEX, a reference-free evaluation metric for log summarization based on large language model (LLM) judgment. REFLEX uses LLMs as zero-shot evaluators to assess summary quality along dimensions such as relevance, informativeness, and coherence, without requiring gold-standard references or human annotations. We show that REFLEX produces stable, interpretable, and fine-grained evaluations across multiple log summarization datasets, and more effectively distinguishes model outputs than traditional metrics. REFLEX provides a scalable alternative for evaluating log summaries in real-world settings where reference data is scarce or unavailable.
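The abstract does not give REFLEX's exact prompts or scoring protocol, but the general pattern it describes, a zero-shot LLM judge rating a summary along fixed dimensions with no reference summary, can be sketched as follows. The rubric wording, the 1-to-5 scale, and the `judge` wrapper are illustrative assumptions, not the paper's implementation.

```python
import json
from typing import Callable, Dict

# Dimensions named in the abstract; the rubric text below is an
# illustrative assumption, not the paper's prompt.
DIMENSIONS = ("relevance", "informativeness", "coherence")

PROMPT_TEMPLATE = """You are evaluating a summary of a software log.
Rate the summary on each dimension from 1 (poor) to 5 (excellent).
Return JSON like {{"relevance": 4, "informativeness": 3, "coherence": 5}}.

Log:
{log}

Summary:
{summary}
"""


def reference_free_scores(
    log: str,
    summary: str,
    judge: Callable[[str], str],
) -> Dict[str, float]:
    """Score a log summary with a zero-shot LLM judge, no reference needed.

    `judge` is any function that sends a prompt to an LLM and returns its
    text completion (e.g. a thin wrapper around a chat-completions API).
    """
    prompt = PROMPT_TEMPLATE.format(log=log, summary=summary)
    raw = judge(prompt)
    scores = json.loads(raw)  # assumes the judge follows the requested JSON format
    return {dim: float(scores[dim]) for dim in DIMENSIONS}
```

Because no gold summary appears anywhere in the scoring call, such a metric can be applied to raw production logs where references are scarce or unavailable, which is the setting the abstract targets.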
arXiv.org Artificial Intelligence
Nov-12-2025