LLMs Show Surface-Form Brittleness Under Paraphrase Stress Tests
Benchmark scores for Large Language Models (LLMs) can be inflated by memorization of test items or near duplicates. We present a simple protocol that probes generalization by re-evaluating models on paraphrased versions of benchmark questions. Using Mistral-7B-Instruct and Qwen2.5-7B-Instruct, we measure the accuracy gap between original and paraphrased items on ARC-Easy and ARC-Challenge. Our pipeline controls decoding, enforces a multiple-choice output format, and includes a robust paraphrase-cleaning step to preserve semantics. We find that paraphrasing induces a non-trivial accuracy drop between original and paraphrased items, consistent with prior concerns about contamination and brittle surface-form shortcuts.
arXiv.org Artificial Intelligence
Oct-13-2025
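
The abstract describes the protocol only in outline. The following minimal Python sketch shows the general shape of such a paraphrase stress test: evaluate each item twice, once verbatim and once paraphrased, and report the accuracy gap. Every identifier here (`Item`, `model_answer`, `paraphrase`) is a hypothetical placeholder for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a paraphrase stress test, assuming a generic
# ask-the-model loop. Stubs stand in for the model call and the
# paraphrase-and-clean step described in the abstract.

from dataclasses import dataclass


@dataclass
class Item:
    question: str
    choices: list[str]  # e.g. ["A) ...", "B) ...", "C) ...", "D) ..."]
    answer: str         # gold label, e.g. "B"


def model_answer(question: str, choices: list[str]) -> str:
    """Query the LLM with controlled (e.g. greedy) decoding and a prompt
    that forces a single-letter multiple-choice answer. Stub."""
    raise NotImplementedError


def paraphrase(question: str) -> str:
    """Rewrite the question, then apply a cleaning step that rejects
    rewrites which drift from the original meaning. Stub."""
    raise NotImplementedError


def accuracy(items: list[Item], use_paraphrase: bool) -> float:
    """Fraction of items answered correctly, optionally paraphrased."""
    correct = 0
    for item in items:
        q = paraphrase(item.question) if use_paraphrase else item.question
        if model_answer(q, item.choices) == item.answer:
            correct += 1
    return correct / len(items)


def paraphrase_gap(items: list[Item]) -> float:
    """Accuracy drop from original to paraphrased items; a large positive
    gap is consistent with memorization or surface-form shortcuts."""
    return accuracy(items, use_paraphrase=False) - accuracy(items, use_paraphrase=True)
```

In a setup like this, fixing the decoding strategy and the answer format keeps the two passes comparable, so any remaining gap is attributable to the paraphrase itself rather than to sampling noise or parsing failures.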