FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability
Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, Caiming Xiong
arXiv.org Artificial Intelligence
This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs' advancements, existing benchmarks fail to adequately assess their format-following proficiency. FoFo fills this gap with a diverse range of real-world formats and instructions, developed through an AI-human collaborative method. Our evaluation across both open-source (e.g., Llama 2, WizardLM) and closed-source (e.g., GPT-4, PaLM 2, Gemini) LLMs highlights three key findings: open-source models significantly lag behind closed-source ones in format adherence; LLMs' format-following performance is independent of their content-generation quality; and LLMs' format proficiency varies across domains. These insights suggest the need for specialized tuning of format-following skills and highlight FoFo's role in guiding the selection of domain-specific AI agents. FoFo is released at https://github.com/SalesforceAIResearch/FoFo.
February 28, 2024
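To make "format adherence" concrete: a format-following test asks whether a model's output satisfies the structural constraints of an instruction (e.g., required fields in a JSON record), independently of content quality. Below is a minimal, hypothetical Python sketch of such a check. The keys and sample output are invented for illustration; FoFo itself judges adherence with an LLM evaluator rather than hand-written rules like this one.

```python
# Illustrative only: a toy rule-based format check. This is NOT the FoFo
# evaluation pipeline; the required keys and sample output are hypothetical.
import json

# Hypothetical required fields for a medical-domain format instruction.
REQUIRED_KEYS = {"patient_id", "diagnosis", "medications"}

def follows_json_format(output: str, required_keys: set) -> bool:
    """Return True if `output` parses as a JSON object containing every required key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys.issubset(data)

if __name__ == "__main__":
    sample = '{"patient_id": "P-001", "diagnosis": "hypertension", "medications": ["lisinopril"]}'
    print(follows_json_format(sample, REQUIRED_KEYS))                      # True
    print(follows_json_format("Patient P-001 has hypertension.", REQUIRED_KEYS))  # False
```

Hand-written validators like this only work for rigidly machine-checkable formats; the diverse, domain-specific formats in FoFo are why the benchmark relies on model-based judging instead.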