SEAL: Suite for Evaluating API-use of LLMs
Woojeong Kim, Ashish Jagmohan, Aditya Vempaty
Large language models (LLMs) have limitations in handling tasks that require real-time access to external APIs. While several benchmarks like ToolBench and APIGen have been developed to assess LLMs' API-use capabilities, they often suffer from issues such as lack of generalizability, limited multi-step reasoning coverage, and instability due to real-time API fluctuations. In this paper, we introduce SEAL, an end-to-end testbed designed to evaluate LLMs in real-world API usage. SEAL standardizes existing benchmarks, integrates an agent system for testing API retrieval and planning, and addresses the instability of real-time APIs by introducing a GPT-4-powered API simulator with caching for deterministic evaluations. Our testbed provides a comprehensive evaluation pipeline that covers API retrieval, API calls, and final responses, offering a reliable framework for structured performance comparison in diverse real-world scenarios. SEAL is publicly available, with ongoing updates for new benchmarks.
arXiv.org Artificial Intelligence
Sep-23-2024
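The abstract describes replacing unstable live API calls with a GPT-4-powered simulator whose responses are cached so that repeated evaluation runs are deterministic. Below is a minimal sketch of that caching idea only, under assumed details: the `llm_simulate` callback, the `sim_cache` directory, and the hashing scheme are hypothetical illustrations, not the paper's actual implementation.

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("sim_cache")  # hypothetical on-disk cache location
CACHE_DIR.mkdir(exist_ok=True)


def cache_key(api_name: str, arguments: dict) -> str:
    """Deterministic key for an API call: name plus canonicalized arguments."""
    payload = json.dumps({"api": api_name, "args": arguments}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def simulate_api_call(api_name: str, arguments: dict, llm_simulate) -> dict:
    """Return a simulated API response, reusing a cached result when available.

    `llm_simulate` is a caller-supplied function (hypothetical) that prompts an
    LLM such as GPT-4 to produce a plausible JSON response for the given API
    name and arguments.
    """
    key = cache_key(api_name, arguments)
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():
        # Cache hit: later evaluation runs see exactly the same output,
        # insulating results from real-time API fluctuations.
        return json.loads(cache_file.read_text())
    response = llm_simulate(api_name, arguments)  # first call hits the simulator
    cache_file.write_text(json.dumps(response))
    return response
```

Keying the cache on the API name and canonicalized arguments means any agent trajectory that issues the same call receives an identical response, which is what makes structured comparison across models repeatable.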