AIME: AI System Optimization via Multiple LLM Evaluators

Bhrij Patel, Souradip Chakraborty, Wesley A. Suttle, Mengdi Wang, Amrit Singh Bedi, Dinesh Manocha

arXiv.org Artificial Intelligence 

Text-based AI system optimization typically involves a feedback-loop scheme in which a single LLM generates a natural-language evaluation of the current output to improve the next iteration's output. However, in this work we empirically demonstrate that for a practical and complex task (code generation) with multiple evaluation criteria, relying on a single LLM evaluator tends to let errors in generated code go undetected, leading to incorrect evaluations and ultimately suboptimal test-case performance. Motivated by this failure case, we assume there exists an optimal evaluation policy that samples an evaluation between the response and the ground truth. We then theoretically prove that a linear combination of multiple evaluators can approximate this optimal policy. From this insight, we propose AI system optimization via Multiple LLM Evaluators (AIME). AIME is an evaluation protocol that utilizes multiple LLMs, each independently generating an evaluation on a separate criterion, and then combines them via concatenation. We provide an extensive empirical study showing that AIME outperforms baseline methods on code generation tasks, with up to 62% higher error detection rate and up to 16% higher success rate than a single-LLM evaluation protocol on the LeetCodeHard and HumanEval datasets. We also show that the choice of the number of evaluators and of which criteria to utilize is non-trivial, as it can impact success rate by up to 12%.

Pre-trained foundation models, such as Large Language Models (LLMs), have developed rapidly in recent years (Achiam et al., 2023; Touvron et al., 2023). As application complexity increases, the shift toward AI systems containing multiple components, such as LLM-based agents and web search (Xiong et al., 2024), will continue (Zaharia et al., 2024; Yuksekgonul et al., 2024).
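As a rough illustration of the multi-evaluator protocol summarized in the abstract above, the following minimal Python sketch shows one way the idea could be wired up: each evaluator call independently judges the candidate output on a single criterion, and the natural-language evaluations are combined via concatenation. The function name, the query_llm helper, the prompt wording, and the example criteria are hypothetical placeholders, not the paper's implementation.

    # Minimal sketch of a multi-evaluator protocol in the spirit of AIME.
    # Assumptions: query_llm is any callable that sends a prompt string to an
    # LLM and returns its text response; criteria names are illustrative.

    from typing import Callable, List

    def evaluate_with_multiple_evaluators(
        task: str,
        candidate_output: str,
        criteria: List[str],
        query_llm: Callable[[str], str],
    ) -> str:
        """Each evaluator independently judges one criterion; the resulting
        natural-language evaluations are combined by concatenation."""
        evaluations = []
        for criterion in criteria:
            prompt = (
                f"Task:\n{task}\n\n"
                f"Candidate solution:\n{candidate_output}\n\n"
                f"Evaluate the candidate solution only with respect to: {criterion}. "
                "Point out any errors you find."
            )
            evaluations.append(f"[{criterion}] " + query_llm(prompt))
        # Concatenated feedback is returned to drive the next optimization iteration.
        return "\n\n".join(evaluations)

    # Example usage with hypothetical criteria for a code-generation task:
    # feedback = evaluate_with_multiple_evaluators(
    #     task=problem_statement,
    #     candidate_output=generated_code,
    #     criteria=["correctness", "edge-case handling", "efficiency"],
    #     query_llm=my_llm_client,  # placeholder for an actual LLM API call
    # )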