InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models

Neural Information Processing Systems 

With the rapid development of code LLMs, many popular evaluation benchmarks, such as HumanEval, DS-1000, and MBPP, have emerged to measure their performance, with a particular focus on code generation tasks. However, these benchmarks are insufficient to cover the full range of capabilities expected of code LLMs, which extend beyond code generation to answering diverse coding-related questions.
