mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation