An In-depth Look at Gemini's Language Abilities

Syeda Nahida Akter, Zichun Yu, Aashiq Muhamed, Tianyue Ou, Alex Bäuerle, Ángel Alexander Cabrera, Krish Dholakia, Chenyan Xiong, Graham Neubig

arXiv.org Artificial Intelligence 

The recently released Google Gemini class of models is the first to report results that rival the OpenAI GPT series across a wide variety of tasks. In this paper, we perform an in-depth exploration of Gemini's language abilities, making two contributions. First, we provide a third-party, objective comparison of the abilities of the OpenAI GPT and Google Gemini models, with reproducible code and fully transparent results. Second, we take a closer look at the results, identifying areas where one of the two model classes excels. We perform this analysis over 10 datasets testing a variety of language abilities, including reasoning, answering knowledge-based questions, solving math problems, translating between languages, generating code, and acting as instruction-following agents. From this analysis, we find that Gemini Pro achieves accuracy that is close to, but slightly below, that of the corresponding GPT 3.5 Turbo on all English-language tasks that we benchmarked, but that Gemini Pro excels at translation into the languages that it supports. We further provide explanations for some of the under-performing tasks, including failures in mathematical reasoning with many digits, sensitivity to multiple-choice answer ordering, and others. We also identify areas where Gemini Pro demonstrates comparably high performance, such as handling longer and more complex reasoning chains.