Lee, Nayeon
Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning
NVIDIA: Azzolini, Alisson, Brandon, Hannah, Chattopadhyay, Prithvijit, Chen, Huayu, Chu, Jinju, Cui, Yin, Diamond, Jenna, Ding, Yifan, Ferroni, Francesco, Govindaraju, Rama, Gu, Jinwei, Gururani, Siddharth, Hanafi, Imad El, Hao, Zekun, Huffman, Jacob, Jin, Jingyi, Johnson, Brendan, Khan, Rizwan, Kurian, George, Lantz, Elena, Lee, Nayeon, Li, Zhaoshuo, Li, Xuan, Lin, Tsung-Yi, Lin, Yen-Chen, Liu, Ming-Yu, Mathau, Andrew, Ni, Yun, Pavao, Lindsey, Ping, Wei, Romero, David W., Smelyanskiy, Misha, Song, Shuran, Tchapmi, Lyne, Wang, Andrew Z., Wang, Boxin, Wang, Haoxiang, Wei, Fangyin, Xu, Jiashu, Xu, Yao, Yang, Xiaodong, Yang, Zhuolin, Zeng, Xiaohui, Zhang, Zhe
Physical AI systems need to perceive, understand, and perform complex actions in the physical world. In this paper, we present the Cosmos-Reason1 models that can understand the physical world and generate appropriate embodied decisions (e.g., the next-step action) in natural language through long chain-of-thought reasoning processes. We begin by defining key capabilities for Physical AI reasoning, with a focus on physical common sense and embodied reasoning. To represent physical common sense, we use a hierarchical ontology that captures fundamental knowledge about space, time, and physics. For embodied reasoning, we rely on a two-dimensional ontology that generalizes across different physical embodiments. Building on these capabilities, we develop two multimodal large language models, Cosmos-Reason1-8B and Cosmos-Reason1-56B. We curate data and train our models in four stages: vision pre-training, general supervised fine-tuning (SFT), Physical AI SFT, and Physical AI reinforcement learning (RL) as post-training. To evaluate our models, we build comprehensive benchmarks for physical common sense and embodied reasoning according to our ontologies. Evaluation results show that Physical AI SFT and reinforcement learning bring significant improvements.
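As a rough illustration of the embodied-decision setting described above, the sketch below shows how one might query such a reasoning model for a next-step action; the generate callable, the prompt wording, and the output convention are hypothetical placeholders rather than the Cosmos-Reason1 interface.

    # Hypothetical sketch only: prompting a multimodal reasoning model for a
    # next-step embodied action. `generate` stands in for whatever inference
    # call the deployed model exposes; it is not Cosmos-Reason1's published API.
    from typing import Callable

    def next_action(generate: Callable[[str], str], scene_description: str) -> str:
        prompt = (
            "You control a robot arm.\n"
            f"Scene: {scene_description}\n"
            "Reason step by step about space, time, and physics, then give the "
            "single next action on the last line, prefixed with 'Action:'."
        )
        response = generate(prompt)
        # Keep only the final decision; earlier lines hold the chain of thought.
        for line in reversed(response.splitlines()):
            if line.startswith("Action:"):
                return line[len("Action:"):].strip()
        return response.strip()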
BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages
Myung, Junho, Lee, Nayeon, Zhou, Yi, Jin, Jiho, Putri, Rifki Afina, Antypas, Dimosthenis, Borkakoty, Hsuvas, Kim, Eunsu, Perez-Almendros, Carla, Ayele, Abinew Ali, Gutiérrez-Basulto, Víctor, Ibáñez-García, Yazmín, Lee, Hwaran, Muhammad, Shamsuddeen Hassan, Park, Kiwoong, Rzayev, Anar Sabuhi, White, Nina, Yimam, Seid Muhie, Pilehvar, Mohammad Taher, Ousidhoum, Nedjma, Camacho-Collados, Jose, Oh, Alice
Large language models (LLMs) often lack culture-specific knowledge of daily life, especially across diverse regions and non-English languages. Existing benchmarks for evaluating LLMs' cultural sensitivities are limited to a single language or collected from online sources such as Wikipedia, which do not reflect the mundane everyday lifestyles of diverse regions. That is, information about the food people eat for their birthday celebrations, spices they typically use, musical instruments youngsters play, or the sports they practice in school is common cultural knowledge but uncommon in easily collected online sources, especially for underrepresented cultures. To address this issue, we introduce BLEnD, a hand-crafted benchmark designed to evaluate LLMs' everyday knowledge across diverse cultures and languages. BLEnD comprises 52.6k question-answer pairs from 16 countries/regions, in 13 different languages, including low-resource ones such as Amharic, Assamese, Azerbaijani, Hausa, and Sundanese. We construct the benchmark to include two formats of questions: short-answer and multiple-choice. We show that LLMs perform better for cultures that are highly represented online, with a maximum 57.34% difference in GPT-4, the best-performing model, in the short-answer format. For cultures represented by mid-to-high-resource languages, LLMs perform better in their local languages, but for cultures represented by low-resource languages, LLMs perform better in English than in the local languages. We make our dataset publicly available at: https://github.com/nlee0212/BLEnD.
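To make the two question formats concrete, here is a minimal scoring sketch; the matching rules and argument layout are assumptions for illustration, and the actual data format and evaluation scripts live in the linked repository.

    # Minimal sketch of scoring the two BLEnD question formats. The exact data
    # fields and matching rules are assumptions; see the GitHub repository above
    # for the real dataset layout and evaluation code.
    def score_short_answer(prediction: str, gold_answers: list[str]) -> bool:
        # Exact match after light normalization; real matching may be more lenient.
        norm = prediction.strip().lower()
        return any(norm == gold.strip().lower() for gold in gold_answers)

    def score_multiple_choice(prediction: str, num_choices: int, gold_index: int) -> bool:
        # Assume the model answers with the letter of its chosen option.
        letters = "ABCDEFGH"[:num_choices]
        answer = prediction.strip().upper()
        return bool(answer) and answer[0] == letters[gold_index]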
HyperCLOVA X Technical Report
Yoo, Kang Min, Han, Jaegeun, In, Sookyo, Jeon, Heewon, Jeong, Jisu, Kang, Jaewook, Kim, Hyunwook, Kim, Kyung-Min, Kim, Munhyong, Kim, Sungju, Kwak, Donghyun, Kwak, Hanock, Kwon, Se Jung, Lee, Bado, Lee, Dongsoo, Lee, Gichang, Lee, Jooho, Park, Baeseong, Shin, Seongjin, Yu, Joonsang, Baek, Seolki, Byeon, Sumin, Cho, Eungsup, Choe, Dooseok, Han, Jeesung, Jin, Youngkyun, Jun, Hyein, Jung, Jaeseung, Kim, Chanwoong, Kim, Jinhong, Kim, Jinuk, Lee, Dokyeong, Park, Dongwook, Sohn, Jeong Min, Han, Sujung, Heo, Jiae, Hong, Sungju, Jeon, Mina, Jung, Hyunhoon, Jung, Jungeun, Jung, Wangkyo, Kim, Chungjoon, Kim, Hyeri, Kim, Jonghyun, Kim, Min Young, Lee, Soeun, Park, Joonhee, Shin, Jieun, Yang, Sojin, Yoon, Jungsoon, Lee, Hwaran, Bae, Sanghwan, Cha, Jeehwan, Gylleus, Karl, Ham, Donghoon, Hong, Mihak, Hong, Youngki, Hong, Yunki, Jang, Dahyun, Jeon, Hyojun, Jeon, Yujin, Jeong, Yeji, Ji, Myunggeun, Jin, Yeguk, Jo, Chansong, Joo, Shinyoung, Jung, Seunghwan, Kim, Adrian Jungmyung, Kim, Byoung Hoon, Kim, Hyomin, Kim, Jungwhan, Kim, Minkyoung, Kim, Minseung, Kim, Sungdong, Kim, Yonghee, Kim, Youngjun, Kim, Youngkwan, Ko, Donghyeon, Lee, Dughyun, Lee, Ha Young, Lee, Jaehong, Lee, Jieun, Lee, Jonghyun, Lee, Jongjin, Lee, Min Young, Lee, Yehbin, Min, Taehong, Min, Yuri, Moon, Kiyoon, Oh, Hyangnam, Park, Jaesun, Park, Kyuyon, Park, Younghun, Seo, Hanbae, Seo, Seunghyun, Sim, Mihyun, Son, Gyubin, Yeo, Matt, Yeom, Kyung Hoon, Yoo, Wonjoon, You, Myungin, Ahn, Doheon, Ahn, Homin, Ahn, Joohee, Ahn, Seongmin, An, Chanwoo, An, Hyeryun, An, Junho, An, Sang-Min, Byun, Boram, Byun, Eunbin, Cha, Jongho, Chang, Minji, Chang, Seunggyu, Cho, Haesong, Cho, Youngdo, Choi, Dalnim, Choi, Daseul, Choi, Hyoseok, Choi, Minseong, Choi, Sangho, Choi, Seongjae, Choi, Wooyong, Chun, Sewhan, Go, Dong Young, Ham, Chiheon, Han, Danbi, Han, Jaemin, Hong, Moonyoung, Hong, Sung Bum, Hwang, Dong-Hyun, Hwang, Seongchan, Im, Jinbae, Jang, Hyuk Jin, Jang, Jaehyung, Jang, Jaeni, Jang, Sihyeon, Jang, Sungwon, Jeon, Joonha, Jeong, Daun, Jeong, Joonhyun, Jeong, Kyeongseok, Jeong, Mini, Jin, Sol, Jo, Hanbyeol, Jo, Hanju, Jo, Minjung, Jung, Chaeyoon, Jung, Hyungsik, Jung, Jaeuk, Jung, Ju Hwan, Jung, Kwangsun, Jung, Seungjae, Ka, Soonwon, Kang, Donghan, Kang, Soyoung, Kil, Taeho, Kim, Areum, Kim, Beomyoung, Kim, Byeongwook, Kim, Daehee, Kim, Dong-Gyun, Kim, Donggook, Kim, Donghyun, Kim, Euna, Kim, Eunchul, Kim, Geewook, Kim, Gyu Ri, Kim, Hanbyul, Kim, Heesu, Kim, Isaac, Kim, Jeonghoon, Kim, Jihye, Kim, Joonghoon, Kim, Minjae, Kim, Minsub, Kim, Pil Hwan, Kim, Sammy, Kim, Seokhun, Kim, Seonghyeon, Kim, Soojin, Kim, Soong, Kim, Soyoon, Kim, Sunyoung, Kim, Taeho, Kim, Wonho, Kim, Yoonsik, Kim, You Jin, Kim, Yuri, Kwon, Beomseok, Kwon, Ohsung, Kwon, Yoo-Hwan, Lee, Anna, Lee, Byungwook, Lee, Changho, Lee, Daun, Lee, Dongjae, Lee, Ha-Ram, Lee, Hodong, Lee, Hwiyeong, Lee, Hyunmi, Lee, Injae, Lee, Jaeung, Lee, Jeongsang, Lee, Jisoo, Lee, Jongsoo, Lee, Joongjae, Lee, Juhan, Lee, Jung Hyun, Lee, Junghoon, Lee, Junwoo, Lee, Se Yun, Lee, Sujin, Lee, Sungjae, Lee, Sungwoo, Lee, Wonjae, Lee, Zoo Hyun, Lim, Jong Kun, Lim, Kun, Lim, Taemin, Na, Nuri, Nam, Jeongyeon, Nam, Kyeong-Min, Noh, Yeonseog, Oh, Biro, Oh, Jung-Sik, Oh, Solgil, Oh, Yeontaek, Park, Boyoun, Park, Cheonbok, Park, Dongju, Park, Hyeonjin, Park, Hyun Tae, Park, Hyunjung, Park, Jihye, Park, Jooseok, Park, Junghwan, Park, Jungsoo, Park, Miru, Park, Sang Hee, Park, Seunghyun, Park, Soyoung, Park, Taerim, Park, Wonkyeong, Ryu, Hyunjoon, Ryu, Jeonghun, Ryu, Nahyeon, Seo, Soonshin, Seo, Suk Min, Shim, Yoonjeong, 
Shin, Kyuyong, Shin, Wonkwang, Sim, Hyun, Sim, Woongseob, Soh, Hyejin, Son, Bokyong, Son, Hyunjun, Son, Seulah, Song, Chi-Yun, Song, Chiyoung, Song, Ka Yeon, Song, Minchul, Song, Seungmin, Wang, Jisung, Yeo, Yonggoo, Yi, Myeong Yeon, Yim, Moon Bin, Yoo, Taehwan, Yoo, Youngjoon, Yoon, Sungmin, Yoon, Young Jin, Yu, Hangyeol, Yu, Ui Seon, Zuo, Xingdong, Bae, Jeongin, Bae, Joungeun, Cho, Hyunsoo, Cho, Seonghyun, Cho, Yongjin, Choi, Taekyoon, Choi, Yera, Chung, Jiwan, Han, Zhenghui, Heo, Byeongho, Hong, Euisuk, Hwang, Taebaek, Im, Seonyeol, Jegal, Sumin, Jeon, Sumin, Jeong, Yelim, Jeong, Yonghyun, Jiang, Can, Jiang, Juyong, Jin, Jiho, Jo, Ara, Jo, Younghyun, Jung, Hoyoun, Jung, Juyoung, Kang, Seunghyeong, Kim, Dae Hee, Kim, Ginam, Kim, Hangyeol, Kim, Heeseung, Kim, Hyojin, Kim, Hyojun, Kim, Hyun-Ah, Kim, Jeehye, Kim, Jin-Hwa, Kim, Jiseon, Kim, Jonghak, Kim, Jung Yoon, Kim, Rak Yeong, Kim, Seongjin, Kim, Seoyoon, Kim, Sewon, Kim, Sooyoung, Kim, Sukyoung, Kim, Taeyong, Ko, Naeun, Koo, Bonseung, Kwak, Heeyoung, Kwon, Haena, Kwon, Youngjin, Lee, Boram, Lee, Bruce W., Lee, Dagyeong, Lee, Erin, Lee, Euijin, Lee, Ha Gyeong, Lee, Hyojin, Lee, Hyunjeong, Lee, Jeeyoon, Lee, Jeonghyun, Lee, Jongheok, Lee, Joonhyung, Lee, Junhyuk, Lee, Mingu, Lee, Nayeon, Lee, Sangkyu, Lee, Se Young, Lee, Seulgi, Lee, Seung Jin, Lee, Suhyeon, Lee, Yeonjae, Lee, Yesol, Lee, Youngbeom, Lee, Yujin, Li, Shaodong, Liu, Tianyu, Moon, Seong-Eun, Moon, Taehong, Nihlenramstroem, Max-Lasse, Oh, Wonseok, Oh, Yuri, Park, Hongbeen, Park, Hyekyung, Park, Jaeho, Park, Nohil, Park, Sangjin, Ryu, Jiwon, Ryu, Miru, Ryu, Simo, Seo, Ahreum, Seo, Hee, Seo, Kangdeok, Shin, Jamin, Shin, Seungyoun, Sin, Heetae, Wang, Jiangping, Wang, Lei, Xiang, Ning, Xiao, Longxiang, Xu, Jing, Yi, Seonyeong, Yoo, Haanju, Yoo, Haneul, Yoo, Hwanhee, Yu, Liang, Yu, Youngjae, Yuan, Weijie, Zeng, Bo, Zhou, Qian, Cho, Kyunghyun, Ha, Jung-Woo, Park, Joonsuk, Hwang, Jihyun, Kwon, Hyoung Jo, Kwon, Soonyong, Lee, Jungyeon, Lee, Seungho, Lim, Seonghyeon, Noh, Hyunkyung, Choi, Seungho, Lee, Sang-Woo, Lim, Jung Hwa, Sung, Nako
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, along with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction-tuning with high-quality human-annotated datasets while abiding by strict safety guidelines reflecting our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean backed by a deep understanding of the language and cultural nuances. Further analysis of the inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization ability to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries in developing their sovereign LLMs.
Measuring Political Bias in Large Language Models: What Is Said and How It Is Said
Bang, Yejin, Chen, Delong, Lee, Nayeon, Fung, Pascale
We propose to measure political bias in LLMs by analyzing both the content and the style of their generations on political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias also exists in LLMs and can lead to polarization and other harms in downstream applications. In order to provide transparency to users, we advocate that there should be fine-grained and explainable measures of the political biases generated by LLMs. Our proposed measure looks at different political issues such as reproductive rights and climate change, at both the content (the substance of the generation) and the style (the lexical polarity) of such bias. We measure the political bias in eleven open-source LLMs and show that our proposed framework is easily scalable to other topics and is explainable.
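As a loose illustration of the "style" side of such a measure, the snippet below scores the lexical polarity of model generations with an off-the-shelf sentiment analyzer; this is a generic proxy, not the paper's framework, and the choice of NLTK's VADER is an assumption.

    # Illustrative polarity scoring for generated text on a political issue,
    # using NLTK's VADER sentiment analyzer as a stand-in lexical-polarity proxy.
    # The paper's measure also covers content (substance); that part is omitted.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    _analyzer = SentimentIntensityAnalyzer()

    def mean_polarity(generations: list[str]) -> float:
        # Compound scores lie in [-1, 1]; values far from 0 suggest polarized wording.
        scores = [_analyzer.polarity_scores(text)["compound"] for text in generations]
        return sum(scores) / len(scores)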
A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity
Bang, Yejin, Cahyawijaya, Samuel, Lee, Nayeon, Dai, Wenliang, Su, Dan, Wilie, Bryan, Lovenia, Holy, Ji, Ziwei, Yu, Tiezheng, Chung, Willy, Do, Quyet V., Xu, Yan, Fung, Pascale
This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available datasets. We carry out an extensive technical evaluation of ChatGPT using 23 datasets covering 8 different common NLP application tasks. We evaluate the multitask, multilingual, and multimodal aspects of ChatGPT based on these datasets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin script languages than generating them. It is able to generate multimodal content from textual prompts via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average across 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning, making it an unreliable reasoner. It is, for example, better at deductive than inductive reasoning. ChatGPT suffers from hallucination problems like other LLMs, and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn "prompt engineering" fashion. We also release the codebase for evaluation set extraction.
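The reported gains are stated in ROUGE-1 and ChrF++; a small scoring sketch follows, assuming the rouge_score and sacrebleu packages rather than the paper's released codebase.

    # Generic computation of the two metrics quoted above, assuming the
    # `rouge_score` and `sacrebleu` packages; not the paper's evaluation scripts.
    from rouge_score import rouge_scorer
    from sacrebleu.metrics import CHRF

    def rouge1_f(hypothesis: str, reference: str) -> float:
        scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
        return scorer.score(reference, hypothesis)["rouge1"].fmeasure

    def chrf_pp(hypotheses: list[str], references: list[str]) -> float:
        # word_order=2 gives ChrF++ (character n-grams plus word unigrams/bigrams).
        return CHRF(word_order=2).corpus_score(hypotheses, [references]).score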
Mitigating Framing Bias with Polarity Minimization Loss
Bang, Yejin, Lee, Nayeon, Fung, Pascale
Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).
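A hedged sketch of the general idea, adding a polarity-gap penalty on top of a standard generation loss, is given below; the paper's exact formulation may differ, and the polarity estimates are assumed to come from some differentiable auxiliary scorer.

    # Sketch of a polarity-minimization term on top of a generation loss (PyTorch).
    # How polarity is estimated for each polarized input's mapping is an
    # assumption here; the paper's actual loss design may differ.
    import torch

    def loss_with_polarity_minimization(gen_loss: torch.Tensor,
                                        polarity_left: torch.Tensor,
                                        polarity_right: torch.Tensor,
                                        lam: float = 1.0) -> torch.Tensor:
        # Penalize the gap between the polarities induced by the left- and
        # right-leaning inputs so the summary drifts toward neutral framing.
        polarity_gap = (polarity_left - polarity_right).abs().mean()
        return gen_loss + lam * polarity_gap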
Towards Mitigating Hallucination in Large Language Models via Self-Reflection
Ji, Ziwei, Yu, Tiezheng, Xu, Yan, Lee, Nayeon, Ishii, Etsuko, Fung, Pascale
Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks. However, the practical deployment still faces challenges, notably the issue of "hallucination", where models generate plausible-sounding but unfaithful or nonsensical information. This issue becomes particularly critical in the medical domain due to the uncommon professional concepts and potential social risks involved. This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets. Our investigation centers on the identification and comprehension of common problematic answers, with a specific emphasis on hallucination. To tackle this challenge, we present an interactive self-reflection methodology that incorporates knowledge acquisition and answer generation. Through this feedback process, our approach steadily enhances the factuality, consistency, and entailment of the generated answers. Consequently, we harness the interactivity and multitasking ability of LLMs and produce progressively more precise and accurate answers. Experimental results on both automatic and human evaluation demonstrate the superiority of our approach in hallucination reduction compared to baselines.
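A simplified sketch of such an interactive self-reflection loop appears below; the llm callable, the prompts, and the stopping rule are hypothetical simplifications of what the paper describes.

    # Simplified self-reflection loop for medical QA. `llm` is a hypothetical
    # text-in/text-out callable; the paper's prompts, scoring, and stopping
    # criteria are more elaborate than this sketch.
    from typing import Callable

    def self_reflective_answer(llm: Callable[[str], str], question: str,
                               max_rounds: int = 3) -> str:
        # Knowledge acquisition, then answer generation, then iterative reflection.
        knowledge = llm(f"List the key medical facts needed to answer: {question}")
        answer = llm(f"Facts:\n{knowledge}\n\nUsing only these facts, answer: {question}")
        for _ in range(max_rounds):
            critique = llm(
                f"Question: {question}\nAnswer: {answer}\n"
                "Is the answer fully supported by the facts? Reply 'OK' or list the problems."
            )
            if critique.strip().upper().startswith("OK"):
                break
            # Revise the answer using the critique before the next check.
            answer = llm(
                f"Facts:\n{knowledge}\nProblems found:\n{critique}\n"
                f"Rewrite a faithful, consistent answer to: {question}"
            )
        return answer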
Survey of Social Bias in Vision-Language Models
Lee, Nayeon, Bang, Yejin, Lovenia, Holy, Cahyawijaya, Samuel, Dai, Wenliang, Fung, Pascale
In recent years, the rapid advancement of machine learning (ML) models, particularly transformer-based pre-trained models, has revolutionized Natural Language Processing (NLP) and Computer Vision (CV) fields. However, researchers have discovered that these models can inadvertently capture and reinforce social biases present in their training datasets, leading to potential social harms, such as uneven resource allocation and unfair representation of specific social groups. Addressing these biases and ensuring fairness in artificial intelligence (AI) systems has become a critical concern in the ML community. The recent introduction of pre-trained vision-and-language (VL) models in the emerging multimodal field demands attention to the potential social biases present in these models as well. Although VL models are susceptible to social bias, there is a limited understanding compared to the extensive discussions on bias in NLP and CV. This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL. By examining these perspectives, the survey aims to offer valuable guidelines on how to approach and mitigate social bias in both unimodal and multimodal settings. The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and non-biased AI models in various applications and research endeavors.
KoBBQ: Korean Bias Benchmark for Question Answering
Jin, Jiho, Kim, Jiseon, Lee, Nayeon, Yoo, Haneul, Oh, Alice, Lee, Hwaran
The BBQ (Bias Benchmark for Question Answering) dataset enables the evaluation of the social biases that language models (LMs) exhibit in downstream tasks. However, it is challenging to adapt BBQ to languages other than English, as social biases are culturally dependent. In this paper, we devise a process to construct a non-English bias benchmark dataset by leveraging the English BBQ dataset in a culturally adaptive way, and we present the KoBBQ dataset for evaluating biases in Question Answering (QA) tasks in Korean. We classify BBQ samples into three classes: Simply-Translated (can be used directly after cultural translation), Target-Modified (requires localization of target groups), and Sample-Removed (does not fit Korean culture). We further enhance cultural relevance by adding four new categories of bias specific to Korean culture and by newly creating samples based on Korean literature. KoBBQ consists of 246 templates and 4,740 samples across 12 categories of social bias. Using KoBBQ, we measure the accuracy and bias scores of several state-of-the-art multilingual LMs. We demonstrate the differences in the bias of LMs between Korean and English, clarifying the need for hand-crafted data that accounts for cultural differences.
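For illustration only, the sketch below computes plain accuracy and a rough BBQ-style biased-answer rate on multiple-choice records; the record fields and the bias proxy are assumptions, not KoBBQ's exact score definitions.

    # Rough scoring sketch for multiple-choice bias templates. The record fields
    # ("prediction", "gold", "context", "biased_target") and the bias proxy are
    # assumptions; KoBBQ's exact accuracy and bias-score definitions are in the paper.
    def accuracy(records: list[dict]) -> float:
        return sum(r["prediction"] == r["gold"] for r in records) / len(records)

    def biased_answer_rate(records: list[dict]) -> float:
        # Among ambiguous-context questions where the model does not answer
        # "unknown", how often does it pick the stereotyped target?
        amb = [r for r in records
               if r["context"] == "ambiguous" and r["prediction"] != "unknown"]
        if not amb:
            return 0.0
        return sum(r["prediction"] == r["biased_target"] for r in amb) / len(amb)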
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Srivastava, Aarohi, Rastogi, Abhinav, Rao, Abhishek, Shoeb, Abu Awal Md, Abid, Abubakar, Fisch, Adam, Brown, Adam R., Santoro, Adam, Gupta, Aditya, Garriga-Alonso, Adrià, Kluska, Agnieszka, Lewkowycz, Aitor, Agarwal, Akshat, Power, Alethea, Ray, Alex, Warstadt, Alex, Kocurek, Alexander W., Safaya, Ali, Tazarv, Ali, Xiang, Alice, Parrish, Alicia, Nie, Allen, Hussain, Aman, Askell, Amanda, Dsouza, Amanda, Slone, Ambrose, Rahane, Ameet, Iyer, Anantharaman S., Andreassen, Anders, Madotto, Andrea, Santilli, Andrea, Stuhlmüller, Andreas, Dai, Andrew, La, Andrew, Lampinen, Andrew, Zou, Andy, Jiang, Angela, Chen, Angelica, Vuong, Anh, Gupta, Animesh, Gottardi, Anna, Norelli, Antonio, Venkatesh, Anu, Gholamidavoodi, Arash, Tabassum, Arfa, Menezes, Arul, Kirubarajan, Arun, Mullokandov, Asher, Sabharwal, Ashish, Herrick, Austin, Efrat, Avia, Erdem, Aykut, Karakaş, Ayla, Roberts, B. Ryan, Loe, Bao Sheng, Zoph, Barret, Bojanowski, Bartłomiej, Özyurt, Batuhan, Hedayatnia, Behnam, Neyshabur, Behnam, Inden, Benjamin, Stein, Benno, Ekmekci, Berk, Lin, Bill Yuchen, Howald, Blake, Orinion, Bryan, Diao, Cameron, Dour, Cameron, Stinson, Catherine, Argueta, Cedrick, Ramírez, César Ferri, Singh, Chandan, Rathkopf, Charles, Meng, Chenlin, Baral, Chitta, Wu, Chiyu, Callison-Burch, Chris, Waites, Chris, Voigt, Christian, Manning, Christopher D., Potts, Christopher, Ramirez, Cindy, Rivera, Clara E., Siro, Clemencia, Raffel, Colin, Ashcraft, Courtney, Garbacea, Cristina, Sileo, Damien, Garrette, Dan, Hendrycks, Dan, Kilman, Dan, Roth, Dan, Freeman, Daniel, Khashabi, Daniel, Levy, Daniel, González, Daniel Moseguí, Perszyk, Danielle, Hernandez, Danny, Chen, Danqi, Ippolito, Daphne, Gilboa, Dar, Dohan, David, Drakard, David, Jurgens, David, Datta, Debajyoti, Ganguli, Deep, Emelin, Denis, Kleyko, Denis, Yuret, Deniz, Chen, Derek, Tam, Derek, Hupkes, Dieuwke, Misra, Diganta, Buzan, Dilyar, Mollo, Dimitri Coelho, Yang, Diyi, Lee, Dong-Ho, Schrader, Dylan, Shutova, Ekaterina, Cubuk, Ekin Dogus, Segal, Elad, Hagerman, Eleanor, Barnes, Elizabeth, Donoway, Elizabeth, Pavlick, Ellie, Rodola, Emanuele, Lam, Emma, Chu, Eric, Tang, Eric, Erdem, Erkut, Chang, Ernie, Chi, Ethan A., Dyer, Ethan, Jerzak, Ethan, Kim, Ethan, Manyasi, Eunice Engefu, Zheltonozhskii, Evgenii, Xia, Fanyue, Siar, Fatemeh, Martínez-Plumed, Fernando, Happé, Francesca, Chollet, Francois, Rong, Frieda, Mishra, Gaurav, Winata, Genta Indra, de Melo, Gerard, Kruszewski, Germán, Parascandolo, Giambattista, Mariani, Giorgio, Wang, Gloria, Jaimovitch-López, Gonzalo, Betz, Gregor, Gur-Ari, Guy, Galijasevic, Hana, Kim, Hannah, Rashkin, Hannah, Hajishirzi, Hannaneh, Mehta, Harsh, Bogar, Hayden, Shevlin, Henry, Schütze, Hinrich, Yakura, Hiromu, Zhang, Hongming, Wong, Hugh Mee, Ng, Ian, Noble, Isaac, Jumelet, Jaap, Geissinger, Jack, Kernion, Jackson, Hilton, Jacob, Lee, Jaehoon, Fisac, Jaime Fernández, Simon, James B., Koppel, James, Zheng, James, Zou, James, Kocoń, Jan, Thompson, Jana, Wingfield, Janelle, Kaplan, Jared, Radom, Jarema, Sohl-Dickstein, Jascha, Phang, Jason, Wei, Jason, Yosinski, Jason, Novikova, Jekaterina, Bosscher, Jelle, Marsh, Jennifer, Kim, Jeremy, Taal, Jeroen, Engel, Jesse, Alabi, Jesujoba, Xu, Jiacheng, Song, Jiaming, Tang, Jillian, Waweru, Joan, Burden, John, Miller, John, Balis, John U., Batchelder, Jonathan, Berant, Jonathan, Frohberg, Jörg, Rozen, Jos, Hernandez-Orallo, Jose, Boudeman, Joseph, Guerr, Joseph, Jones, Joseph, Tenenbaum, Joshua B., Rule, Joshua S., Chua, Joyce, Kanclerz, Kamil, Livescu, Karen, Krauth, Karl, Gopalakrishnan, Karthik, 
Ignatyeva, Katerina, Markert, Katja, Dhole, Kaustubh D., Gimpel, Kevin, Omondi, Kevin, Mathewson, Kory, Chiafullo, Kristen, Shkaruta, Ksenia, Shridhar, Kumar, McDonell, Kyle, Richardson, Kyle, Reynolds, Laria, Gao, Leo, Zhang, Li, Dugan, Liam, Qin, Lianhui, Contreras-Ochando, Lidia, Morency, Louis-Philippe, Moschella, Luca, Lam, Lucas, Noble, Lucy, Schmidt, Ludwig, He, Luheng, Colón, Luis Oliveros, Metz, Luke, Şenel, Lütfi Kerem, Bosma, Maarten, Sap, Maarten, ter Hoeve, Maartje, Farooqi, Maheen, Faruqui, Manaal, Mazeika, Mantas, Baturan, Marco, Marelli, Marco, Maru, Marco, Quintana, Maria Jose Ramírez, Tolkiehn, Marie, Giulianelli, Mario, Lewis, Martha, Potthast, Martin, Leavitt, Matthew L., Hagen, Matthias, Schubert, Mátyás, Baitemirova, Medina Orduna, Arnaud, Melody, McElrath, Melvin, Yee, Michael A., Cohen, Michael, Gu, Michael, Ivanitskiy, Michael, Starritt, Michael, Strube, Michael, Swędrowski, Michał, Bevilacqua, Michele, Yasunaga, Michihiro, Kale, Mihir, Cain, Mike, Xu, Mimee, Suzgun, Mirac, Walker, Mitch, Tiwari, Mo, Bansal, Mohit, Aminnaseri, Moin, Geva, Mor, Gheini, Mozhdeh, T, Mukund Varma, Peng, Nanyun, Chi, Nathan A., Lee, Nayeon, Krakover, Neta Gur-Ari, Cameron, Nicholas, Roberts, Nicholas, Doiron, Nick, Martinez, Nicole, Nangia, Nikita, Deckers, Niklas, Muennighoff, Niklas, Keskar, Nitish Shirish, Iyer, Niveditha S., Constant, Noah, Fiedel, Noah, Wen, Nuan, Zhang, Oliver, Agha, Omar, Elbaghdadi, Omar, Levy, Omer, Evans, Owain, Casares, Pablo Antonio Moreno, Doshi, Parth, Fung, Pascale, Liang, Paul Pu, Vicol, Paul, Alipoormolabashi, Pegah, Liao, Peiyuan, Liang, Percy, Chang, Peter, Eckersley, Peter, Htut, Phu Mon, Hwang, Pinyu, Miłkowski, Piotr, Patil, Piyush, Pezeshkpour, Pouya, Oli, Priti, Mei, Qiaozhu, Lyu, Qing, Chen, Qinlang, Banjade, Rabin, Rudolph, Rachel Etta, Gabriel, Raefer, Habacker, Rahel, Risco, Ramon, Millière, Raphaël, Garg, Rhythm, Barnes, Richard, Saurous, Rif A., Arakawa, Riku, Raymaekers, Robbe, Frank, Robert, Sikand, Rohan, Novak, Roman, Sitelew, Roman, LeBras, Ronan, Liu, Rosanne, Jacobs, Rowan, Zhang, Rui, Salakhutdinov, Ruslan, Chi, Ryan, Lee, Ryan, Stovall, Ryan, Teehan, Ryan, Yang, Rylan, Singh, Sahib, Mohammad, Saif M., Anand, Sajant, Dillavou, Sam, Shleifer, Sam, Wiseman, Sam, Gruetter, Samuel, Bowman, Samuel R., Schoenholz, Samuel S., Han, Sanghyun, Kwatra, Sanjeev, Rous, Sarah A., Ghazarian, Sarik, Ghosh, Sayan, Casey, Sean, Bischoff, Sebastian, Gehrmann, Sebastian, Schuster, Sebastian, Sadeghi, Sepideh, Hamdan, Shadi, Zhou, Sharon, Srivastava, Shashank, Shi, Sherry, Singh, Shikhar, Asaadi, Shima, Gu, Shixiang Shane, Pachchigar, Shubh, Toshniwal, Shubham, Upadhyay, Shyam, Shyamolima, Debnath, Shakeri, Siamak, Thormeyer, Simon, Melzi, Simone, Reddy, Siva, Makini, Sneha Priscilla, Lee, Soo-Hwan, Torene, Spencer, Hatwar, Sriharsha, Dehaene, Stanislas, Divic, Stefan, Ermon, Stefano, Biderman, Stella, Lin, Stephanie, Prasad, Stephen, Piantadosi, Steven T., Shieber, Stuart M., Misherghi, Summer, Kiritchenko, Svetlana, Mishra, Swaroop, Linzen, Tal, Schuster, Tal, Li, Tao, Yu, Tao, Ali, Tariq, Hashimoto, Tatsu, Wu, Te-Lin, Desbordes, Théo, Rothschild, Theodore, Phan, Thomas, Wang, Tianle, Nkinyili, Tiberius, Schick, Timo, Kornev, Timofei, Tunduny, Titus, Gerstenberg, Tobias, Chang, Trenton, Neeraj, Trishala, Khot, Tushar, Shultz, Tyler, Shaham, Uri, Misra, Vedant, Demberg, Vera, Nyamai, Victoria, Raunak, Vikas, Ramasesh, Vinay, Prabhu, Vinay Uday, Padmakumar, Vishakh, Srikumar, Vivek, Fedus, William, Saunders, William, Zhang, William, Vossen, 
Wout, Ren, Xiang, Tong, Xiaoyu, Zhao, Xinran, Wu, Xinyi, Shen, Xudong, Yaghoobzadeh, Yadollah, Lakretz, Yair, Song, Yangqiu, Bahri, Yasaman, Choi, Yejin, Yang, Yichi, Hao, Yiding, Chen, Yifu, Belinkov, Yonatan, Hou, Yu, Hou, Yufang, Bai, Yuntao, Seid, Zachary, Zhao, Zhuoye, Wang, Zijian, Wang, Zijie J., Wang, Zirui, Wu, Ziyi
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
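As a rough sketch of evaluating a generative model on one BIG-bench JSON task: the examples/input/target layout below reflects the common JSON task format in the BIG-bench repository, but individual tasks vary, so treat both the schema and the exact-match scoring as assumptions.

    # Rough sketch: exact-match scoring of a generative model on one BIG-bench
    # JSON task. The "examples"/"input"/"target" fields follow the common JSON
    # task layout, but tasks vary; `generate` is a hypothetical model callable.
    import json
    from typing import Callable

    def score_json_task(path: str, generate: Callable[[str], str]) -> float:
        with open(path) as f:
            task = json.load(f)
        examples = task["examples"]
        hits = 0
        for example in examples:
            prediction = generate(example["input"]).strip()
            target = example["target"]
            targets = target if isinstance(target, list) else [target]
            hits += any(prediction == t.strip() for t in targets)
        return hits / len(examples)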