How Smart is BERT? Evaluating the Language Model's Commonsense Knowledge
In the new paper Does BERT Solve Commonsense Task via Commonsense Knowledge?, a team of researchers from Westlake University, Fudan University and Microsoft Research Asia dives deep into the large language model to discover how it encodes the structured commonsense knowledge it leverages on downstream commonsense tasks.

The proven successes of pretrained language models such as BERT on various downstream tasks have stimulated research into the linguistic knowledge inside these models. Previous studies have revealed shallow syntactic, semantic and word-sense knowledge in BERT; however, the question of how BERT deals with commonsense tasks has remained relatively unexamined.

CommonsenseQA is a multiple-choice question answering dataset built upon the CONCEPTNET knowledge graph. The researchers extracted multiple target concepts sharing the same semantic relation to a single source concept from CONCEPTNET, so that each question has one of three target concepts as its correct answer. For example, "bird" is the source concept in the question "Where does a wild bird usually live?" and "countryside" is the correct answer among the candidate target concepts "cage," "windowsill," and "countryside."
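As a rough illustration of this format, the example above could be represented and paired with each candidate answer as follows. This is a minimal sketch; the field names and the `to_choice_inputs` helper are hypothetical and not CommonsenseQA's actual schema.

```python
# Illustrative sketch of a CommonsenseQA-style example built from CONCEPTNET.
# Field names are hypothetical, not the dataset's actual schema.
example = {
    "source_concept": "bird",                          # shared CONCEPTNET source concept
    "question": "Where does a wild bird usually live?",
    "choices": ["cage", "windowsill", "countryside"],  # target concepts with the same relation
    "answer": "countryside",                           # the correct target concept
}

def to_choice_inputs(ex):
    """Pair the question with each candidate answer, roughly how a
    multiple-choice model such as BERT would consume the example."""
    return [f"{ex['question']} [SEP] {choice}" for choice in ex["choices"]]

for text in to_choice_inputs(example):
    print(text)
```

A multiple-choice model would typically score each question-choice pair and select the highest-scoring candidate as its answer.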