Scientific and Creative Analogies in Pretrained Language Models
Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova
arXiv.org Artificial Intelligence
This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2. Existing analogy datasets typically focus on a limited set of analogical relations, with high similarity between the two domains across which the analogy holds. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely-used pretrained language models (LMs). We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding.
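The abstract does not specify the evaluation protocol, but a minimal sketch of how one might probe a pretrained LM for cross-domain analogical mappings is shown below. It scores cloze-style completions with GPT-2 via Hugging Face Transformers; the prompt template, candidate set, and scoring are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: scoring analogy completions with GPT-2.
# The prompt template and candidates below are illustrative assumptions,
# not the SCAN paper's evaluation protocol.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Total log-probability GPT-2 assigns to a text string."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token
    return -out.loss.item() * (ids.shape[1] - 1)

# Example cross-domain mapping (solar system -> atom), in the spirit of SCAN.
prompt = "If the sun corresponds to the nucleus, then a planet corresponds to the"
candidates = ["electron", "orbit", "proton", "moon"]

scores = {c: sequence_log_prob(f"{prompt} {c}.") for c in candidates}
print(max(scores, key=scores.get))  # model's preferred completion; "electron" is the intended answer
```

Under this kind of setup, a model is credited with the analogy only if the correct target-domain term outscores the distractors, which is one plausible way to operationalize the low performance reported in the abstract.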
Nov-28-2022