Stress Test for BERT and Deep Models: Predicting Words from Italian Poetry
Delmonte, Rodolfo, Busetto, Nicolò
–arXiv.org Artificial Intelligence
In this paper we present a set of experiments carried out with BERT on a number of Italian sentences taken from the poetry domain. The experiments are organized around the hypothesis of a very high level of difficulty in predictability at the three levels of linguistic complexity we intend to monitor: the lexical, syntactic and semantic levels. To test this hypothesis we ran the Italian version of BERT on 80 sentences - for a total of 900 tokens - mostly extracted from Italian poetry of the first half of the last century. We then used sentences from the newswire domain containing similar syntactic structures. The results show that the DL model is highly sensitive to the presence of non-canonical structures. However, DL models are also very sensitive to word frequency and to local non-literal compositional meaning effects. This is also apparent in the preference for predicting function over content words, and collocates over infrequent word phrases. In the paper, we focus our attention on BERT's use of subword units for out-of-vocabulary words.

INTRODUCTION

In this paper we report the results of an extremely complex task for BERT: predicting the masked word in sentences extracted from Italian poetry of the beginning of the last century, using the output of the first projection layer of a Deep Learning model, the raw word embeddings. We decided to work on Italian to highlight its differences from English in an extended number of relevant linguistic properties. The underlying hypothesis aims at probing the ability of BERT [1] to predict masked words in increasingly complex contexts. To verify this hypothesis we selected sentences that exhibit two important features of Italian texts: non-canonicity and the presence of words with very low or rare frequency. To better evaluate the impact of these two factors on word predictability, we created a word predictability measure based on a combination of scoring functions for context and for word frequency of (co-)occurrence.
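The exact form of the combined predictability measure is not given here, so the following is only a hypothetical sketch: it assumes a context score (e.g. the model's probability for the masked word) and a relative corpus-frequency score, combined with a geometric mean so that a low value of either factor keeps overall predictability low.

```python
import math

def predictability(context_prob: float, corpus_freq: int, total_tokens: int) -> float:
    """Hypothetical combined predictability score.

    context_prob: model probability assigned to the masked word (0..1).
    corpus_freq:  raw frequency of the word in a reference corpus.
    total_tokens: size of the reference corpus in tokens.
    The geometric-mean combination is an illustrative assumption,
    not the formula used in the paper.
    """
    freq_score = corpus_freq / total_tokens  # relative frequency in [0, 1]
    return math.sqrt(context_prob * freq_score)

# A frequent, well-predicted word scores high; a rare one scores low
# even when the context strongly favours it.
common = predictability(context_prob=0.6, corpus_freq=50_000, total_tokens=1_000_000)
rare = predictability(context_prob=0.6, corpus_freq=5, total_tokens=1_000_000)
```

Under this assumption, a rare poetic word stays hard to predict even in a supportive context, which matches the paper's emphasis on word frequency as a separate factor from context.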
The experiment uses BERT on the assumption that DNNs can be regarded as capable of modeling the behaviour of the human brain in predicting the next word given a sentence and a text corpus - but see the following section. The paradigmatic and syntagmatic properties of words in a sentence are usually tested separately.
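BERT's handling of out-of-vocabulary words, mentioned above, relies on WordPiece segmentation: an unknown word is split greedily into the longest vocabulary subwords, with continuation pieces marked by "##". A minimal sketch of this mechanism follows; the vocabulary and the example word are illustrative, not BERT's actual Italian vocabulary.

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece-style segmentation.

    Non-initial pieces carry the "##" continuation prefix, as in BERT.
    Returns [unk] when no segmentation is possible.
    """
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # mark word-internal pieces
            if candidate in vocab:
                piece = candidate
                break
            end -= 1  # shorten the candidate and retry
        if piece is None:
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

# Illustrative vocabulary for a rare poetic word.
vocab = {"rim", "##embr", "##ar", "##e"}
print(wordpiece_tokenize("rimembrare", vocab))
# ['rim', '##embr', '##ar', '##e']
```

The prediction for a masked OOV word is thus distributed over several subword units rather than a single token, which is why their frequency and segmentation matter for the experiments described here.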
Jan-21-2023