ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction
Mittal, Abhishek, Modi, Ashutosh
–arXiv.org Artificial Intelligence
This paper describes our system for Task 4 of SemEval-2021: Reading Comprehension of Abstract Meaning (ReCAM). We participated in all subtasks, where the main goal was to predict an abstract word missing from a statement. We fine-tuned pre-trained masked language models, namely BERT and ALBERT, and used an ensemble of these as our submitted system for Subtask 1 (ReCAM-Imperceptibility) and Subtask 2 (ReCAM-Nonspecificity). For Subtask 3 (ReCAM-Intersection), we submitted the ALBERT model alone, as it gave the best results. We tried multiple approaches and found that the Masked Language Modeling (MLM) based approach works best.
Apr-4-2021