Inducing brain-relevant bias in natural language processing models
Schwartz, Dan, Toneva, Mariya, Wehbe, Leila
Neural Information Processing Systems
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict brain activity recorded while people read text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants.
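The abstract describes predicting brain activity from language-model representations. As a hedged illustration of the general encoding-model idea (not the paper's actual BERT fine-tuning procedure), the sketch below fits a ridge-regression map from hypothetical sequence features to simulated per-voxel responses and scores it with per-voxel correlation, a common encoding-model metric; all dimensions and data here are synthetic assumptions.

```python
import numpy as np

# Hypothetical setup: 100 text windows, each represented by a 32-dim
# feature vector (standing in for language-model sequence embeddings),
# and simulated fMRI responses for 10 voxels recorded during reading.
rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 100, 32, 10
X = rng.standard_normal((n_samples, n_features))          # language features
W_true = rng.standard_normal((n_features, n_voxels))      # unknown ground truth
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_voxels))

# Ridge-regression encoding model: closed-form linear map from
# language features to voxel activity.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Evaluate with the mean per-voxel correlation between predicted
# and observed activity.
Y_hat = X @ W
corrs = [np.corrcoef(Y_hat[:, v], Y[:, v])[0, 1] for v in range(n_voxels)]
mean_corr = float(np.mean(corrs))
print(f"mean per-voxel correlation: {mean_corr:.3f}")
```

In the paper's setting, the feature extractor itself (BERT) is also updated so that its internal representations become more predictive of brain activity; this sketch keeps the features fixed and only fits the linear readout.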