Word Sense Induction with Knowledge Distillation from BERT
Anik Saha, Alex Gittens, Bülent Yener
arXiv.org Artificial Intelligence
Bülent Yener, Department of Computer Science, Rensselaer Polytechnic Institute, 110 8th St, Troy, NY, USA. yener@cs.rpi.edu

Pre-trained contextual language models are ubiquitously employed for language understanding tasks, but are unsuitable for resource-constrained systems. Noncontextual word embeddings are an efficient alternative in these settings. Such methods typically use one vector to encode multiple different meanings of a word, and incur errors due to polysemy. This paper proposes a two-stage method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in a context and transferring this sense information to fit multi-sense embeddings in a skip-gram-like framework. We demonstrate an effective approach to training the sense disambiguation mechanism in our model with a distribution over word senses extracted from the output-layer embeddings of BERT. Experiments on contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings on multiple benchmark data sets, and experiments with an embedding-based topic model (ETM) demonstrate the benefits of using this multi-sense embedding in a downstream application.

While modern deep contextual word embeddings have dramatically improved the state of the art in natural language understanding (NLU) tasks, shallow noncontextual representations of words are a more practical solution in settings constrained by compute power or latency. In single-sense embeddings such as word2vec or GloVe, the different meanings of a word are represented by the same vector, which leads to the meaning conflation problem in the presence of polysemy.
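To make the abstract's two-stage idea concrete, the sketch below shows one plausible way to wire up such a model in PyTorch: each word gets K sense vectors, dot-product attention selects senses against a context summary, a skip-gram-style term fits the pooled sense vector to the context words, and a KL term distills the attention toward a teacher sense distribution (in the paper, derived from BERT's output-layer embeddings). This is a minimal sketch under those assumptions; all class and variable names are illustrative, and the paper's actual architecture, objective, and negative-sampling details are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSenseSkipGram(nn.Module):
    """Illustrative sketch (not the paper's exact model): K sense vectors per
    word, attention over senses conditioned on the context, and distillation
    toward a teacher sense distribution."""

    def __init__(self, vocab_size, num_senses, dim):
        super().__init__()
        self.sense_emb = nn.Embedding(vocab_size * num_senses, dim)  # centre-word senses
        self.ctx_emb = nn.Embedding(vocab_size, dim)                 # context words
        self.num_senses = num_senses

    def sense_attention(self, word_ids, context_vec):
        # Gather the K sense vectors of each centre word: (B, K, D)
        base = word_ids.unsqueeze(1) * self.num_senses
        idx = base + torch.arange(self.num_senses, device=word_ids.device)
        senses = self.sense_emb(idx)
        # Dot-product attention of each sense against the context summary
        scores = torch.einsum("bkd,bd->bk", senses, context_vec)
        attn = F.softmax(scores, dim=-1)                             # (B, K)
        pooled = torch.einsum("bk,bkd->bd", attn, senses)            # (B, D)
        return pooled, attn

    def forward(self, word_ids, context_ids, teacher_sense_dist):
        # Context summary = mean of context-word vectors (one simple choice)
        context_vec = self.ctx_emb(context_ids).mean(dim=1)          # (B, D)
        pooled, attn = self.sense_attention(word_ids, context_vec)
        # Skip-gram-style score of the pooled sense vector vs. context words
        logits = torch.einsum("bd,bnd->bn", pooled, self.ctx_emb(context_ids))
        sg_loss = -F.logsigmoid(logits).mean()  # negative sampling omitted
        # Distillation: pull the sense attention toward the teacher distribution
        kd_loss = F.kl_div(attn.clamp_min(1e-9).log(), teacher_sense_dist,
                           reduction="batchmean")
        return sg_loss + kd_loss

# Toy usage: batch of 2 centre words, 5 context words each, K = 3 senses;
# a uniform placeholder stands in for the BERT-derived teacher distribution.
model = MultiSenseSkipGram(vocab_size=1000, num_senses=3, dim=64)
w = torch.randint(0, 1000, (2,))
c = torch.randint(0, 1000, (2, 5))
t = torch.full((2, 3), 1.0 / 3)
loss = model(w, c, t)
loss.backward()
```

In the paper's framing, the first stage produces the teacher distribution over senses from BERT and the second stage fits the noncontextual multi-sense embeddings; the single combined loss above merely compresses both roles into one sketch.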
Apr-20-2023