Saha, Anik
A Cross-Domain Evaluation of Approaches for Causal Knowledge Extraction
Saha, Anik, Hassanzadeh, Oktie, Gittens, Alex, Ni, Jian, Srinivas, Kavitha, Yener, Bulent
Causal knowledge extraction is the task of extracting relevant causes and effects from text by detecting the causal relation. Although this task is important for language understanding and knowledge discovery, recent work in this domain has largely focused on binary classification of a text segment as causal or non-causal. To address this, we perform a thorough analysis of three sequence tagging models for causal knowledge extraction and compare them with a span-based approach to causality extraction. Our experiments show that embeddings from pre-trained language models (e.g., BERT) provide a significant performance boost on this task compared to previous state-of-the-art models with complex architectures. We observe that span-based models perform better than simple sequence tagging models based on BERT across all four data sets from diverse domains with different types of cause-effect phrases.
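To make the sequence-tagging formulation concrete, here is a minimal sketch of a BERT-based tagger for cause/effect spans. The BIO tag set (B-C/I-C for cause, B-E/I-E for effect, O), the single linear head, and the model name are illustrative assumptions, not the exact architectures evaluated in the paper.

```python
# Minimal sketch: tag each token as part of a cause span, an effect span, or neither,
# using contextual embeddings from a pre-trained encoder (assumed: bert-base-uncased).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TAGS = ["O", "B-C", "I-C", "B-E", "I-E"]  # assumed BIO scheme for cause/effect spans

class CausalTagger(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, len(TAGS))

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from the pre-trained language model.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_tags)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = CausalTagger()
batch = tokenizer(["The storm caused widespread flooding."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
pred_tags = [TAGS[i] for i in logits.argmax(-1)[0].tolist()]
```

A span-based alternative would instead score candidate (start, end) token pairs as cause or effect spans rather than labeling tokens independently.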
Word Sense Induction with Knowledge Distillation from BERT
Saha, Anik, Gittens, Alex, Yener, Bulent
Pre-trained contextual language models are ubiquitously employed for language understanding tasks, but are unsuitable for resource-constrained systems. Noncontextual word embeddings are an efficient alternative in these settings. Such methods typically use one vector to encode multiple different meanings of a word, and incur errors due to polysemy. This paper proposes a two-stage method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in a context and transferring this sense information to fit multi-sense embeddings in a skip-gram-like framework. We demonstrate an effective approach to training the sense disambiguation mechanism in our model with a distribution over word senses extracted from the output layer embeddings of BERT. Experiments on the contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings on multiple benchmark data sets, and experiments with an embedding-based topic model (ETM) demonstrate the benefits of using this multi-sense embedding in a downstream application.

While modern deep contextual word embeddings have dramatically improved the state-of-the-art in natural language understanding (NLU) tasks, shallow noncontextual representations of words are a more practical solution in settings constrained by compute power or latency. In single-sense embeddings such as word2vec or GloVe, the different meanings of a word are represented by the same vector, which leads to the meaning conflation problem in the presence of polysemy.
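The sketch below illustrates the general shape of the approach described in the abstract: each word keeps several sense vectors, a context-conditioned attention produces a distribution over those senses, that distribution is matched to a teacher distribution derived from BERT, and the embeddings themselves are trained with a skip-gram-style objective. The number of senses, the attention form, the loss weighting, and the omission of negative sampling are all simplifying assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch of multi-sense skip-gram with sense attention and distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSenseSkipGram(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 300, num_senses: int = 3):
        super().__init__()
        self.senses = nn.Embedding(vocab_size * num_senses, dim)  # K sense vectors per word
        self.context = nn.Embedding(vocab_size, dim)              # one context vector per word
        self.num_senses = num_senses

    def sense_attention(self, word_ids, context_ids):
        # Attention of the averaged context vector over the word's K sense vectors.
        K, d = self.num_senses, self.context.embedding_dim
        sense_vecs = self.senses(
            word_ids.unsqueeze(-1) * K + torch.arange(K, device=word_ids.device))  # (B, K, d)
        ctx = self.context(context_ids).mean(dim=1, keepdim=True)                  # (B, 1, d)
        attn = F.softmax((sense_vecs * ctx).sum(-1) / d ** 0.5, dim=-1)            # (B, K)
        return sense_vecs, attn

    def forward(self, word_ids, context_ids, teacher_probs):
        # teacher_probs: (B, K) sense distribution extracted from BERT (assumed given).
        sense_vecs, attn = self.sense_attention(word_ids, context_ids)
        target = (attn.unsqueeze(-1) * sense_vecs).sum(1)                          # expected sense vector
        # Skip-gram-style score of the disambiguated word against its context words
        # (positives only; negative sampling omitted for brevity).
        sg_logits = torch.bmm(self.context(context_ids), target.unsqueeze(-1)).squeeze(-1)
        sg_loss = F.binary_cross_entropy_with_logits(sg_logits, torch.ones_like(sg_logits))
        # Distillation: fit the sense attention to the BERT-derived distribution.
        kd_loss = F.kl_div(attn.log(), teacher_probs, reduction="batchmean")
        return sg_loss + kd_loss
```

In this sketch the teacher distribution `teacher_probs` stands in for the sense probabilities extracted from BERT's output layer embeddings; how those are obtained is the first stage of the method and is not reproduced here.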