A Gated Self-attention Memory Network for Answer Selection

Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara

arXiv.org Artificial Intelligence 

Previous deep learning approaches to answer selection mainly adopt the Compare-Aggregate architecture, which performs word-level comparison followed by aggregation. In this work, we depart from the popular Compare-Aggregate architecture and instead propose a new gated self-attention memory network for the task. Combined with a simple transfer learning technique from a large-scale online corpus, our model outperforms previous methods by a large margin, achieving new state-of-the-art results on two standard answer selection datasets: TrecQA and WikiQA.

1 Introduction and Related Work

Answer selection is an important task, with applications in many areas (Lai et al., 2018). Given a question and a set of candidate answers, the task is to identify the most relevant candidate. Previous work on answer selection typically relies on feature engineering, linguistic tools, or external resources (Wang et al., 2007; Wang and Manning, 2010; Heilman and Smith, 2010; Yih et al., 2013; Yao et al., 2013).
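To make the central building block concrete, the following is a minimal PyTorch sketch of a gated self-attention layer of the kind described above. The class name, gating formulation, and dimensions are illustrative assumptions; the memory (multi-hop) component of the full model is omitted, and this is not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfAttention(nn.Module):
    # Self-attention whose output is blended with its input through a
    # learned sigmoid gate (illustrative sketch, not the paper's exact code).
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) token representations, e.g. of the
        # question and candidate answer processed jointly.
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) / (x.size(-1) ** 0.5)
        attended = torch.matmul(F.softmax(scores, dim=-1), v)
        # The gate decides, per dimension, how much attended context to
        # mix into the original representation.
        g = torch.sigmoid(self.gate(torch.cat([x, attended], dim=-1)))
        return g * attended + (1.0 - g) * x

# Example: a batch of 2 sequences of 40 tokens with 300-d embeddings.
block = GatedSelfAttention(dim=300)
out = block(torch.randn(2, 40, 300))  # shape: (2, 40, 300)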
