A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations
Wan, Shengxian (Chinese Academy of Sciences) | Lan, Yanyan (Chinese Academy of Sciences) | Guo, Jiafeng (Chinese Academy of Sciences) | Xu, Jun (Chinese Academy of Sciences) | Pang, Liang (Chinese Academy of Sciences) | Cheng, Xueqi (Chinese Academy of Sciences)
Matching natural language sentences is central to many applications such as information retrieval and question answering. Existing deep models rely on a single sentence representation or multiple granularity representations for matching. However, such methods cannot adequately capture the contextualized local information in the matching process. To tackle this problem, we present a new deep architecture that matches two sentences with multiple positional sentence representations. Specifically, each positional sentence representation is a representation of the sentence at a given position, generated by a bidirectional long short-term memory network (Bi-LSTM). The matching score is then produced by aggregating interactions between these different positional sentence representations, through k-Max pooling and a multi-layer perceptron. Our model has several advantages: (1) by using a Bi-LSTM, rich context from the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) by matching with multiple positional sentence representations, the model can flexibly aggregate the different important pieces of contextualized local information in a sentence to support the matching; (3) experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.
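The matching pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: random vectors stand in for the learned Bi-LSTM positional representations, cosine similarity is used as the interaction function (the paper also considers other interaction tensors), and all function names and dimensions are illustrative assumptions.

```python
import numpy as np

def cosine_interaction(A, B):
    """Interaction matrix between positional representations.
    A: (m, d) positional representations of sentence 1.
    B: (n, d) positional representations of sentence 2.
    Returns an (m, n) matrix of cosine similarities."""
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A_n @ B_n.T

def k_max_pool(sim, k):
    """Keep the k strongest local interactions, in descending order."""
    return np.sort(sim.ravel())[::-1][:k]

def mlp_score(features, W1, b1, w2, b2):
    """One-hidden-layer perceptron producing a scalar matching score."""
    h = np.tanh(features @ W1 + b1)
    return float(h @ w2 + b2)

rng = np.random.default_rng(0)
d, k = 8, 5
# Stand-ins for Bi-LSTM outputs at each position (hypothetical shapes).
S1 = rng.normal(size=(6, d))   # sentence 1: 6 positions
S2 = rng.normal(size=(7, d))   # sentence 2: 7 positions
feats = k_max_pool(cosine_interaction(S1, S2), k)
score = mlp_score(feats, rng.normal(size=(k, 4)), np.zeros(4),
                  rng.normal(size=4), 0.0)
```

In a trained model the positional representations would come from running a Bi-LSTM over the two sentences, and the interaction and MLP parameters would be learned end to end from the matching supervision.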
Apr-19-2016