Goto

Collaborating Authors

 Tang, Duyu


User Modeling with Neural Network for Review Rating Prediction

AAAI Conferences

We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically capture only the semantics of texts but ignore the user who expresses the sentiment. This is undesirable for review rating prediction, as each user influences how the textual content of a review should be interpreted. For example, the same word (e.g., "good") might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a given word. Specifically, we extend lexical semantic composition models and introduce a user-word composition vector model (UWCVM), which effectively captures how a user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method, which shows superior performance over several strong baseline methods.
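The abstract describes the user as a function applied to word representations before composition. A minimal sketch of that idea is given below, assuming PyTorch-style modules; the low-rank parameterization, class name, and hyperparameters are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class UserWordComposition(nn.Module):
    """Sketch: each user owns a (low-rank) matrix that transforms word
    embeddings before they are composed into a review representation."""

    def __init__(self, n_users, n_words, dim, rank=3, n_ratings=5):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        # per-user low-rank transform: U_k = A_k @ B_k + I (assumed form)
        self.user_left = nn.Embedding(n_users, dim * rank)
        self.user_right = nn.Embedding(n_users, rank * dim)
        self.classifier = nn.Linear(dim, n_ratings)
        self.dim, self.rank = dim, rank

    def forward(self, user_ids, word_ids):
        # word_ids: (batch, seq_len); user_ids: (batch,)
        e = self.word_emb(word_ids)                                  # (B, T, d)
        A = self.user_left(user_ids).view(-1, self.dim, self.rank)   # (B, d, r)
        B = self.user_right(user_ids).view(-1, self.rank, self.dim)  # (B, r, d)
        U = torch.bmm(A, B) + torch.eye(self.dim)                    # (B, d, d)
        # apply the user-specific matrix to every word vector
        h = torch.tanh(torch.einsum('bij,btj->bti', U, e))           # (B, T, d)
        # average the user-modified word vectors and predict a rating
        return self.classifier(h.mean(dim=1))                        # (B, n_ratings)
```

In this reading, the user matrix plays the role of the composition function over word vectors, and the resulting review vector feeds a standard supervised rating classifier; the averaging step stands in for whatever composition the paper actually uses.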


Modeling Mention, Context and Entity with Neural Networks for Entity Disambiguation

AAAI Conferences

Given a query consisting of a mention (name string) and a background document, entity disambiguation calls for linking the mention to an entity in a reference knowledge base such as Wikipedia. Existing studies typically use hand-crafted features to represent the mention, context, and entity, which is labor-intensive and weak at discovering the explanatory factors of the data. In this paper, we address this problem by presenting a new neural network approach. The model takes into consideration the semantic representations of the mention, context, and entity, encodes them in a continuous vector space, and effectively leverages them for entity disambiguation. Specifically, we model variable-sized contexts with a convolutional neural network and embed the positions of context words to factor in the distance between a context word and the mention. Furthermore, we employ a neural tensor network to model the semantic interactions between context and mention. We conduct experiments on entity disambiguation with two benchmark datasets from TAC-KBP 2009 and 2010. Experimental results show that our method yields state-of-the-art performance on both datasets.
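The two components named in the abstract, a CNN over the context with position embeddings and a neural tensor layer for the context-mention interaction, can be sketched roughly as below. This is an illustrative PyTorch-style reconstruction under assumed shapes and hyperparameters; the class, parameter names, and the scoring head are not taken from the paper.

```python
import torch
import torch.nn as nn

class ContextMentionScorer(nn.Module):
    """Sketch: encode a variable-length context with a CNN whose inputs add
    position (distance-to-mention) embeddings, then score the interaction
    between the pooled context vector and the mention vector with a
    bilinear tensor layer."""

    def __init__(self, vocab_size, dim=100, max_dist=50,
                 n_filters=100, kernel=3, slices=4):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        # distance clipped to [-max_dist, max_dist] and shifted to be non-negative
        self.dist_emb = nn.Embedding(2 * max_dist + 1, dim)
        self.conv = nn.Conv1d(dim, n_filters, kernel_size=kernel,
                              padding=kernel // 2)
        # neural tensor layer: one bilinear slice per interaction feature
        self.tensor = nn.Parameter(torch.randn(slices, n_filters, dim) * 0.01)
        self.out = nn.Linear(slices, 1)

    def forward(self, context_ids, distances, mention_vec):
        # context_ids, distances: (B, T); mention_vec: (B, dim)
        x = self.word_emb(context_ids) + self.dist_emb(distances)   # (B, T, d)
        h = torch.relu(self.conv(x.transpose(1, 2)))                # (B, F, T)
        c = h.max(dim=2).values                                     # max-pool -> (B, F)
        # bilinear interaction c^T W_s m for each tensor slice s
        inter = torch.einsum('bf,sfd,bd->bs', c, self.tensor, mention_vec)
        return self.out(torch.tanh(inter)).squeeze(-1)              # score per candidate
```

At inference time one would score each candidate entity (or candidate mention representation) and rank by the resulting scores; how the mention and entity vectors themselves are built is left out of this sketch.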