Hierarchical Attention Generative Adversarial Networks for Cross-domain Sentiment Classification

arXiv.org Machine Learning

Cross-domain sentiment classification (CDSC) is an important task in domain adaptation and sentiment classification. Due to domain discrepancy, a sentiment classifier trained on source domain data may not work well on target domain data. In recent years, many researchers have applied deep neural network models to the cross-domain sentiment classification task, many of which use a Gradient Reversal Layer (GRL) to build an adversarial network structure that trains a domain-shared sentiment classifier. Different from those methods, we propose Hierarchical Attention Generative Adversarial Networks (HAGAN), which alternately train a generator and a discriminator in order to produce a document representation that is sentiment-distinguishable but domain-indistinguishable. Besides, the HAGAN model applies a Bidirectional Gated Recurrent Unit (Bi-GRU) to encode the contextual information of words and sentences into the document representation. In addition, the HAGAN model uses a hierarchical attention mechanism to optimize the document representation and automatically capture the pivots and non-pivots. Experiments on the Amazon review dataset show the effectiveness of HAGAN.
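The Gradient Reversal Layer that this abstract contrasts with HAGAN's alternating GAN training can be sketched very compactly: the forward pass is the identity, while the backward pass negates (and scales) the incoming gradient, so the feature extractor is pushed to maximize the domain classifier's loss. The class and parameter names below are illustrative, not taken from the paper.

```python
import numpy as np

class GradientReversal:
    """Minimal sketch of a Gradient Reversal Layer (GRL)."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight, often annealed during training

    def forward(self, x):
        # Identity in the forward direction: features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reversed, scaled gradient: the upstream feature extractor is
        # trained to *confuse* the downstream domain classifier.
        return -self.lam * grad_output


grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                  # identical to x
g = grl.backward(np.ones_like(x))   # sign-flipped, scaled by lam
```

In a GRL-based architecture this layer sits between the shared encoder and the domain classifier; HAGAN instead obtains the same "domain-indistinguishable" pressure by alternating generator and discriminator updates.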


Linking Heterogeneous Input Features with Pivots for Domain Adaptation

AAAI Conferences

Sentiment classification aims to automatically predict the sentiment polarity (e.g., positive or negative) of user-generated sentiment data (e.g., reviews, blogs). In real applications, such user-generated sentiment data can span so many different domains that it is difficult to manually label training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classification, where a system trained using labeled reviews from a source domain is deployed to classify the sentiments of reviews in a different target domain. In this paper, we propose to link heterogeneous input features with pivots via joint non-negative matrix factorization. This is achieved by learning the domain-specific information from different domains into unified topics, with the help of pivots across all domains. We conduct experiments on a benchmark composed of reviews of four types of Amazon products. Experimental results show that our proposed approach significantly outperforms the baseline method and achieves an accuracy competitive with the state-of-the-art methods for sentiment classification adaptation.
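The factorization machinery this paper builds on can be illustrated with plain non-negative matrix factorization: a term-document (or feature) matrix V is approximated as W @ H with both factors non-negative, so the rows of H behave like topics. The sketch below uses the classic multiplicative update rules on random data; the paper's *joint* NMF with pivot constraints across domains is more involved, and all names here are illustrative.

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9):
    # Basic multiplicative-update NMF: V (m x n) ~= W (m x k) @ H (k x n),
    # with W, H kept non-negative throughout. Updates monotonically
    # decrease the Frobenius reconstruction error.
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative "feature" matrix standing in for review features.
V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf(V, k=3)
```

In the joint setting described by the abstract, the shared pivot features tie the factorizations of the different domains together so that the learned topics are aligned across domains.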


Structural Correspondence Learning for Cross-lingual Sentiment Classification with One-to-many Mappings

arXiv.org Machine Learning

Structural correspondence learning (SCL) is an effective method for cross-lingual sentiment classification. This approach uses unlabeled documents along with a word translation oracle to automatically induce task-specific, cross-lingual correspondences. It transfers knowledge by identifying important features, i.e., pivot features. For simplicity, however, it assumes that the word translation oracle maps each pivot feature in the source language to exactly one word in the target language. This one-to-one mapping between words in different languages is too strict, and context is not considered at all. In this paper, we propose a cross-lingual SCL based on distributed representations of words; it can learn meaningful one-to-many mappings for pivot words using large amounts of monolingual data and a small dictionary. We conduct experiments on the NLP&CC 2013 cross-lingual sentiment analysis dataset, employing English as the source language and Chinese as the target language. Our method does not rely on parallel corpora, and the experimental results show that our approach is more competitive than the state-of-the-art methods in cross-lingual sentiment classification.
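The one-to-many mapping idea can be sketched as a nearest-neighbour lookup in a shared embedding space: instead of a single oracle translation per pivot, rank target-language words by cosine similarity to the pivot's vector and keep the top k. The vocabulary and two-dimensional vectors below are fabricated purely for illustration.

```python
import numpy as np

def top_k_translations(pivot_vec, target_vocab, target_vecs, k=2):
    # Cosine similarity between the pivot embedding and every
    # target-language word embedding; return the k best matches.
    sims = target_vecs @ pivot_vec / (
        np.linalg.norm(target_vecs, axis=1) * np.linalg.norm(pivot_vec))
    order = np.argsort(-sims)[:k]
    return [target_vocab[i] for i in order]

# Toy target vocabulary with made-up embeddings: the first two words are
# meant to be near-synonyms of the pivot, the last two antonyms.
target_vocab = ["hao", "bang", "huai", "cha"]
target_vecs = np.array([[0.9, 0.1], [0.8, 0.2], [-0.9, 0.1], [-0.7, 0.3]])
pivot_vec = np.array([1.0, 0.0])  # stand-in embedding for a pivot like "good"

translations = top_k_translations(pivot_vec, target_vocab, target_vecs, k=2)
```

With real monolingual embeddings and a small seed dictionary, the same ranking yields context-aware, one-to-many pivot correspondences rather than a single rigid translation.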


Structural Correspondence Learning for Cross-Lingual Sentiment Classification with One-to-Many Mappings

AAAI Conferences

Structural correspondence learning (SCL) is an effective method for cross-lingual sentiment classification. This approach uses unlabeled documents along with a word translation oracle to automatically induce task-specific, cross-lingual correspondences. It transfers knowledge by identifying important features, i.e., pivot features. For simplicity, however, it assumes that the word translation oracle maps each pivot feature in the source language to exactly one word in the target language. This one-to-one mapping between words in different languages is too strict, and context is not considered at all. In this paper, we propose a cross-lingual SCL based on distributed representations of words; it can learn meaningful one-to-many mappings for pivot words using large amounts of monolingual data and a small dictionary. We conduct experiments on the NLP&CC 2013 cross-lingual sentiment analysis dataset, employing English as the source language and Chinese as the target language. Our method does not rely on parallel corpora, and the experimental results show that our approach is more competitive than the state-of-the-art methods in cross-lingual sentiment classification.


Feature Ensemble Plus Sample Selection: Domain Adaptation for Sentiment Classification (Extended Abstract)

AAAI Conferences

The domain adaptation problem arises often in the field of sentiment classification. There are two distinct needs in domain adaptation, namely labeling adaptation and instance adaptation. Most current research focuses on the former and neglects the latter. In this work, we propose a joint approach, named feature ensemble plus sample selection (SS-FE), which takes both types of adaptation into account. A feature ensemble (FE) model is first proposed to learn a new labeling function in a feature re-weighting manner. Furthermore, a PCA-based sample selection (PCA-SS) method is proposed as an aid to FE for instance adaptation. Experimental results show that the proposed SS-FE approach achieves significant improvements over FE and PCA-SS individually, due to its comprehensive consideration of both labeling adaptation and instance adaptation.
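The instance-adaptation side of this approach can be sketched with a simple PCA-based selection rule: fit a principal subspace on target-domain features and keep the source samples that the subspace reconstructs best, on the assumption that those are the most target-like. This illustrates the general principle only, not the paper's exact PCA-SS procedure; all data below is synthetic.

```python
import numpy as np

def select_samples(source, target, n_components=1, keep=2):
    # Principal directions of the target domain via SVD of centered data.
    mu = target.mean(axis=0)
    _, _, Vt = np.linalg.svd(target - mu, full_matrices=False)
    P = Vt[:n_components]                 # (n_components, d) projection basis
    # Reconstruct each source sample inside the target subspace and
    # measure how much is lost; low error means "looks like target data".
    recon = (source - mu) @ P.T @ P + mu
    err = np.linalg.norm(source - recon, axis=1)
    return np.argsort(err)[:keep]         # indices of most target-like samples

# Target domain lies on the line y = x; two source points are near that
# line, one is far off it and should be rejected by the selection step.
target = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
source = np.array([[1.5, 1.5], [0.5, 0.4], [3.0, -3.0]])
selected = select_samples(source, target)
```

The selected instances would then be re-weighted or used as training data alongside the feature-ensemble labeling function in the joint SS-FE setup.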