
Sentiment Analysis by Joint Learning of Word Embeddings and Classifier

arXiv.org Machine Learning

Word embeddings are representations of individual words of a text document in a vector space, and they are often useful for performing natural language processing tasks. Current state-of-the-art algorithms for learning word embeddings learn vector representations from large corpora of text documents in an unsupervised fashion. This paper introduces SWESA (Supervised Word Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via word embeddings. SWESA leverages document label information to learn vector representations of words from a modest corpus of text documents, by solving an optimization problem that minimizes a cost function with respect to both the word embeddings and the classifier. Analysis reveals that SWESA provides an efficient way of estimating the dimension of the word embeddings to be learned. Experiments on several real-world data sets show that SWESA outperforms previously suggested approaches on word embedding and sentiment analysis tasks.
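The abstract describes minimizing a single cost over both the word embeddings and the classifier. As a rough illustration of that joint-learning idea, here is a minimal NumPy sketch that takes gradient steps on a regularized logistic loss with respect to both an embedding matrix and a classifier weight vector; the averaged bag-of-words document representation, the update rule, and all variable names are assumptions made for illustration, not the paper's actual SWESA formulation.

```python
# Illustrative joint learning of word embeddings U and classifier weights w
# by gradient descent on a regularized logistic loss. A toy sketch of the
# joint-objective idea only, not the SWESA algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
V, d, n = 1000, 50, 200              # vocabulary size, embedding dim, #documents
X = rng.random((n, V))               # toy bag-of-words weights per document
X /= X.sum(axis=1, keepdims=True)    # each document is a distribution over words
y = rng.choice([-1.0, 1.0], size=n)  # toy sentiment labels

U = 0.1 * rng.standard_normal((V, d))  # word embeddings, one row per word
w = 0.1 * rng.standard_normal(d)       # linear classifier weights
lr, lam = 0.1, 1e-3

for _ in range(300):
    D = X @ U                                # document embeddings (word averages)
    margins = np.clip(y * (D @ w), -30, 30)
    p = 1.0 / (1.0 + np.exp(margins))        # logistic-loss gradient factor
    grad_w = -(p * y) @ D / n + lam * w                # gradient w.r.t. classifier
    grad_U = -X.T @ np.outer(p * y, w) / n + lam * U   # gradient w.r.t. embeddings
    w -= lr * grad_w
    U -= lr * grad_U

print("train accuracy:", np.mean(np.sign((X @ U) @ w) == y))
```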


Sentiment Classification with Natural Language Processing on LSTM

#artificialintelligence

To start, we take a look at how Latent Semantic Analysis (LSA) is used in Natural Language Processing to analyze relationships between a set of documents and the terms they contain; LSA itself is an unsupervised way of uncovering synonyms in a collection of documents. Then we go a few steps further to analyze and classify sentiment, reviewing chi-squared feature selection along the way. We will use Recurrent Neural Networks, and in particular LSTMs, to perform sentiment analysis in Keras. Since text is the most unstructured form of all the available data, various types of noise are present in it, and the data is not readily analyzable without pre-processing.
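As a concrete reference for the LSTM part of the walkthrough, below is a minimal Keras sentiment classifier; the built-in IMDB data, vocabulary size, and hyperparameters are illustrative choices rather than the post's exact setup.

```python
# Minimal Keras LSTM for binary sentiment classification. Dataset and
# hyperparameters are placeholder choices for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

vocab_size, maxlen = 10000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=maxlen)  # pad/truncate reviews to one length
x_test = pad_sequences(x_test, maxlen=maxlen)

model = models.Sequential([
    layers.Embedding(vocab_size, 64),       # learn word embeddings from scratch
    layers.LSTM(64),                        # recurrent encoder over the review
    layers.Dense(1, activation="sigmoid"),  # probability the review is positive
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=2, validation_split=0.2)
print(model.evaluate(x_test, y_test))
```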


Using Sentiment Representation Learning to Enhance Gender Classification for User Profiling

arXiv.org Artificial Intelligence

User profiling means exploiting machine learning to predict attributes of users, such as demographic attributes, hobby attributes, and preference attributes; it is a powerful data foundation for precision marketing. Existing methods mainly study network behavior, personal preferences, and post texts to build user profiles. Through our data analysis of micro-blog posts, we find that females express more positive and richer emotions than males on online social platforms. This difference is very conducive to distinguishing between genders. Therefore, we argue that sentiment context is important for user profiling as well. This paper focuses on exploiting microblog user posts to predict one of the demographic labels: gender. We propose a Sentiment Representation Learning based Multi-Layer Perceptron (SRL-MLP) model to classify gender. First, we build a sentiment polarity classifier in advance by training a Long Short-Term Memory (LSTM) model on an e-commerce review corpus. Next, we transfer the sentiment representation to a basic MLP network. Last, we conduct experiments on gender classification using the sentiment representation. Experimental results show that our approach can improve gender classification accuracy by 5.53%, from 84.20% to 89.73%.
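A hedged sketch of the two-stage transfer described in the abstract: pretrain an LSTM sentiment encoder, then freeze it and train an MLP gender classifier on its representation. The layer sizes, the layer name sentiment_repr, and the Keras framing are assumptions for illustration; the paper's exact SRL-MLP architecture may differ.

```python
# Two-stage transfer: reuse a pretrained LSTM sentiment encoder as a fixed
# feature extractor, then train an MLP on those features for gender.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, maxlen = 20000, 100

# Stage 1: sentiment encoder, assumed pretrained on e-commerce reviews.
inputs = layers.Input(shape=(maxlen,))
h = layers.Embedding(vocab_size, 128)(inputs)
h = layers.LSTM(64, name="sentiment_repr")(h)            # sentiment representation
sentiment_out = layers.Dense(1, activation="sigmoid")(h)
sentiment_model = models.Model(inputs, sentiment_out)
# sentiment_model.fit(review_ids, polarity_labels) would run here.

# Stage 2: freeze the encoder, stack an MLP gender classifier on top.
encoder = models.Model(inputs, sentiment_model.get_layer("sentiment_repr").output)
encoder.trainable = False
g = layers.Dense(64, activation="relu")(encoder.output)
gender_out = layers.Dense(1, activation="sigmoid")(g)    # male/female probability
gender_model = models.Model(encoder.input, gender_out)
gender_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# gender_model.fit(post_ids, gender_labels) on microblog posts would follow.
```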


Sentiment Analysis of Movie Reviews (3): doc2vec

@machinelearnbot

This is the last – for now – installment of my mini-series on sentiment analysis of the Stanford collection of IMDB reviews (originally published on recurrentnull.wordpress.com). So far, we've had a look at classical bag-of-words models and word vectors (word2vec). We saw that of the classifiers used, logistic regression performed best, whether combined with bag-of-words or with word2vec. We also saw that while the word2vec model did in fact capture semantic dimensions, it was less successful for classification than bag-of-words, and we attributed that to the averaging of word vectors we had to perform to obtain input features at the review (rather than word) level. So the question now is: how would distributed representations perform if we did not have to throw away information by averaging word vectors?
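For comparison with the averaging approach, here is a minimal gensim Doc2Vec sketch that learns one paragraph vector per review and feeds those vectors to logistic regression, the classifier that performed best earlier in the series; the toy corpus and hyperparameters are placeholders, not the series' actual setup.

```python
# Learn one vector per review directly (no word-vector averaging), then
# classify reviews with logistic regression on those vectors.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

reviews = [("a great and moving film", 1),
           ("dull plot and wooden acting", 0),
           ("loved every minute of it", 1),
           ("a complete waste of time", 0)]

docs = [TaggedDocument(text.split(), [i]) for i, (text, _) in enumerate(reviews)]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)  # paragraph vectors

X = [model.dv[i] for i in range(len(reviews))]  # one learned vector per review
y = [label for _, label in reviews]
clf = LogisticRegression().fit(X, y)            # same classifier family as before
print(clf.predict(X))
```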

