Targeting Sentiment Expressions through Supervised Ranking of Linguistic Configurations

AAAI Conferences

User generated content is extremely valuable for mining market intelligence because it is unsolicited. We study the problem of analyzing users' sentiment and opinion in their blog, message board, and similar posts with respect to topics expressed as a search query. In the scenario we consider, the matches of the search query terms are expanded through coreference and meronymy to produce a set of mentions. The mentions are contextually evaluated for sentiment and their scores are aggregated (using a data structure we introduce, called the sentiment propagation graph) to produce an aggregate score for the input entity. A crucial part of the contextual evaluation of individual mentions is finding which sentiment expressions are semantically related to (target) which mentions; this is the focus of our paper. We present an approach where potential target mentions for a sentiment expression are ranked using supervised machine learning (Support Vector Machines), where the main features are the syntactic configurations (typed dependency paths) connecting the sentiment expression and the mention. We have created a large English corpus of product discussion blogs annotated with semantic types of mentions, coreference, meronymy, and sentiment targets. The corpus shows that coreference and meronymy are not marginal phenomena but are central to determining the overall sentiment for the top-level entity. We evaluate a number of techniques for sentiment targeting and present results which we believe push the current state of the art.
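
The core targeting step can be illustrated with a short sketch: each (sentiment expression, candidate mention) pair is represented by the typed dependency path connecting them, and an SVM scores the candidates so they can be ranked. This is a minimal sketch assuming scikit-learn, with toy paths and labels; it is not the paper's exact feature set or training setup.

```python
# Minimal sketch: rank candidate target mentions for a sentiment expression
# with an SVM over typed-dependency-path features (illustrative, not the
# paper's exact configuration). Paths are precomputed strings such as "amod"
# or "conj/nsubj" connecting the sentiment expression to each candidate.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Toy training data: (dependency path, 1 if the candidate is the true target).
train = [
    ("amod", 1), ("nsubj", 1), ("dobj", 1),
    ("conj/nsubj", 0), ("prep_of/pobj", 0), ("advmod", 0),
]

vec = DictVectorizer()
X = vec.fit_transform({"path": p} for p, _ in train)
y = [label for _, label in train]
clf = LinearSVC().fit(X, y)

def rank_candidates(candidates):
    """candidates: list of (mention, dependency_path); returns best-first ranking."""
    feats = vec.transform({"path": p} for _, p in candidates)
    scores = clf.decision_function(feats)   # higher score = more likely target
    return sorted(zip(candidates, scores), key=lambda t: -t[1])

# Usage: which mention does "gorgeous" target, "screen" or "phone"?
print(rank_candidates([("screen", "amod"), ("phone", "conj/nsubj")]))
```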


Deeply Moving: Deep Learning for Sentiment Analysis

@machinelearnbot

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network.
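
The composition step at the heart of the Recursive Neural Tensor Network can be shown in a few lines: two child phrase vectors are combined with both a standard affine layer and a bilinear tensor product, and each tree node gets a sentiment distribution. The sketch below uses NumPy with toy dimensions and random parameters; it illustrates the composition and per-node classification only, not the paper's training procedure or hyperparameters.

```python
# Minimal RNTN composition sketch (illustrative dimensions and initialization).
import numpy as np

d = 4                                                # phrase vector dimension (toy)
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))    # one tensor slice per output dim
W = rng.normal(scale=0.1, size=(d, 2 * d))           # standard composition matrix
b = np.zeros(d)
W_s = rng.normal(scale=0.1, size=(5, d))             # softmax layer: 5 sentiment classes

def compose(left, right):
    """RNTN composition: parent_i = tanh(c^T V[i] c + (W c + b)_i), c = [left; right]."""
    c = np.concatenate([left, right])
    tensor_term = np.einsum("j,ijk,k->i", c, V, c)   # bilinear term for each output dim
    return np.tanh(tensor_term + W @ c + b)

def sentiment_probs(vec):
    """Per-node distribution over 5 sentiment classes (very negative .. very positive)."""
    logits = W_s @ vec
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Usage: compose two word vectors bottom-up, then classify the parent node.
a, bvec = rng.normal(size=d), rng.normal(size=d)
print(sentiment_probs(compose(a, bvec)))
```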


Multi-task Learning for Target-dependent Sentiment Classification

arXiv.org Machine Learning

Detecting and aggregating sentiments toward people, organizations, and events expressed in unstructured social media have become critical text mining operations. Early systems detected sentiments over whole passages, whereas more recently, target-specific sentiments have been of greater interest. In this paper, we present MTTDSC, a multi-task target-dependent sentiment classification system that is informed by feature representations learnt for the related auxiliary task of passage-level sentiment classification. The auxiliary task uses a gated recurrent unit (GRU) and pools GRU states, followed by an auxiliary fully-connected layer that outputs passage-level predictions. In the main task, these GRUs contribute auxiliary per-token representations over and above word embeddings. The main task has its own, separate GRUs. The auxiliary and main GRUs send their states to a separate fully connected layer trained for the main task. Extensive experiments using two auxiliary datasets and three benchmark datasets (of which one is new, introduced by us) for the main task demonstrate that MTTDSC outperforms state-of-the-art baselines. Using word-level sensitivity analysis, we present anecdotal evidence that prior systems can make incorrect target-specific predictions because they miss sentiments expressed by words independent of the target.
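
The wiring described in the abstract, an auxiliary GRU trained for passage-level sentiment that also feeds per-token features into a main target-dependent classifier with its own GRU, can be sketched as follows. This is a minimal PyTorch sketch with illustrative layer sizes, pooling choices, and target conditioning; it is not the authors' released code.

```python
# Minimal sketch of the multi-task idea: an auxiliary bi-GRU produces both
# passage-level predictions and per-token features reused by the main task,
# which has its own bi-GRU and its own classifier head.
import torch
import torch.nn as nn

class MTTDSCSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid=64, num_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.aux_gru = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)
        self.aux_fc = nn.Linear(2 * hid, num_classes)    # passage-level head
        self.main_gru = nn.GRU(emb_dim, hid, bidirectional=True, batch_first=True)
        self.main_fc = nn.Linear(4 * hid, num_classes)   # head over aux + main states

    def forward(self, tokens, target_mask):
        x = self.emb(tokens)                             # (batch, seq, emb)
        aux_states, _ = self.aux_gru(x)                  # per-token auxiliary features
        main_states, _ = self.main_gru(x)
        # Auxiliary task: mean-pool auxiliary states for passage-level sentiment.
        passage_logits = self.aux_fc(aux_states.mean(dim=1))
        # Main task: pool auxiliary + main states over the target mention's tokens
        # (marked by target_mask), then classify target-dependent sentiment.
        mask = target_mask.unsqueeze(-1).float()
        states = torch.cat([aux_states, main_states], dim=-1)
        pooled = (states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        target_logits = self.main_fc(pooled)
        return passage_logits, target_logits

# Usage: both heads are trained jointly, e.g. main-task loss plus a weighted
# auxiliary passage-level loss (weighting is an assumption of this sketch).
model = MTTDSCSketch(vocab_size=5000)
tokens = torch.randint(0, 5000, (2, 12))
target_mask = torch.zeros(2, 12, dtype=torch.long); target_mask[:, 3] = 1
passage_logits, target_logits = model(tokens, target_mask)
```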



Tableau details its natural language query plans

ZDNet

Conversational interfaces with computers have been the talk of tech since the days of Star Trek. Mostly associated with voice response, frustrating experiences interacting with Siri, chatbots, or the interactive voice response (IVR) systems of call centers reveal what a long slog it has been to get computers to understand natural language, regardless of whether it comes in the form of voice or text. But it took the Amazon Echo's Alexa, which was designed as a conversational voice interface to Amazon's retail and entertainment services, to show that natural language interfaces could actually perform useful services. When we saw SAS founder Dr. James Goodnight demonstrate how Alexa could be used to query SAS Visual Analytics, we thought that was pretty cool. But when you look at this video, you'll realize that Alexa has only been taught a few things and has a long way to go before it will replace your keyboard or touchpad.