In recent years, Twitter has been adopted by companies as an alternative platform for interacting with customers and addressing their concerns. With the abundance of such unconventional conversation resources, the push for developing effective virtual agents is stronger than ever. Addressing this challenge requires a better understanding of such customer service conversations. Recently, several works have proposed novel taxonomies of fine-grained dialogue acts and developed algorithms for automatically detecting these acts. The outcomes of these works provide stepping stones toward the ultimate goal of building efficient and effective virtual agents. However, none of these works incorporates the notion of negation into the proposed algorithms. In this work, we develop an SVM-based dialogue act prediction algorithm for Twitter customer service conversations in which negation handling is an integral part of the end-to-end solution. For negation handling, we propose several efficient heuristics and also adopt recent state-of-the-art third-party machine learning based solutions. Empirically, we show the model's performance gain when negation is handled compared to when it is not. Our experiments show that for informal text such as tweets, the heuristic-based approach is more effective.
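The abstract above does not spell out its heuristics, but a common family of negation heuristics for informal text marks every token between a negation cue and the next clause-ending punctuation. The sketch below illustrates that general idea under assumptions of our own; the cue list, the `_NEG` suffix convention, and the punctuation-based scope boundary are illustrative, not the paper's exact rules.

```python
import re

# Illustrative cue list for informal text; includes apostrophe-free
# variants ("dont", "cant") that are frequent in tweets.
NEGATION_CUES = {"not", "no", "never", "cannot", "dont", "cant"}
CLAUSE_END = re.compile(r"[.,;:!?]")

def mark_negation(tokens):
    """Append a _NEG suffix to tokens inside a negation scope, defined
    here as the span from a cue to the next clause-ending punctuation."""
    out, negated = [], False
    for tok in tokens:
        if CLAUSE_END.fullmatch(tok):
            negated = False          # punctuation closes the scope
            out.append(tok)
        elif tok.lower() in NEGATION_CUES or tok.lower().endswith("n't"):
            negated = True           # cue opens a new scope
            out.append(tok)
        else:
            out.append(tok + "_NEG" if negated else tok)
    return out

print(mark_negation("i do n't like the new update , but support was great".split()))
```

With this heuristic, `like`, `the`, `new`, and `update` are marked `_NEG`, while the clause after the comma is left untouched.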
In this paper, we present a study of negation in dialogues. In particular, we analyze the peculiarities of negation in dialogue and propose a new method to detect intra-sentential and inter-sentential negation scope and focus in dialogue context. A key element of the solution is the use of dialogue context in the form of previous utterances, which is often needed for the proper interpretation of negation in dialogue, in contrast to literary, non-dialogue texts. We model the negation scope and focus detection tasks as sequence labeling tasks and use Conditional Random Field models to label each token in an utterance as being within the scope/focus of negation or not. The proposed negation scope and focus detection method is evaluated on a newly created corpus (called the DeepTutor Negation corpus; DT-Neg), which was created from actual tutorial dialogue interactions between high school students and a state-of-the-art intelligent tutoring system.
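In the sequence-labeling framing described above, a CRF labels each token given a feature vector, and dialogue context can enter as features computed over the previous utterance. The sketch below shows one hypothetical feature template of that shape; the feature names, the cue set, and the specific context feature are assumptions for illustration, not the paper's actual feature set, and the trained CRF itself would come from a separate toolkit.

```python
# Hypothetical per-token feature extraction for CRF-based negation
# scope labeling, including a feature drawn from the previous utterance.
CUES = {"not", "no", "never", "n't"}

def token_features(tokens, i, prev_utterance):
    """Build a feature dict for token i of an utterance, using the
    previous utterance as dialogue context."""
    tok = tokens[i]
    return {
        "word.lower": tok.lower(),
        "is_cue": tok.lower() in CUES,
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        # dialogue-context feature: did the previous utterance contain a cue?
        "prev_utt_has_cue": any(w.lower() in CUES for w in prev_utterance),
    }

utt = "it does n't change".split()
prev = "does the speed change ?".split()
features = [token_features(utt, i, prev) for i in range(len(utt))]
print(features[2]["is_cue"])  # True for the cue token "n't"
```

A CRF trained on such per-token feature sequences would then emit in-scope/out-of-scope labels for each token.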
We describe a state-of-the-art sentiment analysis system that detects (a) the sentiment of short informal textual messages such as tweets and SMS (message-level task) and (b) the sentiment of a word or a phrase within a message (term-level task). The system is based on a supervised statistical text classification approach leveraging a variety of surface-form, semantic, and sentiment features. The sentiment features are primarily derived from novel high-coverage tweet-specific sentiment lexicons. These lexicons are automatically generated from tweets with sentiment-word hashtags and from tweets with emoticons. To adequately capture the sentiment of words in negated contexts, a separate sentiment lexicon is generated for negated words. The system ranked first in the SemEval-2013 shared task "Sentiment Analysis in Twitter" (Task 2), obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task. Post-competition improvements boost the performance to an F-score of 70.45 (message-level task) and 89.50 (term-level task). The system also obtains state-of-the-art performance on two additional datasets: the SemEval-2013 SMS test set and a corpus of movie review excerpts. Ablation experiments demonstrate that the use of the automatically generated lexicons results in performance gains of up to 6.5 absolute percentage points.
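The idea of a separate lexicon for negated contexts can be sketched as a two-lexicon lookup: tokens already marked as negated are scored against a negated-context lexicon, and all other tokens against the affirmative one. The lexicon entries and scores below are toy values for illustration only, not the system's actual lexicons.

```python
# Toy lexicons: affirmative-context scores and separate negated-context
# scores (note that negating "good" is mildly negative, not simply
# the sign-flip of the affirmative score).
AFFIRMATIVE_LEX = {"good": 1.5, "great": 1.9, "bad": -1.6}
NEGATED_LEX = {"good_NEG": -0.8, "great_NEG": -1.0, "bad_NEG": 0.4}

def sentiment_score(marked_tokens):
    """Sum per-token scores, choosing the lexicon by negation marking."""
    score = 0.0
    for tok in marked_tokens:
        if tok.endswith("_NEG"):
            score += NEGATED_LEX.get(tok, 0.0)
        else:
            score += AFFIRMATIVE_LEX.get(tok.lower(), 0.0)
    return score

print(sentiment_score("the update is not good_NEG".split()))
```

Keeping the two lexicons separate lets the data decide how negation shifts each word's polarity, rather than assuming a uniform sign flip.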
Gautam, Dipesh (The University of Memphis) | Maharjan, Nabin (The University of Memphis) | Banjade, Rajendra (The University of Memphis) | Tamang, Lasang Jimba (The University of Memphis) | Rus, Vasile (The University of Memphis)
Negation plays a significant role in spoken and written natural language. Negation is used in language to deny something or to reverse the polarity or sense of a statement. This paper presents a novel approach to automatically handling negation in tutorial dialogues using deep learning methods. In particular, we explored various Long Short-Term Memory (LSTM) models to automatically detect negation focus, scope, and cue in tutorial dialogues collected from experiments with actual students interacting with the state-of-the-art intelligent tutoring system DeepTutor. The results obtained are promising.
We explore the relationship between negated text and negative sentiment in the task of sentiment classification. We propose a novel adjustment factor based on negation occurrences as a proxy for negative sentiment that can be applied to lexicon-based classifiers equipped with a negation detection pre-processing step. We performed an experiment on a multi-domain customer reviews dataset obtaining accuracy improvements over a baseline, and we further improved our results using out-of-domain data to calibrate the adjustment factor. We see future work possibilities in exploring negation detection refinements, and expanding the experiment to a broader spectrum of opinionated discourse, beyond that of customer reviews.
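The adjustment factor described above can be sketched as a simple linear shift: each detected negation occurrence moves the document score toward the negative class by a calibrated amount. The function below is a minimal sketch of that shape; the parameter name `alpha` and its value are illustrative assumptions, standing in for the calibration on out-of-domain data that the abstract mentions.

```python
def adjusted_score(lexicon_score, n_negations, alpha=0.5):
    """Shift a lexicon-based document score toward negative sentiment in
    proportion to the number of negation occurrences detected in the text.
    alpha is a hypothetical calibration constant (tuned out-of-domain in
    the paper's setup); 0.5 here is an arbitrary illustrative value."""
    return lexicon_score - alpha * n_negations

# A mildly positive raw score flips negative once negations are counted.
score = adjusted_score(1.2, 3)
print(score < 0)  # True
```

The classifier's decision then uses the adjusted score, so a review whose raw lexicon score is weakly positive but contains many negations can still be classified as negative.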