We present in this paper a study on negation in dialogues. In particular, we analyze the peculiarities of negation in dialogues and propose a new method to detect intra-sentential and inter-sentential negation scope and focus in a dialogue context. A key element of the solution is the use of dialogue context in the form of previous utterances, which is often needed for the proper interpretation of negation in dialogue, in contrast to literary, non-dialogue texts. We modeled negation scope and focus detection as sequence labeling tasks and used Conditional Random Field (CRF) models to label each token in an utterance as being within the scope/focus of negation or not. The proposed negation scope and focus detection method is evaluated on a newly created corpus (called the DeepTutor Negation corpus; DT-Neg). This dataset was created from actual tutorial dialogue interactions between high school students and a state-of-the-art intelligent tutoring system.
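The sequence-labeling formulation can be illustrated with the kind of per-token features typically fed to a CRF tagger. The specific features below, including the dialogue-context signal drawn from the previous utterance, are illustrative assumptions rather than the paper's actual feature set:

```python
# Hypothetical feature extraction for a CRF-style negation scope tagger.
# The cue list and feature names are assumptions for illustration only.
NEG_CUES = {"not", "no", "never", "n't", "without"}

def token_features(tokens, i, prev_utterance=()):
    """Features for token i, including a simple inter-sentential signal."""
    tok = tokens[i]
    return {
        "word.lower": tok.lower(),
        "is_cue": tok.lower() in NEG_CUES,
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        # dialogue-context feature: was there a negation cue in the prior turn?
        "prev_utt_has_cue": any(w.lower() in NEG_CUES for w in prev_utterance),
    }

utterance = ["It", "does", "not", "change"]
feats = [token_features(utterance, i, prev_utterance=["Why", "not", "?"])
         for i in range(len(utterance))]
```

A CRF would then learn label sequences (e.g., in-scope vs. out-of-scope) over these per-token feature dictionaries.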
In the last several years, Twitter has been adopted by companies as an alternative platform for interacting with customers and addressing their concerns. With the abundance of such unconventional conversation resources, the push for developing effective virtual agents is stronger than ever. To address this challenge, a better understanding of such customer service conversations is required. Lately, there have been several works proposing novel taxonomies of fine-grained dialogue acts as well as developing algorithms for the automatic detection of these acts. The outcomes of these works provide stepping stones toward the ultimate goal of building efficient and effective virtual agents. However, none of these works incorporates negation handling into the proposed algorithms. In this work, we develop an SVM-based dialogue act prediction algorithm for Twitter customer service conversations in which negation handling is an integral part of the end-to-end solution. For negation handling, we propose several efficient heuristics and also adopt recent state-of-the-art third-party machine-learning-based solutions. Empirically, we show the model's performance gain when negation is handled compared to when it is not. Our experiments show that for informal text such as tweets, the heuristic-based approach is more effective.
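A minimal version of such a tweet-oriented negation heuristic, in the spirit of classic scope-marking rules, prefixes every token between a negation cue and the next clause boundary. The cue set and delimiter set below are illustrative assumptions, not the paper's exact heuristics:

```python
# Hypothetical scope-marking heuristic: tokens following a negation cue are
# prefixed with "NEG_" until a clause boundary, so a downstream classifier
# (e.g., an SVM over n-grams) sees negated and non-negated words as
# distinct features.
NEG_CUES = {"not", "no", "never", "cannot", "n't", "dont", "cant"}
SCOPE_ENDERS = {".", ",", "!", "?", ";", "but"}

def mark_negation(tokens):
    out, in_scope = [], False
    for tok in tokens:
        low = tok.lower()
        if low in NEG_CUES:
            in_scope = True
            out.append(tok)
        elif low in SCOPE_ENDERS:
            in_scope = False
            out.append(tok)
        elif in_scope:
            out.append("NEG_" + tok)
        else:
            out.append(tok)
    return out

marked = mark_negation("i do not like the update .".split())
```

Rewriting tokens this way lets the dialogue act classifier distinguish, say, `like` from `NEG_like` without any change to the learning algorithm itself.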
Gautam, Dipesh (The University of Memphis) | Maharjan, Nabin (The University of Memphis) | Banjade, Rajendra (The University of Memphis) | Tamang, Lasang Jimba (The University of Memphis) | Rus, Vasile (The University of Memphis)
Negation plays a significant role in spoken and written natural languages. Negation is used in language to deny something or to reverse the polarity or sense of a statement. This paper presents a novel approach to automatically handling negation in tutorial dialogues using deep learning methods. In particular, we explored various Long Short-Term Memory (LSTM) models to automatically detect negation focus, scope, and cue in tutorial dialogues collected from experiments with actual students interacting with the state-of-the-art intelligent tutoring system, DeepTutor. The results obtained are promising.
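The tagging architecture behind such models can be sketched as an LSTM forward pass that emits a per-token label score (e.g., in-scope vs. out-of-scope). The single unidirectional layer, random weights, and dimensions below are simplifying assumptions; the paper explores several LSTM variants:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMTagger:
    """Toy single-layer LSTM emitting one label-score vector per token."""

    def __init__(self, embed_dim, hidden_dim, num_labels):
        d, h = embed_dim, hidden_dim
        # stacked weights for the input, forget, cell, and output gates
        self.Wx = rng.standard_normal((4 * h, d)) * 0.1
        self.Wh = rng.standard_normal((4 * h, h)) * 0.1
        self.b = np.zeros(4 * h)
        self.Wy = rng.standard_normal((num_labels, h)) * 0.1
        self.hidden_dim = h

    def forward(self, embeddings):
        h = np.zeros(self.hidden_dim)
        c = np.zeros(self.hidden_dim)
        logits = []
        for x in embeddings:          # one step per token
            z = self.Wx @ x + self.Wh @ h + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            logits.append(self.Wy @ h)  # per-token label scores
        return np.stack(logits)

tagger = TinyLSTMTagger(embed_dim=8, hidden_dim=16, num_labels=2)
token_embeddings = rng.standard_normal((5, 8))  # a 5-token utterance
scores = tagger.forward(token_embeddings)
```

In practice the weights would be trained with backpropagation on annotated cue/scope/focus labels; the sketch only shows the recurrence that lets each token's prediction depend on the preceding dialogue text.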
Information systems experience an ever-growing volume of unstructured data, particularly in the form of textual materials. This represents a rich source of information from which one can create value for people, organizations and businesses. For instance, recommender systems can benefit from automatically understanding preferences based on user reviews or social media. However, it is difficult for computer programs to correctly infer meaning from narrative content. One major challenge is negation, which inverts the interpretation of words and sentences. As a remedy, this paper proposes a novel learning strategy to detect negations: we apply reinforcement learning to find a policy that replicates the human perception of negations based on an exogenous response, such as a user rating for reviews. Our method yields several benefits, as it eliminates the need for expensive and subjective manual labeling in an intermediate stage. Moreover, the inferred policy can be used to derive statistical inferences and implications regarding how humans process and act on negations.
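The learning strategy can be caricatured with a tiny policy-learning loop: the agent chooses whether the cue "not" inverts downstream polarity, and is rewarded only by agreement with the exogenous review rating, with no token-level labels involved. The state, action, and reward design below is an illustrative toy, not the paper's actual formulation:

```python
import random

random.seed(0)

# Toy setup: tiny polarity lexicon and (text, exogenous rating) pairs.
LEXICON = {"good": 1, "great": 1, "bad": -1, "awful": -1}
REVIEWS = [("not good", -1), ("great", 1), ("not bad", 1), ("awful", -1)]

ACTIONS = ("invert", "keep")
q = {a: 0.0 for a in ACTIONS}  # value of each action for the cue "not"
alpha, epsilon = 0.1, 0.2      # learning rate, exploration rate

def predict(text, action):
    """Predicted rating sign when 'not' does or does not invert polarity."""
    score, invert_next = 0, False
    for tok in text.split():
        if tok == "not":
            invert_next = (action == "invert")
        else:
            s = LEXICON.get(tok, 0)
            score += -s if invert_next else s
            invert_next = False
    return 1 if score >= 0 else -1

for _ in range(500):
    text, rating = random.choice(REVIEWS)
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    reward = 1.0 if predict(text, a) == rating else -1.0
    q[a] += alpha * (reward - q[a])  # incremental value update
```

Because only the exogenous rating supplies the reward, the loop learns that "invert" is the better policy for the cue, mirroring the paper's point that no intermediate manual labeling is required.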
We explore the relationship between negated text and negative sentiment in the task of sentiment classification. We propose a novel adjustment factor based on negation occurrences as a proxy for negative sentiment that can be applied to lexicon-based classifiers equipped with a negation detection pre-processing step. We performed an experiment on a multi-domain customer reviews dataset obtaining accuracy improvements over a baseline, and we further improved our results using out-of-domain data to calibrate the adjustment factor. We see future work possibilities in exploring negation detection refinements, and expanding the experiment to a broader spectrum of opinionated discourse, beyond that of customer reviews.
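One way to picture such an adjustment is a lexicon classifier that, after a negation detection step flips the polarity of in-scope words, subtracts a per-occurrence penalty treating each negation itself as weak negative evidence. The cue list, lexicon, and penalty value below are illustrative assumptions, not the paper's calibrated factor:

```python
# Hypothetical lexicon-based classifier with a negation adjustment factor.
NEG_CUES = {"not", "no", "never"}
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "poor": -1.0}

def classify(tokens, beta=0.3):
    """beta: penalty per negation occurrence (proxy for negative sentiment)."""
    score, negations, invert = 0.0, 0, False
    for tok in tokens:
        if tok in NEG_CUES:
            negations += 1
            invert = True       # next sentiment word falls in the scope
            continue
        s = LEXICON.get(tok, 0.0)
        score += -s if invert else s
        invert = False
    score -= beta * negations   # the adjustment factor
    return "positive" if score > 0 else "negative"
```

With the penalty, a review like "not bad" stays mildly positive after polarity flipping, while the extra negative weight from negation occurrences nudges borderline negated reviews toward the negative class.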