Understanding Language in Conversations

"The problems addressed in discourse research aim to answer two general kinds of questions: (1) what information is contained in extended sequences of utterances that goes beyond the meaning of the individual utterances themselves? (2) how does the context in which an utterance is used affect the meaning of the individual utterances, or parts of them?"
– Barbara Grosz. Overview of Chapter 6: Discourse and Dialogue, Survey of the State of the Art in Human Language Technology (1996).
Speech processing research is at a high point right now, with virtual assistants like Alexa, Siri, Google Assistant, and others always listening and willing to help. But without a keen eye -- or ear -- for who this technology aims to assist, interest could wane, said Maxine Eskenazi, a Carnegie Mellon University researcher in the School of Computer Science who has worked on speech processing and spoken dialogue systems for decades. "We need to stop focusing on the agent and start focusing on the user," Eskenazi said. "It's only a dialogue if there are two individuals participating. If we make systems that are just fun for us to make but do not serve the user and do not help the user, then they'll stop using Alexa or Google."
We live in a globally connected world where communication happens at the click of a button. In this social age, the evolution of the World Wide Web (WWW), characterized by the advancement of smartphones and semantic technology, has redefined social media as a retail platform and an indispensable marketing tool. Social media marketing (SMM) is a form of digital marketing that involves sharing content on social media platforms in an attempt to actualize a firm's branding, sales, and web traffic goals. Social media has become essential to helping brands connect with a wider range of customers, establish brand presence, and increase sales both in-store and online. Considering the size of this new virtual market, it comes as no surprise that marketers would choose to use social media to increase brand awareness.
The main motive for using sentiment analysis is to uncover the true feelings of the varied people in our society. It can be used to analyze customer feedback about a particular company, or the reactions of ordinary social media users to a product, service, social issue, or political agenda. Companies also use it for brand analysis, reputation crises, campaign performance, competitor analysis, and improving the services offered to customers. Analyzing customer sentiment helps the customer support team prioritize its work to offer better service to end users. What are the common challenges sentiment analysis deals with?
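To make the task concrete, here is a minimal sketch of sentiment classification in the lexicon-counting style. The word lists and scoring rule are invented for this illustration and are far simpler than any production system, which would need to handle negation, sarcasm, and domain-specific vocabulary (some of the common challenges just mentioned).

```python
# Minimal lexicon-based sentiment scorer (illustrative sketch only;
# the word lists below are assumptions made up for this example).
POSITIVE = {"good", "great", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "slow"}

def sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("great service and helpful staff"))   # positive
print(sentiment("terrible wait times, slow support")) # negative
```

A real system would replace the hand-built lexicon with a trained model, but the input/output contract -- raw text in, polarity label out -- is the same.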
Capturing IT effort that is overlooked or misinterpreted by key performance indicators. KPIs such as call duration are not necessarily the best way to measure the effectiveness of your IT support staff. For example, a long phone call may mean that your agent is handling a complex issue -- not having trouble resolving it. You can use sentiment analysis to identify the agents who are consistently involved in calls with a positive sentiment, so you can reward them and have them mentor less experienced team members. By pulling sentiment data into your IT department's KPI reports, you can find correlations that might otherwise be hidden.
In our current day and age, reviews are part of almost every product or service provided on the internet: as seen in , they are the primary way for a company to gauge how successful its product is, and, as examined in , the way for a customer to build trust before purchasing or using a service of which only a description or a picture exists. Therefore, a deeper understanding and analysis of those reviews is needed by anyone who wishes to draw conclusions about a product. Standard methods for such insight derivation include sentiment analysis, around which we formulate a new approach to review rating classification. Reviews across the internet mainly come in text-based and rating-based formats, and in many cases a combination of both constitutes a single review; the method developed in this paper focuses on associating a review with a rating cluster based on its sentiment proportions. We define two main groups: one consisting of the reviews rated higher than three stars (in a 5-star rating system) and another consisting of the reviews rated lower than three stars.
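The cluster-assignment idea can be sketched as follows. This is a hedged illustration, not the paper's actual method: the tiny lexicon, the positive-proportion statistic, and the 0.5 threshold are all assumptions made for the example.

```python
# Sketch: assign a review to the "high" (> 3 stars) or "low" (< 3 stars)
# cluster from the proportion of positive sentiment it carries.
# Lexicon and threshold are illustrative assumptions only.
POSITIVE = {"great", "love", "excellent", "perfect", "recommend"}
NEGATIVE = {"bad", "broken", "refund", "disappointed", "waste"}

def sentiment_proportion(review: str) -> float:
    """Fraction of opinionated words that are positive (0.5 if none found)."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return pos / total if total else 0.5

def rating_cluster(review: str) -> str:
    return "high" if sentiment_proportion(review) > 0.5 else "low"

print(rating_cluster("excellent quality, would recommend"))  # high
print(rating_cluster("broken on arrival, want a refund"))    # low
```

A full implementation would estimate sentiment proportions with a trained classifier over sentences or aspects rather than word counts, but the mapping from proportions to a rating cluster is the same shape.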
We introduce a grey-box adversarial attack and defence framework for sentiment classification. We address the issues of differentiability, label preservation and input reconstruction for adversarial attack and defence in one unified framework. Our results show that once trained, the attacking model is capable of generating high-quality adversarial examples substantially faster (one order of magnitude less in time) than state-of-the-art attacking methods. These examples also preserve the original sentiment according to human evaluation. Additionally, our framework produces an improved classifier that is robust in defending against multiple adversarial attacking methods. Code is available at: https://github.com/ibm-aur-nlp/adv-def-text-dist.
Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes' theorem to factorize the dialogue task into two models: the distribution of the context given the response, and the prior over the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses compared to direct decoding and that a two-stage pretraining strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
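The factorization underlying the noisy channel model is p(response | context) ∝ p(context | response) · p(response), so candidates can be reranked by the sum of the two log-scores. The sketch below illustrates that reranking loop; the two scoring functions are crude stand-ins (assumptions for this example) for a trained channel model and a pretrained language-model prior.

```python
# Illustrative noisy-channel reranking for dialogue responses, using the
# factorization p(response | context) ∝ p(context | response) * p(response).
# Both scorers below are toy stand-ins, not real trained models.
import math

def log_channel(context: str, response: str) -> float:
    """Stand-in for log p(context | response): reward lexical overlap."""
    c, r = set(context.lower().split()), set(response.lower().split())
    return math.log(len(c & r) + 1)

def log_prior(response: str) -> float:
    """Stand-in for log p(response): mildly favor fluent, longer replies."""
    return math.log(len(response.split()) + 1)

def rerank(context: str, candidates: list[str]) -> str:
    # Direct decoding tends to favor short, generic replies; the channel
    # term forces the chosen response to "explain" the context.
    return max(candidates, key=lambda r: log_channel(context, r) + log_prior(r))

ctx = "what time does the museum open on sunday"
cands = ["okay", "i am not sure", "the museum opens at ten on sunday"]
print(rerank(ctx, cands))  # the museum opens at ten on sunday
```

With real models, the channel score comes from a sequence-to-sequence model run in the reverse direction and the prior from a large pretrained LM, but the decision rule is this same sum of log-probabilities.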
The recent success of reinforcement learning (RL) in solving complex tasks is most often attributed to its capacity to explore and exploit the environment in which it is trained. Sample efficiency is usually not an issue, since cheap simulators are available for sampling data on-policy. Task-oriented dialogues, on the other hand, are usually learned from offline data collected using human demonstrations. Collecting diverse demonstrations and annotating them is expensive. Unfortunately, RL methods trained on off-policy data are prone to issues of bias and generalization, which are further exacerbated by stochasticity in human responses and the non-Markovian belief state of a dialogue management system. To this end, we propose a batch RL framework for task-oriented dialogue policy learning: causal aware safe policy improvement (CASPI). This method gives guarantees on the dialogue policy's performance and also learns to shape rewards according to the intentions behind human responses, rather than just mimicking demonstration data; this, coupled with batch RL, helps the overall sample efficiency of the framework. We demonstrate the effectiveness of this framework on the dialogue-context-to-text generation and end-to-end dialogue tasks of the MultiWOZ 2.0 dataset. The proposed method outperforms the current state of the art in both cases. In the end-to-end case, our method, trained on only 10% of the data, outperforms the current state of the art in three out of four evaluation metrics.