Discourse & Dialogue


How to Mine the SERPs for SEO, Content & Customer Insights

#artificialintelligence

Among the most underutilized resources in SEO are search engine results pages (SERPs). I don't just mean looking at where our sites rank for a specific keyword or set of keywords; I mean the actual content of the SERPs. For every keyword you search in Google, if you expand the SERP to show 100 results you will find, on average, around 3,000 words. That's a lot of content, and the reason it can be so valuable to an SEO is that much of it has been algorithmically rewritten or cherry-picked from a page by Google to best address what it thinks the searcher needs. One recent study showed that Google rewrites or modifies the meta descriptions displayed in the SERPs 92% of the time.
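To make this concrete, here is a minimal, hedged sketch of mining a SERP you have already saved to disk: it pulls title and snippet-like text out of the HTML and tallies word frequencies. The CSS selectors and the serp.html filename are assumptions for illustration; Google's markup changes frequently, so the selectors would need adjusting in practice.

```python
# Minimal sketch: tally the words in titles and snippets scraped from a SERP.
# Assumes the SERP HTML has already been saved (e.g. from a browser session).
# The selectors below are illustrative and will likely need adjusting.
from collections import Counter
from bs4 import BeautifulSoup

def serp_word_counts(html: str) -> Counter:
    soup = BeautifulSoup(html, "html.parser")
    # Grab result titles (<h3>) and any snippet-like text blocks.
    texts = [el.get_text(" ", strip=True) for el in soup.select("h3, span")]
    words = " ".join(texts).lower().split()
    return Counter(words)

with open("serp.html", encoding="utf-8") as f:   # hypothetical saved SERP
    counts = serp_word_counts(f.read())

print("total words:", sum(counts.values()))
print("most common:", counts.most_common(20))
```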


Developing an NLP-based PR platform for the Canadian Elections

#artificialintelligence

Elections are a vital part of democracy, allowing people to vote for the candidate they think can best lead the country. A candidate's campaign aims to demonstrate to the public why they are the best choice. However, in this age of constant media coverage and digital communication, the candidate is scrutinized at every step. A single misquote or piece of negative news about a candidate can be the difference between winning and losing the election. It therefore becomes crucial to have a public relations manager who can guide and direct the candidate's campaign by prioritizing specific campaign activities. One critical aspect of the PR manager's work is to understand the public's perception of the candidate and improve public sentiment toward them.


Forward and Backward Knowledge Transfer for Sentiment Classification

arXiv.org Artificial Intelligence

This paper studies the problem of learning a sequence of sentiment classification tasks. The knowledge learned from each task is retained and used to help future or subsequent task learning. This learning paradigm is called Lifelong Learning (LL). However, existing LL methods either only transfer knowledge forward to help future learning, without going back to improve the model of a previous task, or require the training data of the previous task to retrain its model in order to exploit backward/reverse knowledge transfer. This paper studies reverse knowledge transfer in LL in the context of naive Bayesian (NB) classification. It aims to improve the model of a previous task by leveraging future knowledge, without retraining on the previous task's training data. This is done by exploiting a key characteristic of the generative model of NB: it is possible to improve an NB classifier for a task by directly updating its model parameters using the knowledge retained from other tasks. Experimental results show that the proposed method markedly outperforms existing LL baselines.
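As a rough illustration of the underlying idea (not the paper's exact update rule), the sketch below blends an earlier task's retained naive Bayes word counts with counts accumulated from later tasks and re-normalizes them into smoothed class-conditional probabilities; the blending weight is a hypothetical knob.

```python
# Hedged sketch of backward transfer in naive Bayes: the class-conditional
# probabilities P(w|c) of an earlier task are re-estimated by mixing its
# retained word counts with counts from later tasks, without touching the
# earlier task's raw training data. A simplified illustration, not the
# paper's exact update.
from collections import defaultdict

def smoothed_probs(counts, vocab, alpha=1.0):
    """Laplace-smoothed class-conditional word probabilities."""
    total = sum(counts.values())
    return {w: (counts.get(w, 0) + alpha) / (total + alpha * len(vocab)) for w in vocab}

def backward_transfer(prev_counts, future_counts, weight=0.5):
    """Blend retained counts of a previous task with counts from later tasks."""
    merged = defaultdict(float)
    for w, c in prev_counts.items():
        merged[w] += c
    for w, c in future_counts.items():
        merged[w] += weight * c   # future knowledge nudges the old parameters
    return dict(merged)

# Toy example: positive-class word counts for an old task and for newer tasks.
prev_pos = {"great": 5, "nice": 2}
future_pos = {"great": 40, "excellent": 12}
vocab = {"great", "nice", "excellent"}

updated = backward_transfer(prev_pos, future_pos)
print(smoothed_probs(updated, vocab))
```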


Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models

arXiv.org Machine Learning

Nonparametric extensions of topic models such as Latent Dirichlet Allocation, including the Hierarchical Dirichlet Process (HDP), are often studied in natural language processing. Training these models generally requires serial algorithms, which limits scalability to large data sets and complicates acceleration via parallel and distributed systems. Most current approaches to scalable training of such models either do not converge to the correct target or are not data-parallel. Moreover, these approaches generally do not exploit all the sources of sparsity found in natural language, an important way to make computation efficient. Based on a representation of certain conditional distributions within an HDP, we propose a doubly sparse, data-parallel sampler for the HDP topic model that addresses these issues.
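The sketch below is only meant to illustrate the kind of sparsity such a sampler can exploit: per-document and per-word topic counts are stored as sparse maps over the handful of topics actually used, and because document-level counts are local to each document, documents can be sharded across workers for data-parallel updates. It is not the paper's HDP sampler.

```python
# Illustrative sketch only: two sources of sparsity a topic-model sampler can
# exploit. A document touches few topics, and a word type appears under few
# topics, so sparse maps beat dense (docs x topics) and (words x topics) arrays.
from collections import Counter

def sparse_counts(token_topic_assignments):
    """token_topic_assignments: list of (doc_id, word, topic) tuples."""
    doc_topic = {}    # doc_id -> Counter of nonzero topic counts
    word_topic = {}   # word   -> Counter of nonzero topic counts
    for doc_id, word, topic in token_topic_assignments:
        doc_topic.setdefault(doc_id, Counter())[topic] += 1
        word_topic.setdefault(word, Counter())[topic] += 1
    return doc_topic, word_topic

assignments = [(0, "court", 3), (0, "court", 3), (0, "judge", 3), (1, "goal", 7)]
doc_topic, word_topic = sparse_counts(assignments)
print(doc_topic[0])   # only the topics the document actually uses are stored
```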


Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

arXiv.org Artificial Intelligence

Neural generative models have become increasingly popular for building conversational agents. They offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. A common criticism of these systems is that they seldom understand or use the available dialog history effectively. In this paper, we take an empirical approach to understanding how these models use the available dialog history by studying the sensitivity of the models to artificially introduced unnatural changes, or perturbations, to their context at test time. We experiment with 10 different types of perturbations on 4 multi-turn dialog datasets and find that commonly used neural dialog architectures, such as recurrent and transformer-based seq2seq models, are rarely sensitive to most perturbations, such as missing or reordered utterances and shuffled words. We also open-source our code, which we believe will serve as a useful diagnostic tool for evaluating dialog systems in the future.
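For a sense of what such perturbations look like in code, here is a hedged sketch of three of them (dropping a turn, reordering turns, shuffling words within a turn); the paper's full perturbation set is richer. Sensitivity is then measured by comparing the model's loss or perplexity on the perturbed versus unperturbed context.

```python
# Sketch of utterance-level and word-level test-time perturbations applied to
# a dialog history (a list of turns). Illustrative, not the paper's exact set.
import random

def drop_first_utterance(history):
    return history[1:]

def shuffle_utterances(history, seed=0):
    rng = random.Random(seed)
    shuffled = history[:]
    rng.shuffle(shuffled)
    return shuffled

def shuffle_words(history, seed=0):
    rng = random.Random(seed)
    out = []
    for turn in history:
        words = turn.split()
        rng.shuffle(words)
        out.append(" ".join(words))
    return out

dialog = ["hi there", "hello , how can I help ?", "I need to book a flight"]
print(shuffle_utterances(dialog))
print(shuffle_words(dialog))
```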


ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets

arXiv.org Machine Learning

Sentiment analysis is a highly subjective and challenging task. Its complexity increases further when applied to the Arabic language, mainly because of the large variety of dialects that are unstandardized and widely used on the Web, especially in social media. While many datasets have been released to train sentiment classifiers in Arabic, most of these datasets contain only shallow annotation, marking just the sentiment of the text unit, whether it is a word, a sentence or a document. In this paper, we present the Arabic Sentiment Twitter Dataset for the Levantine dialect (ArSenTD-LEV). Based on findings from analyzing tweets from the Levant region, we created a dataset of 4,000 tweets with the following annotations: the overall sentiment of the tweet, the target toward which the sentiment was expressed, how the sentiment was expressed, and the topic of the tweet. Results confirm the importance of these annotations in improving the performance of a baseline sentiment classifier. They also confirm the performance gap that arises when training in one domain and testing in another.
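To picture the annotation layout, the sketch below encodes one record with the four annotation fields described above; the field names are hypothetical, and the released dataset's actual column names may differ.

```python
# Illustration of the described annotation schema with hypothetical field names.
from dataclasses import dataclass

@dataclass
class AnnotatedTweet:
    text: str                   # the Levantine-dialect tweet itself
    overall_sentiment: str      # e.g. "positive", "negative", "neutral"
    sentiment_target: str       # entity or span the sentiment is directed at
    sentiment_expression: str   # how the sentiment is expressed, e.g. "explicit"
    topic: str                  # e.g. "politics", "sports"

example = AnnotatedTweet(
    text="...",                 # placeholder for an actual tweet
    overall_sentiment="negative",
    sentiment_target="...",
    sentiment_expression="explicit",
    topic="politics",
)
```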


Punchh Launches Deep Learning and Artificial Intelligence "Customer Sentiment Analysis" to Enable Real-Time Response to Customer Reviews

#artificialintelligence

Punchh, the leader in digital marketing solutions for physical retailers, today announced the launch of Punchh Deep Sentiment Analysis. The new product allows brands to extract valuable insights from customer reviews using Punchh's natural language comprehension engine built with industry-leading deep learning and artificial intelligence. Its natural language processing model achieves human-level performance, defined as more than 93 percent accurate, and features multi-language support. "In today's hyper-competitive climate, brands need to do everything they can to foster and nurture direct customer relationships, and paying attention to customer reviews is an essential part of that," said Shyam Rao, CEO of Punchh. "Manually reading every review is prohibitively time-consuming for most retailers, which leads to slower response times and poor customer experiences. Our solution uses AI and machine learning to help brands analyze reviews at scale and immediately identify critical information so they can focus on high-level insights and make quick decisions to strengthen customer relationships and increase loyalty."


Automatic Evaluation of Local Topic Quality

arXiv.org Machine Learning

Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improve the quality of these token-level topic assignments, have been evaluated only with respect to global metrics. We propose a task designed to elicit human judgments of token-level topic assignments. We use a variety of topic model types and parameters and discover that global metrics agree poorly with human assignments. Since human evaluation is expensive, we propose a variety of automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments from the task on several datasets. We show that an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. We suggest that this new metric, which we call consistency, be adopted alongside global metrics such as topic coherence when evaluating new topic models.
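A minimal sketch of the quantity behind this metric, assuming the "percent of topic switches" is computed over adjacent token pairs within a document (the paper's exact definition may differ in detail):

```python
# Count how often adjacent tokens in a document are assigned different topics;
# consistency is high when few switches occur.
def switch_rate(token_topics):
    """token_topics: list of topic ids assigned to the tokens of one document."""
    if len(token_topics) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(token_topics, token_topics[1:]) if a != b)
    return switches / (len(token_topics) - 1)

def consistency(token_topics):
    return 1.0 - switch_rate(token_topics)

print(consistency([3, 3, 3, 7, 7, 3]))  # fewer switches -> higher consistency
```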


A Simple Dual-decoder Model for Generating Response with Sentiment

arXiv.org Machine Learning

How to generate human-like responses is one of the most challenging tasks for artificial intelligence. In a real application, after reading the same post, different people might write responses with positive or negative sentiment according to their own experiences and attitudes. To simulate this, we propose a simple but effective dual-decoder model that generates responses with a particular sentiment by connecting two sentiment decoders to one encoder. To support training this model, we construct a new conversation dataset with the form (post, resp1, resp2), where the two responses carry opposite sentiments. Experimental results show that our dual-decoder model can generate diverse responses with the target sentiment, obtaining significant gains in sentiment accuracy and word diversity over the traditional single-decoder model. We will make our data and code publicly available for further study.
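Structurally, the idea can be sketched as a seq2seq model with one shared encoder and two decoders, one per target sentiment. The sketch below (in PyTorch, with illustrative layer sizes) is an assumption about the architecture's general shape, not the authors' exact implementation; training would apply a cross-entropy loss to each decoder's logits against its reference response.

```python
# Structural sketch of a dual-decoder seq2seq: one shared encoder, two decoders
# (one per target sentiment). Layer choices and sizes are illustrative only.
import torch
import torch.nn as nn

class DualDecoderSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Two decoders that share only the encoder's final state.
        self.decoder_pos = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder_neg = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out_pos = nn.Linear(hid_dim, vocab_size)
        self.out_neg = nn.Linear(hid_dim, vocab_size)

    def forward(self, post_ids, resp_pos_ids, resp_neg_ids):
        _, h = self.encoder(self.embed(post_ids))          # encode the post once
        pos_states, _ = self.decoder_pos(self.embed(resp_pos_ids), h)
        neg_states, _ = self.decoder_neg(self.embed(resp_neg_ids), h)
        return self.out_pos(pos_states), self.out_neg(neg_states)

model = DualDecoderSeq2Seq(vocab_size=10000)
post = torch.randint(0, 10000, (2, 12))        # (batch, post length)
resp_pos = torch.randint(0, 10000, (2, 15))    # positive reference response
resp_neg = torch.randint(0, 10000, (2, 15))    # negative reference response
logits_pos, logits_neg = model(post, resp_pos, resp_neg)
print(logits_pos.shape, logits_neg.shape)      # (2, 15, 10000) each
```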


A Modern Hands-On Approach to Sentiment Analysis - Synerzip

#artificialintelligence

Human emotions are complex and difficult to decode. However, recent advancements in artificial intelligence and deep learning are enabling new leaps in sentiment analysis. Put simply, sentiment analysis is a machine decoding human emotions for a specific purpose. Applications range from mining opinions and gauging political inclinations to seeing how product reviews affect real-time sales. Social media companies actively use sentiment analysis to root out offensive and prejudiced content.
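As a hands-on starting point before any deep learning, a lexicon-based scorer is the simplest possible baseline; the word lists below are toy examples.

```python
# Tiny illustration of lexicon-based sentiment scoring, the simplest baseline
# before the deep-learning approaches discussed above. Toy word lists only.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "angry"}

def sentiment_score(text: str) -> float:
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)   # +1 fully positive, -1 fully negative

print(sentiment_score("I love this product , it is excellent"))   #  1.0
print(sentiment_score("terrible service and awful support"))      # -1.0
```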