Discourse & Dialogue

Sentiment Analysis Just Got Smarter


They've developed a social sentiment technology based on deep learning that lets brands capture customer sentiment with 90% accuracy. Most vendors today use one of two main approaches: sentiment analysis based on keyword scoring, or a calculation based on predefined categories. This AI technology, by contrast, for the first time truly understands the meaning of full sentences and is able to accurately determine customer attitudes and contextual reactions in tweets, posts and articles.
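To make the contrast concrete, here is a minimal sketch of the keyword-scoring approach the paragraph describes as the common baseline. The lexicon and example texts are illustrative, not from any vendor's product; a real lexicon would be far larger.

```python
# Minimal keyword-scoring sentiment baseline: count positive and negative
# lexicon hits. Note it has no notion of sentence meaning or negation
# ("not bad" would score negative), which is the weakness described above.
POSITIVE = {"great", "love", "awesome", "excellent"}
NEGATIVE = {"bad", "hate", "terrible", "awful"}

def keyword_sentiment(text: str) -> int:
    """Score = (# positive keywords) - (# negative keywords)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(keyword_sentiment("I love this phone, it is awesome"))  # positive score
print(keyword_sentiment("terrible battery, I hate it"))       # negative score
```

Sentence-level deep learning models aim to go beyond exactly this kind of bag-of-keywords counting.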

Google rolls out improvements to classification, sentiment analysis in Natural Language API


This is a Techmeme archive page. It shows how the site appeared at 3:50 PM ET, September 19, 2017.

What Marketers Need to Know About Machine Learning


With automated machine learning, we're starting to get a grasp on creating something like an automated data scientist to wrangle data, reduce dimensions, and get it into some kind of shape so that a human can then query it and gain insights without an entire data science department of two dozen people. We're even getting to the point, with some unsupervised machine learning methods, of parsing out huge swaths of text to automatically generate things beyond, say, topic models or sentiment analysis. As I mentioned earlier, we automate tasks across teams, like PR, business development, marketing, and sales, so that everyone's data communicates. While this is mostly a customer service function, marketing and PR teams most certainly should be paying attention to it.

Using Machine Learning to Visualize Customer Preferences


I thought this would be a cool way to look at the data available online about various products, so I built an automation around this type of analysis using the HackerNews API, Google's Natural Language API, and D3.js. Once we have these comments, we can use Google's Cloud Natural Language API for entity resolution and sentiment analysis. As each comment is passed to the Natural Language API, the document sentiment score along with the entities identified within it are stored. Instead of the red for Republican and blue for Democrat color scheme, the sentiment analysis weighted word cloud uses red for detractors and green for supporters.
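The aggregation step behind that weighted word cloud can be sketched as follows. The (entity, sentiment) pairs below stand in for results returned by the Natural Language API; the values are made up for illustration.

```python
# Aggregate per-entity sentiment scores, as described above: each entity's
# average score decides its color (green = supporters, red = detractors)
# and its mention count decides its weight in the cloud.
from collections import defaultdict

# Hypothetical (entity, document sentiment score) pairs from the API.
comments = [
    ("battery", 0.8), ("battery", 0.4),
    ("screen", -0.6), ("screen", -0.2),
]

totals = defaultdict(list)
for entity, score in comments:
    totals[entity].append(score)

for entity, scores in totals.items():
    avg = sum(scores) / len(scores)
    color = "green" if avg > 0 else "red"  # supporters vs detractors
    weight = len(scores)                   # weight by mention count
    print(entity, round(avg, 2), color, weight)
```

The resulting (entity, average score, color, weight) tuples are what a D3.js word cloud would then render.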

Who Wants to Know the Inner Workings of LDA?


A crucially important aspect of the topic models learned by Latent Dirichlet Allocation is that they are generative models. More concretely, Latent Dirichlet Allocation imagines that each document is a distribution over the topics in your dataset, like {Topic 1: 0.8, Topic 2: 0.0, Topic 3: 0.1, Topic 4: 0.1}. To generate each word, we first sample a topic from the document's topic distribution; once we've chosen our topic, we choose a word from that topic's own word distribution (so we'd choose "President" with high probability, and "united" or "states" with lower probability). There's a lot of nice information on web pages, but you'll find that if you pass raw web page source to topic models, the terms they find most important will be things like "html", "span", "div" and "href". Because these formatting directives appear all the time in web pages, and in different quantities, the model will spend lots of effort trying to explain the differences in the occurrence rates of these tokens.
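The two-step generative story above can be simulated directly. The topic and word distributions below are invented for illustration (only the {Topic 1: 0.8, ...} document distribution comes from the text):

```python
# Simulate LDA's generative process: for each word in a document,
# (1) sample a topic from the document's topic distribution, then
# (2) sample a word from that topic's word distribution.
import random

random.seed(0)

doc_topics = {"Topic 1": 0.8, "Topic 2": 0.0, "Topic 3": 0.1, "Topic 4": 0.1}
topic_words = {  # hypothetical per-topic word distributions
    "Topic 1": {"president": 0.5, "united": 0.25, "states": 0.25},
    "Topic 2": {"goal": 1.0},
    "Topic 3": {"market": 1.0},
    "Topic 4": {"film": 1.0},
}

def sample(dist):
    """Draw one key from a {key: probability} distribution."""
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate_document(n_words):
    words = []
    for _ in range(n_words):
        topic = sample(doc_topics)                 # step 1: pick a topic
        words.append(sample(topic_words[topic]))   # step 2: pick a word
    return words

print(generate_document(10))
```

Because Topic 2 has probability 0.0 in this document, its words never appear; the generated text is dominated by Topic 1's vocabulary, matching the 0.8 weight.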

Consumer Sentiment Rebounds but Stands to Suffer From Charlottesville Fallout

U.S. News

"It's an unwelcome snakepit out there for GOP members working their states over summer recess, as many are doing," Gordon Hensley, a longtime Republican consultant and Senate strategist who worked on Wisconsin Gov. Scott Walker's presidential campaign, told U.S. News in an email Thursday. "Anytime the discussion revolves around Nazis, David Duke, white supremacist marauders and relitigating the civil war and slavery, it's a bad day/week/month for the GOP."

Finding the right representation for your NLP data - Tryolabs Blog


When considering what information is important for a certain decision procedure (say, a classification task), there's an interesting gap between what's theoretically (that is, actually) important on the one hand, and what gives good results in practice as input to machine learning (ML) algorithms on the other. On the other hand, embedding syntactic structures in a vector space while keeping the distance relation meaningful is not quite as easy. Funnily enough, when I tried the two Dancing Monkeys in a Tuxedo sentences with Stanford's recursive sentiment analysis tool, it classified both sentences as negative. What you can do in this case is restructure your input vector so that, instead of having a unique, separate feature for the sentiment of the review, you use feature combinations (also called feature crosses) so that all word frequency features include information about the sentiment.
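A minimal sketch of that feature-cross restructuring, assuming a bag-of-words input (the feature names and counts below are illustrative): rather than one standalone sentiment feature, every word-frequency feature is crossed with the sentiment value.

```python
# Feature crosses: replace {'word': count} features with
# {'word_x_sentiment': count} so each frequency feature carries
# the sentiment information, instead of one separate sentiment feature.

def cross_features(word_counts: dict, sentiment: str) -> dict:
    return {f"{word}_x_{sentiment}": count for word, count in word_counts.items()}

review = {"plot": 2, "acting": 1}
print(cross_features(review, "positive"))
# {'plot_x_positive': 2, 'acting_x_positive': 1}
```

This lets a linear model learn a different weight for, say, "plot" in positive reviews versus "plot" in negative ones, which a single separate sentiment feature cannot express.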



No - you can't call it a good model. In the domain you are talking about, we are more interested in catching a true churner than catching a true non-churner. Now, from your data you can find - if you use 0.8 as the cutoff - what % of true churners you correctly predict (true positives) and what % of true non-churners you wrongly label as churners (false positives). The ROC curve tells you what your cutoff should be, and how many false positives you need to tolerate to get there.
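The trade-off being described can be computed directly: for each candidate cutoff, the true-positive rate (% of churners caught) and false-positive rate (% of non-churners wrongly flagged). The scores and labels below are made up for illustration.

```python
# For a given cutoff, compute TPR (% of true churners caught) and
# FPR (% of non-churners wrongly labeled churners). Sweeping the
# cutoff traces out the ROC curve.
scores = [0.9, 0.85, 0.7, 0.6, 0.4, 0.3]  # hypothetical churn probabilities
labels = [1,   1,    0,   1,   0,   0]     # 1 = true churner

def rates_at_cutoff(scores, labels, cutoff):
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg  # (TPR, FPR)

for cutoff in (0.8, 0.5, 0.2):
    tpr, fpr = rates_at_cutoff(scores, labels, cutoff)
    print(f"cutoff={cutoff}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Lowering the cutoff catches more true churners (higher TPR) at the cost of flagging more non-churners (higher FPR), which is exactly the tolerance decision the ROC curve lays out.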

Sentiment Analysis: Overview, Applications and Benefits


Mining such data to determine how people feel about your product, brand, or service is called Sentiment Analysis. When applied to social media channels, it can be used to identify spikes in sentiment, thereby allowing you to identify potential product advocates or social media influencers. Companies such as Microsoft, IBM and smaller emerging companies offer REST APIs that integrate easily with your existing software applications. For example, using a publicly available Sentiment Analysis REST API from a small start-up called Social Opinion, we pass in the text "this phone is awesome". In the response, we can see the text has been identified as expressing positive emotion, with a 64% probability of that being true.
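The request/response round trip described can be sketched as follows. The JSON field names and response shape here are hypothetical, not Social Opinion's actual API; only the input text and the positive/64% result come from the example above.

```python
# Sketch of calling a sentiment REST API: serialize the text as a JSON
# payload, then parse a JSON response of the shape described above.
# The field names ("sentiment", "probability") are assumptions.
import json

payload = json.dumps({"text": "this phone is awesome"})

# A canned response matching the example: positive, 64% probability.
response_body = '{"sentiment": "positive", "probability": 0.64}'
result = json.loads(response_body)

if result["sentiment"] == "positive" and result["probability"] > 0.5:
    print("Positive with", int(result["probability"] * 100), "% confidence")
```

In a real integration, the payload would be POSTed to the vendor's endpoint with an HTTP client and the response body parsed the same way.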

Text Mining Amazon Mobile Phone Reviews: Interesting Insights


In this study, we analysed more than 400 thousand reviews of unlocked mobile phones sold on Amazon.com to find insights with respect to reviews, ratings, price and their relationships. A plot of average review length against rating will help us find out whether products with detailed reviews attract better ratings. We segregated the reviews according to their ratings: positive reviews (4 or 5 stars) and negative reviews (1 or 2 stars). Amazon's product review platform shows that most of the reviewers have given 4-star and 3-star ratings to unlocked mobile phones.
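The segregation and review-length steps described above can be sketched on a toy sample. The reviews below are made up; the real study used the full Amazon dataset.

```python
# Segregate reviews into positive (4-5 star) and negative (1-2 star)
# buckets, and compute average review length (in words) per rating,
# as in the analysis described above.
from collections import defaultdict

reviews = [  # (rating, text) pairs, invented for illustration
    (5, "Great phone, fast and reliable with an excellent camera."),
    (4, "Solid device overall."),
    (3, "It is okay."),
    (2, "Battery drains quickly."),
    (1, "Stopped working after a week, very disappointed with support."),
]

positive = [r for r in reviews if r[0] >= 4]
negative = [r for r in reviews if r[0] <= 2]

lengths = defaultdict(list)
for rating, text in reviews:
    lengths[rating].append(len(text.split()))  # length in words

for rating in sorted(lengths):
    avg = sum(lengths[rating]) / len(lengths[rating])
    print(f"{rating}-star: avg length {avg:.1f} words")

print("positive:", len(positive), "negative:", len(negative))
```

Plotting the per-rating average lengths is then a one-liner in any charting library, and the positive/negative buckets feed the rest of the comparison.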