In the mood: the dynamics of collective sentiments on Twitter

Machine Learning

We study the relationship between the sentiment levels of Twitter users and the evolving network structure that the users created by @-mentioning each other. We use a large dataset of tweets to which we apply three sentiment scoring algorithms, including the open-source SentiStrength program. Specifically, we make three contributions. First, we find that people who have potentially the largest communication reach (according to a dynamic centrality measure) use sentiment differently from the average user: for example, they use positive sentiment more often and negative sentiment less often. Second, we find that when we follow structurally stable Twitter communities over a period of months, their sentiment levels are also stable, and sudden changes in community sentiment from one day to the next can in most cases be traced to external events affecting the community. Third, based on our findings, we create and calibrate a simple agent-based model that is capable of reproducing measures of emotive response comparable to those obtained from our empirical dataset.
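Agent-based models of sentiment on networks are often built from a simple interaction rule: when one user mentions another, the receiver's sentiment shifts toward the sender's, with some random variation. The sketch below is a generic toy model of that kind; the interaction rule, parameter names, and parameter values are illustrative assumptions, not the authors' calibrated model.

```python
import random

def simulate(n_agents=100, steps=1000, coupling=0.1, noise=0.05, seed=0):
    """Toy sentiment dynamics: random pairwise @-mention interactions.

    Each agent holds a sentiment level in [-1, 1]. At every step a random
    sender/receiver pair interacts; the receiver moves toward the sender's
    sentiment (strength `coupling`) plus a small random shock (`noise`).
    All parameters here are assumed for illustration.
    """
    rng = random.Random(seed)
    sentiment = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        a, b = rng.sample(range(n_agents), 2)  # agent a @-mentions agent b
        sentiment[b] += coupling * (sentiment[a] - sentiment[b])
        sentiment[b] += rng.uniform(-noise, noise)
        sentiment[b] = max(-1.0, min(1.0, sentiment[b]))  # clip to range
    return sentiment

levels = simulate()
print(sum(levels) / len(levels))  # community-level mean sentiment
```

A model of this shape can then be calibrated by comparing the simulated distribution of sentiment levels against the distribution measured in the empirical tweet data.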

Why automated sentiment analysis is broken and how to fix it


One of the most difficult challenges in public relations measurement, reporting and analytics is sentiment analysis. Machines attempt textual sentiment analysis all the time, and more often than not it goes horribly wrong. How does it go wrong? Machines are incapable of understanding context: they are typically programmed to look for certain keywords as proxies for sentiment.
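The keyword-proxy approach can be sketched in a few lines, and the sketch also shows why it breaks: it counts words without reading context. The word lists and scoring scheme below are illustrative assumptions, not the lexicon of any specific tool.

```python
# Assumed toy lexicons -- real tools use much larger, weighted word lists.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "sad"}

def keyword_sentiment(text: str) -> int:
    """Score = (# positive keywords) - (# negative keywords)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Because context is ignored, negation flips the meaning undetected:
print(keyword_sentiment("I love this product"))         # scores +1 (positive)
print(keyword_sentiment("I do not love this product"))  # also +1 -- wrong
```

The second tweet is negative to any human reader, but the keyword counter sees only "love" and scores it positive, which is exactly the context failure described above.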

Figuring out what Aussies think about Trump on Twitter is pretty difficult


Australians reacted more "positively" than "negatively" to the election of Donald Trump as the next president of the United States, according to a sentiment analysis of tweets posted at the time. Only tweets that were sent on November 10, 2016 (just after the result of the US election), included the word "Trump", and were sent from an Australian capital city were analysed. This resulted in 32,908 tweets, including retweets, being retrieved. For the purpose of this analysis we classified each tweet's sentiment as positive, negative or neutral. The figures (above) display the sentiment for each capital city and show that Sydney, Brisbane, Canberra and Hobart had more positive tweets about Trump.

SentiWorld: Understanding Emotions between Countries Based on Tweets

AAAI Conferences

In order to understand emotions between countries, we collected around 25 million tweets, analyzed them using statistical and network analysis methods, and visualized the analytic results as both a sentiment map and a sentiment network.

Comparing Overall and Targeted Sentiments in Social Media during Crises

AAAI Conferences

The tracking of citizens' reactions in social media during crises has attracted an increasing level of interest in the research community. In particular, sentiment analysis of social media posts can be regarded as a particularly useful tool, enabling civil protection and law enforcement agencies to respond more effectively during such situations. Prior work on sentiment analysis in social media during crises has applied well-known techniques for overall sentiment detection in posts. However, we argue that sentiment analysis of the overall post might not always be suitable, as it may miss the presence of more targeted sentiments, e.g. about the people and organizations involved (which we refer to as sentiment targets). Through a crowdsourcing study, we show that there are marked differences between the overall tweet sentiment and the sentiment expressed towards the subjects mentioned in tweets related to three crisis events.
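One simple way to see how overall and targeted sentiment can diverge is to score the whole tweet versus only the words near a target mention. The sketch below uses a toy lexicon and a fixed token window around the target; the word lists, window size, and example tweet are all illustrative assumptions, not the paper's crowdsourcing methodology.

```python
# Assumed toy lexicons for illustration only.
POSITIVE = {"praised", "heroic", "relief"}
NEGATIVE = {"chaos", "failed", "blame", "awful"}

def lexicon_score(words):
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def overall_sentiment(text):
    """Score every token in the post."""
    return lexicon_score(text.lower().split())

def targeted_sentiment(text, target, window=3):
    """Score only tokens within `window` positions of each target mention."""
    words = text.lower().split()
    score = 0
    for i, w in enumerate(words):
        if w == target.lower():
            score += lexicon_score(words[max(0, i - window):i + window + 1])
    return score

tweet = ("awful chaos and blame after response failed "
         "but redcross praised for heroic relief work")
print(overall_sentiment(tweet))                 # -1: post reads negative overall
print(targeted_sentiment(tweet, "redcross"))    # +1: sentiment toward the target
```

Here the tweet as a whole is dominated by negative crisis vocabulary, yet the sentiment expressed towards the mentioned organization is positive, which is the kind of divergence between overall and targeted sentiment the study measures.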