MIT's Automatic Data-Driven Media Bias Measurement Method Achieves Human-Level Results

#artificialintelligence

Today more than ever, people are voicing concerns regarding biases in news media. Especially in the political arena, there are accusations of favouritism or disfavour in reporting, often expressed by emphasizing or ignoring certain political actors, policies, events, or topics. Is it possible to develop objective and transparent data-driven methods to identify such biases, rather than relying on subjective human judgements? MIT researchers Samantha D'Alonzo and Max Tegmark say "yes," and have proposed an automated method for measuring media bias. The proposed data-driven approach produces results that closely match human-judgement classifications of left-right and establishment bias.
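
The core idea can be sketched in a few lines: outlets' relative use of politically charged phrases can place them on a bias axis without any human labels. The sketch below uses hypothetical outlet names, phrases, and frequencies purely for illustration; it is not the authors' actual data or pipeline.

```python
# Illustrative sketch (not the authors' pipeline): place outlets on a
# one-dimensional bias axis from phrase-usage statistics alone.
import numpy as np

# Hypothetical toy data: rows = outlets, columns = tracked phrases,
# entries = phrase frequency per million words.
outlets = ["Outlet A", "Outlet B", "Outlet C", "Outlet D"]
phrases = ["undocumented immigrants", "illegal aliens",
           "estate tax", "death tax"]
freq = np.array([
    [120.0,  5.0, 80.0,  3.0],
    [100.0, 15.0, 70.0, 10.0],
    [ 20.0, 90.0, 12.0, 60.0],
    [ 10.0, 95.0,  8.0, 75.0],
])

# Normalize each outlet's phrase mix, center each phrase across
# outlets, and take the leading singular direction: outlets separate
# along a data-driven axis with no "left"/"right" labels supplied.
X = freq / freq.sum(axis=1, keepdims=True)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
bias_scores = U[:, 0] * S[0]  # projection onto the first axis

for name, score in sorted(zip(outlets, bias_scores), key=lambda t: t[1]):
    print(f"{name}: {score:+.3f}")
```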


Media Landscape in Twitter: A World of New Conventions and Political Diversity

AAAI Conferences

We present a preliminary but groundbreaking study of the media landscape of Twitter. We use public data on who follows whom to uncover common behaviour in media consumption, the relationships between various classes of media, and the diversity of media content that social links may bring. Our analysis shows that there is a non-negligible amount of indirect media exposure, either through friends who follow particular media sources or via retweeted messages. We show that indirect media exposure expands the political diversity of the news users are exposed to by a surprising extent, increasing the range by 60-98%. These results are valuable because they have not been readily available to traditional media, and they can help predict how we will read news and how publishers will interact with us in the future.
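
The measurement behind that range expansion can be illustrated on a toy follow graph, as in the minimal sketch below. The account names, lean scores, and graph are hypothetical assumptions, not the paper's data; they serve only to show how friends' follows can widen the political range a user is exposed to.

```python
# Toy sketch (hypothetical accounts and lean scores): estimate how much
# friends' follows widen the political range of media a user sees.

# Assumed lean scores for media accounts on a [-1, 1] left-right axis.
media_lean = {"@left_daily": -0.8, "@center_wire": 0.0,
              "@biz_times": 0.3, "@right_post": 0.7}

# Follow graph: user -> set of accounts (media or friends) they follow.
follows = {
    "alice": {"@left_daily", "@center_wire", "bob", "carol"},
    "bob":   {"@biz_times"},
    "carol": {"@right_post"},
}

def lean_range(sources):
    """Spread between the most left- and right-leaning sources."""
    leans = [media_lean[s] for s in sources if s in media_lean]
    return max(leans) - min(leans) if leans else 0.0

user = "alice"
direct = {s for s in follows[user] if s in media_lean}
friends = {f for f in follows[user] if f in follows}
indirect = direct.union(*(follows[f] & media_lean.keys() for f in friends))

d, i = lean_range(direct), lean_range(indirect)
print(f"direct exposure range:   {d:.2f}")               # 0.80
print(f"indirect exposure range: {i:.2f}")               # 1.50
print(f"diversity increase:      {100 * (i - d) / d:.0f}%")  # 88%
```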


Alleviating Media Bias Through Intelligent Agent Blogging

arXiv.org Artificial Intelligence

Consumers of mass media must have a comprehensive, balanced and plural selection of news to get an unbiased perspective, but achieving this goal can be very challenging, laborious and time-consuming. The development of news stories over time, their (in)consistency, and the differing levels of coverage across media outlets are challenges a conscientious reader must overcome to alleviate bias. In this paper we present an intelligent agent framework currently facilitating analysis of the main sources of online news in El Salvador. We show how prior text analysis tools and Web 2.0 technologies can be combined with minimal manual intervention to help individuals in their rational decision process, while holding media outlets accountable for their work.
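
One signal such a framework could surface is how unevenly outlets cover the same story. The sketch below illustrates this with a hypothetical corpus and a keyword-based coverage count; it is an assumption made for illustration, not the paper's actual agent pipeline.

```python
# Minimal sketch (hypothetical data): compare how heavily different
# outlets cover the same story, one signal a reader could use to
# spot uneven coverage.
from collections import Counter

# Toy corpus: (outlet, headline) pairs a crawler might have collected.
articles = [
    ("Outlet A", "Legislature passes water reform bill"),
    ("Outlet A", "Water reform: what the new bill changes"),
    ("Outlet B", "Water reform bill passes after long debate"),
    ("Outlet C", "Football: national team announces roster"),
]

def coverage_by_outlet(articles, story_keywords):
    """Count, per outlet, articles mentioning all story keywords."""
    counts = Counter()
    for outlet, headline in articles:
        text = headline.lower()
        if all(kw in text for kw in story_keywords):
            counts[outlet] += 1
    return counts

print(coverage_by_outlet(articles, ["water", "reform"]))
# Counter({'Outlet A': 2, 'Outlet B': 1})  -> Outlet C ignored the story
```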


Are Google and Facebook really suppressing conservative politics?

The Guardian

In August, Paula Bolyard, a supervising editor at the conservative news outlet PJ Media, published a story reporting that 96% of Google search results for Donald Trump prioritized "left-leaning and anti-Trump media outlets". Bolyard's results were generated according to her own admittedly unscientific methodology. She searched for "Trump" in Google's News tab, and then used a highly questionable media chart that separated outlets into "left" and "right" to tabulate the results. She reported that 96 of 100 results returned were from so-called "left-leaning" news outlets, with 21 of those from CNN alone. Despite this dubious methodology, Bolyard's statistic spread, and her story was picked up by a Fox Business Network show.


DeSMOG: Detecting Stance in Media On Global Warming

arXiv.org Artificial Intelligence

Citing opinions is a powerful yet understudied strategy in argumentation. For example, an environmental activist might say, "Leading scientists agree that global warming is a serious concern," framing a clause which affirms their own stance ("that global warming is serious") as an opinion endorsed ("[scientists] agree") by a reputable source ("leading"). In contrast, a global warming denier might frame the same clause as the opinion of an untrustworthy source with a predicate connoting doubt: "Mistaken scientists claim [...]." Our work studies opinion-framing in the global warming (GW) debate, an increasingly partisan issue that has received little attention in NLP. We introduce DeSMOG, a dataset of stance-labeled GW sentences, and train a BERT classifier to study novel aspects of argumentation in how different sides of a debate represent their own and each other's opinions. From 56K news articles, we find that similar linguistic devices for self-affirming and opponent-doubting discourse are used across GW-accepting and GW-skeptical media, though GW-skeptical media shows more opponent-doubt. We also find that authors often characterize sources as hypocritical by ascribing opinions expressing the author's own view to source entities known to publicly endorse the opposing view. We release our stance dataset, model, and lexicons of framing devices for future work on opinion-framing and the automatic detection of GW stance.
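
As a concrete illustration of the modeling setup, here is a hedged sketch of a BERT sequence classifier for sentence-level GW stance using the HuggingFace transformers API. The label set and base checkpoint are assumptions; this is not the authors' released model or data.

```python
# Hedged sketch: a BERT sentence-stance classifier in the style the
# paper describes. The 3-way label set and checkpoint are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["agrees", "neutral", "disagrees"]  # assumed GW stance labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # classification head starts untrained: fine-tune on
   # stance-labeled sentences before using the predictions

sentence = "Leading scientists agree that global warming is a serious concern."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = LABELS[int(logits.argmax(dim=-1))]
print(f"predicted stance: {pred}")
```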