Facebook manually limited New York Post Hunter Biden story: Report

FOX News

Human moderators at Facebook made the decision to temporarily limit distribution of the New York Post's Hunter Biden story on the platform, Facebook confirmed to Fox News. Internal documents obtained by the Guardian show that Facebook's fact-checking system uses both artificial intelligence (AI) and human fact-checkers to flag articles that may be subject to fact checks. In some cases, articles from popular websites like the Post may be manually referred to fact-checkers "with or without temporary demotion." Facebook fact-checkers manually added the New York Post article to their queue and reduced distribution of the article for a short period while reviewing its contents, the Guardian reported, citing the documents. "We can do this on escalation and based on whether the content is eligible for fact-checking, related to an issue of importance, and has an external signal of falsity," the documents read, according to the outlet.
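The escalation criteria quoted from the documents amount to a simple gating decision plus an optional temporary demotion. The Python sketch below models that flow under stated assumptions: every name here (Article, EscalationQueue, escalate, the field names) is a hypothetical illustration of the described behavior, not Facebook's internal API, which is not public.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Article:
    url: str
    eligible_for_fact_check: bool  # e.g. not opinion or satire (assumed criterion)
    issue_of_importance: bool      # touches a significant public issue
    external_falsity_signal: bool  # e.g. user reports or partner flags


@dataclass
class EscalationQueue:
    pending: deque = field(default_factory=deque)
    demoted: set = field(default_factory=set)

    def escalate(self, article: Article, demote: bool = False) -> bool:
        """Manually refer an article to fact-checkers, mirroring the three
        criteria quoted from the documents. Demotion is optional, per the
        phrase 'with or without temporary demotion'."""
        if not (article.eligible_for_fact_check
                and article.issue_of_importance
                and article.external_falsity_signal):
            return False
        self.pending.append(article)
        if demote:
            # Reduced distribution while the review is in progress.
            self.demoted.add(article.url)
        return True

    def resolve(self, article: Article) -> None:
        """When the review concludes, lift any temporary demotion."""
        self.demoted.discard(article.url)


# Hypothetical usage: the story is limited while fact-checkers review it.
story = Article("https://example.com/story", True, True, True)
queue = EscalationQueue()
queue.escalate(story, demote=True)
```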


Why We're Still in the Dark About Facebook's Fight Against Fake News

Mother Jones

Just hours after the mass shooting at a church in Sutherland Springs, Texas, earlier this month, conspiracy theories connecting the perpetrator, Devin Kelley, to far-left antifa groups started spreading rapidly online. What began as speculation on Twitter became a commonly accepted theory after being shared by some of the most high-profile conspiracy theorists on the right, including Mike Cernovich and InfoWars' Alex Jones. The theory gained so much traction that tweets promoting it appeared at the top of Google searches, and typing in "Devin Kelley" led to auto-complete suggestions that included "antifa." The day of the shooting, the connection also became the basis for a story by YourNewsWire. Though the website resembles a legitimate news site, it has repeatedly pushed false reporting in the past, including the infamous Pizzagate conspiracy theory last December.


Facebook reveals sweeping new policy changes to tackle misinformation and 'problematic content'

Daily Mail - Science & tech

Facebook is doubling down on its efforts to prevent the spread of misinformation on its platform and some of its apps. In a nearly 2,000-word blog post, Facebook unveiled a slew of new policies that the company will put into place to clamp down on false news stories, images and videos. The plan, titled 'remove, reduce and inform,' addresses one of the major criticisms against Facebook concerning the continued presence of harassment, hate speech and false content on its site, and represents the company's toughest measures yet to tackle misinformation. Guy Rosen, Facebook's vice president of integrity, and Tessa Lyons, Facebook's head of News Feed integrity, broke down the policy changes in a lengthy post.


Snopes quits Facebook's factchecking program amid questions over its impact

The Guardian

Facebook's controversial factchecking program has lost one of its major US partners. The news website Snopes.com announced on Friday it was cutting ties with the social network. The departure of Snopes, which has collaborated with Facebook for two years to debunk misinformation on the platform, doesn't come as a surprise. Numerous journalists working for Facebook's factchecking initiative have said the partnership was failing to have an impact. Snopes, which was paid by Facebook, announced in a short post that it had been evaluating the ramifications and costs of providing third-party factchecking services.


Facebook finally cracks down on fake news

Mashable

Facebook is finally starting to make good on its promise to fight fake news with technology. The company introduced a series of updates on Thursday that are aimed at preventing the spread of hoaxes as well as false and misleading news. The changes are part of what Facebook says are still "early experiments" in its efforts to reduce the presence of fake news on the service, and for now they're rolling out only to a small percentage of Facebook users in the U.S. The experiments include: a new tool that lets users flag posts specifically as fake news, labels that show when a story has been marked as "disputed" by a third-party fact-checking organization, changes to the News Feed algorithm, and a crackdown on spammers who spread fake news with the intention of pocketing ad revenue. Together, these updates -- the first specific product changes Facebook has publicly revealed since the election threw fake news into the national spotlight -- are aimed at curtailing the "worst of the worst" offenders while "engaging both our community and third-party organizations," according to Facebook's vice president of product for News Feed, Adam Mosseri. On the reporting side, those who are part of Facebook's test will be able to choose "fake news" as a reason for reporting a link that's been shared in their News Feed or elsewhere on Facebook.
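Three of the experiments described here form a pipeline: users report a link, fact-checking partners may rate it "disputed," and disputed stories rank lower in the feed. The sketch below is a minimal toy model of that pipeline; the class name, the report threshold, and the 0.5 ranking penalty are all assumptions for illustration, since Facebook has never published these details.

```python
from collections import Counter

REPORT_THRESHOLD = 5  # hypothetical cutoff for referral to fact-checkers


class NewsFeedDemo:
    """Toy model of the reported flow: user flagging, third-party
    'disputed' labels, and News Feed down-ranking."""

    def __init__(self) -> None:
        self.reports: Counter = Counter()  # link -> "fake news" report count
        self.disputed: set[str] = set()    # links rated disputed by partners

    def report_fake_news(self, link: str) -> bool:
        """A user flags a shared link as fake news. Returns True when the
        link would be referred to a third-party fact-checking organization."""
        self.reports[link] += 1
        return self.reports[link] >= REPORT_THRESHOLD

    def mark_disputed(self, link: str) -> None:
        """A fact-checking partner rates the story as disputed."""
        self.disputed.add(link)

    def rank_score(self, link: str, base_score: float) -> float:
        """Disputed stories still circulate but rank lower in the feed;
        the 0.5 penalty is illustrative, not Facebook's actual weight."""
        return base_score * 0.5 if link in self.disputed else base_score
```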