If you're trying to post a news story on Facebook, the platform might just stop you from doing that – even if the link is to a reputable source like USA TODAY. Facebook users have taken to other social platforms to complain about Facebook flagging their posts. The issue seems to happen with most external links – not just those related to coronavirus coverage, as first reported – with users receiving a message that their posts go against community standards on spam. The issue is a "bug in an anti-spam system, unrelated to any changes in our content moderator workforce," Guy Rosen, Facebook's vice president of integrity, explained on Twitter. On Monday, Facebook posted an update to its corporate site on how the social platform is handling content moderation.
Hours after Twitter announced it would preemptively debunk false information about voting by mail and election results, the tech company quickly put a label on one of President Donald Trump's tweets. "Must have final total on November 3rd," Trump tweeted at 7:43 p.m. EDT Monday. Less than 30 minutes later, Twitter had put this label on the tweet: "Some or all of the content shared in this Tweet is disputed and might be misleading about how to participate in an election or another civic process." It also put a prompt with more information on the security of voting by mail. Twitter said Monday it will introduce prompts to U.S. users "that preemptively address topics that are likely to be the subject of election misinformation."
In the early hours of the Wednesday after Election Day, as President Donald Trump inaccurately claimed victory in several states and leveled charges that his opponents were "trying to steal the election," Twitter took the kind of action against disinformation that many had been urging for years. It labeled and obscured tweets, prevented retweets and likes, and stopped recommending false content. Facebook also applied labels to similar posts and shut down a "Stop the Steal" Facebook group organized around armed opposition to made-up voter fraud that had started accumulating new members at the unprecedented rate of 242 per minute. There has been plenty of misinformation before and after the election. Facebook posts falsely asserting that thousands of dead Pennsylvanians were voting reached up to 11.3 million people, and Spanish-language disinformation may have played a substantial role in Florida results. At the same time, the platforms did adopt and enforce election integrity procedures, showing they could at least sometimes put out disinformation fires before they blazed out of control.
Is President Trump the nation's chief disinformation officer? Controversial posts concerning COVID-19 on Monday in which the president tells the public "Don't let it dominate you" and "Don't be afraid of it" and claims he may have immunity to the deadly virus have heightened public criticism of Trump for spreading dangerous falsehoods. "There is no doubt that Donald Trump is the largest spreader of specific and important types of misinformation today," said Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab. In the critical last weeks of the election, social media companies are facing a tsunami of conspiracy theories, hoaxes and fake claims on everything from COVID-19 to voting. And whether during a presidential debate, in press briefings or in posts on Facebook and Twitter, much of that misinformation is being generated and amplified by Trump, two recent studies show.
A tiny fraction of Twitter users spread the vast majority of fake news in 2016, with conservatives and older people sharing misinformation more, a new study finds. Scientists examined more than 16,000 U.S. Twitter accounts and found that 16 of them – less than one-tenth of 1 percent – tweeted out nearly 80 percent of the misinformation masquerading as news, according to a study published Thursday in the journal Science. About 99 percent of the Twitter users spread virtually no fake information in the most heated part of the election year, said study co-author David Lazer, a Northeastern University political and computer science professor. Spreading fake information "is taking place in a very seamy, but small, corner of Twitter," Lazer said.