Facebook to reexamine how livestream videos are flagged after Christchurch shooting

Washington Post - Technology News

The first user report alerting Facebook to the grisly video of the New Zealand terrorist attack came 29 minutes after the broadcast began and 12 minutes after it ended. Had it been flagged while the feed was live, Facebook said Thursday, the social network might have moved faster to remove it. Facebook now says it will reexamine how it reacts to live and recently aired videos. To alert first responders to an emergency as fast as possible, the company says it prioritizes user reports of a live stream for "accelerated review." "We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground," the company said in an update on its response to the Christchurch attack.

Facebook pledges to improve AI detection of terrorist videos in wake of New Zealand mosque shooting

Daily Mail - Science & tech

Facebook says it considers the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.

Social Media Giants Have Been Promising to Stop Livestreamed Violence For Years. They Still Can't.

Mother Jones

By Saturday night, Facebook said it had removed 1.5 million videos depicting the deadly mass shooting in New Zealand that had taken place roughly 24 hours earlier. The videos were copies of an original livestream of the killings that the shooter broadcast via the site, which the company removed about 20 minutes after it was first posted. "Our hearts go out to the victims, their families and the community affected by the horrific terrorist attacks in Christchurch," said executive Chris Sonderby in a post on Facebook's public relations site. "We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people." The livestream of the shootings, which resulted in the deaths of 50 people gathered at Christchurch mosques, and its wide copying brought unprecedented attention to tech giants' abilities to grapple with violent content, especially in real time.

Facebook to BAN users from livestreaming for 30 days if they break its rules

Daily Mail - Science & tech

Facebook says it will now ban users from using its 'Live' function for 30 days if they breach rules laid out by the firm, as it cracks down on violent content. The move is part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre. The social network says it is introducing a 'one strike' policy for those who violate its most serious rules. Facebook's announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence. Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.

'A Game of Whack-a-Mole.' Why Facebook and Others Are Struggling to Delete Footage of the New Zealand Shooting

TIME - Tech

In an apparent effort to ensure their heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus. The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms.