Facebook claims it will now ban users from using its 'Live' function for 30 days if they breach rules laid out by the firm as it cracks down on violent content. It comes as part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre. The social network says it is introducing a 'one strike' policy for those who violate its most serious rules. Facebook's announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence. Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.
In an apparent effort to ensure their heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus. The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms.
Facebook has released more details of its response to the Christchurch terrorist attack, saying it did not deal with the attacker's live stream as quickly as it could have because it was not reported as a video of suicide. The company said streams that were flagged by users while live were prioritised for accelerated review, as were any recently live streams that were reported for suicide content. It said it received the first user report about the Christchurch stream 12 minutes after it ended, and because it was reported for reasons other than suicide it was handled "according to different procedures". Guy Rosen, Facebook's head of integrity, wrote in a blogpost: "We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review." Rosen said training AI to recognise such videos would require "many thousands of examples of content … something which is difficult as these events are thankfully rare".
By Saturday night, Facebook said it had removed 1.5 million videos depicting the deadly mass shooting in New Zealand that had taken place roughly 24 hours earlier. The videos were copies of an original livestream of the killings that the shooter broadcast via the site, which was removed by the company about 20 minutes after it was first posted. "Our hearts go out to the victims, their families and the community affected by the horrific terrorist attacks in Christchurch," said executive Chris Sonderby in a post on Facebook's public relations site. "We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people." The livestream of the shootings, which resulted in the deaths of 50 people gathered at Christchurch mosques, and its wide copying brought unprecedented attention to tech giants' abilities to grapple with violent content, especially in real time.