Facebook says it will now ban users from its 'Live' function for 30 days if they breach rules laid out by the firm, as it cracks down on violent content. It comes as part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre. The social network says it is introducing a 'one strike' policy for those who violate its most serious rules. Facebook's announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence. Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.
Facebook has released more details of its response to the Christchurch terrorist attack, saying it did not deal with the attacker's live stream as quickly as it could have because it was not reported as a video of suicide. The company said streams that were flagged by users while live were prioritised for accelerated review, as were any recently live streams that were reported for suicide content. It said it received the first user report about the Christchurch stream 12 minutes after it ended, and because it was reported for reasons other than suicide it was handled "according to different procedures". Guy Rosen, Facebook's head of integrity, wrote in a blogpost: "We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review." Rosen said training AI to recognise such videos would require "many thousands of examples of content … something which is difficult as these events are thankfully rare".
In an apparent effort to ensure his heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as copies continued to spread like a virus. The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms.
SAN FRANCISCO/BANGALORE, INDIA - The Friday massacre at two New Zealand mosques, live-streamed to the world, was not the first internet broadcast of a violent crime, but it showed that stopping gory footage from spreading online persists as a major challenge for tech companies despite years of investment. The massacre in Christchurch was live-streamed by an attacker through his Facebook profile for 17 minutes, according to a copy seen by Reuters. Facebook said it removed the stream after being alerted to it by New Zealand police. But a few hours later, footage from the stream remained on Facebook, Twitter and Alphabet Inc.'s YouTube, as well as Facebook-owned Instagram and WhatsApp. It also remained available on file-sharing websites such as New Zealand-based Mega.nz.