In an apparent effort to ensure his heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as copies continued to spread like a virus. The original Facebook Live broadcast was eventually taken down, but not before the 17-minute video had been viewed, replayed and downloaded by users. Copies quickly proliferated to other platforms, including YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms took some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms.
Facebook has released more details of its response to the Christchurch terrorist attack, saying it did not deal with the attacker's live stream as quickly as it could have because it was not reported as a video of suicide. The company said streams that were flagged by users while live were prioritised for accelerated review, as were any recently live streams that were reported for suicide content. It said it received the first user report about the Christchurch stream 12 minutes after it ended, and because it was reported for reasons other than suicide it was handled "according to different procedures". Guy Rosen, Facebook's head of integrity, wrote in a blogpost: "We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review." Rosen said training AI to recognise such videos would require "many thousands of examples of content … something which is difficult as these events are thankfully rare".
By Saturday night, Facebook said it had removed 1.5 million videos depicting the deadly mass shooting in New Zealand that had taken place roughly 24 hours earlier. The videos were copies of the original livestream of the killings, which the shooter broadcast via the site and which the company removed about 20 minutes after it was first posted. "Our hearts go out to the victims, their families and the community affected by the horrific terrorist attacks in Christchurch," executive Chris Sonderby said in a post on Facebook's public relations site. "We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people." The livestream of the shootings, which resulted in the deaths of 50 people gathered at Christchurch mosques, and its widespread copying brought unprecedented attention to tech giants' ability to grapple with violent content, especially in real time.
The first user report alerting Facebook to the grisly video of the New Zealand terrorist attack came in 29 minutes after the broadcast began and 12 minutes after it ended. Had the video been flagged while the feed was live, Facebook said Thursday, the social network might have moved faster to remove it. Facebook now says it will reexamine how it reacts to live and recently aired videos. To alert first responders to an emergency as quickly as possible, the company says it prioritizes user reports about a live stream for "accelerated review." "We do this because when a video is still live, if there is real-world harm we have a better chance to alert first responders and try to get help on the ground," the company said in an update on its response to the Christchurch attack.