In a clear act of terror, multiple people died after gunmen opened fire at mosques in Christchurch, New Zealand on Friday. New Zealand Police said four suspects are in custody and that the situation is ongoing. The city is in lockdown, and the incident has already been described by Prime Minister Jacinda Ardern as "one of New Zealand's darkest days." It has also been reported that one of the attacks was live-streamed on YouTube and Facebook, although the original videos have since been taken down. Clips of the disturbing attack continue to be shared online and, inexplicably, broadcast by news outlets.
SAN FRANCISCO/BANGALORE, INDIA - The Friday massacre at two New Zealand mosques, live-streamed to the world, was not the first internet broadcast of a violent crime, but it showed that stopping gory footage from spreading online persists as a major challenge for tech companies despite years of investment. The massacre in Christchurch was live-streamed by an attacker through his Facebook profile for 17 minutes, according to a copy seen by Reuters. Facebook said it removed the stream after being alerted to it by New Zealand police. But a few hours later, footage from the stream remained on Facebook, Twitter and Alphabet Inc.'s YouTube, as well as Facebook-owned Instagram and WhatsApp. It also remained available on file-sharing websites such as New Zealand-based Mega.nz.
At least 49 people were murdered Friday at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the social media era. The shooter apparently seeded warnings on Twitter and 8chan before livestreaming the rampage on Facebook for 17 gut-wrenching minutes. Almost immediately, people copied and reposted versions of the video across the internet, including on Reddit, Twitter, and YouTube. News organizations also aired some of the footage as they reported on the destruction. By the time Silicon Valley executives woke up Friday morning, tech giants' algorithms and international content-moderation armies were already scrambling to contain the damage, and not very successfully.
Facebook says it will now ban users from its 'Live' function for 30 days if they breach rules laid out by the firm, as it cracks down on violent content. The move is part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre. The social network says it is introducing a 'one strike' policy for those who violate its most serious rules. Facebook's announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence. Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.
By Saturday night, Facebook said it had removed 1.5 million videos depicting the deadly mass shooting in New Zealand that had taken place roughly 24 hours earlier. The videos were copies of an original livestream of the killings that the shooter broadcast via the site, which the company removed about 20 minutes after it was first uploaded. "Our hearts go out to the victims, their families and the community affected by the horrific terrorist attacks in Christchurch," said executive Chris Sonderby in a post on Facebook's public relations site. "We continue to work around the clock to prevent this content from appearing on our site, using a combination of technology and people." The livestream of the shootings, which killed 50 people gathered at Christchurch mosques, and its widespread copying brought unprecedented scrutiny to tech giants' ability to grapple with violent content, especially in real time.