Facebook to BAN users from livestreaming for 30 days if they break its rules

Daily Mail - Science & tech

Facebook claims it will now ban users from using its 'Live' function for 30 days if they breach rules laid out by the firm as it cracks down on violent content. It comes as part of a widespread attempt to eradicate hate crimes and violence from the web across all outlets following the devastating Christchurch massacre. The social network says it is introducing a 'one strike' policy for those who violate its most serious rules. Facebook's announcement comes as tech giants and world leaders meet in Paris to discuss plans to eliminate online violence. Representatives of Google, Facebook and Twitter were present at the meeting, hosted by French president Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.


Facebook pledges to improve AI detection of terrorist videos in wake of New Zealand mosque shooting

Daily Mail - Science & tech

Facebook says it considers the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.


Facebook teams up with police to stop streaming of terror attacks

The Guardian

Facebook is working with the Metropolitan police to improve the social network's ability to detect live streaming of terrorism and potentially alert officers about an attack sooner. The tech company will provide officers at the Met's firearms training centres with body cameras, in an effort to help its artificial intelligence more accurately and rapidly identify videos of real-life first-person shooter incidents. Facebook came under fire for the spread of a live-streamed video showing the New Zealand mosque shootings in March, which left 51 dead. The video was viewed fewer than 200 times during its live broadcast and was watched about 4,000 times in total before being removed. Facebook relies on AI to spot violating content and remove it as quickly as possible, but in the case of the Christchurch terrorist attack, it says it simply did not have enough first-person footage of violent events for the system to match it up against.


'A Game of Whack-a-Mole.' Why Facebook and Others Are Struggling to Delete Footage of the New Zealand Shooting

TIME - Tech

In an apparent effort to ensure their heinous actions would "go viral," a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus. The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies' Sisyphean struggle to police violent content posted on their platforms.


Facebook reviews live stream policy after Christchurch attack

The Guardian

Facebook has released more details of its response to the Christchurch terrorist attack, saying it did not deal with the attacker's live stream as quickly as it could have because it was not reported as a video of suicide. The company said streams that were flagged by users while live were prioritised for accelerated review, as were any recently live streams that were reported for suicide content. It said it received the first user report about the Christchurch stream 12 minutes after it ended, and because it was reported for reasons other than suicide it was handled "according to different procedures". Guy Rosen, Facebook's head of integrity, wrote in a blogpost: "We are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review." Rosen said training AI to recognise such videos would require "many thousands of examples of content … something which is difficult as these events are thankfully rare".