Facebook vows to improve AI detection of terrorist videos

Engadget 

Facebook rushed to pull down the New Zealand mass shooter's video from its platform, but it didn't start doing so until after the live broadcast had ended. In a new post, Facebook VP of Integrity Guy Rosen discussed the company's successes and shortcomings in addressing the situation, as well as its plans to prevent videos like it from spreading on the social network in the future. He explained that while the platform's AI can quickly detect videos containing suicidal or harmful acts, the shooter's stream didn't trigger it. To train the matching AI to detect that specific type of content, the platform needs large volumes of training data. As Rosen explains, such data is hard to come by because "these events are thankfully rare."
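
The training-data problem Rosen describes is, at its core, a class-imbalance problem: when almost all of the examples a model sees are benign and only a handful show the behavior it's meant to catch, it tends to learn to predict "benign" for everything. The sketch below is a hypothetical illustration of that effect (it is not Facebook's system); it uses a synthetic dataset and an off-the-shelf scikit-learn classifier just to show how recall on the rare class collapses when positive examples are scarce.

```python
# Minimal sketch (not Facebook's system) of why rare events are hard to detect:
# with only a handful of positive training examples, a classifier learns
# almost nothing about the positive class. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Simulated feature vectors: 10,000 "benign" videos, only 5 "violating" ones.
X_benign = rng.normal(loc=0.0, scale=1.0, size=(10_000, 16))
X_violating = rng.normal(loc=0.5, scale=1.0, size=(5, 16))

X = np.vstack([X_benign, X_violating])
y = np.array([0] * len(X_benign) + [1] * len(X_violating))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Evaluate on fresh samples drawn from the same distributions.
X_test = np.vstack([
    rng.normal(0.0, 1.0, size=(1_000, 16)),
    rng.normal(0.5, 1.0, size=(100, 16)),
])
y_test = np.array([0] * 1_000 + [1] * 100)

# Recall on the rare class is typically near zero: the model has seen too few
# positives to learn a useful decision boundary for them.
print("recall on rare class:", recall_score(y_test, model.predict(X_test)))
```

In this toy setup the model almost always predicts "benign," which is loosely analogous to why a live stream of a rare kind of violence can slip past an automated detector trained mostly on other content.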