A police officer directs pedestrians near the site of one of the mass shootings at two mosques in Christchurch, New Zealand on Saturday. Nearly a year ago, Mark Zuckerberg testified before Congress that Facebook does not allow hate groups on its platform. "If there's a group that--their primary purpose or--or a large part of what they do is spreading hate, we will ban them from the platform," he told the House Energy and Commerce Committee on April 11, 2018. Across the country in San Francisco, Madihha Ahussain was watching from her office at Muslim Advocates, a civil rights group that seeks to protect Muslim Americans from discrimination, bigotry, and violence. She found the assertion shocking.
Facebook's policy reversal marks a major step toward reckoning with the vast amount of white nationalist content that continues to fester on social media services. With a growing number of populist movements taking hold around the globe, technology companies have been reluctant to ban white nationalist content, wary of charges of censorship. White nationalism hurtled back into the spotlight after a gunman opened fire at two mosques in Christchurch, New Zealand, killing 50 people. In a 74-page manifesto, he described himself as an "ordinary white man" whose goal was to "crush immigration and deport those invaders already living on our soil" and "ensure the existence of our people, and a future for white children."
House Democrats plan to grill Facebook and Google next week on their efforts to stop the spread of white nationalism and hate speech online, a hearing that comes in response to a series of violent, racially motivated attacks around the world, including a mass shooting in New Zealand last month. The scheduled April 9 hearing by the House Judiciary Committee seeks to probe "the impact white nationalist groups have on American communities and the spread of white identity ideology," the panel announced Wednesday, along with "what social media companies can do" to stop the spread of extremist content on the web. Facebook, Google and other tech giants have long faced criticism from Congress for failing to crack down on a wide array of abusive posts, photos and videos that attack people on the basis of race, gender or other traits. But their heightened attention to the issue and investments in more content reviewers, along with more potent artificial intelligence tools, still haven't thwarted the proliferation of troubling content. The shooting at two mosques in the New Zealand city of Christchurch brought this into sharp relief, after videos of the attack targeting Muslims spread rapidly on social media sites.
Facebook has allowed the far-right group Britain First to set up new pages and pay for adverts, despite vowing to crack down on extremists. Days after the social media giant was used to livestream the New Zealand terror attack, The Independent can reveal that Britain First leader Paul Golding set up two new pages on the platform. One was functioning as Britain First's official page and had more than 7,300 followers, with Golding posting pictures from a "Britain First defenders" training day and telling people to "pray for churches" in response to the Christchurch mosque shooting.
After every mass murder, journalists, researchers, and horrified members of the public turn to the internet as they struggle to understand why the perpetrator would take so many lives. Often, those searches paint a picture of a disturbed individual who has been radicalized in dark, online rabbit holes. But on Friday, the suspected gunman behind the Christchurch, New Zealand, mosque shootings appeared to take the process of internet radicalization to a disturbing new level--turning the massacre itself into another dark internet rabbit hole, designed to draw the attention of like-minded people around the world while attracting new allies to his cause. "This definitely is a real-life shitpost," said Joel Finkelstein, a researcher specializing in the digital spread of extremist content at the Anti-Defamation League and the Network Contagion Research Institute. Shitposting is an internet term for pumping out low-quality and often ironic online content to get a reaction from other people.