From the Web to Real Life: The Growing Threat of Online-Bred Right-Wing Extremism

Der Spiegel International

At around 1:30 p.m. on a recent Friday afternoon, some people on 8chan, an online message board, watched a mass murder unfold. Brenton Tarrant had just announced he would carry out a deadly attack and stream it live on Facebook. The first fans quickly voiced their support. "Good luck," one user wrote; another: "Sounds fun." A third person wrote that it was the "best start to a weekend ever." When Tarrant's head-mounted camera showed him murdering the first person at the entrance to the Al Noor Mosque in Christchurch, New Zealand -- someone who had just greeted him kindly -- a fourth person wrote, "Holy fuck nice shootin." Around 200 Facebook users watched through their smartphones, tablets or computers as the 28-year-old got out of his car, opened his trunk where he kept his weapons, and began killing 50 people in and around two mosques. His victims included children, like the 3-year-old Mucad Ibrahim; students, like the 14-year-old Sayyad Milne; men, like the father Khaled Mustafa, and women, like Husne Ara Parvin, who was gunned down while trying to protect her wheelchair-bound husband. A mass killing of Muslims, documented in real time, filmed in the style of a first-person-shooter video game and cheered on like a football match. "This is how we win," a fifth person wrote. It's hard to imagine a greater contempt for humanity. None of the 200 users flagged the video to Facebook, and thousands of people have watched the livestream after the fact. The social network, whose CEO, Mark Zuckerberg, likes to brag about the tens of thousands of moderators on its payroll who constantly monitor content, didn't notice anything at first. Facebook didn't receive the first notice until 12 minutes after the livestream ended.


New Zealand mosque shooting: Are social media companies unwitting accomplices?

USATODAY - Tech Top Stories

Tough questions are being asked about the role of social media in the wake of the horrific shooting that took the lives of at least 49 people at two New Zealand mosques. The 28-year-old alleged white supremacist gunman not only livestreamed the rampage via helmet-cam on Facebook and Twitter, but footage of the massacre continued to circulate for hours after the shooting, despite frantic efforts by Facebook, YouTube, Twitter and Reddit to take it down as quickly as possible. Each company issued the requisite statements condemning the terror, and each has a code of conduct that is sometimes violated.


How the Suspected New Zealand Gunman Weaponized the Internet

Mother Jones

After every mass murder, journalists, researchers, and horrified members of the public turn to the internet as they struggle to understand why the perpetrator would take so many lives. Often, those searches paint a picture of a disturbed individual who has been radicalized in dark, online rabbit holes. But on Friday, the suspected gunman behind the Christchurch, New Zealand, mosque shootings appeared to take the process of internet radicalization to a disturbing new level -- turning the massacre itself into another dark internet rabbit hole, designed to draw the attention of like-minded people around the world while attracting new allies to his cause. "This definitely is a real-life shitpost," said Joel Finklestein, a researcher specializing in the digital spread of extremist content at the Anti-Defamation League and the Network Contagion Research Institute. Shitposting is an internet term for pumping out low-quality and often ironic online content to get a reaction from other people.


Australian bill could imprison social network execs over violent content

Engadget

Australia may take a stricter approach to violent online material than Europe in light of the mass shooting in Christchurch, New Zealand. The government is introducing legislation that would punish social networks that don't "expeditiously" remove "abhorrent" violent content produced by perpetrators, such as terrorism, kidnapping and rape. If found guilty, a company could not only face fines of up to 10 percent of its annual turnover, but also see its executives imprisoned for up to three years. The country's Safety Commissioner would have the power to issue formal notices, giving companies a deadline to remove offending material. Platform hosts would also have to notify Australia if they discover their service is streaming violent content taking place within the country.