Threats of a Replication Crisis in Empirical Computer Science

Communications of the ACM

Andy Cockburn is a professor at the University of Canterbury, Christchurch, New Zealand, where he is head of the HCI and Multimedia Lab. Pierre Dragicevic is a research scientist at Inria, Orsay, France.

New Zealand bans 'abhorrent' video game seemingly based on Christchurch mass shooting

FOX News

New Zealand has banned an "abhorrent" video game that the country's chief censor said glorifies the mass shooting at two mosques in Christchurch that killed 51 worshipers last March, according to a report. Chief Censor David Shanks said in a statement that the creators of the game set out to "produce and sell a game designed to place the player in the role of a white supremacist terrorist killer." He classified the game as objectionable, adding that in the game "anyone who isn't a white heterosexual male is a target for simply existing," Reuters reported.

The Pope says AI could lead humanity to "barbarism"


At a conference at the Vatican last week, Pope Francis warned a group of Silicon Valley execs that in the wrong hands, artificial intelligence could have devastating consequences for humanity. "If mankind's so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest," he said, according to Reuters. The development of advanced AI can "raise increasingly significant implications in all areas of human activity," the Pope said. He also called for "open and concrete discussions" to develop "both theoretical and practical moral principles." The conference also grappled with the March 2019 attacks in Christchurch, New Zealand, and how social media platforms helped spread footage taken during the shootings, according to TIME.

Facebook to crack down on extremism by training AI with police videos

FOX News

Facebook will work with law enforcement agencies to train its artificial intelligence systems to detect videos of violent events as part of its ongoing battle against extremism on the platform. The new effort, announced in a Tuesday blog post, will harness body-cam footage of firearms training provided by U.S. and U.K. government and law enforcement agencies to train systems to automatically detect first-person violent events -- without also sweeping up violence from movies or video games. The tech giant came under fire earlier this year when its AI systems were unable to detect a live-streamed video of a mass shooting at a mosque in Christchurch, New Zealand. The company eventually imposed some new restrictions on live-streaming.

New technologies, artificial intelligence aid fight against global terrorism


But it also provides "live video broadcasting of brutal killings", he continued, citing the recent attack in the New Zealand city of Christchurch, where dozens of Muslim worshippers were killed by a self-avowed white supremacist. "This is done in order to spread fear and split society", maintained the UNOCT chief, warning of more serious developments, such as attempts by terrorists to create home-made biological weapons. He pointed out that terrorists have the capacity to use drones to deliver chemical, biological or radiological materials, which, Mr. Voronkov said, "are even hard to imagine." But the international community is "not sitting idly by", he stressed, noting that developments in this area allow the processing and identification of key information, which can counter terrorist operations with lightning speed. "The Internet content of terrorists is detected and deleted faster than ever", elaborated the UNOCT chief.

Can Artificial Intelligence Predict The Spread Of Online Hate Speech?


The rise in online hate speech and the way it is reflected in the offline world is a hot topic in politics right now. The internet has given everyone a voice, which clearly has positive implications for the way citizens can publicly challenge authority and debate issues, but it has also made hate speech easier to spread. It's fairly commonly assumed that online hate speech, particularly when encountered alongside other factors such as social deprivation or mental illness, has the potential to radicalize individuals in dangerous ways and inspire them to commit illegal and violent acts. Just as terrorist organizations like ISIS can be seen using hate speech in videos and propaganda material intended to incite violence, racist and anti-Islamic material is thought to have inspired killers like Anders Breivik, who killed 69 youths in a 2011 shooting spree, and the perpetrator of the 2019 Christchurch mosque shooting in which 51 died. So far these links between online and real-world actions, though common sense tells us they are likely to exist, have been difficult to prove scientifically.

Can artificial intelligence algorithms help regulate extreme speech?


Following the attacks in Christchurch, New Zealand, in March, social media companies have once again come under growing pressure to "do …

AI video screening still a long way off, says Facebook executive


Facebook Inc.'s chief artificial intelligence scientist said the company is years away from being able to use software to automatically screen live video for extreme violence. Yann LeCun's comments follow the March livestream of the Christchurch mosque shootings in New Zealand. "This problem is very far from being solved," LeCun said Friday during a talk at Facebook's AI Research Lab in Paris. Facebook was criticised for allowing the Christchurch attacker to broadcast the shootings live without adequate oversight that could have resulted in quicker take-downs of the video. It also struggled to prevent other users from re-posting the attacker's footage.

AI-Powered Gun Detection Is Coming to Mosques Worldwide Following Christchurch Shootings


In March, a gunman walked into two mosques in Christchurch, New Zealand, opened fire, and killed dozens of worshippers. According to a police official, the suspected gunman was arrested 36 minutes after police were called to the scene. Now, a tech company believes its smart security cameras can prevent attacks like the tragedy in Christchurch, and says it plans to install its AI-powered systems in mosques around the world. Athena Security, the tech company behind the security system, and Al-Ameri International Trading announced the Keep Mosques Safe initiative last week. Al-Ameri International Trading, along with several Islamic non-profit groups, will fund the Keep Mosques Safe effort.

Future of artificial intelligence becomes key topic at World Economic Forum » Uncensored Publications


I have a lovely partner and 3 very active youngsters. We live in the earthquake ravaged Eastern Suburbs of Christchurch, New Zealand. I began commenting/posting on Uncensored back in early 2012 looking for discussion and answers on the cause and agendas relating to our quakes. I have always maintained an interest in ancient mysteries, UFOs, hidden agendas, geoengineering and secret societies and keep a close eye on current world events. Since 2013 I have been an active member of