Major media companies and Facebook are scrambling to come to grips with a landmark ruling by an Australian judge that found publishers are legally responsible for pre-moderating comments on the social media site. On Monday in the New South Wales supreme court, judge Stephen Rothman found that commercial entities, including media companies, could be regarded as the publishers of comments made on Facebook, and as such had a responsibility to ensure defamatory remarks were not posted in the first place.

The judgment has potentially profound impacts on the way news organisations in Australia interact with the social media giant, and prompted an immediate backlash from the country's largest media companies.

"This ruling shows how far out of step Australia's defamation laws are with other English-speaking democracies and highlights the urgent need for change," News Corp Australia said in a statement following the ruling. "It defies belief that media organisations are held responsible for comments made by other people on social media pages. It is ridiculous that the media company is held responsible while Facebook, which gives us no ability to turn off comments on its platform, bears no responsibility at all."

The ruling was made in a pre-trial hearing over a defamation case brought by 22-year-old Dylan Voller against a number of media outlets over comments made by readers on Facebook. Voller, whose treatment as a detainee inside the Don Dale youth detention centre in the Northern Territory triggered a royal commission in 2016, is suing the Australian, the Sydney Morning Herald and the Centralian Advocate newspapers, as well as Sky News Australia's the Bolt Report. The action relates not to the articles themselves, but to comments made about Voller by members of the public on 10 Facebook posts published on the companies' public Facebook pages in 2016 and 2017, which he alleges carried false and defamatory imputations.
News organisations in Australia were already liable for Facebook comments made on articles posted on their public pages, but until now the test related to whether a publisher had been negligent in not removing potentially defamatory comments. However, in a pre-trial ruling on Monday, Rothman found media companies in effect had a responsibility to pre-moderate them.

"Up until yesterday the general thread [was] if you knew or ought to have known a defamatory post was there, you had to take it down," Paul Gordon, a social media lawyer at Wallmans lawyers in Adelaide, told Guardian Australia. "What the judge yesterday found was a bit different, because it wasn't alleged by Voller that the media companies had been negligent in failing to take down the comments."
Netflix customers have been warned not to fall for a sophisticated new scam targeting subscribers to the video streaming service. New South Wales Police alerted social media users on Wednesday, posting a screengrab of a fake email used in the scam. The email is sophisticated and well-designed, and aims to fool Netflix customers into handing over their credit card details. The image posted by the police highlights the fake email address used by the scammers, which does not resemble an official Netflix email. The text of the email reads: 'We attempted to authorize the Amex you have on file but were unable to do so.'
The International Joint Conference on Artificial Intelligence (IJCAI for short) is the most established, important and leading scientific event in Artificial Intelligence. Established in 1969 as the first ever international conference on Artificial Intelligence (AI), it is an extension of the seminal Dartmouth workshop of 1956, where the field of AI began (for the interested, read the inspirational first papers on AI). Practically, much of the leading AI science and technology has been presented at previous IJCAI conferences. Before we talk more about the upcoming IJCAI-ECAI-18 event in Stockholm, Sweden (July 13-19, 2018), let us share with you our first-hand experience of IJCAI 2017 in Melbourne, Australia. IJCAI 2017 brought together the brightest researchers and technologists from around the world.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. On Monday morning, I became one of the trial users of a potential new Facebook feature, one of its recent attempts at fighting the fake news and unconstructive debate rife on its platform: upvotes and downvotes. After an initial downvote trial among select Facebook users in February, the Reddit-style, crowdsourced comment ranking system is being further tested in Australia and New Zealand (I just returned from the former, and Facebook seems to think I'm still there), this time with upvotes, too. Little gray arrows--one up, one down--have appeared beneath comments on posts from select public pages, asking for my input. "Press the down arrow if a comment has bad intentions or is disrespectful."
After every mass murder, journalists, researchers, and horrified members of the public turn to the internet as they struggle to understand why the perpetrator would take so many lives. Often, those searches paint a picture of a disturbed individual who has been radicalized in dark, online rabbit holes. But on Friday, the suspected gunman behind the Christchurch, New Zealand, mosque shootings appeared to take the process of internet radicalization to a disturbing new level, turning the massacre itself into another dark internet rabbit hole designed to draw the attention of like-minded people around the world while attracting new allies to his cause. "This definitely is a real-life shitpost," said Joel Finklestein, a researcher specializing in the digital spread of extremist content at the Anti-Defamation League and the Network Contagion Research Institute. Shitposting is an internet term for pumping out low-quality and often ironic online content to get a reaction from other people.