Civil Rights & Constitutional Law


Hugh Hefner's death sparked Twitter praise and criticism from celebrities

Mashable

The announcement of Hugh Hefner's death on Wednesday sparked a wide range of emotions, both from those who knew the controversial figure personally and from those who just read Playboy for the articles. Folks like Kim Kardashian, Larry King, Gene Simmons, Diddy and more shared throwback images of the man and their condolences: "Rest in peace to my man Hugh Hefner!!" Others were less charitable: "The amount of people treating a porn mogul as some kind of civil rights leader who 'empowered women' online rn is gonna make me barf" and "Hugh Hefner is rightly remembered for rebelling against right wing moralism before most people, but please don't forget he treated women like garbage to do it."


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects." As for the "hot" filter backlash, Goncharov blamed the model's training data: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."
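The "training set bias" Goncharov describes can be made concrete with a toy model. The sketch below is purely illustrative and is not FaceApp's actual algorithm; it shows how a filter that learns an "ideal" value from skewed training examples will systematically push inputs toward the majority of that training set.

```python
# Illustrative sketch (not FaceApp's actual model): a toy "beautify" filter
# that learns a target skin tone as the mean of its training examples.
# Values are stand-in brightness levels on a 0-255 scale.

def learn_target_tone(training_tones):
    """Learn the 'ideal' tone as the average of the training set."""
    return sum(training_tones) / len(training_tones)

def apply_filter(tone, target, strength=0.5):
    """Nudge an input tone partway toward the learned target."""
    return tone + strength * (target - tone)

# A biased training set dominated by lighter tones...
biased_set = [210, 200, 220, 205, 90]        # mostly light examples
target = learn_target_tone(biased_set)       # 185, skewed light

# ...systematically lightens darker input faces.
print(apply_filter(100, target))  # 100 -> 142.5, pushed lighter
```

No single line of this code is "racist"; the skew comes entirely from what the training set contains, which is exactly the failure mode Goncharov describes.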


Microsoft's Zo chatbot told a user that 'Quran is very violent'

#artificialintelligence

Microsoft's earlier chatbot Tay had faced similar problems, with the bot picking up the worst of humanity and spouting racist, sexist comments on Twitter when it was introduced last year. The 'Quran is very violent' comment highlights the kind of problems that still exist when it comes to creating a chatbot, especially one that draws its knowledge from conversations with humans. Microsoft launched Tay on Twitter, which can be a hotbed of polarizing and often abusive content, and the bot, designed to learn from human behaviour, spewed anti-Semitic, racist and sexist content because that was what users on Twitter were tweeting at it.


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

#artificialintelligence

The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through the free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
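At its core, a system like Conversation AI assigns each comment a toxicity score learned from human-labeled examples. The sketch below is a deliberately minimal stand-in for that idea (Jigsaw's actual models are deep neural networks, not word counts); every example sentence and word list here is invented for illustration.

```python
# Minimal sketch of the idea behind toxicity scoring (not Jigsaw's actual
# Conversation AI model): score a comment by how often its words appeared
# in hand-labeled toxic vs. civil training examples.

from collections import Counter

# Tiny invented "training set" of labeled comments.
toxic_examples = ["you are an idiot", "shut up you moron", "idiot troll"]
civil_examples = ["thanks for sharing", "i disagree but fair point", "good article"]

def word_counts(examples):
    return Counter(w for text in examples for w in text.split())

toxic_words = word_counts(toxic_examples)
civil_words = word_counts(civil_examples)

def toxicity_score(comment):
    """Fraction of word matches that come from the toxic side (0.0-1.0)."""
    words = comment.lower().split()
    toxic_hits = sum(toxic_words[w] for w in words)
    civil_hits = sum(civil_words[w] for w in words)
    total = toxic_hits + civil_hits
    return toxic_hits / total if total else 0.0

print(toxicity_score("you idiot"))          # 1.0 - flagged as toxic
print(toxicity_score("fair point thanks"))  # 0.0 - looks civil
```

The free-speech paradox the article mentions lives in exactly this scoring step: any threshold chosen for "toxic" will misclassify some sarcasm, quotation, or heated-but-legitimate argument.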


5 AI Solutions Showing Signs of Racism

#artificialintelligence

Several artificial intelligence projects created over the past few years still had kinks to work out, and multiple AI systems showed signs of racial bias once they were deployed in a live environment. The creators of the AI-driven algorithm powering Pokemon Go, it turned out, did not provide a diverse training set, nor did they spend time in the affected neighborhoods. It is becoming evident that a lot of these artificial intelligence solutions show signs of "white supremacy" for one reason or another.


How artificial intelligence can be corrupted to repress free speech

#artificialintelligence

By keeping ISPs and websites under threat of closure, the government is able to leverage them as an additional labor force, helping it monitor a larger population than it otherwise could. This past July, the Cyberspace Administration of China, the agency in charge of online censorship, issued new rules to websites and service providers that enable the government to punish any outlet that publishes "directly as news reports unverified content found on online platforms such as social media." "And the Supreme Court, especially the Roberts Court, has been, in the main, a strong defender of free expression," Danielle Keats Citron, professor of law at the University of Maryland Carey School of Law, wrote to Engadget. "Context is crucial to many free-speech questions, like whether a threat amounts to a true threat and whether a person is a limited-purpose public figure," she added.


How artificial intelligence can be corrupted to repress free speech

Engadget

According to a 2016 report from internet liberty watchdog Freedom House, two-thirds of all internet users reside in countries where criticism of the ruling administration is censored, and 27 percent of them live in nations where posting, sharing or supporting unpopular opinions on social media can get you arrested. This past July, the Cyberspace Administration of China, the agency in charge of online censorship, issued new rules to websites and service providers that enable the government to punish any outlet that publishes "directly as news reports unverified content found on online platforms such as social media." "And the Supreme Court, especially the Roberts Court, has been, in the main, a strong defender of free expression," Danielle Keats Citron, professor of law at the University of Maryland Carey School of Law, wrote to Engadget. League of Legends, meanwhile, managed to reduce toxic language and abuse of other players by 11 percent and 6.2 percent, respectively, after the game's developer, Riot Games, instituted an automated notification system that reminded players not to be jerks at various points throughout each match.


Twitter and Dataminr block government 'spy centers' from seeing user data

The Guardian

Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. Records obtained by the ACLU showed that a fusion center in southern California had access to Dataminr's "geospatial analysis application", which allowed the government to do location-based tracking as well as searches tied to keywords. In October, the ACLU obtained government records revealing that Twitter, Facebook and Instagram had provided users' data to Geofeedia, a software company that aids police surveillance programs and has targeted protesters of color.


Breaking the Black Box: How Machines Learn to Be Racist

#artificialintelligence

Microsoft learned that lesson the hard way earlier this year when it released an AI Twitter bot called Tay that had been trained to talk like a millennial teen. Last year, Google's automatic image-recognition engine tagged a photo of two black people as "gorillas" -- presumably because the machine learned on a database that hadn't included enough photos of either animals or people. We created this AI system using Google's open-source technology and trained it to produce synonyms based on what it learned from different news sources; for the AI trained on digital news outlets, close synonyms included "Ferguson" and "Bernie."