Civil Rights & Constitutional Law


Assange Keeps Warning Of AI Censorship, And It's Time We Started Listening

#artificialintelligence

For nearly the entirety of human history, a population's understanding of what's going on in the world has been controlled by those in power. The men in charge controlled what the people were told about rival populations, the history of their tribe and its leadership, and so on. When the written word was invented, those in charge dictated what books were permitted to be written and circulated, what ideas were allowed, and what narratives the public would be granted access to. This continued straight on into modern times. Where power is not overtly totalitarian, wealthy elites have bought up the media, first in print, then radio, then television, and used it to advance narratives favorable to their interests.


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects." As for the "hot" filter backlash, Goncharov called it "an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."
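Goncharov's explanation points at a general failure mode: a model trained on a skewed dataset reproduces that skew. The toy sketch below (hypothetical numbers, nothing to do with FaceApp's actual model) represents an "enhancement" filter as a nudge toward the mean skin tone of its training faces; when the training set is dominated by light faces, the learned transform lightens every input.

```python
# Toy illustration of training-set bias (hypothetical data, not FaceApp's model).
# Skin tones are scalars in [0, 1]; higher = lighter.

def train_filter(training_tones, strength=0.5):
    """Learn a target tone as the mean of the training faces."""
    target = sum(training_tones) / len(training_tones)
    def apply(tone):
        # Nudge the input part-way toward the learned target.
        return tone + strength * (target - tone)
    return apply

# Skewed training set: nine light faces for every dark one.
biased = train_filter([0.8] * 9 + [0.2] * 1)
# Balanced training set for comparison.
balanced = train_filter([0.8] * 5 + [0.2] * 5)

dark_face = 0.2
print(round(biased(dark_face), 2))    # 0.47 -- the biased filter lightens sharply
print(round(balanced(dark_face), 2))  # 0.35
```

The bias here is not in the update rule, which is identical in both cases, but purely in what the training set contains.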


Microsoft's Zo chatbot told a user that 'Quran is very violent'

#artificialintelligence

Microsoft's earlier chatbot Tay had faced similar problems, picking up the worst of humanity and spouting racist, sexist comments on Twitter when it was introduced last year. The 'Quran is very violent' comment highlights the kind of problems that still exist when it comes to creating a chatbot, especially one that draws its knowledge from conversations with humans. Microsoft launched Tay on Twitter, which can be a hotbed of polarizing and often abusive content. Tay spewed anti-Semitic, racist and sexist content because this was what users on Twitter were tweeting at the chatbot, which was designed to learn from human behaviour.
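The Tay failure is essentially a data-poisoning problem: a bot that folds raw user input back into its reply pool can be steered by coordinated abuse. A minimal sketch of that dynamic (hypothetical design, not Microsoft's actual architecture) with and without an input filter:

```python
import random

class EchoBot:
    """Learns replies by storing what users say, optionally filtered."""
    def __init__(self, blocklist=None):
        self.memory = ["hello!"]
        self.blocklist = blocklist or set()

    def learn(self, message):
        # Unfiltered bots add any user message to the reply pool;
        # a blocklist is the crudest possible guard against poisoning.
        if not any(word in message.lower() for word in self.blocklist):
            self.memory.append(message)

    def reply(self):
        return random.choice(self.memory)

naive = EchoBot()
guarded = EchoBot(blocklist={"slur"})
for msg in ["nice weather", "<slur> everyone", "how are you?"]:
    naive.learn(msg)
    guarded.learn(msg)

print(any("slur" in m for m in naive.memory))    # True: the pool is poisoned
print(any("slur" in m for m in guarded.memory))  # False: filtered out
```

Real systems face a much harder version of this: abusive content rarely announces itself with a fixed keyword, which is why blocklists alone failed for Tay.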


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

#artificialintelligence

The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
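Tools in this vein typically score text for likely toxicity and let a human or a policy decide what to do above a threshold, rather than deleting speech outright. The sketch below is a hypothetical keyword-weighted scorer, far simpler than Jigsaw's learned models, meant only to show that score-and-threshold pattern:

```python
# Hypothetical bag-of-words toxicity scorer; systems like Conversation AI
# use trained neural models, not fixed word lists like this one.
TOXIC_WEIGHTS = {"idiot": 0.6, "stupid": 0.4, "hate": 0.3}

def toxicity_score(text):
    """Sum the weights of known toxic words, capped at 1.0."""
    words = text.lower().split()
    return min(1.0, sum(TOXIC_WEIGHTS.get(w, 0.0) for w in words))

def moderate(text, threshold=0.5):
    # Scores above the threshold are routed to review, not auto-deleted.
    return "flag for review" if toxicity_score(text) >= threshold else "allow"

print(moderate("you are a stupid idiot"))  # flag for review
print(moderate("I disagree with you"))     # allow
```

The free-speech tension the article describes lives almost entirely in the threshold and in what "flag" triggers; the scoring itself is the easy part.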


5 AI Solutions Showing Signs of Racism

#artificialintelligence

Several artificial intelligence projects have been created over the past few years, most of which still had some kinks to work out. For some reason, multiple AI solutions showed signs of racism once they were deployed in a live environment. In the case of Pokemon Go, it turned out the creators of the AI-driven algorithm powering the game did not provide a diverse training set, nor did they spend time in the affected neighborhoods. It is becoming evident that a lot of these artificial intelligence solutions show signs of "white supremacy" for some reason.


How artificial intelligence can be corrupted to repress free speech

#artificialintelligence

By keeping ISPs and websites under threat of closure, the government is able to leverage that additional labor force to help monitor a larger population than it otherwise could. This past July, the Cyberspace Administration of China, the agency in charge of online censorship, issued new rules to websites and service providers that enabled the government to punish any outlet that publishes "directly as news reports unverified content found on online platforms such as social media." "And the Supreme Court, especially the Roberts Court, has been, on the main, a strong defender of free expression," Danielle Keats Citron, professor of law at the University of Maryland Carey School of Law, wrote to Engadget. "Context is crucial to many free-speech questions, like whether a threat amounts to a true threat and whether a person is a limited-purpose public figure," she added.


How artificial intelligence can be corrupted to repress free speech

Engadget

According to a 2016 report from internet liberty watchdog Freedom House, two-thirds of all internet users reside in countries where criticism of the ruling administration is censored -- 27 percent of them live in nations where posting, sharing or supporting unpopular opinions on social media can get you arrested. Automated systems can also push the other way: League of Legends reduced toxic language and the abuse of other players by 11 percent and 6.2 percent, respectively, after the game's developer, Riot Games, instituted an automated notification system that reminded players not to be jerks at various points throughout each match.


Twitter and Dataminr block government 'spy centers' from seeing user data

The Guardian

Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. Records obtained by the ACLU showed that a fusion center in Southern California had access to Dataminr's "geospatial analysis application", which allowed the government to do location-based tracking as well as searches tied to keywords. In October, the ACLU obtained government records revealing that Twitter, Facebook and Instagram had provided users' data to Geofeedia, a software company that aids police surveillance programs and has targeted protesters of color.


Microsoft unveils a new (and hopefully not racist) chat bot

#artificialintelligence

Tay gave chat bots a bad name, but Microsoft's new version has grown up. Microsoft unveiled a new chat bot in the U.S. on Tuesday, saying it's learned from the Tay experiment earlier this year. Zo is now available on messaging app Kik and on the website Zo.ai. Tay was meant to be a cheeky young person you could talk to on Twitter. Users tried -- successfully -- to get the bot to say racist and inappropriate things.