Civil Rights & Constitutional Law

People are incensed that an elitist dating app is promoting itself with racist slurs


An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, the app advertised itself in a Facebook post; "banglas," a term used in the post, is a racist slur for Bangladeshi migrant workers in Singapore. In an earlier Medium post from December, founder Eng said his app would allow filtering by "prestigious schools."

Artificial Intelligence as a Weapon for Hate and Racism


According to Crawford, "AI is really, really good at centralizing power; at claiming a type of scientific neutrality without being transparent." She refers to the data on which these facial recognition and machine learning systems are based as "human-trained." She also cited problems with an emerging application of machine learning: predictive policing. "Police systems ingest huge amounts of historical crime data as a way of predicting where future crime might happen, where the hotspots will be," she explained.
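The dynamic Crawford describes can be caricatured in a few lines. This is an illustration only, with invented numbers, not any vendor's actual system: a naive "hotspot" predictor ranks districts by recorded past crime, and because extra patrols generate extra recorded incidents, the ranking reinforces itself regardless of underlying crime rates.

```python
# Illustration only (invented numbers): a naive hotspot predictor and the
# feedback loop critics of predictive policing warn about.
recorded = {"A": 120, "B": 80, "C": 40}  # historical recorded incidents

def hotspots(history):
    """Rank districts by recorded incidents, highest first."""
    return sorted(history, key=history.get, reverse=True)

# Each round, patrols go to the top-ranked district; more patrols mean more
# incidents get recorded there, which reinforces the ranking next round.
for _ in range(3):
    top = hotspots(recorded)[0]
    recorded[top] += 30   # extra recorded incidents from extra patrols
    for district in recorded:
        recorded[district] += 5  # baseline recording everywhere

print(hotspots(recorded))  # district A stays on top, whatever the true rates
```

The point of the sketch is that the model never observes crime directly, only records shaped by past enforcement decisions, so the "prediction" partly predicts police behavior rather than crime.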

ACLU Challenges Warrant for Pipeline Protest Facebook Data

U.S. News

"Political speech and the freedom to engage in political activity without being subjected to undue government scrutiny are at the heart of the First Amendment," ACLU of Washington staff attorney La Rond Baker said in a statement announcing the filing. "Further, the Fourth Amendment prohibits the government from performing broad fishing expeditions into private affairs. And seizing information from Facebook accounts simply because they are associated with protests of the government violates these core constitutional principles."

The March on Austin: Washington Casts a Shadow on SXSW


For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump's election will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. This year's dialogue will focus on how "social media can drive organized protests and provide support for causes our current administration has reprioritized," like the environment, gender equality and women's rights, said Neil Carty, senior VP-innovation strategy at consultancy MediaLink. "There is a shift away from interruptive TV ads to content people want to watch in its own right," said Jody Raida, director-branded entertainment at McGarryBowen. Artificial intelligence and virtual reality will also be hot topics, with dozens of sessions dedicated to the technologies, along with the application of chatbots and live video.

Apple's head of Siri is joining the Partnership on AI


Tim Cook's firm has become a founding member of the organisation, created by Google/DeepMind, Microsoft, IBM, Facebook and Amazon to research and collaborate on advancing AI in a responsible way. Apple's Tom Gruber, the chief technology officer of AI personal assistant Siri, has joined the group of trustees running the non-profit partnership. Alongside Gruber, the Partnership on AI has announced six independent board members, including Dario Amodei from Elon Musk's OpenAI, Eric Sears of the MacArthur Foundation, and Deirdre Mulligan from UC Berkeley.

A collection of 13,500 insults lobbed by Wikipedia editors is helping researchers understand and fight trolls


The researchers say the data will boost efforts to train software to understand and police online harassment. They have already used it to train machine-learning algorithms that rival crowdsourced workers at spotting personal attacks. When they ran the trained system over the full collection of 63 million discussion posts made by Wikipedia editors, they found that only around one in 10 attacks had resulted in action by moderators. The Wikimedia Foundation made reducing harassment among Wikipedia editors a priority last year.
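The underlying technique is standard text classification: learn, from labeled examples, which word patterns distinguish attacks from benign comments. Below is a minimal sketch, not the researchers' actual pipeline, using a tiny Naive Bayes classifier trained on invented comment/label pairs.

```python
# Minimal sketch (invented data, not the Wikipedia researchers' system):
# a Naive Bayes text classifier that flags personal attacks.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def fit(self, texts, labels):
        self.counts = {0: Counter(), 1: Counter()}  # per-class word counts
        self.docs = Counter(labels)                 # per-class document counts
        for text, label in zip(texts, labels):
            self.counts[label].update(tokenize(text))
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def predict(self, text):
        scores = {}
        for label in (0, 1):
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.docs[label] / sum(self.docs.values()))
            total = sum(self.counts[label].values())
            for tok in tokenize(text):
                score += math.log((self.counts[label][tok] + 1) /
                                  (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

comments = [
    "Thanks for fixing the citation, good catch.",
    "You are an idiot and your edits are garbage.",
    "I reverted the change; see the talk page for why.",
    "Nobody wants you here, you worthless troll.",
    "Could you add a source for that claim?",
    "Shut up, you pathetic excuse for an editor.",
]
labels = [0, 1, 0, 1, 0, 1]  # 1 = personal attack, 0 = benign

clf = NaiveBayes().fit(comments, labels)
print(clf.predict("you are a worthless idiot"))
```

In production such systems use far larger labeled corpora (the 13,500 annotated insults here) and stronger models, but the shape of the problem is the same: labeled text in, attack/not-attack decisions out.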

How to Keep Your AI From Turning Into a Racist Monster


Algorithmic bias--when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed--causes everything from warped Google searches to the barring of qualified women from medical school. Microsoft's chatbot Tay, whose embrace of humanity's worst attributes made headlines, is one example. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads: when they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women.
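One way bias like this arises is purely mechanical: a system that learns targeting rates from historical logs reproduces whatever skew those logs contain. The sketch below uses invented data (not the Carnegie Mellon team's experiment) in which high-income ads were historically shown to men at 57% and to women at 10%; a learner that simply mirrors history then treats equally qualified users differently on gender alone.

```python
# Illustration only (invented log data, not CMU's study): a targeting "model"
# that learns from historical exposure rates inherits their skew.

# historical log entries: (gender, was_shown_high_income_ad)
log = ([("m", True)] * 57 + [("m", False)] * 43 +
       [("f", True)] * 10 + [("f", False)] * 90)

def learned_rate(gender):
    """Rate at which this gender was historically shown the ad."""
    shown = sum(1 for g, s in log if g == gender and s)
    total = sum(1 for g, _ in log if g == gender)
    return shown / total

# The learner mirrors history, so the disparity persists by construction.
print(learned_rate("m") / learned_rate("f"))  # roughly a 5.7x exposure gap
```

No one programmed a rule "show fewer high-income ads to women"; the disparity is entirely an artifact of the training data, which is exactly what makes this kind of bias hard to spot.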

Nowhere to hide

BBC News

And Russian app FindFace lets you match a photograph you've taken of someone to their social media profile on the country's popular social media platform Vkontakte. Carl Gohringer, founder and director at Allevate, a facial recognition firm that works with law enforcement, intelligence and government agencies, says: "The amount of media - such as videos and photos - available to us as individuals, organisations and businesses, and to intelligence and law enforcement agencies, is staggering." But Ruth Boardman, data privacy specialist at international law firm Bird & Bird, says individual rights still vary from one EU state to another. And the automation of security vetting decisions based on facial recognition tech raises serious privacy issues.

How artificial intelligence can be corrupted to repress free speech


By keeping ISPs and websites under threat of closure, the Chinese government is able to leverage those companies as an additional labor force, helping it monitor a larger population than it would otherwise be able to. This past July, the Cyberspace Administration of China, the administration in charge of online censorship, issued new rules to websites and service providers that enabled the government to punish any outlet that publishes "directly as news reports unverified content found on online platforms such as social media." "And the Supreme Court, especially the Roberts Court, has been, on the main, a strong defender of free expression," Danielle Keats Citron, professor of law at the University of Maryland Carey School of Law, wrote to Engadget. "Context is crucial to many free-speech questions like whether a threat amounts to a true threat and whether a person is a limited-purpose public figure," professor Keats Citron told Engadget.

A massive AI partnership is tapping civil rights and economic experts to keep AI safe


The Partnership also added Apple as a "founding member," putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board. "In its most ideal form, [the Partnership] puts on the agenda the idea of human rights and civil liberties in the science and data science community," says Carol Rose, the executive director of the ACLU of Massachusetts, who is joining the Partnership's board. "While there will be many benefits from AI, it is important to ensure that challenges such as protecting and advancing civil rights, civil liberties, and security are accounted for," Sears says. Google will be represented by its director of augmented intelligence research, Greg Corrado; Facebook by its director of AI research, Yann LeCun; Amazon by its director of machine learning, Ralf Herbrich; Microsoft by Eric Horvitz, director of its research lab; and IBM by Francesca Rossi, a research scientist at its T.J. Watson Research Center.