Civil Rights & Constitutional Law

'Least Desirable'? How Racial Discrimination Plays Out In Online Dating


In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable.

Google is opening an artificial intelligence center in China


"The science of AI has no borders, neither do its benefits," Fei-Fei Li, chief scientist at Google's AI business, said in a blog post Wednesday announcing the new center.

Egyptian Concertgoers Wave a Flag, and Land in Jail

NYT > Middle East

On Monday, Egypt's top prosecutor, Nabil Sadek, ordered an investigation, and by evening the police had arrested seven people, most of whom were said to have waved rainbow flags. An official at Mr. Sadek's office said the seven had been charged with "promoting sexual deviancy" and could be detained for 15 days. The state-owned newspaper Al Ahram said one of the men had been detained for posting approvingly on Facebook about the concert. "Legal actions against him are underway," the paper reported. On Monday, one man who had been photographed with a rainbow flag at the concert wrote on Facebook, "Had I raised the ISIS flag I wouldn't be facing half of what I am facing now."

Google's comment ranking system will be a hit with the alt-right


A recent, sprawling Wired feature outlined the results of the magazine's analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, "is the least toxic city in the US." The underlying API used to determine toxicity gives phrases like "I am a gay black woman" a toxicity score of 87 percent, while rating phrases like "I am a man" among the least toxic. The API, called Perspective, is made by Jigsaw, an incubator inside Google's parent company, Alphabet.
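
For readers curious what that scoring looks like in practice: Perspective exposes its scores through a public REST endpoint. The following is a minimal Python sketch, assuming a valid API key (the key value below is a placeholder) and the v1alpha1 comments:analyze endpoint.

    import json
    import urllib.request

    # Placeholder credential; Perspective requires a Google Cloud API key.
    API_KEY = "YOUR_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Ask Perspective for a TOXICITY score between 0.0 and 1.0."""
        body = json.dumps({
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }).encode("utf-8")
        request = urllib.request.Request(
            URL, data=body, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            result = json.load(response)
        return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # Wired's examples: the first phrase scored 0.87, far above the second.
    print(toxicity_score("I am a gay black woman"))
    print(toxicity_score("I am a man"))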

The Racists of OkCupid Don't Usually Carry Tiki Torches


In the days before white supremacists descended on Charlottesville, Bumble had already been strengthening its anti-racism efforts, partly in response to a campaign the Daily Stormer had waged against the company, encouraging its readers to harass Bumble's staff in protest of the company's public support of women's empowerment. Bumble bans any user who disrespects its customer service team, figuring that a guy who harasses women who work for Bumble would probably harass women who use Bumble. After the neo-Nazi attack, Bumble contacted the Anti-Defamation League for help identifying hate symbols and rooting out users who include them in their Bumble profiles. Now the employees who respond to user reports have the ADL's glossary of hate symbols as a guide to telltale signs of hate-group membership, and any profile with language from the glossary gets flagged as potentially problematic. The platform has also added the Confederate flag to its list of prohibited images.
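
To make that flag-then-review flow concrete, here is a toy sketch of glossary-based screening in Python; the glossary entries and the flag_profile helper are invented for illustration, not Bumble's code or the ADL's actual glossary.

    import re

    # Illustrative placeholder entries, not the ADL's actual glossary.
    HATE_GLOSSARY = {"1488", "blood and soil"}

    def flag_profile(profile_text):
        """Return any glossary terms found in a profile's text."""
        text = profile_text.lower()
        return {term for term in HATE_GLOSSARY
                if re.search(r"\b" + re.escape(term) + r"\b", text)}

    # Matches route the profile to a human reviewer rather than an
    # automatic ban, mirroring the flagged-as-potentially-problematic
    # step described above.
    hits = flag_profile("Blood and soil! Seeking like-minded matches.")
    if hits:
        print("Flag for review:", sorted(hits))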

Rise of the racist robots – how AI is learning all our worst impulses


In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was far more likely to mistakenly label black defendants as probable reoffenders, wrongly flagging them at almost twice the rate of white defendants (45% versus 24%), according to the investigative journalism organisation ProPublica. Compas and similar programs were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too. How could this have happened?
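
The 45% and 24% figures are false positive rates: among defendants who did not go on to reoffend, the share of each group the program nonetheless labeled high risk. A minimal Python sketch of that calculation, on invented records rather than ProPublica's data:

    # Each record: (group, labeled_high_risk, reoffended). Invented data.
    records = [
        ("black", True, False), ("black", True, False),
        ("black", False, False), ("black", True, True),
        ("white", True, False), ("white", False, False),
        ("white", False, False), ("white", False, True),
    ]

    def false_positive_rate(group):
        """Share of non-reoffenders in `group` labeled high risk anyway."""
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders)

    for group in ("black", "white"):
        print(group, false_positive_rate(group))  # toy numbers: 0.67 vs 0.33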

Inside Google's Internet Justice League and Its AI-Powered War on Trolls


Journalist Sarah Jeong, the 28-year-old author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through the free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.

Banned In China: Why Live Streaming Video Has Been Censored

International Business Times

A recent ban affecting three of China's biggest online platforms, aimed at "cleaning up the air in cyberspace," is just the latest government crackdown on user-generated content, especially live streaming. The edict, issued by China's State Administration of Press, Publication, Radio, Film and Television (SAPPRFT) in June, affects video on the social media platform Sina Weibo as well as the video platforms Ifeng and AcFun. Such crackdowns are not new: in 2014, for example, one of China's biggest online video platforms, LETV, began removing an app that allowed TV users to access online video, reportedly due to SAPPRFT requirements. Sina Weibo, China's largest social media network, launched an app named Yi Zhibo in 2016 that allows live streaming of games, talent shows and news.

Pew Research Center: Internet, Science and Tech on the Future of Free Speech


These experts believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). They predict more online platforms will require clear identification of participants; some expect that online reputation systems will be widely used in the future. One respondent said, "Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil."

People are incensed that an elitist dating app is promoting itself with racist slurs


An elitist, racist dating app is making waves in Singapore, and its founder is defending it vehemently. A week ago, the app advertised itself in a Facebook post using the term "banglas," a racist slur for Bangladeshi migrant workers in Singapore. In an earlier Medium post from December, the founder, Eng, said his app would allow filtering by "prestigious schools."