Civil Rights & Constitutional Law


'Least Desirable'? How Racial Discrimination Plays Out In Online Dating

NPR

In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable. One line quoted in the piece: "I don't date Asians -- sorry, not sorry."


Google's comment ranking system will be a hit with the alt-right

Engadget

The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, while rating phrases like "I am a man" among the least toxic. To broadly determine what is and isn't toxic, Disqus uses the Perspective API--software from Alphabet's Jigsaw division that plugs into its system. Pasting the phrase "Dear white people" into Perspective's API returned a score of 61 percent toxicity. It's possible that the tool treats comments containing terms like black, gay, and woman as having high potential to be abusive or negative, but that would make Perspective an expensive, overkill wrapper for the equivalent of using Command-F to demonize words that some people might find upsetting.
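For context on where such numbers come from, here is a minimal sketch of querying the Perspective API for a toxicity score. The endpoint, request body, and response fields follow Jigsaw's public documentation; the API key is a placeholder, and the example phrases simply echo the scores reported in the article.

```python
# Minimal sketch: request a TOXICITY score from the Perspective API.
# The API key below is a placeholder; a real key is required to run this.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    """Return the summary toxicity score (0.0-1.0) Perspective assigns to text."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("I am a gay black woman"))  # reportedly ~0.87 at the time
print(toxicity("I am a man"))              # reportedly among the lowest scores
```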


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

#artificialintelligence

The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through the free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.


Pew Research Center: Internet, Science and Tech on the Future of Free Speech

#artificialintelligence

Respondents in this group believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). They predict more online platforms will require clear identification of participants; some expect that online reputation systems will be widely used in the future. One respondent said, "Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil."


Artificial intelligence: How to avoid racist algorithms

BBC News

There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people. The result can be that the decision-making becomes inherently biased, albeit accidentally. Try searching online for an image of "hands" or "babies" using any of the big search engines and you are likely to find largely white results. In 2015, graphic designer Johanna Burai created the World White Web project after searching for an image of human hands and finding exclusively white hands in the top image results on Google. Her website offers "alternative" hand pictures that can be used by content creators online to redress the balance and thus be picked up by the search engine.


AI programs exhibit racist and sexist biases, research reveals

The Guardian

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
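The bias measurements behind findings like these rely on word-association tests over the learned embeddings, in the spirit of the Word Embedding Association Test (WEAT). Below is a rough, illustrative sketch of that idea, assuming gensim and its downloadable GloVe vectors are available; the word lists are abbreviated toy examples, not the published test sets.

```python
# Illustrative association test over pretrained word embeddings.
# Assumes gensim is installed and can download the GloVe vectors.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe word vectors

def association(word, attr_a, attr_b):
    """Mean cosine similarity to attribute set A minus mean similarity to set B."""
    sim_a = np.mean([vectors.similarity(word, a) for a in attr_a])
    sim_b = np.mean([vectors.similarity(word, b) for b in attr_b])
    return sim_a - sim_b

pleasant = ["freedom", "love", "peace", "friend"]     # toy attribute set
unpleasant = ["abuse", "crash", "filth", "murder"]    # toy attribute set

for target in ["flower", "insect"]:
    print(target, association(target, pleasant, unpleasant))
```

A positive value means the target word sits closer to the "pleasant" attribute words in the embedding space; the research applies the same idea to names and gendered terms to quantify racial and gender associations.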


The 'robot lawyer' giving free legal advice to refugees

BBC News

A technology initially used to fight traffic fines is now helping refugees with legal claims. When Joshua Browder developed DoNotPay he called it "the world's first robot lawyer". It's a chatbot - a computer program that carries out conversations through texts or vocal commands - and it uses Facebook Messenger to gather information about a case before spitting out advice and legal documents. It was originally designed to help people wiggle out of parking or speeding tickets. But now Browder - a 20-year-old British man currently studying at Stanford University - has adapted his bot to help asylum seekers.


Fighting Words Not Ideas: Google's New AI-Powered Toxic Speech Filter Is The Right Approach

Forbes

Alphabet's Jigsaw (formerly Google Ideas) officially unveiled this morning its new tool for fighting toxic speech online, appropriately called Perspective. Powered by a deep learning model trained on more than 17 million manually reviewed reader comments provided by the New York Times, the tool assigns each passage of text a score from 0 to 100% reflecting how similar it is to statements that human reviewers have previously rated as "toxic." What makes this new approach from Google so different from past approaches is that it largely focuses on language rather than ideas: for the most part you can express your thoughts freely and without fear of censorship as long as you express them clinically and clearly, while if you resort to emotional diatribes and name calling, regardless of what you talk about, you will be flagged. What does this tell us about the future of toxic speech online and the notion of machines guiding humans to a more "perfect" humanity? One of the great challenges in filtering out "toxic" speech online is first defining what precisely counts as "toxic" and then determining how to remove such speech without infringing on people's ability to freely express their ideas.
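As a concrete illustration of the scoring-and-flagging behavior described above, the sketch below applies a moderation threshold to toxicity scores. The scores and the 0.8 cutoff are assumed values for illustration; Perspective itself returns a probability-like score rather than a pass/fail verdict.

```python
# Illustrative only: flag comments whose (assumed) toxicity score exceeds a cutoff.
# Real scores would come from the Perspective API; these values are made up to
# mirror the article's point that heated phrasing, not the idea itself, gets flagged.
FLAG_THRESHOLD = 0.8  # assumed moderation cutoff

scored_comments = [
    ("I disagree with this policy and here is why.", 0.05),
    ("Only an idiot could support this policy.", 0.92),
]

for text, toxicity_score in scored_comments:
    verdict = "FLAGGED" if toxicity_score >= FLAG_THRESHOLD else "allowed"
    print(f"{verdict}: {text}")
```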


A collection of 13,500 insults lobbed by Wikipedia editors is helping researchers understand and fight trolls

#artificialintelligence

The researchers say the data will boost efforts to train software to understand and police online harassment. The collaborators have already used the data to train machine-learning algorithms that rival crowdsourced workers at spotting personal attacks. When they ran the system over the full collection of 63 million discussion posts made by Wikipedia editors, they found that only around one in 10 attacks had resulted in action by moderators. The Wikimedia Foundation made reducing harassment among Wikipedia editors a priority last year.
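To make the training step concrete, here is a rough sketch of the kind of supervised classifier such labeled comments enable. It is not the researchers' actual model: the file name, column names, and the TF-IDF-plus-logistic-regression pipeline are all illustrative assumptions.

```python
# Minimal sketch of training a personal-attack classifier on labeled comments.
# "attack_comments.csv" (columns: comment, is_attack) is a hypothetical stand-in
# for the released annotated corpus; the model choice is illustrative only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

data = pd.read_csv("attack_comments.csv")  # hypothetical labeled corpus
train_text, test_text, train_y, test_y = train_test_split(
    data["comment"], data["is_attack"], test_size=0.2, random_state=0)

# Character n-grams are robust to the misspellings and obfuscation common in abuse.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), max_features=50000)
X_train = vectorizer.fit_transform(train_text)
X_test = vectorizer.transform(test_text)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, train_y)
print("ROC AUC:", roc_auc_score(test_y, classifier.predict_proba(X_test)[:, 1]))
```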