Civil Rights & Constitutional Law


When will AI stop being so racist?

#artificialintelligence

Learned bias can occur as the result of incomplete data or researcher bias in generating training data. Because sentencing systems are based on historical data, and black people have historically been arrested and convicted of more crimes, an algorithm could be designed to correct for bias that already exists in the system. When humans make mistakes, we tend to rationalize their shortcomings and forgive them--they're only human!--even if the bias displayed by human judgment is worse than the bias displayed by an algorithm. In a follow-up study, Dietvorst shows that algorithm aversion can be reduced by giving people control over an algorithm's forecast.


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the subject of car insurance, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions for addressing bias in data science, these kinds of usually unintentional discrimination will become more and more normal, working against a society and institutions that, on the human side, are trying their best to evolve past bias and move forward as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data that might promote bias, such as race, gender, religious or political beliefs, and health status, to make automated decisions (with some verified exceptions). It's time to make that training broader: teach everyone involved about the ways their decisions while building tools may affect minorities, and accompany that with the relevant technical knowledge to prevent it from happening.


Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction.

WIRED

It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have an auto-scroll feature?" And what we realized was there was this giant wave of machine learning and artificial intelligence--and Facebook had developed this thing that basically--it's called DeepText. NT: Which launches in June of 2016, so it's right there. And then you say, "Okay, machine, go and rate these comments for us based on the training set," and then we see how well it does and we tweak it over time, and now we're at a point where basically this machine learning can detect a bad comment or a mean comment with amazing accuracy--basically a 1 percent false positive rate.
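To make the workflow Systrom describes concrete, here is a minimal sketch (not Instagram's actual DeepText system, and with entirely made-up comments and labels): train a toy classifier on a hand-rated set of comments, then check how often it wrongly flags benign ones, which is the false positive rate he cites.

```python
# Hedged sketch of "rate these comments based on the training set":
# a toy toxic-comment classifier, not Instagram's DeepText. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = ["great photo", "love this", "you are awful", "nobody likes you",
                  "nice shot", "what a view", "delete your account", "so ugly"]
train_labels = [0, 0, 1, 1, 0, 0, 1, 1]            # 1 = toxic, 0 = benign

test_comments = ["beautiful", "you are the worst", "cool picture", "awful person"]
test_labels = [0, 1, 0, 1]

# Fit a simple text model on the hand-labelled training set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_comments, train_labels)

predictions = model.predict(test_comments)

# False positive rate: benign comments wrongly flagged as toxic.
benign = [i for i, y in enumerate(test_labels) if y == 0]
false_positives = sum(1 for i in benign if predictions[i] == 1)
print("false positive rate:", false_positives / len(benign))
```

The same loop of labelling, training, evaluating, and tweaking is what Systrom describes, just at a vastly larger scale and with a deep-learning model rather than this toy pipeline.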


Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "In fact, it can get much worse where if the AI agents are part of a loop where they're making decisions, even the future data, the biases get reinforced," he added. Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have. But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions such as approving loan applications.
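One simple way to "see the thought process" of a model, in the spirit of the research described above, is to break a decision down into per-feature contributions. The sketch below is not the researchers' method; it uses an invented loan-approval example with a linear model, where a heavy weight on a feature like neighborhood can reveal a proxy for a protected attribute.

```python
# Hedged sketch: explaining a loan decision via per-feature contributions of a
# linear model. Features, data, and the zip_code_group proxy are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "zip_code_group"]

# Tiny synthetic training set (rows: applicants).
X = np.array([[60, 0.2, 0], [25, 0.7, 1], [55, 0.3, 1], [30, 0.6, 0],
              [70, 0.1, 0], [28, 0.8, 1], [65, 0.2, 1], [32, 0.5, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = loan approved

model = LogisticRegression().fit(X, y)

# "Explanation": each feature's contribution to one applicant's score.
applicant = np.array([40, 0.4, 1])
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
# A large contribution from zip_code_group would be a red flag that the model
# has learned a proxy for a protected attribute.
```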


Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case

#artificialintelligence

One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people's lives. And bad data produces bad results. Idaho's Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people's lives, and fighting efforts to make it explain how it was reaching those decisions. As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people's lives.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder whether, if those admissions decisions had been taken by algorithms, you might not have ended up where you are?


Sorry, Dave, I can't code that: AI's prejudice problem

#artificialintelligence

Algorithms are increasingly making decisions that have significant personal ramifications, warns Matthews: "When we're making decisions in regulated areas – should someone be hired, lose their job or get credit." Advertising networks have served women fewer ads for high-paying jobs. Bias can also make its way into the data sets used to train AI algorithms. The software tended to predict higher recidivism rates along racial lines, according to the ProPublica investigation.


Why can artificial intelligence be racist and sexist?

#artificialintelligence

To do this, they turned to a rather unconventional method: the Implicit Association Test (IAT), used to study social attitudes and stereotypes in people. Using IAT tests as a model, Caliskan and her colleagues created the WEAT (Word-Embedding Association Test), an algorithm that analyzes whole bodies of text to find out which linguistic entities are more closely associated than others. As an example, Caliskan cites the way Google Translate's algorithm mistranslates words into English from other languages, reflecting the gender stereotypes it has learned. In one of the tests, the researchers found a strong associative relationship between the concepts "woman" and "motherhood".
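The core of a WEAT-style test is cosine similarity between word vectors: a word is "biased" toward one attribute set if it sits closer, on average, to that set than to another. The sketch below uses tiny made-up vectors rather than real pretrained embeddings, purely to show the association score the excerpt refers to.

```python
# Hedged sketch of a WEAT-style association score. The 3-d "embeddings" are
# invented; the real test uses pretrained word vectors over the same idea.
import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attr_a, attr_b):
    # How much closer a word sits to attribute set A than to attribute set B.
    return (np.mean([cos(word_vec, v) for v in attr_a])
            - np.mean([cos(word_vec, v) for v in attr_b]))

emb = {
    "woman":      np.array([0.9, 0.1, 0.2]),
    "man":        np.array([0.1, 0.9, 0.2]),
    "motherhood": np.array([0.8, 0.2, 0.1]),  # deliberately close to "woman"
    "career":     np.array([0.2, 0.8, 0.3]),
}

attr_family = [emb["motherhood"]]
attr_career = [emb["career"]]

print("woman:", association(emb["woman"], attr_family, attr_career))
print("man:  ", association(emb["man"],   attr_family, attr_career))
# A strongly positive score for "woman" and a negative score for "man" is the
# kind of gender association the WEAT is designed to surface.
```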


Robots are racist and sexist. Just like the people who created them | Laurie Penny

#artificialintelligence

If those patterns are used to make decisions that affect people's lives, you end up with unacceptable discrimination." Robots have been racist and sexist for as long as the people who created them have been racist and sexist, because machines can work only from the information given to them, usually by the white, straight men who dominate the fields of technology and robotics. This doesn't mean robots are racist: it means people are racist, and we're raising robots to reflect our own prejudices. The encoded bigotries of machine learning systems give us an opportunity to see how this works in practice.


How artificial intelligence learns to be racist

#artificialintelligence

Open up the photo app on your phone and search "dog," and all the pictures you have of dogs will come up. This is no easy feat. Your phone knows what a dog "looks" like. This and other modern-day marvels are the result of machine learning: programs that comb through millions of pieces of data and start making correlations and predictions about the world.
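The underlying idea is simply learning from labelled examples. Real photo apps run deep convolutional networks over the pixels themselves; the sketch below stands in for that with invented feature vectors, just to show how a model trained on labelled examples can then label a new photo.

```python
# Hedged sketch of "search your photos for dog": a model learns correlations
# from labelled examples, then predicts labels for new ones. The per-photo
# features below are invented; real systems learn features from raw pixels.
from sklearn.neighbors import KNeighborsClassifier

# Invented features per photo: [furriness, pointy_ears, wagging_tail]
photos = [[0.9, 0.8, 0.9], [0.8, 0.9, 0.8],   # dogs
          [0.9, 0.9, 0.1], [0.8, 0.7, 0.0],   # cats
          [0.0, 0.0, 0.0], [0.1, 0.0, 0.1]]   # landscapes
labels = ["dog", "dog", "cat", "cat", "landscape", "landscape"]

classifier = KNeighborsClassifier(n_neighbors=3).fit(photos, labels)

new_photo = [[0.85, 0.75, 0.95]]
print(classifier.predict(new_photo))   # -> ['dog']
```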