Civil Rights & Constitutional Law


Artificial Intelligence: ARTICLE 19 calls for protection of freedom… · Article 19

#artificialintelligence

The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems on human rights. Machine learning – the most successful subset of AI techniques – enables an algorithm to learn from a dataset using statistical methods. As such, AI has a direct impact on the ability of individuals to exercise their right to freedom of expression in the digital age. The development of AI is not new, but advances in the digital environment – greater volumes of data, more computational power, and better statistical methods – will make it far more capable in the future.
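
As a rough illustration of the "learn from a dataset using statistical methods" idea, the sketch below fits a simple classifier to labelled examples and scores it on held-out data. The library (scikit-learn) and the synthetic data are assumptions made for illustration, not anything named in the submission.

    # Minimal sketch of "learning from a dataset using statistical methods".
    # scikit-learn and the synthetic data are assumptions made for illustration.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real labelled dataset.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)          # learn parameters from the data
    print("held-out accuracy:", model.score(X_test, y_test))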


When will AI stop being so racist?

#artificialintelligence

Learned bias can occur as the result of incomplete data or researcher bias in generating training data. Because sentencing systems are based on historical data, and black people have historically been arrested and convicted of more crimes, an algorithm could be designed to correct for the bias that already exists in the system. When humans make mistakes, we tend to rationalize their shortcomings and forgive them--they're only human!--even if the bias displayed by human judgment is worse than the bias displayed by an algorithm. In a follow-up study, Dietvorst shows that algorithm aversion can be reduced by giving people control over an algorithm's forecast.
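
One hedged sketch of how an algorithm might be "designed to correct for the bias that already exists in the system" is to reweight historical records so the protected attribute and the outcome become statistically independent before training, in the spirit of Kamiran and Calders' reweighing. The scheme, column names, and toy data below are illustrative assumptions, not the article's method.

    # Hypothetical sketch (not the article's method): reweight historical records
    # so the protected attribute and the outcome become statistically independent.
    # Toy data with invented column names.
    import pandas as pd

    df = pd.DataFrame({
        "group":   ["a", "a", "a", "a", "b", "b", "b", "b"],
        "outcome": [1, 0, 0, 0, 1, 1, 1, 0],
    })

    p_group = df["group"].value_counts(normalize=True)
    p_outcome = df["outcome"].value_counts(normalize=True)
    p_joint = df.groupby(["group", "outcome"]).size() / len(df)

    # Weight each record by P(group) * P(outcome) / P(group, outcome).
    df["weight"] = df.apply(
        lambda r: p_group[r["group"]] * p_outcome[r["outcome"]]
                  / p_joint[(r["group"], r["outcome"])],
        axis=1,
    )
    print(df)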


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
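
The article does not describe how the app works internally; the sketch below is only a hypothetical illustration of the general name-classification technique (character n-grams plus a simple classifier), with invented toy names and labels.

    # Hypothetical sketch of the general technique only (character n-grams plus a
    # simple classifier); the toy names and nationality codes below are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    names  = ["Garcia", "Smith", "Nguyen", "Kowalski", "Tanaka", "Johnson"]
    labels = ["es", "en", "vi", "pl", "ja", "en"]            # toy labels

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(2, 3)),   # character 2-3-grams
        MultinomialNB(),
    )
    clf.fit(names, labels)
    print(clf.predict(["Martinez"]))   # guesses a label from spelling alone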


AI robots are sexist and racist, experts warn

#artificialintelligence

He said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently, approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A-level physics. "We have a problem," Professor Sharkey told Today. Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
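
The Boston University finding concerned gendered associations learned from news text. A rough way to probe a pretrained word-embedding model for that kind of bias is sketched below, assuming gensim and a locally downloaded Google News word2vec file; the file path is an assumption, and this is not the researchers' exact procedure.

    # Probe a pretrained word2vec model (Google News vectors) for gendered
    # associations; the local file path is an assumption and must be downloaded.
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    # Analogy probe: "man is to programmer as woman is to ...?"
    print(vectors.most_similar(positive=["programmer", "woman"],
                               negative=["man"], topn=3))

    # Compare association strengths directly.
    print(vectors.similarity("nurse", "woman"), vectors.similarity("nurse", "man"))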


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the subject of car insurance, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to approach bias in data science, this kind of usually unintentional discrimination will become more and more normal, at odds with a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data which might promote bias--such as race, gender, religious or political beliefs, health status, and more--to make automated decisions (with some verified exceptions). It's time to make that training broader: teach everyone involved how the decisions they make while building tools may affect minorities, and accompany that with the relevant technical knowledge to prevent it from happening.
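
As a purely illustrative sketch of the "don't use protected data for automated decisions" idea, the snippet below drops protected columns before modelling and computes a simple disparate-impact ratio. The column names and numbers are invented, and dropping columns alone does not remove proxy bias.

    # Illustrative only: exclude protected attributes before modelling and check a
    # simple disparate-impact ratio. Column names and numbers are invented, and
    # dropping columns alone does not remove proxy bias (e.g. via zip code).
    import pandas as pd

    df = pd.DataFrame({
        "race":     ["a", "b", "a", "b", "a", "b"],
        "zip_code": [101, 102, 101, 102, 103, 102],
        "premium":  [900, 1200, 950, 1150, 880, 1250],
        "approved": [1, 0, 1, 1, 1, 0],
    })

    PROTECTED = ["race"]                    # attributes the policy forbids using
    features = df.drop(columns=PROTECTED + ["approved"])
    print("features used for the automated decision:", list(features.columns))

    # Disparate impact: approval rate of the worst-off group relative to the best-off.
    rates = df.groupby("race")["approved"].mean()
    print("approval rates by group:\n", rates)
    print("disparate impact ratio:", rates.min() / rates.max())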


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test'. Princeton University conducted a word-association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
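
The bias-amplification comparison quoted from the paper can be illustrated roughly as below: compare how skewed an activity's gender ratio is in the training labels versus in a model's predictions. The counts here are invented for illustration and are not the paper's data.

    # Back-of-the-envelope version of the bias-amplification comparison described
    # in the quoted passage; the counts below are invented, not the paper's data.
    def woman_share(pairs, activity):
        """Fraction of instances of `activity` labelled as involving a woman."""
        genders = [g for g, a in pairs if a == activity]
        return sum(1 for g in genders if g == "woman") / len(genders)

    train_labels = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34
    model_preds  = [("woman", "cooking")] * 84 + [("man", "cooking")] * 16

    train_skew = woman_share(train_labels, "cooking")
    pred_skew  = woman_share(model_preds, "cooking")
    print(f"training set: {train_skew:.0%} woman; predictions: {pred_skew:.0%} woman")
    print(f"amplification: {pred_skew - train_skew:+.0%}")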


Beyond science fiction: Artificial Intelligence and human rights

#artificialintelligence

Today, however, the convergence of complex algorithms, big data, and exponential increases in computational power has resulted in a world where AI raises significant ethical and human rights dilemmas, involving rights ranging from the right to privacy to due process. Although less dramatic than military applications, the development of AI in the domestic sector also opens the door to significant human rights issues such as discrimination and systemic racism. Police forces across the country, for example, are increasingly turning to automated "predictive policing" systems that ingest large amounts of data on criminal activity, demographics, and geospatial patterns to produce maps of where algorithms predict crime is likely to occur.
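
A highly simplified sketch of that "ingest incident data, output a risk map" pattern appears below; it bins invented incident coordinates into a grid and flags the highest-count cells, and it is not any vendor's actual system.

    # Highly simplified sketch of the "ingest incidents, output a risk map" pattern
    # (not any vendor's actual system): bin invented incident coordinates into a
    # grid and flag the highest-count cells as predicted hotspots.
    import numpy as np

    incidents = np.array([[0.10, 0.20], [0.15, 0.22], [0.12, 0.18],
                          [0.80, 0.90], [0.82, 0.88], [0.50, 0.50]])

    grid, _, _ = np.histogram2d(incidents[:, 0], incidents[:, 1],
                                bins=5, range=[[0, 1], [0, 1]])

    threshold = np.quantile(grid, 0.9)      # top ~10% of cells become "hotspots"
    hotspots = np.argwhere(grid >= threshold)
    print("predicted hotspot cells:", hotspots.tolist())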


Do we still need human judges in the age of Artificial Intelligence?

#artificialintelligence

Casetext, for example--a legal-tech startup providing Artificial Intelligence (AI)-based research for lawyers--recently secured $12 million in one of the industry's largest funding rounds, but research is just one area where AI is being used to assist the legal profession. The idea of AI judges raises important ethical issues around bias and autonomy. For example, an AI judge recently developed by computer scientists at University College London drew on extensive data from 584 cases before the European Court of Human Rights (ECHR). If AI can examine the case record and accurately decide cases based on the facts, human judges could be reserved for higher courts where more complex legal questions need to be examined.
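
The UCL work reportedly predicted case outcomes from case text; a minimal sketch of that general approach (text features plus a linear classifier) is below, using invented toy strings rather than real ECHR case records.

    # Minimal sketch of the general approach (text features plus a linear
    # classifier); the toy case snippets and outcome labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    case_texts = [
        "applicant detained without judicial review for an extended period",
        "complaint dismissed as manifestly ill-founded by the domestic courts",
        "prolonged pre-trial detention without an effective remedy",
        "domestic proceedings concluded within a reasonable time",
    ]
    outcomes = ["violation", "no-violation", "violation", "no-violation"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(case_texts, outcomes)
    print(clf.predict(["applicant held in detention with no judicial review"]))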


Rise of the racist robots – how AI is learning all our worst impulses

#artificialintelligence

Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages. Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland's police department. As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most.
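
The feedback loop Lum describes can be illustrated with a toy simulation, which is not PredPol's actual model: send extra patrols to whichever neighbourhood has the most recorded crime, record more incidents wherever patrols go, and repeat.

    # Toy simulation of the feedback loop described above (not PredPol's model):
    # patrol whichever neighbourhood has the most recorded crime, record more
    # incidents wherever patrols go, and repeat.
    import random

    random.seed(0)
    true_rate = {"A": 0.30, "B": 0.30}      # identical underlying crime rates
    recorded  = {"A": 12, "B": 10}          # slightly uneven historical records

    for day in range(50):
        hotspot = max(recorded, key=recorded.get)        # tomorrow's "prediction"
        patrols = {n: (3 if n == hotspot else 1) for n in recorded}
        for n, k in patrols.items():
            # Each patrol observes and records crime at the same true rate.
            recorded[n] += sum(random.random() < true_rate[n] for _ in range(k))

    print(recorded)

Although both neighbourhoods have the same underlying rate in this toy run, the one that starts with slightly more records keeps getting patrolled more and ends up with far more recorded crime, mirroring the over-policing loop described above.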