Results


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the data set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing, 'reminds me of this failed beta test.' Princeton University conducted a word association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
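
The kind of disparity the paper quotes can be illustrated with a few lines of code. The sketch below is a minimal illustration, not the paper's own metric: it assumes a toy list of (activity, gender) labels whose counts are invented purely to echo the 33-percent and 68-percent figures above, and it measures how far an activity's gender split departs from balance in the training data versus in a model's predictions.

```python
from collections import Counter

def gender_disparity(pairs, activity):
    """Difference between the female and male share of images labeled with
    `activity`. `pairs` is an iterable of (activity, gender) labels; 0.0 means
    the activity is perfectly balanced, +1.0 means it only ever involves women."""
    counts = Counter(g for a, g in pairs if a == activity)
    total = sum(counts.values()) or 1
    return (counts["woman"] - counts["man"]) / total

# Hypothetical label counts chosen only to echo the figures quoted above:
# the training set pairs cooking with women ~33 points more often than with men,
# and the trained model's predictions widen that gap to ~68 points.
train_labels = [("cooking", "woman")] * 665 + [("cooking", "man")] * 335
model_preds  = [("cooking", "woman")] * 840 + [("cooking", "man")] * 160

print(f"training disparity:  {gender_disparity(train_labels, 'cooking'):+.2f}")
print(f"predicted disparity: {gender_disparity(model_preds,  'cooking'):+.2f}")
```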


If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown that women begin to experience age discrimination in hiring before they reach 50, whereas men don't experience it until several years later. Just as technology is creating barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation date, details many older applicants choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their searches by high school graduation date, thereby allowing employers and employees to screen people and positions out of the running.


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. As an example, Caliskan made a video in which she shows how the Google Translate AI mistranslates words into English based on the gender stereotypes it has learned. Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths. "Language reflects facts about the world," Caliskan told Ars.


Artificial intelligence can be sexist and racist just like humans

#artificialintelligence

Researchers at Princeton University and Britain's University of Bath found that machine learning "absorbs stereotyped biases" when trained on words from the internet. Their findings, published in the journal Science on Thursday, showed that machines learn word associations from written texts that mirror those learned by humans. A psychological tool called the implicit association test (IAT)--an assessment of a person's unconscious associations between certain words--inspired the development of a similar test for machines called a word-embedding association test (WEAT). The WEAT found that male names were associated with work, math, and science, and female names with family and the arts, meaning these stereotypes were held by the computer.
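
As a rough sketch of how such a test works, the snippet below computes the core WEAT quantity: how much more strongly each target word (say, a male or female name) associates, by cosine similarity of its word embedding, with one attribute set (career words) than with another (family words). It assumes pretrained word vectors (for example GloVe) are already loaded into a dict named `vec`; the word lists in the commented example are illustrative stand-ins, not the test's actual stimuli.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """s(w, A, B): how much more strongly word w is associated with the
    attribute words in A than with those in B (difference of mean cosines)."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Standardized difference in association between two target sets
    (e.g. male vs. female names) and two attribute sets (e.g. career vs. family).
    Positive values mean X leans toward A and Y toward B."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# Hypothetical usage, with pretrained embeddings loaded into `vec` (word -> vector):
# X = ["john", "paul", "mike"]        # male names
# Y = ["amy", "lisa", "sarah"]        # female names
# A = ["career", "office", "salary"]  # work-related attributes
# B = ["home", "family", "children"]  # family-related attributes
# print(weat_effect_size(X, Y, A, B, vec))
```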


Why can artificial intelligence be racist and sexist?

#artificialintelligence

To do this, they turned to an unconventional method: the Implicit Association Test (IAT), which is used to study social attitudes and stereotypes in people. Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), an algorithm that analyzes whole fragments of text to determine which concepts are more closely associated than others. As an example, Caliskan cites the way the Google Translate algorithm incorrectly translates words into English from other languages, based on the gender stereotypes it has learned. In one of the tests, the researchers found a strong associative relationship between the concepts "woman" and "motherhood".


5 AI Solutions Showing Signs of Racism

#artificialintelligence

Several artificial intelligence projects have been created over the past few years, most of which still had some kinks to work out. For some reason, multiple AI solutions showed signs of racism once they were deployed in a live environment. It turned out the creators of the AI-driven algorithm powering Pokemon Go did not provide a diverse training set, nor did they spend time in the affected neighborhoods. It is becoming evident that a lot of these artificial intelligence solutions show signs of "white supremacy."


AI 'lawyer' correctly predicts outcomes of human rights trials

#artificialintelligence

Researchers from the University of Sheffield, the University of Pennsylvania and University College London programmed the machine to analyse text from cases heard at the European Court of Human Rights (ECtHR) and predict the outcome of the judicial decision. "We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes," explained Dr Nikolaos Aletras, who led the study at UCL Computer Science. The team of computer and legal scientists extracted case information published by the ECtHR in its openly accessible database. The researchers identified English-language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention and applied an AI algorithm to find patterns in the text.
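
The approach described here is, at heart, a text-classification problem: represent each case's text as n-gram features and train a classifier to predict violation versus non-violation. The sketch below shows one conventional way to set that up; the two case texts, their labels, and the specific TF-IDF and linear-SVM choices are illustrative assumptions, not the study's actual feature set or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder corpus: each entry is the text of one case, with a binary label
# (1 = violation found, 0 = no violation). The real study drew case text from
# the ECtHR's public database; these two strings are invented stand-ins.
case_texts = [
    "The applicant alleged ill-treatment in detention contrary to Article 3 ...",
    "The applicant complained about the length of civil proceedings ...",
]
labels = [1, 0]

# Bag-of-words n-gram features feeding a linear support vector machine,
# the general family of model used in this line of work.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),
    LinearSVC(),
)
model.fit(case_texts, labels)

# With a real, class-balanced corpus of a few hundred cases, cross-validated
# accuracy of a pipeline like this is the kind of figure a headline number
# such as "79 per cent" refers to. Here we just predict on unseen case text.
print(model.predict(["The applicant complained about delays in the proceedings ..."]))
```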


AI judge predicts human rights rulings with 79% accuracy rate

#artificialintelligence

A group of researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania created an artificial intelligence system to judge 584 human rights cases and recently released its findings. The cases analyzed by the AI method were previously heard at the European Court of Human Rights (ECtHR) and were equally divided into violation and non-violation cases to prevent bias. Basing its judgment on the case text, the AI judge managed to predict the decisions on the cases with 79% accuracy. Team leader Dr. Nikolaos Aletras of UCL Computer Science thinks that the AI method can be used as a tool for determining which cases might be violations of the European Convention on Human Rights.


AI judge predicts outcome of human rights cases with remarkable accuracy

#artificialintelligence

An artificial intelligence algorithm has predicted the outcome of human rights trials with 79 percent accuracy, according to a study published today in PeerJ Computer Science. Developed by researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania, the system is the first of its kind trained solely on case text from a major international court, the European Court of Human Rights (ECtHR). "Our motivation was twofold," co-author Vasileios Lampos of UCL Computer Science told Digital Trends. The algorithm analyzed texts from nearly 600 cases related to human rights issues including fair trials, torture, and privacy in an effort to identify patterns.


AI lawyer: I know how you ruled next summer

#artificialintelligence

Artificial intelligence can predict the outcomes of European Court of Human Rights trials to a high accuracy, according to research published today. It can judge the final result of legal trials based on the information in human rights cases with 79 per cent accuracy. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," said Dr Nikolaos Aletras, lead author of the research and researcher at the Department of Computer Science at University College London. The software uses natural language processing and machine learning to analyse case information from both sides, Aletras told The Register.