Civil Rights & Constitutional Law


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the data set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test'. Princeton University conducted a word association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
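As a rough illustration of the kind of word-association probe described above, the sketch below loads pre-trained GloVe vectors and compares cosine similarities between gendered words and an activity word. The file path and probe words are placeholders chosen for illustration, not details taken from the article or the Princeton study.

```python
# Minimal sketch of a word-association check on GloVe vectors.
# Assumes a locally downloaded GloVe text file; the path and the
# probe words below are illustrative, not from the article.
import numpy as np

def load_glove(path, vocab):
    """Load only the vectors we need from a GloVe .txt file (word dim1 dim2 ...)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

words = {"man", "woman", "kitchen"}
vecs = load_glove("glove.6B.100d.txt", words)  # hypothetical local path

# A stereotyped association shows up as an asymmetry between the two similarities.
print("woman-kitchen:", cosine(vecs["woman"], vecs["kitchen"]))
print("man-kitchen:  ", cosine(vecs["man"], vecs["kitchen"]))
```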


Tencent QQ messaging app kills unpatriotic chatbots

Engadget

A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China. Tay, for instance, learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. 'When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people.' Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder whether, if those admissions decisions had been taken by algorithms, you might not have ended up where you are?


Sorry, Dave, I can't code that: AI's prejudice problem

#artificialintelligence

Algorithms are increasingly making decisions that have significant personal ramifications, Matthews warns: "When we're making decisions in regulated areas – should someone be hired, lose their job or get credit," she says. Advertising networks have served women fewer ads encouraging high-paying jobs. Bias can also make its way into the data sets used to train AI algorithms: recidivism-prediction software tended to predict higher recidivism rates along racial lines, a ProPublica investigation found.
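To give a sense of what "higher recidivism rates along racial lines" looks like as an audit, here is a minimal sketch comparing false positive rates across groups. The column names and toy records are invented for illustration; this is not ProPublica's data or methodology.

```python
# Minimal sketch of a disparate-error-rate audit: compare how often the model
# wrongly flags people who did not reoffend, broken down by group.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],   # hypothetical groups
    "predicted":  [1,   0,   1,   1,   1,   0],     # model says "will reoffend"
    "reoffended": [0,   0,   1,   0,   0,   0],     # what actually happened
})

def false_positive_rate(sub):
    negatives = sub[sub["reoffended"] == 0]          # people who did not reoffend
    return (negatives["predicted"] == 1).mean() if len(negatives) else float("nan")

# A large gap between groups is the kind of disparity the article describes.
for name, sub in df.groupby("group"):
    print(name, false_positive_rate(sub))
```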


Salesforce Joins Partnership on AI to Benefit People and Society

#artificialintelligence

The reality is that, thanks to a convergence of increasing compute power, big data and algorithmic advances, AI is becoming mainstream and finding practical applications in nearly every facet of our personal lives. That's why I'm excited to announce that Salesforce is joining the Partnership on AI to Benefit People and Society. Trust, equality, innovation and growth are a central part of everything we do, and we are committed to extending these values to AI by joining the Partnership's diverse group of companies, institutions and nonprofits committed to collaboration and open dialogue on the many opportunities and rising challenges around AI. We look forward to collaborating with the other Partnership on AI members – companies, nonprofits and institutions – to address the challenges and opportunities within the AI field, including founding members Apple, Amazon, Facebook, Google / DeepMind, IBM and Microsoft; existing partners AAAI, ACLU and OpenAI; and new partners the AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Center for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP and Salesforce.com.


'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist, saying it gave users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


The White House Wants To End Racism In Artificial Intelligence

#artificialintelligence

In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine. If a dataset – say, a bunch of faces – contains mostly white people, or if the workers who assembled a more diverse dataset (even unintentionally) rated white faces as being more attractive than non-white faces, then any computer program trained on that data would likely "believe" that white people are more attractive than non-white people. "Ideally, every student learning AI, computer science, or data science would be exposed to curriculum and discussion on related ethics and security topics," the report states. Students should also be given the technical skills to apply this ethics education in their machine learning programs, the report notes.
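The kind of pre-training check the report implies can be very simple: look at how groups are represented in a dataset and how a subjective label is distributed across them. The sketch below is illustrative only; the field names and records are hypothetical, not drawn from the report.

```python
# Minimal dataset-composition audit: count group representation and the rate
# of a subjective label ("rated_attractive") within each group.
from collections import Counter

dataset = [
    {"skin_tone": "lighter", "rated_attractive": True},
    {"skin_tone": "lighter", "rated_attractive": True},
    {"skin_tone": "lighter", "rated_attractive": False},
    {"skin_tone": "darker",  "rated_attractive": False},
]

counts = Counter(d["skin_tone"] for d in dataset)
print("group counts:", counts)  # heavy imbalance means the model sees few darker faces

for group in counts:
    rows = [d for d in dataset if d["skin_tone"] == group]
    rate = sum(d["rated_attractive"] for d in rows) / len(rows)
    print(f"share labelled attractive in {group}: {rate:.2f}")
```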


AI judge predicts outcome of human rights cases with remarkable accuracy

#artificialintelligence

An artificial intelligence algorithm has predicted the outcome of human rights trials with 79 percent accuracy, according to a study published today in PeerJ Computer Science. Developed by researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania, the system is the first of its kind trained solely on case text from a major international court, the European Court of Human Rights (ECtHR). "Our motivation was twofold," co-author Vasileios Lampos of UCL Computer Science told Digital Trends. The algorithm analyzed texts from nearly 600 cases related to human rights issues including fair trials, torture, and privacy in an effort to identify patterns.
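For intuition, predicting an outcome from case text is a standard text-classification problem. The sketch below is an illustrative bag-of-words pipeline, not the researchers' actual system; the toy texts and labels are invented.

```python
# Minimal sketch: learn to map case text to a violation / no-violation label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "applicant alleges unfair trial and lack of impartial tribunal",
    "complaint concerns prison conditions amounting to degrading treatment",
    "domestic courts gave adequate reasons and the hearing was public",
    "authorities investigated promptly and remedies were available",
]
labels = [1, 1, 0, 0]  # 1 = violation found, 0 = no violation (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# Predict the outcome for an unseen snippet of case text.
print(model.predict(["the applicant had no access to an impartial tribunal"]))
```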