Civil Rights & Constitutional Law


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the data set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image-recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test'. Princeton University conducted a word-association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
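The amplification the researchers describe can be made concrete with a minimal sketch. This is not the paper's actual code, and the counts below are hypothetical; it only illustrates the idea of comparing a training set's gender skew for an activity against the skew in a trained model's predictions:

```python
# Illustrative sketch (not the paper's code): dataset gender bias for an
# activity, and the extra amplification a trained model adds at test time.

def bias(counts, label="woman"):
    """Fraction of examples for an activity tagged with `label`."""
    return counts[label] / sum(counts.values())

# Hypothetical annotation counts for images of the activity "cooking".
train_counts = {"woman": 660, "man": 340}   # training-set labels
test_counts = {"woman": 840, "man": 160}    # model predictions at test time

train_bias = bias(train_counts)             # women already over-represented
test_bias = bias(test_counts)               # model skews even further
amplification = test_bias - train_bias      # gap the model itself introduced

print(f"training bias: {train_bias:.2f}")
print(f"model bias:    {test_bias:.2f}")
print(f"amplification: {amplification:+.2f}")
```

A positive amplification means the model did not merely reproduce the data set's imbalance but exaggerated it, which is the paper's central finding.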


Tencent QQ messaging app kills unpatriotic chatbots

Engadget

A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China. Tay, for instance, learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. 'When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people.' Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder whether, if those admissions decisions had been taken by algorithms, you might not have ended up where you are?


Sorry, Dave, I can't code that: AI's prejudice problem

#artificialintelligence

Algorithms are increasingly making decisions that have significant personal ramifications, warns Matthews: "When we're making decisions in regulated areas – should someone be hired, lose their job or get credit," she says. Advertising networks have served women fewer ads for high-paying jobs than men. Bias can also make its way into the data sets used to train AI algorithms: a ProPublica investigation found that recidivism-prediction software tended to predict higher recidivism rates along racial lines.


Salesforce Joins Partnership on AI to Benefit People and Society

#artificialintelligence

The reality is that thanks to a convergence of increasing compute power, big data and algorithmic advances, AI is becoming mainstream and finding practical applications in nearly every facet of our personal lives. That's why I'm excited to announce that Salesforce is joining the Partnership on AI to Benefit People and Society. Trust, equality, innovation and growth are a central part of everything we do, and we are committed to extending these values to AI by joining the Partnership's diverse group of companies, institutions and nonprofits, who are also committed to collaboration and open dialogue on the many opportunities and rising challenges around AI. We look forward to addressing those challenges and opportunities alongside the other Partnership on AI members, including founding members Apple, Amazon, Facebook, Google / DeepMind, IBM and Microsoft; existing partners AAAI, ACLU and OpenAI; and new partners AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Centre for Democracy & Tech (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP and Salesforce.com.


'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist for giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


How artificial intelligence learns to be racist

#artificialintelligence

Open up the photo app on your phone and search "dog," and all the pictures you have of dogs will come up. This was no easy feat: your phone knows what a dog "looks" like. This and other modern-day marvels are the result of machine learning – programs that comb through millions of pieces of data and start making correlations and predictions about the world.


Artificial intelligence remains key as Intel buys Nervana

AITopics Original Links

With Intel's acquisition of the deep learning startup Nervana Systems on Monday, the chipmaker is moving further into artificial intelligence, a field that's become a key focus for tech companies in recent years. Intel's purchase of the company comes in the wake of Apple's acquisition of Seattle-based Turi Inc. last week, while Google, Yahoo, Microsoft, Twitter, and Samsung have also made similar deals. Those purchases – large tech firms have bought 31 AI startups since 2011, according to the research firm CB Insights – also underscore a shift. While AI was once thought of as a sci-fi concept, the technology behind it has come to propel a slew of innovations by hardware and software companies, some that attract attention – like self-driving cars – and some that often go unnoticed, like product recommendations on Amazon. "Intel's acquisition of [Nervana] is an acknowledgment that this area of deep learning, machine learning, artificial intelligence, is really an important part of all companies going forward," says David Schubmehl, an analyst who focuses on the field at the research firm IDC.


The White House Wants To End Racism In Artificial Intelligence

#artificialintelligence

In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine. If a dataset – say, a bunch of faces – contains mostly white people, or if the workers who assembled a more diverse dataset (even unintentionally) rated white faces as being more attractive than non-white faces, then any computer program trained on that data would likely "believe" that white people are more attractive than non-white people. "Ideally, every student learning AI, computer science, or data science would be exposed to curriculum and discussion on related ethics and security topics," the report states. Students should also be given the technical skills to apply this ethics education in their machine learning programs, the report notes.
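One technical skill the report gestures at can be sketched in a few lines: auditing a dataset's group composition before training anything on it. This is a minimal, hypothetical example (the group names and threshold are assumptions, not anything from the report), but it shows how "biased data in" can be caught early:

```python
# Minimal sketch of a pre-training dataset audit: flag any demographic
# group that dominates the data, before a model can learn that skew.
from collections import Counter

def audit(labels, threshold=0.8):
    """Return warnings for groups exceeding `threshold` of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return [
        f"{group} makes up {n / total:.0%} of the data"
        for group, n in counts.items()
        if n / total > threshold
    ]

# Hypothetical face dataset where one group dominates.
dataset = ["white"] * 90 + ["black"] * 6 + ["asian"] * 4
print(audit(dataset))
```

A check like this catches only representation imbalance, not annotator bias such as skewed attractiveness ratings, which is why the report also calls for ethics training alongside technical tooling.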


Microsoft is Soon Releasing Another Artificial Intelligence Powered Chatbot

#artificialintelligence

Earlier this year, Microsoft launched an AI-powered chatbot called 'Tay', but it soon caused controversy with its racist and unpleasant comments, leaving the company with no choice but to pull it offline. According to Gadgets Now, the Redmond-based software firm is reportedly releasing another artificial intelligence powered chatbot, dubbed Zo, on the social messaging app 'Kik'. The bot is believed to be coming to Twitter, Facebook Messenger and Snapchat once it's officially announced. "Zo is essentially a censored Tay or an English-variant of Microsoft's Chinese chatbot Xiaoice," MSPoweruser reported. At launch, the chatbot gives users a "super abbreviated personality test" in which it asks if the user would rather study in school or learn from experience.