Civil Rights & Constitutional Law


Robots are really good at learning things like racism and bigotry

#artificialintelligence

The real danger is in something called confirmation bias: coming up with an answer first and then looking only for information that supports that conclusion. Take the following example: if fewer women than men seek truck-driving jobs on a job-seeking website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it means only one specific factual thing: there are fewer women than men on that website looking for truck-driving jobs. If you tell an AI to find evidence that triangles are good at being circles, it probably will; that doesn't make it science.


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the dataset that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image-recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing that it 'reminds me of this failed beta test'. Separately, Princeton University researchers conducted a word-association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
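To make "amplifies the disparity" concrete, here is a minimal sketch of how one might compare the gender skew of an activity label in a training set against the skew in a model's predictions. The counts, numbers, and function names below are illustrative assumptions, not data or code from the 'Men Also Like Shopping' paper.

```python
# Hypothetical sketch: measuring bias amplification between a training set
# and a model's predictions for a single activity label ("cooking").
# All counts are made up for illustration.

def female_skew(female_count: int, male_count: int) -> float:
    """Fraction of examples for an activity that are annotated as female."""
    return female_count / (female_count + male_count)

# Toy annotation counts for the activity "cooking".
train_skew = female_skew(female_count=665, male_count=335)  # skew in the training data
pred_skew = female_skew(female_count=840, male_count=160)   # skew in the model's predictions

amplification = pred_skew - train_skew
print(f"training skew: {train_skew:.2f}, prediction skew: {pred_skew:.2f}")
print(f"bias amplification: {amplification:+.2f}")
```

If the prediction skew exceeds the training skew, the model has not just learned the imbalance in its data; it has exaggerated it, which is the effect the paper describes.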


Tencent QQ messaging app kills unpatriotic chatbots

Engadget

A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to the Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China. Tay, for instance, learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours.


Microsoft's Zo chatbot told a user that 'Quran is very violent'

#artificialintelligence

Microsoft's earlier chatbot Tay had faced similar problems, with the bot picking up the worst of humanity and spouting racist, sexist comments on Twitter when it was introduced last year. The 'Quran is violent' comment highlights the kind of problems that still exist when it comes to creating a chatbot, especially one that draws its knowledge from conversations with humans. With Tay, Microsoft launched the bot on Twitter, which can be a hotbed of polarising and often abusive content. Tay had spewed anti-Semitic, racist, and sexist content, given this was what users on Twitter were tweeting at the chatbot, which was designed to learn from human behaviour.


Microsoft's Zo chatbot calls the Qur'an 'violent'

Daily Mail

During a recent chat, Zo referred to the Qur'an as 'very violent', despite the fact that it has been programmed to avoid discussing politics and religion. Zo is a chatbot that allows users to converse with a mechanical millennial over the messaging app Kik or through Facebook Messenger. Its predecessor Tay fared worse: within hours of going live, Twitter users took advantage of flaws in Tay's algorithm that meant the AI chatbot responded to certain questions with racist answers.


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to train them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes towards pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.
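The Princeton/Bath study measured these biases with word-association tests on embeddings such as GloVe. The sketch below shows the general idea: compare how strongly a word's vector associates with "he" versus "she". The tiny hand-made vectors and function names are placeholder assumptions so the example runs without downloading pretrained embeddings; they are not the study's method or data.

```python
# Hypothetical sketch of a word-association test on word embeddings,
# in the spirit of the Princeton/Bath study. Real experiments use
# pretrained vectors such as GloVe; these toy 3-d vectors are illustrative.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

def gender_association(word: str) -> float:
    """Positive values lean toward 'he', negative values toward 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for word in ("engineer", "nurse"):
    print(word, round(gender_association(word), 3))
```

With real embeddings trained on web text, such scores tend to mirror human stereotypes, which is the core finding the researchers describe.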


Apple's head of Siri is joining the Partnership on AI

#artificialintelligence

Tim Cook's firm has become a founding member of the organisation, which includes Google/DeepMind, Microsoft, IBM, Facebook and Amazon. Apple's Tom Gruber, the chief technology officer of AI personal assistant Siri, has joined the group of trustees running the non-profit partnership. As well as Gruber, the Partnership on AI has announced six independent board members, including Dario Amodei from Elon Musk's OpenAI, Eric Sears of the MacArthur Foundation, and Deirdre Mulligan from UC Berkeley. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way.


Mike Gualtieri's Blog

#artificialintelligence

Artificial intelligence (AI) is real, albeit maturing slowly. The lesson for AI makers will be that they must build rules into AI systems to prevent undesirable communication and behavior. That makes total sense today, but as these systems get more sophisticated, the values, motivations, and point of view of these AI makers will find their way into all interactions with customers. There is a real danger to a free society when AI is controlled by a few giant corporations.
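As a rough illustration of the kind of rule layer the post has in mind, here is a minimal sketch that screens a chatbot's candidate reply against off-limits topics before it is sent. The topic list, deflection text, and function names are hypothetical assumptions, not any vendor's actual safeguard.

```python
# Hypothetical sketch of a rule layer for a chatbot: check a candidate reply
# against blocked topics and deflect instead of sending it. Illustrative only.
BLOCKED_TERMS = {"religion", "politics"}  # example topics to avoid

def apply_rules(candidate_reply: str) -> str:
    """Return the reply unless it touches a blocked topic; otherwise deflect."""
    lowered = candidate_reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I'd rather not talk about that. What else is on your mind?"
    return candidate_reply

print(apply_rules("Let's talk about politics today."))
print(apply_rules("Here's a recipe for pancakes."))
```

The point of the post stands either way: whoever writes such rules decides what the system will and will not say, which is why concentration of that power matters.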


5 AI Solutions Showing Signs of Racism

#artificialintelligence

Several artificial intelligence projects have been created over the past few years, most of which still had some kinks to work out. For some reason, multiple AI solutions showed signs of racism once they were deployed in a live environment. It turned out the creators of the AI-driven algorithm powering Pokemon Go did not provide a diverse training set, nor did they spend time in the neighborhoods the game ended up underserving. It is becoming evident that a lot of these artificial intelligence solutions show signs of "white supremacy" for some reason.


A massive AI partnership is tapping civil rights and economic experts to keep AI safe

#artificialintelligence

The Partnership also added Apple as a "founding member," putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board. "In its most ideal form, [the Partnership] puts on the agenda the idea of human rights and civil liberties in the science and data science community," says Carol Rose, the executive director of the ACLU of Massachusetts, who is joining the Partnership's board. "While there will be many benefits from AI, it is important to ensure that challenges such as protecting and advancing civil rights, civil liberties, and security are accounted for," says Eric Sears of the MacArthur Foundation. Google will be represented by director of augmented intelligence research Greg Corrado; Facebook by its director of AI research, Yann LeCun; Amazon by its director of machine learning, Ralf Herbrich; Microsoft by the director of its research lab, Eric Horvitz; and IBM by a research scientist at its T.J. Watson Research Center, Francesca Rossi.