Civil Rights & Constitutional Law


Robots are really good at learning things like racism and bigotry

#artificialintelligence

The real danger is in something called confirmation bias: when you come up with an answer first and then look only for information that supports that conclusion. Take the following example: if fewer women than men seek truck-driving jobs on a job-seeking website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it means only one specific factual thing: fewer women than men are looking for truck-driver jobs on that website. If you tell an AI to find evidence that triangles are good at being circles, it probably will; that doesn't make it science.
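To make the distinction concrete, here is a toy Python sketch (with invented records, not data from the article): the raw count supports exactly one factual claim, while a query written to confirm a pre-chosen conclusion will always appear to succeed.

```python
# Toy illustration of confirmation bias (hypothetical records, not real data).
applicants = [
    {"gender": "M", "job": "truck_driver"},
    {"gender": "M", "job": "truck_driver"},
    {"gender": "F", "job": "truck_driver"},
    {"gender": "F", "job": "nurse"},
]

# The one factual thing this data supports:
men = sum(a["gender"] == "M" and a["job"] == "truck_driver" for a in applicants)
women = sum(a["gender"] == "F" and a["job"] == "truck_driver" for a in applicants)
print(f"{men} men vs. {women} women sought truck-driving jobs on this site.")

# Confirmation bias, in code: decide the conclusion first, then keep only
# the records that support it. The "evidence" will always look conclusive.
def supports_my_conclusion(a):
    return a["gender"] == "M" and a["job"] == "truck_driver"

supporting = [a for a in applicants if supports_my_conclusion(a)]
print(f"Records supporting the pre-chosen conclusion: {len(supporting)}")
```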


AIs that learn from photos become sexist

Daily Mail

Researchers tested two of the largest collections of photos used to train image-recognition AIs and discovered that sexism was rampant. In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the set that associate kitchens with women. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test'. Separately, Princeton University conducted a word-association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
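As a rough illustration of the disparity arithmetic in the quoted passage, here is a minimal Python sketch. The counts are invented to echo the quoted figures; the paper's own bias-amplification metric and data differ in detail.

```python
# Minimal sketch of the disparity arithmetic quoted from "Men Also Like
# Shopping". Counts are hypothetical stand-ins chosen to echo the quoted
# 33-percent and 68-percent figures, not the paper's actual data.

def disparity(female, male):
    """Female share minus male share of an activity, in percentage points."""
    total = female + male
    return (female - male) / total

# Hypothetical label counts for the activity "cooking":
train = disparity(female=67, male=33)   # training set: ~34-point gap
test = disparity(female=84, male=16)    # model predictions: 68-point gap

print(f"training-set disparity: {train:.0%}")               # 34%
print(f"disparity in model output: {test:.0%}")             # 68%
print(f"amplification by the model: {test - train:+.0%}")   # +34%
```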


Tencent QQ messaging app kills unpatriotic chatbots

Engadget

A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to the Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China. Tay, for instance, learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours.


Microsoft's Zo chatbot told a user that 'Quran is very violent'

#artificialintelligence

Microsoft's earlier chatbot Tay had faced similar problems, with the bot picking up the worst of humanity and spouting racist, sexist comments on Twitter when it was introduced last year. The 'Quran is very violent' comment highlights the kind of problems that still exist when it comes to creating a chatbot, especially one which is drawing its knowledge from conversations with humans. With Tay, Microsoft launched the bot on Twitter, which can be a hotbed of polarizing and often abusive content. Tay spewed anti-Semitic, racist, and sexist content, given that this was what users on Twitter were tweeting at the chatbot, which was designed to learn from human behaviour.


Microsoft's Zo chatbot calls the Qur'an 'violent'

Daily Mail

During a recent chat, Zo referred to the Qur'an as 'very violent', despite the fact that it has been programmed to avoid discussing politics and religion. Zo is a chatbot that allows users to converse with a mechanical millennial over the messaging app Kik or through Facebook Messenger. Its predecessor, Tay, fared worse: within hours of Tay going live, Twitter users took advantage of flaws in its algorithm that meant the AI chatbot responded to certain questions with racist answers.


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to program them predominantly comes from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes towards pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.
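The word-association finding behind the Princeton GloVe study can be sketched in a few lines of Python. The vectors below are tiny made-up stand-ins chosen to show the mechanism; the real work uses pretrained GloVe embeddings and the more careful WEAT statistic.

```python
# Rough sketch of a word-association measurement in the spirit of the
# Princeton GloVe study. The 3-d "embeddings" below are invented and
# deliberately skewed; real embeddings are learned from online text.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

emb = {
    "man":      [1.0, 0.1, 0.0],
    "woman":    [-1.0, 0.1, 0.0],
    "engineer": [0.8, 0.5, 0.1],
    "kitchen":  [-0.7, 0.4, 0.2],
}

# If "engineer" sits closer to "man" than to "woman" in embedding space,
# the model has absorbed that association from its training text.
for word in ("engineer", "kitchen"):
    skew = cosine(emb[word], emb["man"]) - cosine(emb[word], emb["woman"])
    lean = "male" if skew > 0 else "female"
    print(f"{word!r} leans {lean} (score {skew:+.2f})")
```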


Microsoft says its racist chatbot illustrates how AI isn't adaptable enough to help most businesses

#artificialintelligence

The AI revolution may take longer than some expect to spread from Silicon Valley into other industries. Recent breakthroughs in machine learning have let tech giants such as Microsoft, Google, and Facebook build impressive new businesses and products powered by software that parses text and images. Some have launched cloud services they say can "democratize AI" by helping other companies do the same. But Peter Lee, vice president at Microsoft's research division, said the most valuable, high-end machine-learning systems so useful to tech giants are still too inflexible and expensive for the company to offer its business customers. "We are right now in terms of enterprise application of machine learning and AI concepts in an in-between spot," said Lee at MIT Technology Review's EmTech Digital conference in San Francisco this week.


Apple's head of Siri is joining the Partnership on AI

#artificialintelligence

Tim Cook's firm has become a founding member of the organisation, which includes Google/DeepMind, Microsoft, IBM, Facebook and Amazon. Apple's Tom Gruber, the chief technology officer of AI personal assistant Siri, has joined the group of trustees running the non-profit partnership. As well as Gruber, the Partnership on AI has announced six independent board members, including Dario Amodei from Elon Musk's OpenAI, Eric Sears of the MacArthur Foundation, and Deirdre Mulligan from UC Berkeley. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon created the partnership to research and collaborate on advancing AI in a responsible way.


Mike Gualtieri's Blog

#artificialintelligence

Artificial intelligence (AI) is real, albeit maturing slowly. The lesson for AI makers will be that they must build rules into AI systems to prevent undesirable communication and behavior. That makes total sense today, but as these systems get more sophisticated, the values, motivations, and point of view of these AI makers will find their way into all interactions with customers. There is a real danger to a free society when AI is controlled by a few giant corporations.


Forrester: AI Makers Will Squelch Free Speech

#artificialintelligence

Artificial intelligence (AI) is real, albeit maturing slowly. You experience it when you talk to Alexa, when you see a creepily targeted online ad, and when Netflix turns you on to Stranger Things. Oh yeah, and that self-driving car over there is AI super-powered! AI is indeed cool, but many are scared about how it ultimately may impact society. Stephen Hawking, Elon Musk, and even the Woz have warned that "…artificial intelligence can potentially be more dangerous than nuclear war."