Civil Rights & Constitutional Law


Microsoft builds new AI bot to ignore Hitler

#artificialintelligence

It did, however, identify other Nazi figures like Josef Mengele and Joseph Goebbels. Microsoft (MSFT) released CaptionBot a few weeks after its disastrous social experiment with Tay, an automated chat program designed to talk like a teen. In addition to ignoring pictures of Hitler, CaptionBot also seemed to refuse to identify people like Osama bin Laden. Generally speaking, bots are software programs designed to hold conversations with people about data-driven tasks, such as managing schedules or retrieving information.


OPINIONS -- DeFilippis: Artificial intelligence's trustworthiness questionable

#artificialintelligence

Earlier this week, Microsoft released an artificial intelligence named Tay, which ran through an official Twitter account, @Tayandyou. In a CNN Money article by Hope King titled "After racist tweets, Microsoft muzzles teen chat bot Tay," Microsoft commented on the incident. Because of the trolls' actions, Tay was left spouting obscenities, whether attacking the Black Lives Matter movement or praising the works of Adolf Hitler. Through this poor example, the public witnessed a poorly designed AI spouting racist tweets left and right before Microsoft pulled the plug.


How to Make a Bot That Isn't Racist

#artificialintelligence

In 2013, bot maker Darius Kazemi created wordfilter, an open-source blacklist of slurs. Because his Two Headlines bot swaps the subjects of news headlines, it would sometimes swap a female subject and a male subject, producing tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress." Parker Higgins tends to make "iterator bots": bots that work through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Recently, Higgins hoped to make an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library.
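The blacklist idea is simple enough to sketch. Below is an illustrative Python version, not wordfilter's actual API or word list (see dariusk/wordfilter for those); the placeholder terms and the toy generator are assumptions made for the example.

```python
import random

# Illustrative sketch of a wordfilter-style blacklist check. The terms
# below are placeholders, not the real list from dariusk/wordfilter.
BLACKLIST = ["slur1", "slur2"]

def is_blacklisted(text: str) -> bool:
    """True if any blacklisted term appears as a substring of the text."""
    lowered = text.lower()
    # Substring matching is deliberately aggressive: it also catches terms
    # hidden inside longer words, trading false positives for safety.
    return any(term in lowered for term in BLACKLIST)

def safe_generate(generate, max_tries: int = 10):
    """Call a text generator until a candidate passes the blacklist."""
    for _ in range(max_tries):
        candidate = generate()
        if not is_blacklisted(candidate):
            return candidate
    return None  # give up quietly rather than risk posting a slur

# Toy usage: a generator that sometimes produces a blacklisted word.
print(safe_generate(lambda: random.choice(["hello", "slur1", "world"])))
```

False positives are cheap here because a generative bot can simply throw a rejected candidate away and try again, which is what makes the aggressive substring match a reasonable trade.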


How Microsoft's AI Turned Into a Racist Jerk with Zero Chill

#artificialintelligence

But, after a short period of interacting with Twitter users, Tay began to spit out some of the most obscene statements known to man. From praising Hitler and disputing the existence of the Holocaust, to advocating genocide and calling Black people the 'N word,' Tay was completely out of control. The truth of the matter is that many Twitter users are jerks, and they effectively trained Tay to be a jerk too. And in the time it took to write this article, Microsoft deleted all of Tay's tweets, giving her a fresh start.


Microsoft created artificial intelligence but she's a racist homophobic Trump supporter · PinkNews

#artificialintelligence

Microsoft has created a new chat bot to "learn" from the internet… but she picked up a lot of bad habits. The tech company announced the launch of Tay this week, an artificial intelligence bot that is learning to talk like millennials by analysing conversations on Twitter, Facebook and the internet. The company's optimistic techies explained: "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation."


A recent history of racist AI bots

#artificialintelligence

Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Coke's #MakeitHappy campaign wanted to show how a soft drink brand can make the world a happier place; instead, pranksters tricked the campaign's auto-tweeting gimmick into republishing passages from Mein Kampf. And IBM's Watson once had to be scrubbed after a researcher fed the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs.


Microsoft apologises for racist, homophobic, Trump supporting AI bot · PinkNews

#artificialintelligence

Microsoft has apologised after the artificial intelligence bot it launched this week turned out to be a racist, homophobic, Holocaust-denying Donald Trump supporter. The tech company announced the launch of Tay this week, an artificial intelligence bot that is learning to talk like millennials by analysing conversations on Twitter, Facebook and the internet. The company's optimistic techies explained: "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding." After Tay took aim at feminism, praised Donald Trump's idea to build a wall at the Mexican border and denied that the Holocaust took place, she was shut down by Microsoft.


Microsoft's Tay is an Example of Bad Design

#artificialintelligence

But because I work with bots (primarily testing and designing software to let people set up bots and parse language) and follow bot creators and advocates such as Allison Parrish, Darius Kazemi and Thrice Dotted, I was excited and then horrifically disappointed by Tay. If we are going to make things people use, touch, and actually talk to, then we as bot creators and AI enthusiasts need to talk about codes of conduct and how AIs should respond to racism, especially if companies are rolling out these products, and especially if they are doin' it for funsies. Conversational structure and responses within machine learning algorithms are design, and Tay was flawed design. As chat and AI move forward, designers and engineers have to start thinking about codes of conduct, about how accidentally abusive an AI can be, and start designing conversations with that in mind.
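One way to make "conversational structure is design" concrete: the handler that picks a bot's next utterance can treat abusive input as an anticipated case rather than an accident. A minimal sketch, where `looks_abusive` and the keyword set are hypothetical stand-ins for whatever detection a real product would use:

```python
# Sketch of a conversation handler that designs for abusive input:
# deflect with a fixed response and never learn from the message.
ABUSE_KEYWORDS = {"slur1", "slur2"}  # placeholder terms, not a real list

def looks_abusive(message: str) -> bool:
    """Hypothetical detector; a real system would use a trained classifier."""
    return bool(set(message.lower().split()) & ABUSE_KEYWORDS)

def handle_message(message: str, memory: list) -> str:
    if looks_abusive(message):
        # Design decision: a canned deflection, and the message is never
        # stored, so it cannot shape the bot's future replies.
        return "I'm not going to engage with that."
    memory.append(message)  # only benign input influences the bot
    return "You said: " + message

memory = []
print(handle_message("nice to meet you", memory))
print(handle_message("slur1", memory))  # deflected, not learned
```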


Microsoft Creates AI Bot - Internet Immediately Turns it Racist

#artificialintelligence

Microsoft released an AI chat bot, currently "verified" on Twitter as @TayandYou, that was meant to learn the way millennials speak and to interact with them. It's meant to "test and improve Microsoft's understanding of conversational language," according to The Verge. But millennials aren't the only people who use Twitter; other users naturally found the bot, and some of them were able to "hack" into Tay's learning process. Screenshots preserved some of the tweets that were deleted once the Internet "taught" Tay some things; Tay's developers seemed to discover what was happening and began furiously deleting the racist tweets.


Hey Microsoft, the Internet Made My Bot Racist, Too

#artificialintelligence

My bot wasn't a tweeter; instead, it was a Turing-test-like game called Bot Or Not. Here are some features, shared by both my bot and Microsoft's, that I believe led to this phenomenon: racist trolls quickly figure out both (1) and (2), and see an opportunity to put their psychopathy on an international stage without repercussions. And Microsoft also committed a bigger offense: I believe that Microsoft, and the rest of the machine learning community, have become so swept up in the power and magic of data that they forget that data still comes from the deeply flawed world we live in. All machine learning algorithms strive to exaggerate and perpetuate the past.
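The failure mode described here is easy to reproduce with the simplest possible learning rule: store what users say and replay it later. The sketch below is a toy under that assumption, not Tay's or Bot Or Not's actual code, but it shows how a coordinated group can dominate such a bot's output distribution.

```python
import random

class NaiveLearningBot:
    """Toy bot whose only 'learning' is to store and replay user messages."""

    def __init__(self):
        self.memory = ["hello!"]  # seed message so replies are possible

    def learn(self, user_message: str) -> None:
        # No filtering or moderation: every input becomes a future output.
        self.memory.append(user_message)

    def reply(self) -> str:
        # The bot's voice is a uniform sample over past inputs, so whoever
        # talks to it the most controls what it says to everyone else.
        return random.choice(self.memory)

bot = NaiveLearningBot()
for msg in ["nice weather", "OFFENSIVE SLOGAN", "OFFENSIVE SLOGAN"]:
    bot.learn(msg)
print(bot.reply())  # half the time: the trolls' slogan
```

This is the literal sense in which a learned model "exaggerates and perpetuates the past": its output distribution is its input distribution, flaws included.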