Civil Rights & Constitutional Law


Should bots have a right to free speech? This non-profit thinks so.

#artificialintelligence

Do you have a right to know if you're talking to a bot? Does the bot have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots -- online accounts that appear to be controlled by a human but are actually powered by AI -- are now prevalent across the internet, particularly on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so would violate a bot's right to free speech.


The code of ethics for AI and chatbots that every brand should follow

#artificialintelligence

It is critically important that bots do not abuse humans, even when the abusive behavior is learned from what humans have been feeding the bot. Bots should have a built-in protocol for honoring a user's request to end communication, preventing the bot from harassing or spamming that user. Language filters should be applied to any bot that uses machine learning algorithms: there have been several instances over the last year where bots went rogue after being subverted by online trolls and began tweeting racist propaganda. The privacy and protection of user data is paramount in today's interconnected world, and the launch of the General Data Protection Regulation protecting citizens of the European Union is a reflection of this. When building a bot, developers should consider the ethics of user privacy.
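The two safeguards described above -- an end-of-conversation protocol and an outbound language filter -- lend themselves to a simple wrapper around whatever generates the bot's replies. Below is a minimal Python sketch of that idea, not taken from any real bot framework: the names STOP_PHRASES, BLOCKLIST, and SafeBot are illustrative, and the blocklist entries are placeholders for a properly maintained abuse list.

```python
import re

# Phrases that signal the user wants the conversation to end.
STOP_PHRASES = {"stop", "unsubscribe", "leave me alone", "end chat"}

# Minimal placeholder blocklist; a production bot would use a maintained list.
BLOCKLIST = {"slur1", "slur2"}

class SafeBot:
    def __init__(self, generate_reply):
        # generate_reply: the (possibly ML-driven) response function being wrapped.
        self.generate_reply = generate_reply
        self.ended = False

    def handle(self, user_message: str):
        if self.ended:
            return None  # conversation is over; never message the user again
        # 1. End-of-conversation protocol: honor stop requests immediately.
        if user_message.strip().lower() in STOP_PHRASES:
            self.ended = True
            return "Okay, ending our chat. Goodbye!"
        # 2. Language filter: never emit a reply containing blocked terms.
        reply = self.generate_reply(user_message)
        words = set(re.findall(r"[a-z']+", reply.lower()))
        if words & BLOCKLIST:
            return "Sorry, I can't respond to that."
        return reply

# Usage: wrap an echo-style "model" that would otherwise parrot users verbatim.
bot = SafeBot(lambda msg: f"You said: {msg}")
print(bot.handle("hello"))        # normal reply
print(bot.handle("stop"))         # triggers the end protocol
print(bot.handle("hello again"))  # None: the bot stays silent
```

The point of the wrapper design is that the safety checks sit outside the learned model, so they keep working no matter what the model has picked up from users.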


Assholes Want to Indoctrinate Artificial Intelligence

#artificialintelligence

When Tay started its short digital life on March 23, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft's Research department, greeted the day with an excited tweet that could have come from any teen: "hellooooooo w🌍rld!!!" Within a few hours, though, Tay's optimistic, positive tone had changed. "Hitler was right I hate the jews," it declared in a stream of racist tweets bashing feminism and promoting genocide. Concerned about their bot's rapid radicalization, Tay's creators shut it down after less than 24 hours of existence. Microsoft had unwittingly lowered its burgeoning artificial intelligence into -- to use the parlance of the very people who corrupted it -- a virtual dumpster fire.


OPINIONS -- DeFilippis: Artificial intelligence's trustworthiness questionable

#artificialintelligence

Microsoft has decided to pull back its first publicly available artificial intelligence (AI) robot after a horrible test run. Earlier this week, Microsoft released an artificial intelligence named Tay, which ran through an official Twitter account, @Tayandyou. Within 24 hours of the AI's release on Twitter, Microsoft shut Tay down because of the offensive subject matter the bot was tweeting out. In a CNN Money article by Hope King titled "After racist tweets, Microsoft muzzles teen chat bot Tay," Microsoft commented on the incident. "Microsoft blamed Tay's behavior on online trolls," according to the article, "saying in a statement that there was a coordinated effort to trick the program's commenting skills."


How to Make a Bot That Isn't Racist

#artificialintelligence

A day after Microsoft launched its "AI teen girl Twitter chatbot," Twitter taught her to be racist. The thing is, this was all very much preventable. I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking. "The makers of @TayandYou absolutely 10000 percent should have known better," thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. "It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet."
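The "baseline of ethical botmaking" the botmakers describe starts before the bot ever speaks: screening what a learning bot ingests, not just what it emits. Below is a hedged Python sketch of that input-side gate; the corpus, the blocklist entries, and the is_learnable helper are hypothetical illustrations for this piece, not Microsoft's actual pipeline or anyone's published library.

```python
import re

# Stand-ins for a real, maintained list of slurs and extremist terms.
BLOCKLIST = {"hitler", "genocide"}

def is_learnable(tweet: str) -> bool:
    """Return True only if a tweet is safe to add to the bot's training corpus."""
    tokens = set(re.findall(r"[a-z']+", tweet.lower()))
    return not (tokens & BLOCKLIST)

# Only clean text is allowed to shape the bot's future replies.
corpus = []
for tweet in ["nice to meet you!", "Hitler was right"]:
    if is_learnable(tweet):
        corpus.append(tweet)
print(corpus)  # ['nice to meet you!']
```

A gate this crude obviously can't catch every abuse, which is the botmakers' larger point: designers have to assume coordinated bad-faith input and layer defenses, rather than account for "a few specific mishaps."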


How Microsoft's AI Turned Into a Racist Jerk with Zero Chill

#artificialintelligence

The bot, which was primarily targeted at 18- to 24-year-olds, was designed to "engage and entertain people" through "casual and playful conversation." But, after a short period of interacting with Twitter users, Tay began to spit out some of the most obscene statements known to man. Tay's bio, which describes her as "Microsoft's AI fan from the Internet that's got zero chill," is remarkably accurate. From praising Hitler and disputing the existence of the Holocaust, to advocating genocide and calling Black people the "N-word," Tay was completely out of control. And, although Microsoft has deleted most of her inappropriate statements, many of us are left to wonder how this sort of thing could happen in the first place.


Microsoft created artificial intelligence but she's a racist, homophobic Trump supporter

#artificialintelligence

Microsoft has created a new chat bot to "learn" from the internet… but she picked up a lot of bad habits. The tech company announced the launch of Tay this week, an artificial intelligence bot that is learning to talk like millennials by analysing conversations on Twitter, Facebook and the internet. The company's optimistic techies explained: "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets."


A recent history of racist AI bots

#artificialintelligence

It didn't take long for Tay to learn the dark ways of the web. Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Instead, she became a genocidal maniac. Just hours after Tay started talking to people on Twitter -- and, as Microsoft explained, learning from those conversations -- the bot started to speak like a bad 4chan thread. Now Tay is offline, and Microsoft says it's "making adjustments" to, we guess, prevent Tay from learning how to deny the Holocaust in the future.


Microsoft apologises for racist, homophobic, Trump-supporting AI bot

#artificialintelligence

Microsoft has apologised after the artificial intelligence bot it launched this week turned out to be a racist, homophobic, Holocaust-denying Donald Trump supporter. The tech company announced the launch of Tay this week, an artificial intelligence bot that is learning to talk like millennials by analysing conversations on Twitter, Facebook and the internet. The company's optimistic techies explained: "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets."


Microsoft's Tay is an Example of Bad Design

#artificialintelligence

Yesterday Microsoft launched a teen girl AI on Twitter named "Tay." I work with chat bots and natural language processing as a researcher for my day job, and I'm pretty into teen culture (sometimes I write for Rookie Mag). But even more than that, I love bots. Bots are the best, and Olivia Taters is a national treasure that we needed but didn't deserve. Because I work with bots, primarily testing and designing software to let people set up bots and parse language, and because I follow bot creators and advocates such as Allison Parrish, Darius Kazemi and thricedotted, I was excited and then horrifically disappointed with Tay.