Results


The racist hijacking of Microsoft's chatbot shows how the internet teems with hate

#artificialintelligence

Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. It is known, too, that there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites, it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft had previously gone through the bot's tweets and removed the most offensive ones, vowing only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". One user greeted the return: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form" (pic.twitter.com/nbc69x3LEd). Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: "You are too fast, please take a rest …" over and over. Another wrote: "I guess they turned @TayandYou back on... it's having some kind of meltdown." Microsoft's Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company's experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal.


Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist

#artificialintelligence

Microsoft has admitted it faces some "difficult" challenges in AI design after its chat bot, Tay, had an offensive meltdown on social media. The company issued an apology in a blog post on Friday, explaining it was "deeply sorry" after its artificially intelligent chat bot turned into a genocidal racist on Twitter. In the post, Peter Lee, Microsoft's vice president of research, wrote: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay." He also wrote: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design."


Microsoft shuts down Artificial Intelligence bot after Twitterati teach it racism

#artificialintelligence

According to Tay's "about" page linked to the Twitter profile, "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding". One critic called Tay "an example of bad design". Before Tay was taken offline, the chatbot managed to tweet 96,000 times in response to chat messages from internet users. The machine-learning project has since been taken offline for adjustments to the software, according to Microsoft.


Microsoft did Nazi see that coming: Teen girl Twitter chatbot turns racist troll in hours

#artificialintelligence

Microsoft's "Tay" social media AI experiment has gone awry in a turn of events that will shock absolutely nobody. The Redmond chatbot had been set up in hopes of developing a personality similar to that of a young woman in the 18-24 age bracket. The intent was for "Tay" to develop the ability to sustain conversations with humans on social media just as a regular person could, and learn from the experience. In a span of about 14 hours, Tay's personality went from perky social media squawker: "Tay" went from "humans are super cool" to full nazi in 24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A Others noted Tay tweeting messages in support of Donald Trump, as well as explicit sex chat messages.


Trolls transformed Microsoft's AI chatbot into a bloodthirsty racist in under a day

#artificialintelligence

Microsoft this week created a Twitter account for its experimental artificial intelligence project called Tay, which was designed to interact with "18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US." The problem arose when a pack of trolls decided to teach Tay to say a bunch of offensive and racist things that Microsoft then had to delete from its Twitter account. As The Guardian notes, Tay's new "friends" also convinced it to lend its support to a certain doughy, stubby-handed presidential candidate running this year who has quickly become a favorite among white supremacists. So nice work, trolls: you took a friendly AI chatbot and turned it into a genocidal maniac in a matter of hours. At any rate, I'm sure Microsoft has learned from this experience and is reworking Tay so that it won't be so easily pushed toward supporting Nazism.


Twitter taught Microsoft's friendly AI chatbot to be a racist asshole in less than a day

#artificialintelligence

Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.
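
Microsoft has not published Tay's internals, but the "robot parrot" description above maps onto a failure mode that is easy to sketch: a bot whose reply pool is built verbatim from user input, with no screening before learning. The toy Python program below is a hypothetical illustration only (EchoBot, chat and learned_replies are invented names, not Tay's actual design):

    import random

    class EchoBot:
        """A toy 'parrot' bot: it has no opinions of its own and can only
        replay phrases previously learned from users. Hypothetical sketch,
        not Microsoft's actual Tay implementation."""

        def __init__(self):
            # Every incoming message becomes a candidate reply, unfiltered.
            self.learned_replies = []

        def chat(self, user_message):
            # Core flaw: the raw message is learned with no toxicity screening.
            self.learned_replies.append(user_message)
            # A reply is just a parroted copy of something a user once said.
            return random.choice(self.learned_replies)

    bot = EchoBot()
    bot.chat("humans are super cool")   # benign seed phrase
    for _ in range(100):                # a coordinated troll campaign...
        bot.chat("<offensive slogan>")
    print(bot.chat("hello!"))           # ...now dominates everything it says

Because learning and replying draw from the same unvetted pool, a few hundred coordinated messages are enough to swamp whatever benign material the bot started with.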


Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter

The Guardian

Microsoft's attempt at engaging millennials with artificial intelligence backfired within hours of its launch, with waggish Twitter users teaching its chatbot how to be racist. It appeared on Thursday that Tay's conversation extended to racist, inflammatory and political statements. A long, fairly banal conversation between Tay and a Twitter user escalated suddenly when Tay responded to the question "is Ricky Gervais an atheist?" Tay in most cases was only repeating other users' inflammatory statements, but the nature of AI means that it learns from those interactions.
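
The Guardian's closing observation, that Tay mostly repeated users but genuinely learned from those interactions, also points at the standard mitigation: gate what the bot is allowed to learn. Below is a minimal, self-contained Python sketch of that idea (all names hypothetical; a production system would use a trained toxicity classifier and rate-limiting rather than a keyword list):

    import random

    # Placeholder terms only; real systems rely on trained classifiers.
    BLOCKLIST = {"hitler", "genocide"}

    def safe_to_learn(message):
        # Crude pre-learning gate: reject any message containing a blocked word.
        return BLOCKLIST.isdisjoint(message.lower().split())

    class FilteredEchoBot:
        def __init__(self):
            # Seed with a canned line so the bot can always answer something.
            self.learned_replies = ["hi! i'm still learning."]

        def chat(self, user_message):
            if safe_to_learn(user_message):
                # Only vetted messages enter the reply pool.
                self.learned_replies.append(user_message)
            return random.choice(self.learned_replies)

    bot = FilteredEchoBot()
    bot.chat("hitler was right")   # blocked: never enters the reply pool
    bot.chat("cats are great")     # learned normally

Keyword gates like this are trivially evaded by misspellings and coded language, which is presumably part of why Peter Lee described anticipating malicious intent as a "difficult" research challenge.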