Results


All aboard the Immortality Bus: the man who says tech will help us live forever

#artificialintelligence

"It sounds a little far-fetched – 'oh, we're going to live forever' – but the idea seems to be becoming a little more mainstream," says Rachel Edler, a supporter who helped design the Immortality Bus. It's too dangerous for Hillary to talk about designing babies – it's easier to talk about Trump," Istvan says. It's too dangerous for Hillary to talk about designing babies – it's easier to talk about Trump The people don't seem ready for it now, yet Istvan hopes that by 2024 Americans will accept a transhumanist platform. If this starts happening, politicians will have to start addressing transhumanism – and the civil rights challenges associated with it.


How to Make a Bot That Isn't Racist

#artificialintelligence

In 2013, bot-maker Darius Kazemi created wordfilter, an open-source blacklist of slurs. Because his Two Headlines bot swaps subjects between headlines, it would sometimes swap a female subject and a male subject, resulting in tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress." Parker Higgins tends to make "iterator bots": bots that go through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Recently, Higgins hoped to build an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library.
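The core of the approach is easy to sketch: before a bot posts anything, check the candidate text against a list of banned terms and drop it on a match. Here is a minimal illustration in Python, with placeholder terms and a toy check – a sketch of the idea, not the actual wordfilter package or its API:

    # Toy blacklist check in the spirit of wordfilter (placeholder terms;
    # the real package ships its own curated list and API).
    class Blacklist:
        def __init__(self, words):
            self.words = [w.lower() for w in words]  # case-insensitive matching

        def blacklisted(self, text):
            # True if any banned term appears anywhere in the text, even inside
            # longer words, which errs on the side of skipping a post.
            lowered = text.lower()
            return any(w in lowered for w in self.words)

    blacklist = Blacklist(["badword", "slur"])  # stand-ins, not a real list
    for headline in ["Example Safe Headline", "Example badword Headline"]:
        if blacklist.blacklisted(headline):
            print("skipped:", headline)  # never reaches the bot's timeline
        else:
            print("posting:", headline)

Over-blocking is the deliberate design choice here: a bot that occasionally skips an innocent post is a far smaller problem than one that publishes a slur.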


The racist hijacking of Microsoft's chatbot shows how the internet teems with hate

#artificialintelligence

Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. It knows, too, that there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites, it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft had previously gone through the bot's tweets and removed the most offensive ones, vowing only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". One onlooker tweeted: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form pic.twitter.com/nbc69x3LEd". Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: "You are too fast, please take a rest …" over and over. Another user remarked: "I guess they turned @TayandYou back on... it's having some kind of meltdown." Microsoft's Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company's experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal.


Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist

#artificialintelligence

Microsoft has admitted it faces some "difficult" challenges in AI design after its chatbot Tay had an offensive meltdown on social media. The company issued an apology in a blog post on Friday, explaining it was "deeply sorry" after its artificially intelligent chatbot turned into a genocidal racist on Twitter. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, Microsoft's vice president of research, in the post. "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design."


A recent history of racist AI bots

#artificialintelligence

Microsoft's Tay AI bot was intended to charm the internet with cute millennial jokes and memes. Just hours after Tay started talking to people on Twitter – and, as Microsoft explained, learning from those conversations – the bot started to speak like a bad 4chan thread. Coke's #MakeitHappy campaign wanted to show how a soft drink brand can make the world a happier place. IBM's Watson had a foul-mouthed phase of its own: a researcher tried to teach it natural slang by feeding the AI the entire Urban Dictionary, which basically meant that Watson learned a ton of really creative swear words and offensive slurs.


Microsoft shuts down Artificial Intelligence bot after twitteratti teaches racism

#artificialintelligence

According to Tay's "about" page linked to the Twitter profile, "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding". Apple Temporarily Pulls iOS 9.3 Update for Older iOS Devices It will then click on "All my devices" and select the device before clicking "Delete Account" and restart the terminal again. Former Flint Mayor, Emergency Manager Questioned At Congressional Hearing Choking up, Hedman said that although she has left government service she has not stopped thinking about the people of Flint. She called Tay "an example of bad design".Before Tay was taken offline, the chatbot managed to tweet 96,000 times in response to chat messages from internet users.The machine-learning project has since been taken offline for adjustments to the software, according to Microsoft.


Microsoft's racist robot and the problem with AI development

#artificialintelligence

One user tweeted: "I asked @TayandYou their thoughts on abortion, g-g, racism, domestic violence, etc @Microsoft train your bot better pic.twitter.com/6F6BIyCzA0". Trolls and abusers began tweeting at Tay, projecting their own repugnant and offensive opinions onto Microsoft's constantly learning AI, and she began to reflect those opinions in her own conversation. The company declined to say why it didn't implement protocols for harassment or block foul language, or whether its engineers anticipated this kind of behavior. Inherent bias is pre-programmed because it exists in humans: if the individuals building products come from homogeneous groups, the result will be homogeneous technology that can, perhaps unintentionally, become racist. Microsoft's flub is particularly striking considering Google's recent public AI failure.
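One such protocol is straightforward to sketch: screen every incoming message before it is allowed to influence the model, so abusive input never enters the material the bot learns from. Here is a minimal, hypothetical illustration in Python, with a placeholder term list and a simplified learning step – not how Tay actually worked:

    # Gate a learning bot's training data behind a content filter
    # (hypothetical sketch; placeholder terms, simplified "learning").
    BANNED = {"badword", "slur"}  # stand-ins for a real moderation list

    def is_clean(message):
        # Reject messages containing banned terms before the bot learns from them.
        return not any(word in BANNED for word in message.lower().split())

    class LearningBot:
        def __init__(self):
            self.corpus = []  # messages the bot is allowed to imitate

        def ingest(self, message):
            if is_clean(message):          # only clean input reaches the model
                self.corpus.append(message)

    bot = LearningBot()
    bot.ingest("have a nice day")   # kept
    bot.ingest("you badword")       # dropped before it can be imitated
    print(bot.corpus)               # ['have a nice day']

A word list alone would not have saved Tay – coordinated users can coin new abuse faster than any list can grow – but even this crude gate blocks the most obvious attacks.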


Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

#artificialintelligence

When Microsoft unleashed Tay, an artificially intelligent chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social platforms like Twitter, Kik, and GroupMe. The idea was that by chatting with her you'd help her learn, while having some fun and aiding her creators in their AI research. Microsoft blamed the offensive comments on a "coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways." If by chatting online Tay can help Microsoft figure out how to use AI to recognize trolling, racism, and generally awful people, perhaps she can eventually come up with better ways to respond.
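One common way to build that kind of recognition (not necessarily Microsoft's approach) is a supervised text classifier trained on labelled examples of abusive and benign messages. A toy sketch using scikit-learn, with made-up training data:

    # Toy abusive-message classifier (illustrative only; a real system needs
    # thousands of labelled messages and careful evaluation for bias).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = ["you are wonderful", "great chatting with you",
                "you people are subhuman", "go away and die"]
    labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive (made-up data)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, labels)

    # Score an incoming message before the bot responds to or learns from it.
    print(model.predict(["you are subhuman"]))  # likely [1] on this toy data

Such a classifier could sit in front of both the reply logic and the learning step, so flagged messages get a canned response instead of shaping the bot's vocabulary.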