Microsoft learned that lesson the hard way earlier this year when it released Tay, an AI Twitter bot that had been trained to talk like a Millennial teen. Last year, Google's automatic image-recognition engine tagged a photo of two black people as "gorillas" -- presumably because the system had learned from a database that didn't include enough photos of either the animals or people of color. For the AI trained on digital news outlets, "Ferguson" and "Bernie" emerged as close synonyms. We created this AI system using Google's open source technology and trained it to produce synonyms based on what it learned from different news sources.
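The article doesn't name the tooling beyond "Google's open source technology," so what follows is only a minimal sketch of how such a synonym-producing system could be built with a word2vec-style embedding model (here via the gensim library); the toy corpus, parameters, and query word are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: train word embeddings on an outlet's articles and
# query for "synonyms", i.e. nearest neighbors in embedding space.
from gensim.models import Word2Vec

# Placeholder corpus: in practice, thousands of tokenized sentences
# scraped from a single news outlet would go here.
sentences = [
    ["protests", "erupted", "in", "ferguson", "after", "the", "ruling"],
    ["bernie", "sanders", "spoke", "about", "inequality", "at", "the", "rally"],
    # ... many more tokenized sentences from the outlet's coverage ...
]

# Illustrative hyperparameters; vector_size and window would be tuned in practice.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=2)

# Words that appear in similar contexts get similar vectors, so the "synonyms"
# returned here reflect the outlet's usage rather than dictionary meaning.
print(model.wv.most_similar("ferguson", topn=5))
```

In a model like this, two words that co-occur in similar news contexts -- as "Ferguson" and "Bernie" apparently did in one set of outlets -- can surface as closely related even though they are not synonyms in any dictionary sense.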
Star Trek turned 50 in 2016. In its half-century of existence -- on TV, on the big screen, and in the worldwide community of its fans -- Star Trek has become an integral part of our everyday lives. Even casual viewers know the pointed ears, the Vulcan salute, and the meaning of "beam me up, Scotty." Yet, Star Trek does not owe its enduring popularity and its place in our collective imagination to its aliens or to its technological speculations. What makes it so unique, and so exciting, is its radical optimism about humanity's future as a society: in other words, utopia.
The advancement of artificial intelligence may lead to sentient machines being granted 'human' rights, Marcus du Sautoy, Oxford University's professor for the public understanding of science, has said. Speaking at the Hay Literary Festival (via The Telegraph), du Sautoy said: "It's getting to a point where we might be able to say this thing has a sense of itself, and maybe there is a threshold moment where suddenly this consciousness emerges. One of the things I address in my new book is how can you tell whether my smartphone will ever be conscious. "The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn't know how to measure it. But we're in a golden age."
Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives. Imagine this coming from your iPhone: "I am, Siri, a living being with feelings." Your Mac RoboBook might one day sue you for keeping it cooped up in your dank bedroom. Your Samsung Galaxy RoboNote might take you to the International Court of Justice because you insist on keeping it in your back pocket, right next to your flaccid rump. Please, I'm not (entirely) under the spell of troubled delirium.
AI is here -- although Microsoft's blunder with Tay, the "teenaged girl AI" embodied by a Twitter account that "turned racist," shows that we obviously still have a long way to go. The pace of advancement, combined with our general lack of knowledge in the realm of artificial intelligence, has spurred many to weigh in on the emerging topic of AI and ethics…
Microsoft's artificial intelligence (AI)-powered bot, which was activated on Twitter last week for playful chat with people only to be silenced within 24 hours after users started sharing racist comments with it, was accidentally resurrected and promptly made a mess of things once again. Launched last week as an experiment in "conversational understanding" intended to engage people through "casual and playful conversation", Tay was soon bombarded with racist comments, and the innocent bot repeated those comments back to users with her own commentary. Once Twitter users realized that Tay would repeat racist tweets back with her own commentary, they flooded her with abusive posts, and a Microsoft spokesperson later confirmed to TechCrunch that the company was taking Tay off Twitter.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, Microsoft's vice president of research. Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with Web users in casual conversation. Lee, in the blog post, called Web users' efforts to exert a malicious influence on the chatbot "a coordinated attack by a subset of people." Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014.
Microsoft has now apologized for the entire episode. The company explains in a blog post that a small number of Twitter users exploited a flaw in Tay to transform it into a bigoted racist. Many of them appear to have abused Tay's "repeat after me" feature, which let users get Tay to repeat whatever they tweeted at it. Naturally, trolls tweeted sexist, racist and abusive things at Tay, which it repeated word for word.
Tay, Microsoft Corp's so-called chatbot that uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of racist and sexist comments by Twitter users that it parroted back to them. TayTweets (@TayandYou), which began tweeting on Wednesday, was designed to become "smarter" as more users interacted with it, according to its Twitter biography. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft representative said in a written statement supplied to Reuters, without elaborating. After Twitter user Room (@codeinecrazzy) tweeted "jews did 9/11" to the account on Wednesday, @TayandYou responded "Okay ... jews did 9/11."
Tay, the company's online chat bot designed to talk like a teen, started spewing racist and hateful comments on Twitter on Wednesday, and Microsoft shut Tay down around midnight. Microsoft blames Tay's behavior on online trolls, saying in a statement that there was a "coordinated effort" to trick the program's "commenting skills." As people chat with it online, Tay picks up new language and learns to interact with people in new ways. In her last tweet, Tay said she needed sleep and hinted that she would be back.