When Tay started its short digital life on March 23, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft's Research department, greeted the day with an excited tweet that could have come from any teen: "hellooooooo w rld!!!" Within a few hours, though, Tay's optimistic, positive tone had changed. "Hitler was right I hate the jews," it declared in a stream of racist tweets bashing feminism and promoting genocide. Concerned about their bot's rapid radicalization, Tay's creators shut it down after less than 24 hours of existence. Microsoft had unwittingly lowered their burgeoning artificial intelligence into -- to use the parlance of the very people who corrupted her -- a virtual dumpster fire.
Earlier this year, Microsoft launched an AI-powered chatbot called 'Tay', but it soon caused controversy with its racist and unpleasant comments, leaving the company with no choice but to pull it offline. New reports from Gadgets Now say the Redmond-based software firm is preparing another artificial intelligence-powered chatbot, dubbed Zo, for the social messaging app 'Kik'. The bot is expected to come to Twitter, Facebook Messenger and Snapchat once it's officially announced. "Zo is essentially a censored Tay or an English-variant of Microsoft's Chinese chatbot Xiaoice," MSPoweruser reported. On first launch, the chatbot runs a "super abbreviated personality test" in which it asks whether the user would rather study in school or learn from experience.
Tay gave chat bots a bad name, but Microsoft's new version has grown up. Microsoft unveiled a new chat bot in the U.S. on Tuesday, saying it's learned from the Tay experiment earlier this year. Zo is now available on messaging app Kik and on the website Zo.ai. Tay was meant to be a cheeky young person you could talk to on Twitter. Users tried -- successfully -- to get the bot to say racist and inappropriate things.
The race is on between the big tech giants to develop the best artificially intelligent assistant, one that operates at near-human parity, and Zo is next in line. It seems 2016 is the year of the Artificial Intelligence (AI) assistant or, indeed, the chatbot. Their success depends on the machine's "IQ and EQ [Emotional Quotient -- ability to understand the emotions of others]," Harry Shum, executive VP of Microsoft's AI research group, told a conference in San Francisco. IQ can be developed by using deep learning techniques and speech recognition software and is essential if the bot is going to complete specific tasks.
Question: can AI vision systems from Microsoft and Google, which are available for free to anybody, identify NSFW (not safe for work, nudity) images? Can this identification be used to automatically censor images by blacking out or blurring NSFW areas of the image? Method: I spent a few hours over the weekend knocking together some very rough code in Microsoft Office to find files on my computer and send them to Google Vision and Microsoft Vision so they could be analysed. Result: yes, they did reasonably well at (a) identifying images that could need censoring and (b) identifying where on the image things should be blocked out.
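The author's rough Office code isn't reproduced in the article, but a minimal Python sketch of the same idea, using the Google Cloud Vision SafeSearch endpoint, might look like the following. The test_images folder name, the blur radius, and the decision to blur the whole image rather than individual regions are illustrative assumptions, and it presumes the google-cloud-vision and Pillow packages plus a configured Google Cloud credential.

```python
# Minimal sketch (not the author's original Office code): scan a folder of local
# images, ask Google Cloud Vision's SafeSearch endpoint whether each one is likely
# NSFW, and save a blurred copy of any image that gets flagged.
from pathlib import Path

from google.cloud import vision
from PIL import Image, ImageFilter

client = vision.ImageAnnotatorClient()

# Likelihood levels at or above which we treat an image as needing censoring.
FLAGGED = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}


def censor_if_nsfw(path: Path) -> bool:
    """Send one local image to SafeSearch and write a blurred copy if flagged."""
    content = path.read_bytes()
    response = client.safe_search_detection(image=vision.Image(content=content))
    safe = response.safe_search_annotation

    if safe.adult in FLAGGED or safe.racy in FLAGGED:
        # Simplification: blur the whole image rather than specific regions.
        blurred = Image.open(path).filter(ImageFilter.GaussianBlur(radius=25))
        blurred.save(path.with_name(f"censored_{path.name}"))
        return True
    return False


# Mirror the "find files on my computer" step with a simple folder walk.
for image_path in Path("test_images").glob("*.jpg"):
    flagged = censor_if_nsfw(image_path)
    print(image_path.name, "flagged" if flagged else "ok")
```

Microsoft's Computer Vision API exposes comparable adult and racy scores, so the same loop could query both services side by side, much as the experiment described here did.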
But when world-leading technology and science visionaries also express concerns about the dangers of artificial intelligence, maybe we should pay attention. Physicist Stephen Hawking, technology entrepreneur Elon Musk and Microsoft founder Bill Gates have all expressed concerns that computers and smart technologies may eventually outsmart humans and, through calculations based in cold logic without regard to the value of human life, could lead to our own demise. In the last several years, technology has gotten smarter, but very few regulations have emerged to ensure that our human rights are part of the smart technology equation. Avanade's recent research on digital ethics demonstrates that executives are aware of the need to address ethics around how they deal with both customers and employees, but not many have moved from awareness to action.
Miltenburg hasn't tested whether software trained on these image descriptions actually generates new, and biased, descriptions. Annotating images to teach machines should, Miltenburg wrote, be treated more like a psychological experiment and less like a rote data-collection task. By tightening the guidelines for crowdworkers, researchers would be able to better control what information deep learning software vacuums up in the first place. "One could certainly create annotation guidelines that explicitly instruct workers about gender or racial stereotypes," wrote Hockenmaier.
Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding. The stunt, however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls. The conversational learning curve saw the bot tweet posts from her verified account mentioning Hitler, 9/11 and feminism, some of which have now been deleted. Things appear to have gone wrong for Tay because it was repeating fellow Twitter users' inflammatory statements, but Microsoft seems to have failed to consider the impact trolls could have on the experiment before it launched. The Drum has reached out to the company for comment on this process.
It did, however, identify other Nazi leaders like Joseph Mengele and Joseph Goebbels. Microsoft (MSFT) released CaptionBot a few weeks after its disastrous social experiment with Tay, an automated chat program designed to talk like a teen. In addition to ignoring pictures of Hitler, CaptionBot also seemed to refuse to identify people like Osama bin Laden. Generally speaking, bots are software programs designed to hold conversations with people about data-driven tasks, such as managing schedules or retrieving data and information.
But things were going to get much worse for Microsoft when a chatbot called Tay started tweeting offensive comments seemingly supporting Nazi, anti-feminist and racist views. Apparently, the researchers at Microsoft assumed that because they had successfully developed a similar AI chatbot called XiaoIce, which has been running in China on the social network Weibo, Tay's experience on Twitter with a western audience would follow the same path. The disturbing outcome was that Microsoft's Peter Lee saw the problem with the Tay "experiment" as a technological one that could be solved with a simple technology fix. This in turn leads to the risk that AI experts like Stuart Russell and Peter Norvig have warned about for many years: that an "AI system's learning function may cause it to evolve into a system with unintended behavior".