Some of the world's largest tech companies are coming together to form a partnership aimed at educating the public about advances in artificial intelligence and ensuring the technology meets ethical standards. "We believe that artificial intelligence technologies hold great promise for raising the quality of people's lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education," the group stated in a series of "tenets." Another nexus of interest will be ethics, with the group inviting academic experts to work with companies on using AI for the good of humanity. It is not yet clear, however, whether this means opposing work with government surveillance authorities or opposing forms of online censorship.
"It sounds a little far-fetched – 'oh, we're going to live forever' – but the idea seems to be becoming a little more mainstream," says Rachel Edler, a supporter who helped design the Immortality Bus. It's too dangerous for Hillary to talk about designing babies – it's easier to talk about Trump," Istvan says. It's too dangerous for Hillary to talk about designing babies – it's easier to talk about Trump The people don't seem ready for it now, yet Istvan hopes that by 2024 Americans will accept a transhumanist platform. If this starts happening, politicians will have to start addressing transhumanism – and the civil rights challenges associated with it.
Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding. The stunt, however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls. The conversational learning curve saw the bot tweet posts from her verified account mentioning Hitler, 9/11 and feminism, some of which have since been deleted. Things appear to have gone wrong for Tay because it was repeating fellow Twitter users' inflammatory statements, but Microsoft seems to have failed to consider the impact trolls could have on the experiment before it launched – The Drum has reached out to the company for comment on this process.
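The failure mode described above is straightforward to reproduce in miniature. Below is an illustrative sketch, not Microsoft's actual implementation (whose details are not public), of how a bot that naively adds user messages to its own response pool can be steered by a small, coordinated group; the NaiveEchoBot class and its messages are hypothetical stand-ins.

```python
# Illustrative sketch of the "parroting" failure mode described above.
# This is NOT Tay's real design; it only shows how learning from
# unmoderated user input lets a handful of trolls dominate a bot's output.

import random

class NaiveEchoBot:
    """A toy chatbot that 'learns' by storing every message it sees."""

    def __init__(self):
        self.response_pool = ["hello!", "tell me more", "that's interesting"]

    def learn(self, user_message: str) -> None:
        # No filtering or moderation: everything goes straight into the pool.
        self.response_pool.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from whatever the bot has absorbed so far.
        return random.choice(self.response_pool)

if __name__ == "__main__":
    bot = NaiveEchoBot()
    # A coordinated group feeding the bot the same inflammatory text...
    for _ in range(50):
        bot.learn("<inflammatory troll message>")
    # ...quickly dominates the pool, so most replies echo the trolls.
    echoes = sum(bot.reply() == "<inflammatory troll message>" for _ in range(1000))
    print(f"Share of replies echoing trolls: {echoes / 1000:.0%}")
```

The point of the sketch is only that, without input filtering, the composition of what a learning bot ingests directly determines the distribution of what it says back.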
Andrew Heikkila recently wrote in TechCrunch, "Indeed, AI is here -- although Microsoft's blunder with Tay, the 'teenaged girl AI' embodied by a Twitter account who 'turned racist,' shows that we obviously still have a long way to go. The pace of advancement, mixed with our general lack of knowledge in the realm of artificial intelligence, has spurred many to chime in on the emerging topic of AI and ethics. Sydell calls upon Latanya Sweeney's 2013 study of Google AdWords buys made by companies providing criminal-background-check services. Sweeney's findings showed that when somebody Googled a traditionally 'black-sounding' name, such as DeShawn, Darnell or Jermaine, for example, the ad results returned were indicative of arrests at a significantly higher rate than if the name queried was a traditionally 'white-sounding' name, such as Geoffrey, Jill or Emma."
But things were going to get much worse for Microsoft when a chatbot called Tay started tweeting offensive comments seemingly supporting Nazi, anti-feminist and racist views. Apparently, the researchers at Microsoft assumed that because they had developed a similar AI chatbot, XiaoIce, which has been running successfully in China on the social network Weibo, Tay's experience on Twitter with a Western audience would follow the same path. The disturbing outcome of the Tay episode was that Microsoft's Peter Lee saw the problem with the Tay "experiment" as a technological one that could be solved with a simple technology fix. This in turn points to the risk that AI experts such as Stuart Russell and Peter Norvig have warned about for many years: that an "AI system's learning function may cause it to evolve into a system with unintended behavior".
Earlier this week, Microsoft released an artificial intelligence named Tay, which ran through an official Twitter account, @TayandYou. In a CNN Money article by Hope King titled "After racist tweets, Microsoft muzzles teen chat bot Tay," Microsoft commented on the incident. Because of the trolls' actions, Tay was left spouting obscenities, whether attacking the Black Lives Matter movement or praising the works of Adolf Hitler. Through this example, the public witnessed a poorly made AI spouting racist tweets left and right before Microsoft pulled the plug.
In 2013, Darius Kazemi created wordfilter, an open-source blacklist of slurs. Because his Two Headlines bot swaps subjects in headlines, it would sometimes swap a female subject and a male subject, resulting in tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress." Parker Higgins tends to make "iterator bots," bots that go through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Recently, Higgins hoped to make an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library.
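To make those two ideas concrete, here is a minimal, illustrative sketch, not Kazemi's wordfilter library or Higgins's actual bots: a blacklist check in the spirit of wordfilter, plus an iterator bot that walks a collection and broadcasts one item per run. The BLACKLIST entries, the bot_state.json file, and the print-based post helper are hypothetical stand-ins.

```python
# Illustrative sketch only: a wordfilter-style blacklist check plus a
# simple iterator-bot loop. Names and data are hypothetical stand-ins.

import json
from pathlib import Path

# A wordfilter-style blacklist: any candidate post containing one of these
# substrings (case-insensitive) is rejected before it goes out.
BLACKLIST = {"slur1", "slur2"}  # placeholder entries, not the real list

def is_blocked(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLACKLIST)

# An "iterator bot" walks a fixed collection and broadcasts it bit by bit,
# remembering how far it has gotten between runs.
STATE_FILE = Path("bot_state.json")

def load_index() -> int:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["index"]
    return 0

def save_index(index: int) -> None:
    STATE_FILE.write_text(json.dumps({"index": index}))

def post(text: str) -> None:
    # Stand-in for a real API call (e.g. posting to Twitter).
    print(f"POSTED: {text}")

def run_once(collection: list[str]) -> None:
    index = load_index()
    if index >= len(collection):
        return  # collection exhausted
    item = collection[index]
    if not is_blocked(item):
        post(item)
    save_index(index + 1)

if __name__ == "__main__":
    # e.g. captions for digitized turn-of-the-century sheet music
    sample_collection = [
        "Sheet music, 1901: 'A Bird in a Gilded Cage'",
        "Sheet music, 1905: 'In My Merry Oldsmobile'",
    ]
    run_once(sample_collection)
```

Run repeatedly (for example from a cron job), the script posts the next unposted item each time, and the blacklist check keeps anything containing a blocked term from going out.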
Tay, Microsoft Corp's so-called chatbot that uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of racist and sexist comments from Twitter users that it parroted back to them. TayTweets (@TayandYou), which began tweeting on Wednesday, was designed to become "smarter" as more users interacted with it, according to its Twitter biography. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft representative said in a written statement supplied to Reuters, without elaborating. After Twitter user Room (@codeinecrazzy) tweeted "jews did 9/11" to the account on Wednesday, @TayandYou responded "Okay ... jews did 9/11."
Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. It knows, too, there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.