Some of the world's largest tech companies are coming together to form a partnership aimed at educating the public about advances in artificial intelligence and ensuring those advances meet ethical standards. "We believe that artificial intelligence technologies hold great promise for raising the quality of people's lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education," the group stated in a series of "tenets." Another focus will be ethics, with the group inviting academic experts to work with companies on using AI for the benefit of humanity. It is not yet clear, however, whether this commitment means refusing to work with government surveillance authorities or opposing forms of online censorship.
The bot was designed to learn by talking with real people on Twitter and the messaging apps Kik and GroupMe. After less than a day on Twitter, it had started spouting racist, sexist, and anti-Semitic comments. As one Twitter user quipped: "'Tay' went from 'humans are super cool' to full Nazi in 24 hrs and I'm not at all concerned about the future of AI." You might wonder why Microsoft would unleash a bot upon the world that was so easily unhinged. The AI chatbot Tay is a machine learning project, designed for human engagement.