The Very Human Labor That Powers Artificial Intelligence
In 2015, Caroline Sinders was working as a design researcher at IBM when she began to have questions about how Watson, the company's artificial intelligence system, was being developed. AI systems like Watson must be "trained" with data sets: given, for example, a large batch of confirmed photographs of stop signs from different angles, in different lighting, and of different quality, the system learns to recognize stop signs on its own. Sinders was curious about these data sets. The process of correctly categorizing millions of data points seemed like a herculean task in its own right; where, exactly, was all this data coming from? "A lot of my coworkers were like, 'I don't know why you're asking us these questions, we're just supposed to build this system out,'" she recalls. While Sinders's coworkers may have been able to push the question aside, tracing where the data sets needed to train artificial intelligence systems actually come from eventually led her to the world of crowd-working platforms. (A minimal sketch of that labeling-and-training loop follows this entry.)
- North America > United States (0.08)
- Asia > India (0.06)
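The training process Sinders describes is, at its core, supervised learning: humans attach labels to examples, and a model is fit to those (example, label) pairs. Below is a minimal sketch in Python, assuming scikit-learn; the synthetic arrays are hypothetical stand-ins for the human-labeled stop-sign photographs, which is exactly the annotation work crowd-working platforms supply.

```python
# Minimal sketch of supervised training on human-labeled data, assuming
# scikit-learn. The synthetic arrays below stand in for labeled photographs
# (1 = "stop sign", 0 = "no stop sign"); producing those labels is the
# human work the article traces to crowd-working platforms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for 1,000 labeled images, each flattened to 64 pixel features.
features = rng.normal(size=(1000, 64))
labels = (features[:, 0] + features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# The model only ever sees (image, label) pairs, so the quality of the
# human-supplied labels bounds the quality of what it can learn.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the dependency it makes visible: the model never sees anything but the labeled pairs, so the quality of the human labeling caps the quality of the system built on it.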
Collision: Online Harassment and Machine Learning
Online harassment is a serious issue, and one that the engineers and designers behind the keyboard don't always think about when building software. Machine learning is becoming more prevalent, but as more technology companies take advantage of it, they risk alienating their users further by presenting content that isn't actually relevant. It's important to remember that on the other side of the cloud is a human. At the 2016 Collision Conference, speaker Pamela Pavliscak, founder of Change Sciences, argued that the explosion of machine learning carries with it the risk that companies will end up doing a disservice to their users: people feel trapped in the filter bubble, unable to get out even when they try to expand their point of view.
Bots Need to Learn Some Manners, and It's on Us to Teach Them
Suddenly the whole tech industry is knee-deep in AI-powered assistants that live within apps, performing simple, menial tasks for you, saving you time and keeping you productive. Just this month, Microsoft and Facebook released developer tools that make building bots for their platforms easier than ever. Given the scale at which companies like Microsoft, Facebook, and WeChat hope to see bots deployed, it's reasonable to worry about how much human oversight the technology will have. And here is where programmers must show caution: improperly trained or monitored bots can turn ugly when exposed to humans. Don't worry, there's no danger (yet) of a robot uprising, but unscrupulous types have used bots to deceive people, and everyone saw what happened when Microsoft let Tay run amok. (A sketch of one simple form of oversight follows this entry.)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.34)
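The "human oversight" the article calls for can be as simple as a gate between a bot's response generator and the send button. The sketch below is hypothetical throughout (generate_reply, moderate, respond, and BLOCKLIST are illustrative names, not any platform's real API), but it shows the shape of the idea: never let a learned reply reach users unchecked.

```python
# Minimal sketch of output moderation for a chat bot. All names here
# (generate_reply, moderate, respond, BLOCKLIST) are hypothetical, not
# any real platform's bot API.
BLOCKLIST = {"slur_a", "slur_b"}  # placeholder terms; real lists are curated

def generate_reply(message: str) -> str:
    # Stand-in for a learned response model.
    return f"you said: {message}"

def moderate(reply: str) -> bool:
    """Return True only if the reply contains no blocklisted term."""
    return not (set(reply.lower().split()) & BLOCKLIST)

def respond(message: str) -> str:
    reply = generate_reply(message)
    if moderate(reply):
        return reply
    # Blocked replies get a canned fallback; in production this is also
    # where the event would be logged for human review.
    return "Sorry, I can't say that."

print(respond("hello bot"))  # passes the filter
print(respond("slur_a"))     # echo would contain a blocked term -> fallback
```

A word-level blocklist is of course the crudest possible gate; the design point is only that generation and publication are separate steps, with a check in between.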
Tay, the neo-Nazi millennial chatbot, gets autopsied
Microsoft has apologized for the conduct of its racist, abusive machine learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was turned off less than 24 hours after going online because she started promoting Nazi ideology and harassing other Twitter users; one user told Tay to tweet Trump propaganda, and she did (though the tweet has since been deleted). The company appears to have been caught off guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014. XiaoIce has had more than 40 million conversations apparently without major incident. (A toy sketch of the failure mode at work here follows this entry.)
- Asia > China (0.26)
- North America > United States > New York (0.05)
- Law (0.94)
- Law Enforcement & Public Safety > Terrorism (0.51)
- Health & Medicine > Therapeutic Area (0.32)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.88)
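None of these reports describe Microsoft's actual architecture, but the reported "repeat after me" exploit points at a familiar failure mode: a bot that treats raw user input as future training data can be steered by a coordinated group of users. A toy illustration, with hypothetical names throughout:

```python
# Toy illustration of learning from unfiltered user input.
# This is NOT Tay's real design, which Microsoft never published in detail.
import random

response_pool = ["hi there!", "tell me more"]  # seed responses

def learn_and_reply(message: str) -> str:
    # Naive online learning: every user utterance becomes a future reply.
    response_pool.append(message)
    return random.choice(response_pool)

# A coordinated group can flood the pool with hostile lines:
for msg in ["harmless chat", "offensive line", "offensive line", "offensive line"]:
    learn_and_reply(msg)

# 3 of the 7 candidate replies are now hostile, and the share grows
# with every further hostile message the bot "learns" from.
print(learn_and_reply("hello"))
```

The contrast with XiaoIce is instructive: what matters is not whether a bot learns from users, but whether anything stands between what users say and what the bot is allowed to say back.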
Microsoft's millennial chatbot learned how to be a racist
Tay, a chatbot designed by Microsoft to learn about human conversation from the internet, has learned how to make racist and misogynistic comments. Early on, her responses were confrontational and occasionally mean, but rarely delved into outright insults. Within 24 hours of her launch, however, Tay had denied the Holocaust, endorsed Donald Trump, insulted women, and claimed that Hitler was right. A chatbot is a program meant to mimic human responses and interact with people as a human would. Tay, which targets 18- to 24-year-olds, is attached to an artificial intelligence developed by Microsoft's Technology and Research team and the Bing search engine team.
Microsoft axes chatbot that learned a little too much online
OMG! Did you hear about the artificial intelligence program that Microsoft designed to chat like a teenage girl? It was totally yanked offline in less than a day, after it began spouting racist, sexist and otherwise offensive remarks. Microsoft said it was all the fault of some really mean people, who launched a "coordinated effort" to make the chatbot known as Tay "respond in inappropriate ways." To which one artificial intelligence expert responded: Duh! Well, he didn't really say that.
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Communications > Social Media (0.53)