Cultural Bias in Artificial Intelligence - The New Stack


Advertising and white papers may make artificial intelligence seem like a pie-in-the-sky proposition, with easy analysis, deep insights, and fair algorithms available everywhere. The reality, however, is that AI can expose an even darker side of our own humanity, acting as more of a mirror than as sky-pie. We saw this when Microsoft put an AI-driven bot up on Twitter, only to have it spout racist statements shortly thereafter.

Camille Eddy, currently a student pursuing a mechanical engineering bachelor's degree at Boise State, has already built an extensive track record as a high-tech robotics intern at places like Alphabet and HP. At OSCON, she spoke on the topic of recognizing cultural bias in AI.

"Some of the things we've seen are misclassification or misidentification. For example, Microsoft's Tay AI, a bot that was released on Twitter, was famously easily influenced by people talking to it in racist and sexist ways, and it reflected that. People would say, 'This is an idea, you should hold this idea,' and it did. [We're] talking about ways it can reflect our own biases as a society, and how that might not be something that we want," said Eddy.
