As data scientists, we are aware that bias exists in the world. We read stories about how cognitive biases can affect decision-making. We know that, for instance, a resume with a white-sounding name will receive a different response than the same resume with a black-sounding name, and that writers of performance reviews use different language to describe contributions by women and men in the workplace. We read stories in the news about ageism in healthcare and racism in mortgage lending. Data scientists are problem solvers at heart, and we love our data and our algorithms, which sometimes seem to work like magic, so we may be inclined to solve these problems of human bias by turning the decisions over to machines.
We like to think we've moved past the workplace sexism of the 1950s, when men were professionals and women were secretaries. But while women have managed to break out of those subservient roles, the genders we assign to artificial-intelligence robots suggest our prejudices haven't made as much progress. After the law firm BakerHostetler hired an AI "lawyer" named ROSS, journalist Rose Eveleth noted that the male name was somewhat unusual in the world of AI: "I would just like to note that all the assistant AIs are given female names, but the lawyer AI is named Ross." Critics have previously observed that most AI assistants, including Apple's Siri, Google Now, Amazon's Alexa, and Microsoft's Cortana, sound like women.
Some have attempted to excuse the trend, pointing to research showing that people respond more positively to women's voices. Meanwhile, the AI lawyer is called ROSS, and IBM's advanced question-answering system, which beat human champions on the quiz show Jeopardy!, goes by Watson.
Microsoft's chatbot Tay offers a cautionary example. Launched on Twitter in 2016 and designed to learn from the users it talked with, Tay was pulled offline within a day after trolls taught it to post offensive messages. As Microsoft put it: "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways." Tay's faux pas served as both a technical feat, proving the wonders of AI by improving the bot's language skills in a matter of hours, and a stark cultural reminder that things get ugly fast when diversity isn't continuously part of the conversation. By not building language filters or canned responses into Tay's programming to ward off taunting messages about Adolf Hitler, black people, or women, Microsoft's engineers neglected a major issue people face online: targeted harassment. Women make up only 27 percent of the tech giant's global staff, according to the company's 2015 diversity report.