A new algorithm trains AI to erase its biases


In recent years, artificial intelligence has struggled with a major PR problem: whether intentionally or not, developers keep building biases into their systems, creating algorithms that reflect the same prejudices common in society. That's why it's intriguing that engineers from MIT and Harvard University say they've developed an algorithm that can scrub the bias from AI -- like sensitivity training for algorithms. The tool audits algorithms for bias and helps re-train them to behave more equitably, according to new research presented this week at the Conference on Artificial Intelligence, Ethics, and Society.

Catching these problems by hand is difficult, and even once complex AI systems are deployed in the real world, it becomes very hard to evaluate exactly how they make their decisions. That's why automating the process is so important: according to the research, the new tool can go in and re-weight how much importance the AI system gives to each aspect of its training data.
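The article doesn't include the researchers' code, but the general re-weighting idea it describes can be sketched in a few lines. The example below is an illustration only, not the MIT/Harvard method: it assumes a synthetic dataset, a hand-labeled "group" attribute, inverse-frequency weights, and a basic logistic regression, purely to show how giving under-represented training examples more weight can change a model's behavior.

```python
import numpy as np

# Illustrative sketch only: the dataset, group labels, weighting rule, and model
# below are simplifying assumptions, not the researchers' actual algorithm.

rng = np.random.default_rng(0)

# Synthetic dataset: 2 features, a binary label, and a "group" attribute that
# is heavily imbalanced (group 1 is under-represented at roughly 10%).
n = 2000
group = (rng.random(n) < 0.1).astype(int)
X = rng.normal(size=(n, 2)) + group[:, None] * 1.5   # groups occupy different regions
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

# Re-weighting step: weight each example inversely to how common its group is,
# so rare groups are not drowned out during training.
groups, counts = np.unique(group, return_counts=True)
freq = dict(zip(groups, counts / n))
w = np.array([1.0 / freq[g] for g in group])
w /= w.mean()  # normalize so the average weight stays 1

# Weighted logistic regression trained by plain gradient descent.
Xb = np.hstack([X, np.ones((n, 1))])   # add a bias column
theta = np.zeros(Xb.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ theta))
    grad = Xb.T @ (w * (p - y)) / n    # sample-weighted gradient of the log loss
    theta -= 0.5 * grad

# Inspect per-group accuracy; rerunning with w set to all ones shows how the
# under-represented group fares without the re-weighting.
pred = (1.0 / (1.0 + np.exp(-Xb @ theta)) > 0.5).astype(int)
for g in groups:
    mask = group == g
    print(f"group {g}: accuracy {np.mean(pred[mask] == y[mask]):.3f}")
```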