IBM Researchers Explain Machine Learning Models By Exploring What Isn't There


In "The Adventure of Silver Blaze," Sherlock Holmes famously solved a case not by discovering a clue, but by noting its absence. In that story, it was a dog that didn't bark, and that lack of barking helped identify the culprit. Humans can make deductions and learn from something that's missing, but that kind of reasoning hasn't yet been widely applied to machine learning. A team of researchers at IBM wants to change that. In a paper published earlier this year, the team outlined a method for using missing results to gain a better understanding of how machine learning models work. "One of the pitfalls of deep learning is that it's more or less a black box," explained Amit Dhurandhar, one of the members of the research team.
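The article does not detail the team's method, but the core idea of explaining a prediction by what is absent can be loosely illustrated with a toy sketch. Everything below (the rule-based "model," the feature names, and the search helper) is a hypothetical example, not the researchers' actual technique: it searches for a minimal missing feature whose addition would change the model's output, which is then part of the explanation for the original prediction.

```python
# Hypothetical sketch of "explaining by absence": a toy rule-based
# model, plus a search for absent features whose addition would flip
# the prediction. All names here are illustrative assumptions.

def classify(features):
    """Toy stand-in for a trained model."""
    if "barks" in features:
        return "dog"
    if "whiskers" in features and "meows" in features:
        return "cat"
    return "unknown"

def pivotal_absences(features, vocabulary):
    """Return the base prediction and the absent features whose
    addition alone would change that prediction."""
    base = classify(features)
    flips = [f for f in vocabulary - features
             if classify(features | {f}) != base]
    return base, flips

vocab = {"barks", "meows", "whiskers", "fur"}
label, absences = pivotal_absences({"whiskers", "meows"}, vocab)
# The prediction is "cat" partly *because* "barks" is absent:
# adding it would flip the label, while adding "fur" would not.
```

In the Holmes spirit, the explanation for "cat" includes the dog that didn't bark: the classifier relied on the absence of "barks" as much as on the features that were present.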