Researchers Build an 'Interpretable' AI That Shows How It Thinks - The New Stack

#artificialintelligence 

The use of machine learning is increasing as automation becomes more widespread in our workplaces, financial institutions and even courts of law -- telling us whom to hire, whom to lend money to, and who might re-offend. But it is becoming painfully clear that these complex algorithms can conceal any number of hidden biases, leading them to inadvertently discriminate against people based on their gender or race -- often with terrible, life-changing consequences.

The problem is that such AI systems are notoriously opaque: more often than not, the mechanisms and reasoning behind their predictions aren't apparent even to the people who built them. So it's little wonder that a growing number of experts are now working to build what is called "interpretable" or "explainable" AI, where the processes underlying a machine's predictions are made more transparent and therefore more understandable, at least to us humans.

Aiming to better understand how and why machines classify images the way they do, a research team from Duke University has created a new deep learning neural network whose reasoning process can be deconstructed, analyzed and understood more easily than that of comparable models.
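To make the idea of a model whose reasoning can be inspected more concrete, here is a minimal, illustrative sketch of a prototype-style image classifier in PyTorch. This is not the Duke team's architecture; the class name, dimensions and prototype scheme below are assumptions chosen only to show how a prediction can be traced back to the learned prototypes it most resembles.

```python
# Illustrative sketch only (not the Duke model): a toy classifier that decides
# by comparing an image's feature vector to learnable class prototypes, so each
# prediction can be "explained" by pointing at the most similar prototype.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPrototypeClassifier(nn.Module):
    def __init__(self, num_classes=3, feature_dim=16, prototypes_per_class=2):
        super().__init__()
        # Small convolutional encoder producing one feature vector per image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, feature_dim),
        )
        # Learnable prototype vectors, grouped by class (row-major by convention).
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * prototypes_per_class, feature_dim))
        self.num_classes = num_classes
        self.prototypes_per_class = prototypes_per_class

    def forward(self, x):
        z = self.encoder(x)  # (batch, feature_dim)
        # Cosine similarity of each image to every prototype: (batch, num_prototypes).
        sims = F.cosine_similarity(
            z.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1)
        # Class score = best similarity among that class's prototypes.
        logits = sims.view(-1, self.num_classes,
                           self.prototypes_per_class).max(dim=-1).values
        return logits, sims  # sims can be inspected to trace the decision

if __name__ == "__main__":
    model = ToyPrototypeClassifier()
    images = torch.randn(4, 3, 32, 32)  # fake batch standing in for real images
    logits, sims = model(images)
    preds = logits.argmax(dim=1)
    # Report which prototype drove each prediction.
    for i in range(images.size(0)):
        top_proto = sims[i].argmax().item()
        print(f"image {i}: predicted class {preds[i].item()}, "
              f"most similar to prototype {top_proto}")
```

The point of the sketch is the returned similarity scores: because the decision is a comparison against explicit prototypes rather than an opaque weighted sum, a human can inspect which stored example each prediction leaned on.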
