Unlocking the black box of AI reasoning -- GCN


While artificial intelligence has proven effective at many tasks critical to government -- such as protecting power grids against hacking -- some agencies have been reluctant to adopt AI tools because their inner workings are unintelligible to humans. How can a solution be trusted if nobody knows how it works? David Bau, a Ph.D. student at the Massachusetts Institute of Technology, thinks generative adversarial networks may help show how AI algorithms reach their conclusions. Bau and others are testing GANs not only as tools for performing tasks such as pattern recognition, but as instruments for examining how neural networks make their decisions.
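The idea of opening the black box can be made concrete with a toy sketch. The code below is a hypothetical illustration, not Bau's actual method: it runs an input through a tiny hand-wired network while recording each hidden unit's activation, so a human can see which internal unit drove the final output.

```python
# Toy interpretability sketch (hypothetical example, not Bau's actual
# GAN-based technique): trace hidden-unit activations in a tiny network
# so the path from input to output is visible to a human.

def relu(x):
    return max(0.0, x)

# Hand-picked weights: unit 0 responds to the first input feature,
# unit 1 responds to the second.
HIDDEN_W = [[1.0, -1.0], [-1.0, 1.0]]  # one weight row per hidden unit
OUT_W = [1.0, 1.0]

def forward_with_trace(x):
    """Return (output, hidden activations) so the reasoning is inspectable."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in HIDDEN_W]
    output = sum(w * h for w, h in zip(OUT_W, hidden))
    return output, hidden

out, trace = forward_with_trace([1.0, 0.0])
top_unit = max(range(len(trace)), key=lambda i: trace[i])
print(f"output={out}, activations={trace}, most active unit={top_unit}")
# → output=1.0, activations=[1.0, 0.0], most active unit=0
```

The point of the sketch is the trace: instead of reporting only the output, the network also reports which internal unit fired, which is the kind of attribution that interpretability research tries to recover from much larger models.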
