Unlocking the black box of AI reasoning -- GCN
While artificial intelligence has proved effective at many tasks critical to government -- such as protecting power grids against hacking -- some agencies have been reluctant to adopt AI tools because their inner workings are unintelligible to humans. How can a solution be trusted if nobody knows how it works? David Bau, a Ph.D. student at the Massachusetts Institute of Technology, thinks generative adversarial networks may help show how AI algorithms reach their conclusions. Bau and others are testing GANs not only as tools for performing tasks, such as pattern recognition, but also as instruments for examining how neural networks make decisions.
Oct-25-2019, 12:31:54 GMT
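To make the adversarial setup concrete: a GAN pits a generator, which produces candidate samples, against a discriminator, which learns to tell real data from generated data; each improves by exploiting the other's mistakes. The sketch below is a hypothetical, minimal one-dimensional illustration of that training loop (a linear generator versus a logistic discriminator), not Bau's interpretability method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 4
    return rng.normal(4.0, 1.25, size=n)

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr, n = 0.05, 64

for step in range(2000):
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's offset b should have drifted toward
# the real data's mean, pushed only by the discriminator's feedback.
```

The two players never see each other's parameters; the generator learns about the real distribution purely through the discriminator's gradient, which is what makes the adversarial signal useful both for generation and, as the article notes, for probing what a network has learned.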