Deep learning is creating computer systems we don't fully understand

#artificialintelligence

To compare where the humans and machines looked, the researchers created "attention" heat maps that could be laid over one another. On a scale of 0 to 1, where 0 is no overlap at all and 1 is complete overlap, the researchers found that the attention maps from different humans lined up with one another at a rate of 0.63. But when comparing humans to machines, this figure was just 0.26. Explaining this difference is tricky. In one question in the study, for example, the humans and neural networks were shown a picture of a bedroom and asked: "What is covering the windows?"
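
The article doesn't spell out how the overlap score was computed, but here is a minimal sketch of one way to get a 0-to-1 overlap between two attention heat maps: normalize each map so it sums to 1, then take the histogram intersection. The function name, the toy 2x2 grids, and the choice of metric are illustrative assumptions, not the study's actual method.

```python
import numpy as np

def attention_overlap(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Overlap between two attention heat maps on a 0-1 scale.

    Each map is normalized to sum to 1, so it behaves like a
    probability distribution over image regions; the overlap is
    then the histogram intersection of the two distributions
    (0 = no shared attention, 1 = identical maps).
    """
    a = map_a.astype(float) / map_a.sum()
    b = map_b.astype(float) / map_b.sum()
    return float(np.minimum(a, b).sum())

# Toy example: a 2x2 grid of image regions, with the human
# attending mostly to the top-left region (say, the windows).
human = np.array([[0.8, 0.1],
                  [0.05, 0.05]])
machine = np.array([[0.3, 0.4],
                    [0.2, 0.1]])
print(attention_overlap(human, machine))  # 0.5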


Can we open the black box of AI?

#artificialintelligence

Dean Pomerleau can still remember his first tussle with the black-box problem. The year was 1991, and he was making a pioneering attempt to do something that has now become commonplace in autonomous-vehicle research: teach a computer how to drive. This meant taking the wheel of a specially equipped Humvee military vehicle and guiding it through city streets, says Pomerleau, who was then a robotics graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania. With him in the Humvee was a computer that he had programmed to peer through a camera, interpret what was happening out on the road and memorize every move that he made in response. Eventually, Pomerleau hoped, the machine would make enough associations to steer on its own.
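
What Pomerleau describes is what would now be called behavioral cloning: supervised learning from (camera frame, human steering command) pairs. His ALVINN system is not reproduced here; the sketch below is a modern stand-in for the same idea, and the network shape, data, and sizes are assumptions for illustration (ALVINN-era inputs were roughly 30x32 pixels).

```python
import torch
import torch.nn as nn

# Behavioral cloning sketch: learn a mapping from camera frames to
# the steering angles a human driver chose in response.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(30 * 32, 64),   # tiny input, in the spirit of ALVINN
    nn.Tanh(),
    nn.Linear(64, 1),         # predicted steering angle
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Stand-in for recorded (frame, human steering angle) pairs.
frames = torch.rand(256, 1, 30, 32)
angles = torch.rand(256, 1) * 2 - 1  # normalized to [-1, 1]

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), angles)
    loss.backward()
    optimizer.step()
```

With enough recorded driving, the hope is exactly the one Pomerleau had: the learned mapping generalizes well enough to steer on its own.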


Bye black boxes: Researchers are building neural networks that explain decisions

#artificialintelligence

But that is not to say it is perfect by any stretch of the imagination. "Deep learning has led to some big advances in computer vision, natural language processing, and other areas," Tommi Jaakkola, a Massachusetts Institute of Technology professor of electrical engineering and computer science, told Digital Trends. "It's tremendously flexible in terms of learning input/output mappings, but the flexibility and power comes at a cost. That is, it's very difficult to work out why it is performing a certain prediction in a particular context." This black-box lack of transparency would be one thing if deep learning systems were still confined to being lab experiments, but they are not.
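
One common probe for the "why this prediction?" problem, not specific to the work described in the article, is a gradient saliency map: ask which input pixels the prediction is most sensitive to. The tiny classifier below is an assumed stand-in, not any system the researchers built; it is a minimal sketch of the technique.

```python
import torch
import torch.nn as nn

# Gradient saliency sketch: how much does each pixel influence the
# score of the predicted class? The model here is a toy assumption.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # d(class score) / d(pixel)

# Pixels with large gradient magnitude influenced the prediction most.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```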

