Known as the "Nobel Prize of computing," the Turing Award is regarded as the highest honor in computer science. The three researchers received this prestigious accolade for their contributions to deep learning, a subset of artificial intelligence (AI) development that is largely responsible for the technology's current renaissance. While deep learning has unlocked vast advances in facial recognition, natural language processing, and autonomous vehicles, it still struggles to explain causal relationships in data. Not one to rest on his laurels, Bengio is now on a new mission: to teach AI to ask "Why?" Bengio views AI's inability to "connect the dots" as a serious problem.
As debates around AI's value continue, the risk of an AI winter is real. We need to establish what is real and what is imagined so that the next press release describing some amazing breakthrough is properly contextualized. Unquestionably, the latest spike of interest in AI technology using machine learning and neuron-inspired deep learning is behind incredible advancements in many software categories. Achievements such as language translation, image and scene recognition, and conversational UIs that were once the stuff of sci-fi dreams are now a reality. Yet even as software using AI-labeled techniques continues to yield tremendous improvements in most software categories, both academics and skeptical observers have noted that such algorithms fall far short of what can reasonably be considered intelligent.
First, a brief description is given of the probabilistic causal graph model for representing, reasoning with, and learning causal structure using Bayesian networks. It is then argued that this model is closely related to how humans reason with and learn causal structure. It is shown that studies in psychology on discounting (reasoning concerning how the presence of one cause of an effect makes another cause less probable) support the hypothesis that humans reach the same judgments as algorithms for doing inference in Bayesian networks. Next, it is shown how studies by Piaget indicate that humans learn causal structure by observing the same independencies and dependencies as those used by certain algorithms for learning the structure of a Bayesian network. Based on this indication, a subjective definition of causality is put forward. Finally, methods for further testing the accuracy of these claims are discussed.
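The discounting (or "explaining away") effect described above can be reproduced with a minimal sketch: a hypothetical two-cause Bayesian network in which independent causes C1 and C2 each produce effect E through a noisy-OR link. The network, priors, and 0.8 link strengths below are illustrative assumptions, not taken from the paper; the point is only that exact inference by enumeration shows the posterior on C1 dropping once the alternative cause C2 is observed.

```python
from itertools import product

# Hypothetical network: C1 and C2 are independent binary causes of E.
P_C1, P_C2 = 0.1, 0.1  # assumed priors, for illustration only

def p_e_given(c1, c2):
    # Noisy-OR link: each present cause independently triggers E with prob 0.8.
    return 1 - (1 - 0.8 * c1) * (1 - 0.8 * c2)

def joint(c1, c2, e):
    # P(C1, C2, E) factored along the causal graph.
    p = (P_C1 if c1 else 1 - P_C1) * (P_C2 if c2 else 1 - P_C2)
    pe = p_e_given(c1, c2)
    return p * (pe if e else 1 - pe)

def posterior_c1(evidence):
    # P(C1=1 | evidence) by brute-force enumeration of the joint distribution.
    num = den = 0.0
    for c1, c2, e in product([0, 1], repeat=3):
        assignment = {"C1": c1, "C2": c2, "E": e}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue
        p = joint(c1, c2, e)
        den += p
        if c1:
            num += p
    return num / den

# Observing the effect raises belief in C1; additionally observing the
# rival cause C2 "discounts" C1 again.
print(posterior_c1({"E": 1}))            # ≈ 0.531
print(posterior_c1({"E": 1, "C2": 1}))   # ≈ 0.118
```

With these numbers, P(C1 | E) ≈ 0.53 but P(C1 | E, C2) ≈ 0.12: observing one cause makes the other less probable, which is exactly the discounting judgment the abstract attributes to both humans and Bayesian-network inference.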
A self-driving car approaches a stop sign, but instead of slowing down, it accelerates into the busy intersection. An accident report later reveals that four small rectangles had been stuck to the face of the sign. These fooled the car's onboard artificial intelligence (AI) into misreading the word 'stop' as 'speed limit 45'. Such an event hasn't actually happened, but the potential for sabotaging AI is very real. Researchers have already demonstrated how to fool an AI system into misreading a stop sign by carefully positioning stickers on it [1]. They have deceived facial-recognition systems by sticking a printed pattern on glasses or hats. And they have tricked speech-recognition systems into hearing phantom phrases by inserting patterns of white noise in the audio.
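The attacks above exploit a common weakness: gradient-based models can be swung to a wrong answer by tiny, carefully aimed input changes. The sketch below is a toy illustration of that sign-perturbation idea on an assumed linear classifier (random weights standing in for a trained model, not any system from the article): nudging each input feature by a small fixed step against the gradient collapses the classifier's confidence, even though no single feature changes by more than the step size.

```python
import numpy as np

# Toy stand-in for a trained model: a fixed linear classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=20)  # assumed weights, for illustration only
b = 0.0

def predict(x):
    # Sigmoid score: probability the model assigns to the "stop sign" class.
    return 1 / (1 + np.exp(-(w @ x + b)))

x = rng.normal(size=20)  # a benign input

# For this linear score the gradient w.r.t. the input is just w, so a
# sign-based step against it (the FGSM-style trick) maximally lowers the
# score under a small per-feature budget eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # same-looking input, much lower score
```

No feature moves by more than eps, yet the score drops sharply, because every small per-feature change is chosen to push in the same harmful direction. Real attacks on image and speech models follow the same logic in far higher dimensions, which is why the perturbations can stay imperceptible.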
IMAGE: Yoshua Bengio, Co-recipient of the ACM A.M. Turing Award, will present his Turing Lecture at the Heidelberg Laureate Forum on September 23, 2019.

ACM, the Association for Computing Machinery, today announced that Yoshua Bengio, co-recipient of the 2018 ACM A.M. Turing Award, will present his Turing Award Lecture, "Deep Learning for AI," at the Heidelberg Laureate Forum on September 23 in Heidelberg, Germany. Bengio is a professor at the University of Montreal and Scientific Director at Mila, Quebec's Artificial Intelligence Institute. He received the 2018 ACM A.M. Turing Award with Geoffrey Hinton, VP and Engineering Fellow of Google, and Yann LeCun, VP and Chief AI Scientist at Facebook. Bengio, Hinton, and LeCun were recognized for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.