Can This AI Pioneer Make Algorithms Understand Cause and Effect?

#artificialintelligence

Known as the "Nobel Prize of computing," the Turing Award is regarded as the highest honor in computer science. Yoshua Bengio, together with fellow deep-learning pioneers Geoffrey Hinton and Yann LeCun, received this prestigious accolade for their contributions to deep learning, the subset of artificial intelligence (AI) that's largely responsible for the technology's current renaissance. While deep learning has unlocked vast advances in facial recognition, natural language processing, and autonomous vehicles, it still struggles to explain causal relationships in data. Not one to rest on his laurels, Bengio is now on a new mission: to teach AI to ask "why." He views AI's inability to "connect the dots" as a serious problem.


Introduction to Causality in Machine Learning

#artificialintelligence

Despite the hype around AI, most machine learning (ML) projects focus on predicting outcomes rather than understanding causality. Indeed, after several AI projects, I realized that ML is great at finding correlations in data, but not causation. In our projects, we try not to fall into the trap of equating correlation with causation. This limitation significantly restricts our ability to rely on ML for decision-making. From a business perspective, we need tools that can understand causal relationships among variables, so that we can build ML solutions that generalize well.
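The gap is easy to demonstrate at toy scale. Below is a minimal sketch in Python (using numpy and scikit-learn; the variables, coefficients, and the "season" reading are illustrative assumptions, not something from the article) in which a hidden confounder drives both a feature and the outcome:

    # Minimal sketch: a hidden confounder Z drives both X and Y, while X has
    # no causal effect on Y at all. All numbers here are illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)             # confounder (e.g., season)
    x = 2.0 * z + rng.normal(size=n)   # feature driven by Z; no effect on Y
    y = 3.0 * z + rng.normal(size=n)   # outcome driven only by Z

    # Naive model: regress Y on X alone and find a strong, spurious "effect".
    naive = LinearRegression().fit(x.reshape(-1, 1), y)
    print(f"naive coefficient of X:    {naive.coef_[0]:+.2f}")    # about +1.2

    # Adjusting for the confounder recovers the true causal effect: zero.
    adjusted = LinearRegression().fit(np.column_stack([x, z]), y)
    print(f"adjusted coefficient of X: {adjusted.coef_[0]:+.2f}")  # about 0.0

The naive fit is not a bug from a purely predictive standpoint, since X genuinely predicts Y; the problem only surfaces when the model is asked a causal question such as "what happens to Y if we intervene on X?"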


What AI still can't do

#artificialintelligence

Machine-learning systems can be duped or confounded by situations they haven't seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it's liable to lose some of the expertise it had in the original task. Computer scientists call this problem "catastrophic forgetting."
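The phenomenon is easy to reproduce at toy scale. The sketch below, an illustration in Python with scikit-learn (the two "tasks" are synthetic Gaussian blobs invented for this example), trains one small network on task A, keeps training the same weights on task B, and then checks how much of task A survives:

    # Minimal sketch of catastrophic forgetting: one MLP, two synthetic tasks.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_task(shift):
        """Two Gaussian blobs per task; `shift` moves the decision boundary."""
        X = np.vstack([rng.normal(loc=-1 + shift, size=(500, 2)),
                       rng.normal(loc=+1 + shift, size=(500, 2))])
        y = np.array([0] * 500 + [1] * 500)
        return X, y

    Xa, ya = make_task(0.0)   # task A ("cats," say)
    Xb, yb = make_task(4.0)   # task B ("dogs"): same labels, shifted data

    clf = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)

    # Phase 1: train on task A only.
    for _ in range(200):
        clf.partial_fit(Xa, ya, classes=[0, 1])
    print("accuracy on A after learning A:", clf.score(Xa, ya))  # high

    # Phase 2: keep training the SAME weights, but only on task B.
    for _ in range(200):
        clf.partial_fit(Xb, yb)
    print("accuracy on A after learning B:", clf.score(Xa, ya))  # near chance

Without any mechanism to protect the weights that encoded task A, the gradient updates for task B simply overwrite them.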


AI today and tomorrow is mostly about curve fitting, not intelligence

#artificialintelligence

As debates around AI's value continue, the risk of an AI winter is real. We need to level-set what is real and what is imagined, so that the next press release describing some amazing breakthrough is properly contextualized. Unquestionably, the latest spike of interest in AI built on machine learning and neuron-inspired deep learning is behind incredible advances in many software categories. Achievements such as language translation, image and scene recognition, and conversational UIs, once the stuff of sci-fi dreams, are now a reality. Yet even as software using AI-labeled techniques continues to yield tremendous improvements across most software categories, both academics and skeptical observers have noted that such algorithms fall far short of anything that can reasonably be considered intelligent.
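One concrete way to see the curve-fitting point: a model that fits its training data beautifully can fail the moment a query leaves the training range. A minimal sketch in Python with scikit-learn (the target function, training interval, and architecture are illustrative assumptions):

    # Minimal sketch: fit y = x^2 on [-2, 2], then ask about a point outside
    # that range. The model interpolates its training curve; it has not
    # learned the concept "square".
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-2, 2, size=(2000, 1))
    y_train = X_train.ravel() ** 2

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
    model.fit(X_train, y_train)

    # Inside the training range, the fit looks impressive...
    print("f(1.5) =", round(model.predict([[1.5]])[0], 2), "(true 2.25)")

    # ...outside it, the ReLU net extrapolates linearly and misses badly.
    print("f(5.0) =", round(model.predict([[5.0]])[0], 2), "(true 25.0)")

The model has interpolated the training curve rather than learned the concept of squaring, which is roughly the distinction the skeptics are drawing.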

