Causal Invariance and Machine Learning

#artificialintelligence

One of the problems with these algorithms and the features they leverage is that they are based on correlational relationships that may not be causal. As Russ states: "Because there could be a correlation that's not causal. And I think that's the distinction that machine learning is unable to make--even though "it fit the data really well," it's really good for predicting what happened in the past, it may not be good for predicting what happens in the future because those correlations may not be sustained." This echoes a theme in a recent blog post by Paul Hunermund: "All of the cutting-edge machine learning tools--you know, the ones you've heard about, like neural nets, random forests, support vector machines, and so on--remain purely correlational, and can therefore not discern whether the rooster's crow causes the sunrise, or the other way round." I've made similar analogies before myself and still think this makes a lot of sense. However, a talk at the International Conference on Learning Representations definitely made me stop and think about the kind of progress that has been made in the last decade and the direction research is headed.
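The rooster-and-sunrise point can be made concrete in a few lines: correlation is a symmetric measure, so no amount of data fed to a purely correlational tool can reveal which variable is the cause. A minimal sketch in Python, with made-up `sunrise` and `crow` variables standing in for the real phenomena:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in which the "sunrise" drives the "crow", not vice versa.
# A purely correlational measure is blind to that direction.
sunrise = rng.normal(size=1000)                # hypothetical cause
crow = sunrise + 0.1 * rng.normal(size=1000)   # hypothetical effect plus noise

# Pearson correlation is symmetric: corr(x, y) equals corr(y, x),
# so it carries no information about which variable causes which.
r_xy = np.corrcoef(sunrise, crow)[0, 1]
r_yx = np.corrcoef(crow, sunrise)[0, 1]
assert abs(r_xy - r_yx) < 1e-12
```

However strongly correlated the two series are, the statistic is identical in both directions; telling cause from effect requires assumptions or data beyond the joint distribution itself.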


UCLA faculty voice: Artificial intelligence can't reason why

#artificialintelligence

Judea Pearl is chancellor's professor of computer science and statistics at UCLA and co-author of "The Book of Why: The Science of Cause and Effect" with Dana Mackenzie, a mathematics writer. This column originally appeared in the Wall Street Journal. Computer programs have reached a bewildering point in their long and unsteady journey toward artificial intelligence. They outperform people at tasks we once felt to be uniquely human, such as playing poker or recognizing faces in a crowd. Meanwhile, self-driving cars using similar technology run into pedestrians and posts, and we wonder whether they can ever be trustworthy.


AI Can't Reason Why

WSJ.com: WSJD - Technology

Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volumes of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects. Questions like "Why did this happen?" come naturally to us. Suppose, for example, that a drugstore decides to entrust its pricing to a machine learning program that we'll call Charlie.


Causal Inference

#artificialintelligence

"Correlation does not imply causation": we all know this mantra from statistics. And we think that we fully understand it. Human (and non-human) brains, being pattern-finding machines, quickly grasp that my coffee mug broke because it fell to the floor. One event (the falling) occurred just before the other (the breaking), and without the first event we would never see the second. So there is not only a correlation between mugs falling and mugs breaking; there is also a causal relation (with a lot of physics going on).
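The mug example can be sketched as a toy structural model in which falling causes breaking. Simulating an intervention (in the spirit of Pearl's do-operator) on the cause changes the effect, while nothing we do to the effect can change the cause; the variables, probabilities, and the `simulate` helper below are all invented for illustration:

```python
import random

random.seed(42)

def simulate(do_fall=None):
    """One draw from a toy structural model: fall -> break.

    Passing do_fall overrides the natural mechanism, i.e. an intervention.
    """
    fall = random.random() < 0.1 if do_fall is None else do_fall
    # A falling mug breaks 95% of the time; a mug that stays put never does.
    broke = fall and random.random() < 0.95
    return fall, broke

n = 10_000
# Observational: breaking is rare because falls are rare.
p_broke_obs = sum(simulate()[1] for _ in range(n)) / n
# Interventional: do(fall=True) makes breaking almost certain,
# whereas no intervention on "broke" could raise the rate of falls,
# because causation runs only one way in this model.
p_broke_do_fall = sum(simulate(do_fall=True)[1] for _ in range(n)) / n
print(p_broke_obs, p_broke_do_fall)
```

The asymmetry between observing and intervening is exactly what a joint distribution alone cannot capture, which is why the falling/breaking pattern needs a causal model on top of the correlation.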