AI Can't Reason Why (WSJD Technology)

Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volume of data a machine analyzes, it cannot understand what a human grasps intuitively. From the time we are infants, we organize our experiences into causes and effects, asking questions like "Why did this happen?" Suppose, for example, that a drugstore decides to entrust its pricing to a machine-learning program that we'll call Charlie.
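The rooster-and-sunrise point can be made concrete: to a purely statistical learner, correlation is symmetric and carries no direction. A minimal sketch in plain Python, with toy data invented for illustration:

```python
# Pearson correlation is symmetric: it cannot say whether the rooster's
# crow causes the sunrise or the sunrise causes the crow.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 1 = the event happened that morning, 0 = it did not (toy observations)
rooster_crows = [1, 1, 1, 0, 1, 0, 1, 1]
sun_rises     = [1, 1, 1, 0, 1, 0, 1, 1]

print(pearson(rooster_crows, sun_rises))  # 1.0
print(pearson(sun_rises, rooster_crows))  # 1.0 -- identical in either direction
```

Swapping the two lists changes nothing, which is exactly why a model-free program cannot recover the causal arrow from data alone.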

UCLA faculty voice: Artificial intelligence can't reason why


Judea Pearl is chancellor's professor of computer science and statistics at UCLA and co-author of "The Book of Why: The New Science of Cause and Effect" with Dana Mackenzie, a mathematics writer. This column originally appeared in the Wall Street Journal. Computer programs have reached a bewildering point in their long and unsteady journey toward artificial intelligence. They outperform people at tasks we once thought uniquely human, such as playing poker or recognizing faces in a crowd. Meanwhile, self-driving cars using similar technology run into pedestrians and posts, and we wonder whether they can ever be trustworthy.

Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution

Current machine learning systems operate, almost exclusively, in a statistical, or model-free, mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human-level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond the reach of current machine learning systems and which have been accomplished using the tools of causal modeling.
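The abstract's distinction between observation and intervention can be illustrated with a toy structural causal model (my own sketch, not taken from the paper): observing the rooster crow strongly predicts a sunny morning, but forcing it to crow, in the spirit of Pearl's do-operator, leaves the weather unchanged. A model-free learner only ever estimates the first quantity.

```python
import random

# Toy structural causal model (assumed for illustration):
# sun -> crow. Clear skies make the rooster crow; forcing the rooster
# to crow (an intervention) severs that arrow and cannot affect the sky.
random.seed(42)

def morning(do_crow=None):
    sun = 1 if random.random() < 0.7 else 0  # sky is clear on 70% of mornings
    if do_crow is None:
        crow = sun            # observational regime: rooster crows iff sun is up
    else:
        crow = do_crow        # interventional regime: crow is forced from outside
    return sun, crow

N = 100_000
obs = [morning() for _ in range(N)]
p_sun_given_crow = (sum(s for s, c in obs if c) /
                    max(1, sum(1 for _, c in obs if c)))

do = [morning(do_crow=1) for _ in range(N)]
p_sun_given_do_crow = sum(s for s, _ in do) / N

print(round(p_sun_given_crow, 2))     # 1.0: seeing a crow predicts clear skies
print(round(p_sun_given_do_crow, 2))  # ~0.7: forcing a crow changes nothing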

The Seven Tools of Causal Inference, with Reflections on Machine Learning

Communications of the ACM

The dramatic success in machine learning has led to an explosion of artificial intelligence (AI) applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations have, however, met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted that current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for. Intensive theoretical and experimental efforts toward "transfer learning," "domain adaptation," and "lifelong learning" [4] are reflective of this obstacle. Another obstacle is "explainability," or that "machine learning models remain mostly black boxes" [26], unable to explain the reasons behind their predictions or recommendations, thus eroding users' trust and impeding diagnosis and repair; see Hutson [8] and Marcus [11]. A third obstacle concerns the lack of understanding of cause-effect connections.

Artificial intelligence pioneer's new book examines the science of cause and effect


Judea Pearl, chancellor's professor of computer science and statistics at UCLA, has written his first book intended for a general audience, "The Book of Why: The New Science of Cause and Effect." The book, which was written with co-author Dana Mackenzie, explores causality -- the study of cause and effect -- from its origins to its applications at the leading edges of science. Pearl, a UCLA faculty member since 1970, received the 2011 A.M. Turing Award, considered the "Nobel Prize" in computing, for his landmark work in processing information under uncertainty. His new book will be published on May 15. That same day, Pearl will deliver a talk at the Charles E. Young Research Library as part of the UCLA Library Writer Series.