Causal Invariance and Machine Learning

#artificialintelligence

One of the problems with these algorithms and the features they leverage is that they are based on correlational relationships that may not be causal. As Russ states: "Because there could be a correlation that's not causal. And I think that's the distinction that machine learning is unable to make--even though 'it fit the data really well,' it's really good for predicting what happened in the past, it may not be good for predicting what happens in the future because those correlations may not be sustained." This echoes a theme in a recent blog post by Paul Hunermund: "All of the cutting-edge machine learning tools--you know, the ones you've heard about, like neural nets, random forests, support vector machines, and so on--remain purely correlational, and can therefore not discern whether the rooster's crow causes the sunrise, or the other way round." I've made similar analogies myself before and still think this framing makes a lot of sense. However, a talk at the International Conference on Learning Representations definitely made me stop and think about the kind of progress that has been made in the last decade and the direction research is headed.
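To make the rooster analogy concrete, here is a minimal, purely illustrative simulation (not from either post; the variables and probabilities are my own) in which a hidden common cause, dawn, drives both the crow and the sunrise. A correlational learner sees a strong association, but a simulated intervention that forces the rooster to crow leaves the sunrise untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden common cause ("dawn") drives both events.
dawn = rng.random(n) < 0.5
rooster_crows = dawn & (rng.random(n) < 0.95)  # roosters usually crow at dawn
sun_rises = dawn                               # the sun rises at dawn regardless

# A purely correlational learner sees a strong association:
print(f"correlation(crow, sunrise): {np.corrcoef(rooster_crows, sun_rises)[0, 1]:.2f}")

# Crude intervention, do(crow): force every rooster to crow.
# The sunrise mechanism does not depend on the crow, so nothing changes.
sun_under_do_crow = dawn
print(f"P(sunrise | do(crow)) = {sun_under_do_crow.mean():.2f}")
print(f"P(sunrise)            = {sun_rises.mean():.2f}")
```

The gap between the observed correlation and the interventional probability is exactly the distinction the quoted passages say purely correlational learners cannot make.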


UCLA faculty voice: Artificial intelligence can't reason why

#artificialintelligence

Judea Pearl is chancellor's professor of computer science and statistics at UCLA and co-author of "The Book of Why: The Science of Cause and Effect" with Dana Mackenzie, a mathematics writer. This column originally appeared in the Wall Street Journal. Computer programs have reached a bewildering point in their long and unsteady journey toward artificial intelligence. They outperform people at tasks we once thought to be uniquely human, such as playing poker or recognizing faces in a crowd. Meanwhile, self-driving cars using similar technology run into pedestrians and posts, and we wonder whether they can ever be trustworthy.


Wanna Build an AI-powered Organization? Start by Getting EVERYONE to "Think Like A Data Scientist"

#artificialintelligence

In a recent blog I stated that "Crossing the AI Chasm" is primarily an organizational and cultural challenge, not a technology challenge; it not only requires organizational buy-in but, more importantly, necessitates creating a culture of adoption and continuous learning fueled at the front lines of customer and/or operational engagement (see Figure 1). A recent Harvard Business Review (HBR) article, "Building the AI-Powered Organization," agrees that despite the promise of AI, many organizations' efforts with it are falling short because of a failure by senior management to rewire the organization from the bottom up. The above points – interdisciplinary collaboration, data-driven decision-making at the front line, and an experimental and adaptive mindset – are the hallmarks of an organization where everyone has been trained to "Think Like a Data Scientist." So, how can your organization embrace the liberating and innovative process of getting everyone to "Think Like a Data Scientist"?


AI Can't Reason Why

#artificialintelligence

Put simply, today's machine-learning programs can't tell whether a crowing rooster makes the sun rise, or the other way around. Whatever volume of data a machine analyzes, it cannot understand what a human gets intuitively. From the time we are infants, we organize our experiences into causes and effects, asking questions like "Why did this happen?" Suppose, for example, that a drugstore decides to entrust its pricing to a machine-learning program that we'll call Charlie.
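The excerpt ends before the Charlie example unfolds, but the kind of trap it gestures at is easy to sketch. In this hypothetical simulation (the scenario, names, and numbers are mine, not Pearl's), a hidden demand shock raises both the price the store charges and the quantity customers buy, so a purely correlational model concludes that raising prices raises sales, even though the true causal effect of price on sales is negative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical confounded pricing data: a hidden demand shock
# (say, flu season) pushes up both price and sales.
demand_shock = rng.normal(0, 1, n)
price = 3.0 + 1.0 * demand_shock + rng.normal(0, 0.3, n)
sales = 100 + 8.0 * demand_shock - 5.0 * price + rng.normal(0, 2, n)

# Naive correlational fit: regress sales on price alone.
# The slope comes out positive -- "raise prices, sell more."
naive_slope = np.polyfit(price, sales, 1)[0]
print(f"naive slope (what a correlational Charlie sees): {naive_slope:+.2f}")

# Adjusting for the confounder recovers the true causal effect (-5).
X = np.column_stack([price, demand_shock, np.ones(n)])
causal_slope = np.linalg.lstsq(X, sales, rcond=None)[0][0]
print(f"adjusted slope (true effect of price on sales):  {causal_slope:+.2f}")
```

A program that acts on the naive slope would raise prices expecting sales to climb; only a model that accounts for why prices moved in the historical data can predict what happens when the store itself moves them.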