A theory of inferred causation

Classics

This paper concerns the empirical basis of causation, and addresses the following issues: 1. the clues that might prompt people to perceive causal relationships in uncontrolled observations.
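One concrete family of such clues is the pattern of (conditional) independencies in observational data. The sketch below is my own illustration with assumed variable names and synthetic data, not an example taken from the paper: a collider and a chain produce opposite dependence patterns once a third variable is conditioned on.

```python
# Hedged sketch (synthetic data, my own illustration): dependence patterns as
# observational clues to causal structure.  A collider X -> Z <- Y leaves X and Y
# independent marginally but dependent given Z; a chain X -> Z -> Y does the reverse.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Collider: X and Y are independent causes of Z.
x, y = rng.normal(size=n), rng.normal(size=n)
z = x + y + 0.3 * rng.normal(size=n)
near_zero = np.abs(z) < 0.1                     # crude stand-in for conditioning on Z
print("collider  corr(X,Y)      :", round(corr(x, y), 2))                        # ~0
print("collider  corr(X,Y | Z~0):", round(corr(x[near_zero], y[near_zero]), 2))  # strongly negative

# Chain: X affects Y only through Z.
x = rng.normal(size=n)
z = x + 0.3 * rng.normal(size=n)
y = z + 0.3 * rng.normal(size=n)
near_zero = np.abs(z) < 0.1
print("chain     corr(X,Y)      :", round(corr(x, y), 2))                        # large
print("chain     corr(X,Y | Z~0):", round(corr(x[near_zero], y[near_zero]), 2))  # ~0
```

These are the kinds of independence constraints that an inference procedure can exploit to recover part of the causal structure from uncontrolled observations.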


Appropriate Causal Models and Stability of Causation

AAAI Conferences

Causal models defined in terms of structural equations have proved to be quite a powerful way of representing knowledge regarding causality. However, a number of authors have given examples that seem to show that the Halpern-Pearl (HP) definition of causality (Halpern & Pearl 2005) gives intuitively unreasonable answers. Here it is shown that, for each of these examples, we can give two stories consistent with the description in the example, such that intuitions regarding causality are quite different for each story. By adding additional variables, we can disambiguate the stories. Moreover, in the resulting causal models, the HP definition of causality gives the intuitively correct answer. It is also shown that, by adding extra variables, a modification to the original HP definition made to deal with an example of Hopkins and Pearl (2003) may not be necessary. Given how much can be done by adding extra variables, there might be a concern that the notion of causality is somewhat unstable. Can adding extra variables in a "conservative" way (i.e., maintaining all the relations between the variables in the original model) cause the answer to the question "Is X = x a cause of Y = y?" to alternate between "yes" and "no"? Here it is shown that adding an extra variable can change the answer from "yes" to "no", but after that, it cannot change back to "yes".
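The role of structural equations, and of the "two stories" point above, can be made concrete with a small sketch. The forest-fire setup, the variable names, and the causation check below are assumptions for illustration, not the paper's own formulation; the check freezes contingencies at their actual values and omits HP's minimality clause, so it is deliberately weaker than the full HP definition.

```python
# Minimal sketch (toy example, not from the paper): causal models as structural
# equations, plus a simplified actual-causation check.
from itertools import chain, combinations

def evaluate(equations, exogenous, interventions=None):
    """Solve an acyclic structural model; `interventions` override equations."""
    interventions = interventions or {}
    values = dict(exogenous)
    for var, f in equations:          # equations listed in causal order
        values[var] = interventions.get(var, f(values))
    return values

def is_cause(equations, exogenous, x_var, y_var):
    """Does flipping X, with some set W of other variables frozen at their
    actual values, change Y?  (Binary variables assumed.)"""
    actual = evaluate(equations, exogenous)
    others = [v for v, _ in equations if v not in (x_var, y_var)]
    subsets = chain.from_iterable(combinations(others, r) for r in range(len(others) + 1))
    for w in subsets:
        do = {x_var: 1 - actual[x_var]}           # flip the candidate cause
        do.update({v: actual[v] for v in w})      # freeze the contingency W
        if evaluate(equations, exogenous, do)[y_var] != actual[y_var]:
            return True
    return False

context = {"u_l": 1, "u_m": 1}   # both lightning and a dropped match occur

# Conjunctive story: the fire needs both lightning and the match.
conjunctive = [
    ("lightning", lambda v: v["u_l"]),
    ("match",     lambda v: v["u_m"]),
    ("fire",      lambda v: int(v["lightning"] and v["match"])),
]
# Disjunctive story: either lightning or the match alone suffices.
disjunctive = [
    ("lightning", lambda v: v["u_l"]),
    ("match",     lambda v: v["u_m"]),
    ("fire",      lambda v: int(v["lightning"] or v["match"])),
]

print(is_cause(conjunctive, context, "lightning", "fire"))   # True
print(is_cause(disjunctive, context, "lightning", "fire"))   # False under this simplified check
```

The same query about the same observed facts gets different answers in the two stories, which is exactly the kind of model sensitivity discussed above; the original 2005 definition, which allows the contingency W to be set to non-actual values, would also answer "yes" in the disjunctive story.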


No Causation without representation!

#artificialintelligence

My free book Bayesuvius now has 39 chapters. I had been postponing writing the chapters on Pearl causality until now, because I consider them to be the most important chapters in the whole book, and I wanted to nail them, to the best of my limited abilities. Well, I finally bit the bullet and wrote them. Please check them out, and send me feedback. Please keep in mind that this is the **first** released version of these chapters.


Correlation does not equal causation, but how exactly do you determine causation?

#artificialintelligence

How exactly do you determine causation? The problem is further compounded because most books and examples are based on standard datasets (e.g., Boston, Iris, etc.). So, if we start from the beginning (without simplified examples), how do you know if a particular variable is a causal variable? Firstly, causality cannot be determined from data alone. In a statistical sense, two or more variables are related if their values change correspondingly, i.e., they increase or decrease together.
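The claim that causality cannot be determined from data alone can be made concrete with a small simulation. The confounder setup and variable names below are my own assumptions for illustration, not taken from the article: observationally X and Y move together almost perfectly, yet forcing X to any value leaves Y unchanged.

```python
# Hedged sketch (synthetic data, assumed purely for illustration): a hidden
# common cause Z drives both X and Y, so X and Y are strongly correlated
# even though X has no causal effect on Y at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: Z -> X and Z -> Y, no arrow from X to Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)
print("observational corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 3))  # ~0.99

# Interventional world: do(X = x0) sets X by fiat, cutting the Z -> X link.
# Y's own mechanism never mentions X, so its average is the same for every x0.
for x0 in (-2.0, 0.0, 2.0):
    y_do = z + 0.1 * rng.normal(size=n)   # Y's structural equation, unchanged
    print(f"mean of Y under do(X={x0:+.1f}):", round(float(y_do.mean()), 3))  # ~0.0
```

An intervention, or equivalently a randomized experiment, is what separates the two worlds; reanalysing the observational sample alone cannot.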


Causation in a Nutshell

#artificialintelligence

Knowing the who, what, when, where, etc., is vital in marketing. Predictive analytics can also be useful for many organizations. However, also knowing the why helps us better understand the who, what, when, where, and so on, and the ways they are tied together. It also helps us predict them more accurately. Knowing the why increases their value to marketers and increases the value of marketing.