Using AI to Understand Complex Causation - DZone AI

#artificialintelligence

Whenever something serious happens, we usually try to determine cause and effect. What was it that caused events to unfold the way they did? While the idea is appealing, in practice we often fall back on rather dubious explanations for the series of events. Past attempts to build general mathematical models of causality have not been particularly effective, especially for more complex problems. A new study from the University of Johannesburg, South Africa, and the National Institute of Technology Rourkela, India, attempts to use AI to do a better job. "The model creates significant opportunities to analyze complex phenomena in areas such as economics, disease outbreaks, climate change and conservation," the researchers say.


New Math Untangles the Mysterious Nature of Causality, Consciousness

WIRED

Using the mathematical language of information theory, Hoel and his collaborators claim to show that new causes (things that produce effects) can emerge at macroscopic scales. They say coarse-grained macroscopic states of a physical system (such as the psychological state of a brain) can have more causal power over the system's future than a more detailed, fine-grained description of the system possibly could. Just as codes reduce noise (and thus uncertainty) in transmitted data, which was Claude Shannon's 1948 insight and the bedrock of information theory, Hoel claims that macro states also reduce noise and uncertainty in a system's causal structure, strengthening causal relationships and making the system's behavior more deterministic. With Albantakis and Tononi, Hoel formalized a measure of causal power called "effective information," which indicates how effectively a particular state influences the future state of a system.
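
To make this concrete: for a finite system described by a transition matrix T, effective information is usually computed as the mutual information between a uniform (maximum-entropy) intervention on the current state and the resulting distribution over the next state. The sketch below assumes that definition; the 4-state toy system and the coarse-graining are made up for illustration and are not taken from Hoel's papers.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def effective_information(T):
    """
    Effective information of a transition matrix T (rows index the current
    state, columns the next state), assuming the standard definition:
    EI = I(X; Y) with X set to the uniform intervention distribution,
    i.e. EI = H(mean of rows) - mean of H(each row).
    """
    T = np.asarray(T, dtype=float)
    avg_effect = T.mean(axis=0)  # P(next state) under a uniform intervention
    return entropy(avg_effect) - float(np.mean([entropy(row) for row in T]))

# Toy micro-level system: states 0-2 transition uniformly among themselves,
# state 3 maps deterministically to itself (values are illustrative only).
T_micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Coarse-grain micro states {0, 1, 2} and {3} into two macro states.
grouping = [[0, 1, 2], [3]]
T_macro = np.zeros((len(grouping), len(grouping)))
for i, src in enumerate(grouping):
    for j, dst in enumerate(grouping):
        T_macro[i, j] = T_micro[np.ix_(src, dst)].sum() / len(src)

print("EI(micro):", effective_information(T_micro))  # ~0.81 bits
print("EI(macro):", effective_information(T_macro))  # 1.0 bit
```

In this toy case the coarse-grained description scores higher than the fine-grained one, which is the kind of effect the article describes as macro states strengthening causal relationships.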


Discovering Causal Relations by Experimentation: Causal Trees

AAAI Conferences

Generally, the less background knowledge needed, the better; the robot should be able to start out with the "mind of an infant" and learn everything it needs.


On Logics and Semantics of Indeterminate Causation

AAAI Conferences

We will explore the use of disjunctive causal rules for representing indeterminate causation. We first provide a logical formalization of such rules in the form of a disjunctive inference relation and describe its logical semantics. Then we consider a nonmonotonic semantics for such rules, described in (Turner 1999). It will be shown, however, that under this semantics disjunctive causal rules admit a stronger logic in which these rules are reducible to ordinary, singular causal rules. This semantics also tends to give an exclusive interpretation of disjunctive causal effects, and so excludes some reasonable models in particular cases. To overcome these shortcomings, we will introduce an alternative nonmonotonic semantics for disjunctive causal rules, called a covering semantics, that permits an inclusive interpretation of indeterminate causal information. Still, it will be shown that even in this case there exists a systematic procedure, which we call normalization, that allows us to capture the covering semantics precisely using only singular causal rules. This normalization procedure can be viewed as a kind of nonmonotonic completion, and it generalizes established ways of representing indeterminate effects in current theories of action.
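
As a rough illustration of the exclusive-versus-inclusive distinction in the abstract (a toy sketch only, not the paper's covering semantics or normalization procedure; the example rule and helper names are invented), the snippet below enumerates which effect sets a disjunctive causal rule licenses under each reading: the exclusive reading admits exactly one disjunct, while the inclusive reading admits any nonempty subset of the disjuncts.

```python
from itertools import combinations

def exclusive_readings(disjuncts):
    """Exclusive interpretation: exactly one disjunct of the effect holds."""
    return [frozenset([d]) for d in disjuncts]

def inclusive_readings(disjuncts):
    """Inclusive interpretation: any nonempty subset of the disjuncts may hold."""
    readings = []
    for k in range(1, len(disjuncts) + 1):
        readings.extend(frozenset(c) for c in combinations(disjuncts, k))
    return readings

# Toy disjunctive causal rule: "overload causes fuse_blown or breaker_tripped".
effects = ["fuse_blown", "breaker_tripped"]

print("exclusive:", [set(r) for r in exclusive_readings(effects)])
# e.g. [{'fuse_blown'}, {'breaker_tripped'}]
print("inclusive:", [set(r) for r in inclusive_readings(effects)])
# e.g. [{'fuse_blown'}, {'breaker_tripped'}, {'fuse_blown', 'breaker_tripped'}]
```

Loosely speaking, each enumerated effect set corresponds to one singular rule, which echoes the abstract's point that the disjunctive rule can be traded for a collection of ordinary, singular causal rules.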