Jaimini, Utkarshani
HyperCausalLP: Causal Link Prediction using Hyper-Relational Knowledge Graph
Jaimini, Utkarshani, Henson, Cory, Sheth, Amit
Causal networks are often incomplete, with missing causal links. This is due to various issues, such as missing observational data. Recent approaches to the problem of incomplete causal networks have used knowledge graph link prediction methods to find the missing links. In the causal chain A causes B causes C, the influence of A on C is transmitted through B, which is known as a mediator. Existing approaches that use knowledge graph link prediction do not consider these mediated causal links. This paper presents HyperCausalLP, an approach designed to find missing causal links within a causal network with the help of mediator links. The problem of missing links is formulated as hyper-relational knowledge graph completion. The approach uses a knowledge graph link prediction model trained on a hyper-relational knowledge graph with the mediators. The approach is evaluated on a causal benchmark dataset, CLEVRER-Humans. Results show that including knowledge about mediators in causal link prediction with hyper-relational knowledge graphs improves performance by an average of 5.94% in mean reciprocal rank.
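To make the mediator-aware formulation concrete, here is a minimal sketch of how a mediated causal link could be encoded as a hyper-relational fact: a main cause-effect triple annotated with a qualifier pair naming the mediator. The data structure and the relation names (causes, hasMediator) are illustrative assumptions, not identifiers taken from the paper.

```python
# Sketch: a mediated causal link as a hyper-relational fact
# (main triple plus qualifier pairs). Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class HyperRelationalFact:
    head: str                                       # cause entity
    relation: str                                   # causal relation
    tail: str                                       # effect entity
    qualifiers: list = field(default_factory=list)  # (qualifier relation, value) pairs

# "A causes C", with B as the mediator on the causal path A -> B -> C
fact = HyperRelationalFact(
    head="A",
    relation="causes",
    tail="C",
    qualifiers=[("hasMediator", "B")],
)

# A link prediction query then asks the model to score candidate tails for the
# partial fact (A, causes, ?, hasMediator=B); the qualifier supplies mediator context.
query = (fact.head, fact.relation, None, fact.qualifiers)
print(query)
```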
Influence of Backdoor Paths on Causal Link Prediction
Jaimini, Utkarshani, Henson, Cory, Sheth, Amit
Current methods for predicting causal links in knowledge graphs use weighted causal relations. For a given link between cause-effect entities, the presence of a confounder affects the causal link prediction, which can lead to spurious and inaccurate results. We aim to block these confounders using backdoor path adjustment. Backdoor paths are non-causal association flows that connect the cause-entity to the effect-entity through other variables. Removing these paths ensures more accurate prediction of causal links. This paper proposes CausalLPBack, a novel approach to causal link prediction that eliminates backdoor paths and uses knowledge graph link prediction methods. It extends the representation of causality in a neuro-symbolic framework, enabling the adoption and use of traditional causal AI concepts and methods. We demonstrate our approach using a causal reasoning benchmark dataset of simulated videos. The evaluation involves a unique dataset splitting method, called the Markov-based split, that is relevant for causal link prediction. The evaluation shows that backdoor paths introduce bias that inflates causal link prediction performance by at least 30% in MRR and 16% in Hits@K, for both baseline and weighted causal relations.
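The sketch below illustrates the backdoor-path idea on a toy causal graph: paths from the cause to the effect whose first edge points into the cause (i.e., routes through a confounder) are identified so they can be blocked by adjustment. The graph, node names, and helper function are illustrative assumptions, not the paper's implementation.

```python
# Sketch: finding backdoor paths (non-causal paths entering the cause through
# an incoming edge) in a small causal graph. Illustrative example only.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Z", "X"),   # confounder Z -> cause X
    ("Z", "Y"),   # confounder Z -> effect Y
    ("X", "Y"),   # direct causal edge X -> Y
])

def backdoor_paths(graph, cause, effect):
    """Return paths from cause to effect in the undirected skeleton whose
    first edge is oriented *into* the cause (cause <- ... backdoor routes)."""
    skeleton = graph.to_undirected()
    paths = []
    for path in nx.all_simple_paths(skeleton, cause, effect):
        first_hop = path[1]
        if graph.has_edge(first_hop, cause):  # edge points into the cause
            paths.append(path)
    return paths

print(backdoor_paths(g, "X", "Y"))  # [['X', 'Z', 'Y']] -> adjust/block on Z
```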
CausalLP: Learning causal relations with weighted knowledge graph link prediction
Jaimini, Utkarshani, Henson, Cory, Sheth, Amit P.
Causal networks are useful in a wide variety of applications, from medical diagnosis to root-cause analysis in manufacturing. In practice, however, causal networks are often incomplete, with missing causal relations. This paper presents a novel approach, called CausalLP, that formulates the issue of incomplete causal networks as a knowledge graph completion problem. More specifically, the task of finding new causal relations in an incomplete causal network is mapped to the task of knowledge graph link prediction. The use of knowledge graphs to represent causal relations enables the integration of external domain knowledge; as an added complexity, the causal relations carry weights representing the strength of the causal association between entities in the knowledge graph. Two primary tasks are supported by CausalLP: causal explanation and causal prediction. An evaluation of this approach uses a benchmark dataset of simulated videos for causal reasoning, CLEVRER-Humans, and compares the performance of multiple knowledge graph embedding algorithms. Two distinct dataset splitting approaches are used for evaluation: (1) random-based split, the method typically employed to evaluate link prediction algorithms, and (2) Markov-based split, a novel data split technique that utilizes the Markovian property of causal relations. Results show that using weighted causal relations improves causal link prediction over the baseline without weighted relations.
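As a toy illustration of weighted causal relations, the sketch below scores weighted triples with a TransE-style translational distance in which the causal weight scales the relation vector. This is an illustrative adaptation under assumed names and toy embeddings, not the exact scoring formulation used in CausalLP.

```python
# Sketch: scoring weighted causal triples with a TransE-style distance,
# where the causal weight modulates the relation translation. Toy example only.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for e in ["A", "B", "C"]}
relations = {"causes": rng.normal(size=dim)}

# (head, relation, tail, causal strength in [0, 1])
weighted_triples = [("A", "causes", "B", 0.9), ("B", "causes", "C", 0.4)]

def score(h, r, t, w):
    # Higher (less negative) score = more plausible link.
    return -np.linalg.norm(entities[h] + w * relations[r] - entities[t])

for h, r, t, w in weighted_triples:
    print(h, r, t, round(score(h, r, t, w), 3))
```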
CausalKG: Causal Knowledge Graph Explainability using interventional and counterfactual reasoning
Jaimini, Utkarshani, Sheth, Amit
Humans use causality and hypothetical retrospection in their daily decision-making, planning, and understanding of life events. The human mind, while retrospecting on a given situation, thinks about questions such as "What was the cause of the given situation?", "What would be the effect of my action?", or "Which action led to this effect?". It develops a causal model of the world, which learns from fewer data points, makes inferences, and contemplates counterfactual scenarios. These unseen, unknown scenarios are known as counterfactuals. AI algorithms use a representation based on knowledge graphs (KG) to represent the concepts of time, space, and facts. A KG is a graphical data model which captures the semantic relationships between entities such as events, objects, or concepts. Existing KGs, such as ConceptNet and WordNet, represent causal relationships extracted from text based on linguistic patterns of noun phrases for causes and effects. The current representation of causality in KGs makes it challenging to support counterfactual reasoning. A richer representation of causality in AI systems using a KG-based approach is needed for better explainability and for support of interventional and counterfactual reasoning, leading to an improved understanding of AI systems by humans. Representing causality requires a richer framework to define the context, the causal information, and the causal effects. The proposed Causal Knowledge Graph (CausalKG) framework leverages recent progress in causality and KGs towards explainability. CausalKG intends to address the lack of a domain-adaptable causal model and to represent complex causal relations using a hyper-relational graph representation in the KG. We show that CausalKG's interventional and counterfactual reasoning can be used by AI systems for domain explainability.
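The sketch below shows the interventional reasoning (the do-operator) that a CausalKG-style representation is meant to support, on a tiny hand-written structural causal model. The variables, equations, and coefficients are illustrative assumptions, not part of the CausalKG framework itself.

```python
# Sketch: observational vs. interventional (do-operator) reasoning on a tiny
# structural causal model. Variables and equations are illustrative only.
def simulate(do_treatment=None):
    # exogenous context
    context = 1.0
    # treatment normally depends on the context unless intervened upon
    treatment = do_treatment if do_treatment is not None else 0.5 * context
    # outcome depends on both context and treatment
    outcome = 2.0 * treatment + 0.3 * context
    return outcome

observed = simulate()                  # observational outcome
intervened = simulate(do_treatment=1)  # outcome under do(treatment = 1)
print(observed, intervened)            # compare factual vs. interventional result
```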