
 Abductive Reasoning




Abducing Compliance of Incomplete Event Logs

Chesani, Federico, De Masellis, Riccardo, Di Francescomarino, Chiara, Ghidini, Chiara, Mello, Paola, Montali, Marco, Tessaris, Sergio

arXiv.org Artificial Intelligence

The capability to store data about business process executions in so-called Event Logs has led to the spread of tools for analyzing process executions and for assessing the quality of a process model. Nonetheless, these tools are often very rigid in dealing with Event Logs that include incomplete information about the process execution. Thus, while the ability to handle incomplete event data is one of the challenges mentioned in the process mining manifesto, evaluating the compliance of an execution trace still requires an end-to-end complete trace. This paper exploits the power of abduction to provide a flexible, yet computationally effective, framework for dealing with different forms of incompleteness in an Event Log. Moreover, it refines the classical notion of compliance into strong and conditional compliance to account for incomplete logs. Finally, a performance evaluation in an experimental setting shows the feasibility of the presented approach.
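To give an intuition for the strong/conditional compliance distinction the abstract describes, here is a minimal, hypothetical sketch (not the paper's abductive framework): a trace is checked against a required activity sequence, with `None` marking a missing event. All activity names and the classification logic are illustrative assumptions.

```python
# Hypothetical required process model: a fixed sequence of activities.
REQUIRED = ["receive_order", "check_stock", "ship_order"]

def classify(trace):
    """trace: observed activities in order; None marks a missing event."""
    if len(trace) != len(REQUIRED):
        return "non-compliant"
    observed = [(got, exp) for got, exp in zip(trace, REQUIRED) if got is not None]
    if any(got != exp for got, exp in observed):
        return "non-compliant"          # a recorded event violates the model
    if all(e is not None for e in trace):
        return "strongly compliant"     # complete trace, fully checked
    return "conditionally compliant"    # compliant only if the gaps can be
                                        # abduced as the expected activities

print(classify(["receive_order", "check_stock", "ship_order"]))  # strongly compliant
print(classify(["receive_order", None, "ship_order"]))           # conditionally compliant
print(classify(["receive_order", "ship_order", "check_stock"]))  # non-compliant
```

In the paper's terms, the abductive machinery would hypothesize the missing events; here the "conditional" verdict simply flags that such a hypothesis exists.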


Abductive Computational Systems: Creative Abduction and Future Directions

Sood, Abhinav, Grace, Kazjon, Wan, Stephen, Paris, Cecile

arXiv.org Artificial Intelligence

Abductive reasoning, the inference of explanations for observations, is often mentioned in scientific, design-related and artistic contexts, but its understanding varies across these domains. This paper reviews how abductive reasoning is discussed in epistemology, science and design, and then analyses how various computational systems use abductive reasoning. Our analysis shows that neither theoretical accounts nor computational implementations of abductive reasoning adequately address generating creative hypotheses. Theoretical frameworks do not provide a straightforward model for generating creative abductive hypotheses, and computational systems largely implement syllogistic forms of abductive reasoning. We break down abductive computational systems into components and conclude by identifying specific directions for future research that could advance the state of creative abductive reasoning in computational systems.


How Rules Represent Causal Knowledge: Causal Modeling with Abductive Logic Programs

Rückschloß, Kilian, Weitkämper, Felix

arXiv.org Artificial Intelligence

Pearl observes that causal knowledge enables predicting the effects of interventions, such as actions, whereas descriptive knowledge only permits drawing conclusions from observation. This paper extends Pearl's approach to causality and interventions to the setting of stratified abductive logic programs. It shows how stable models of such programs can be given a causal interpretation by building on philosophical foundations and recent work by Bochman and Eelink et al. In particular, it provides a translation of abductive logic programs into causal systems, thereby clarifying the informal causal reading of logic program rules and supporting principled reasoning about external actions. The main result establishes that the stable model semantics for stratified programs conforms to key philosophical principles of causation, such as causal sufficiency, natural necessity, and irrelevance of unobserved effects. This justifies the use of stratified abductive logic programs as a framework for causal modeling and for predicting the effects of interventions.


Controllable Logical Hypothesis Generation for Abductive Reasoning in Knowledge Graphs

Gao, Yisen, Bai, Jiaxin, Zheng, Tianshi, Sun, Qingyun, Zhang, Ziwei, Li, Jianxin, Song, Yangqiu, Fu, Xingcheng

arXiv.org Artificial Intelligence

Abductive reasoning in knowledge graphs aims to generate plausible logical hypotheses from observed entities, with broad applications in areas such as clinical diagnosis and scientific discovery. However, due to a lack of controllability, a single observation may yield numerous plausible but redundant or irrelevant hypotheses on large-scale knowledge graphs. To address this limitation, we introduce the task of controllable hypothesis generation to improve the practical utility of abductive reasoning. This task faces two key challenges when generating long and complex logical hypotheses under control conditions: hypothesis space collapse and hypothesis oversensitivity. To address these challenges, we propose CtrlHGen, a Controllable logical Hypothesis Generation framework for abductive reasoning over knowledge graphs, trained in a two-stage paradigm of supervised learning followed by reinforcement learning. To mitigate hypothesis space collapse, we design a dataset augmentation strategy based on sub-logical decomposition, enabling the model to learn complex logical structures by leveraging semantic patterns in simpler components. To address hypothesis oversensitivity, we incorporate smoothed semantic rewards, including Dice and Overlap scores, and introduce a condition-adherence reward to guide the generation toward user-specified control constraints. Extensive experiments on three benchmark datasets demonstrate that our model not only better adheres to control conditions but also achieves superior semantic similarity performance compared to baselines.


UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling

Neural Information Processing Systems

Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches. Yet, with an ever-growing number of benchmarks, researchers are tasked with the heavy burden of implementing each protocol, bearing a non-trivial computational cost, and making sense of how all these benchmarks translate into meaningful axes of progress. To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50 VLM benchmarks spanning a range of carefully categorized vision-centric capabilities from object recognition to spatial awareness, counting, and much more. We showcase the utility of UniBench for measuring progress by evaluating nearly 60 publicly available vision-language models, trained on scales of up to 12.8B samples. We find that while scaling training data or model size can boost many vision-language model capabilities, scaling offers little benefit for reasoning or relations. Surprisingly, we also discover that today's best VLMs struggle on simple digit recognition and counting tasks.


Abductive Reasoning in Logical Credal Networks

Neural Information Processing Systems

Logical Credal Networks (LCNs) were recently introduced as a powerful probabilistic logic framework for representing and reasoning with imprecise knowledge. Unlike many existing formalisms, LCNs can represent cycles and allow specifying marginal and conditional probability bounds on logic formulae, which may be important in many realistic scenarios. Previous work on LCNs has focused exclusively on marginal inference, namely computing posterior lower and upper probability bounds on a query formula. In this paper, we explore abductive reasoning tasks such as solving MAP and Marginal MAP queries in LCNs given some evidence. We first formally define the MAP and Marginal MAP tasks for LCNs and subsequently show how to solve these tasks exactly using search-based approaches.
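For readers unfamiliar with MAP queries, here is a toy sketch of the underlying idea over an explicit joint distribution. This is purely illustrative: the numbers are invented, and real LCN inference works with probability *bounds* rather than the point probabilities used here.

```python
# Toy joint distribution over (Burglary, Alarm); values are made up.
joint = {
    (True,  True):  0.19, (True,  False): 0.01,
    (False, True):  0.08, (False, False): 0.72,
}

def map_query(alarm_evidence):
    """MAP: most probable value of Burglary given Alarm = alarm_evidence.

    argmax_b P(Burglary=b, Alarm=alarm_evidence); normalizing by the
    evidence probability does not change the argmax.
    """
    return max([True, False], key=lambda b: joint[(b, alarm_evidence)])

print(map_query(True))   # Burglary=True is the best explanation of the alarm
print(map_query(False))  # Burglary=False when no alarm is observed
```

A Marginal MAP query would additionally sum out some variables before maximizing; the search-based methods in the paper solve both tasks exactly under LCN probability bounds.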


Emerging categories in scientific explanations

Magnifico, Giacomo, Barbu, Eduard

arXiv.org Artificial Intelligence

Clear and effective explanations are essential for human understanding and knowledge dissemination. The scope of scientific research aiming to understand the essence of explanations has recently expanded from the social sciences to include the fields of machine learning and artificial intelligence. Important contributions from the social sciences [18, 17, 22, 13, 5, 11] examine critical aspects such as causality (cause-and-effect relationships), contrast (distinctions between differing scenarios), relevance (applicability of explanations), and truth (accuracy and verifiability of explanations). Machine learning and natural language processing, by contrast, focus more on operational definitions and on the importance of constructing datasets, as seen in studies by [21, 23, 6]. Since explanations for machine learning decisions must be both impactful and human-like [10, 3, 20, 12, 4], a major challenge lies in developing explanations that emphasize proximal aspects -- details that are immediately relevant, direct and related to the user -- over broad algorithmic processes [21].


A Fine-Grained Complexity View on Propositional Abduction -- Algorithms and Lower Bounds

Lagerkvist, Victor, Maizia, Mohamed, Schmidt, Johannes

arXiv.org Artificial Intelligence

The Boolean satisfiability problem is a well-known NP-complete problem. Due to the rapid advance of SAT solvers, many combinatorial problems are today solved by reducing to SAT, which can then be solved with off-the-shelf solvers. SAT fundamentally encodes a form of monotonic reasoning, in the sense that conclusions remain valid regardless of whether new information is added. However, the real world is non-monotonic, meaning that one should be able to retract a statement if new data is added that violates the previous conclusion. One of the best-known examples of non-monotonic reasoning is abductive reasoning, where we are interested in finding an explanation, vis-à-vis a hypothesis, of some observed manifestation. Abduction has many practical applications, e.g., scientific discovery [10], network security [1], computational biology [22], medical diagnosis [18], knowledge base updates [23], and explainability issues in machine learning and decision support systems [7, 8, 2, 21]. This may be especially poignant in forthcoming decades due to the continued emergence of AI in new and surprising applications, which need to be made GDPR compliant [26] and explainable. The incentive for solving abduction fast, even when it is classically intractable, thus seems highly practically motivated. Can non-monotonic reasoning be performed as efficiently as monotonic reasoning, or are there fundamental differences between the two?
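To make the abduction task concrete, here is a minimal brute-force sketch (illustrative only, not the paper's algorithms): given definite rules as background knowledge, a pool of candidate hypotheses, and an observed manifestation, find a smallest hypothesis set whose addition entails the observation. The rain/sprinkler example and all names are assumptions for illustration.

```python
from itertools import combinations

def entails(facts, rules, goal):
    """Forward-chain definite rules (premises -> conclusion) to a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def abduce(rules, hypotheses, observation):
    """Return a smallest subset of hypotheses that explains the observation."""
    for size in range(1, len(hypotheses) + 1):
        for hs in combinations(hypotheses, size):
            if entails(set(hs), rules, observation):
                return set(hs)
    return None

# Two rival explanations for wet grass; the search prefers smaller ones.
rules = [({"rain"}, "wet_grass"), ({"sprinkler"}, "wet_grass")]
print(abduce(rules, ["rain", "sprinkler"], "wet_grass"))  # {'rain'}
```

This brute-force search is exponential in the number of hypotheses, which is exactly why the fine-grained complexity bounds and faster-than-trivial algorithms studied in the paper matter.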