Abductive Reasoning


Interpretable machine learning as a tool for scientific discovery in chemistry

#artificialintelligence

There has been an upsurge of interest in applying machine-learning (ML) techniques to chemistry, and a number of these applications have achieved impressive predictive accuracies; however, they have done so without providing any insight into what has been learnt from the training data. The interpretation of ML systems (i.e., a statement of what an ML system has learnt from data) is still in its infancy, but interpretation can lead to scientific discovery, and examples of this are given in the areas of drug discovery and quantum chemistry. It is proposed that a research programme be designed that systematically compares the various model-agnostic and model-specific approaches to interpretable ML within a range of chemical scenarios.
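
As a rough illustration of the comparison proposed above, the following minimal sketch (assuming scikit-learn and synthetic data standing in for real chemical descriptors) places a model-agnostic interpretation, permutation importance, next to a model-specific one, the impurity-based importances of a random forest.

```python
# Minimal sketch: model-agnostic vs. model-specific interpretation.
# Synthetic features stand in for real chemical descriptors.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-agnostic: shuffle each feature and measure the drop in score.
agnostic = permutation_importance(model, X, y, n_repeats=10, random_state=0)
# Model-specific: impurity-based importances internal to the forest.
specific = model.feature_importances_

for i in range(X.shape[1]):
    print(f"feature {i}: permutation={agnostic.importances_mean[i]:.3f}  "
          f"impurity={specific[i]:.3f}")
```

A systematic study of the kind proposed would run both families of methods across many chemical tasks and check whether they point at the same underlying structure.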


Scientific discovery must be redefined. Quantum and AI can help

#artificialintelligence

Industry partners are often rivals, but not in the current coronavirus vaccine endeavour. Every member of the Consortium is united by a common goal: to accelerate our search for a new treatment or vaccine against COVID-19. The benefits of collaboration are greater speed and accuracy; a freer exchange of ideas and data; and full access to cutting-edge technology. In sum, it supercharges innovation and hopefully means the pandemic will be halted faster than otherwise.


Noisy Deductive Reasoning: How Humans Construct Math, and How Math Constructs Universes

arXiv.org Artificial Intelligence

We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process. That is, on our model, whether or not a given formula is deemed a theorem in some axiomatic system is not a matter of certainty, but is instead governed by a probability distribution. We then show that this framework gives a compelling account of several aspects of mathematical practice. These include: 1) the way in which mathematicians generate research programs, 2) the applicability of Bayesian models of mathematical heuristics, 3) the role of abductive reasoning in mathematics, 4) the way in which multiple proofs of a proposition can strengthen our degree of belief in that proposition, and 5) the nature of the hypothesis that there are multiple formal systems that are isomorphic to physically possible universes. Thus, by embracing a model of mathematics as not perfectly predictable, we generate a new and fruitful perspective on the epistemology and practice of mathematics.
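
Point 4 admits a simple numeric illustration. The sketch below, with purely illustrative error rates (not taken from the paper), applies Bayes' rule to show how several independent but fallible proofs raise the credence that a formula is a theorem.

```python
# Illustrative Bayesian update: independent, noisy proofs of one formula.
# The error rates below are assumptions for the sketch, not from the paper.
prior = 0.5                # initial credence that the formula is a theorem
p_pass_if_theorem = 0.95   # a proof attempt "succeeds" given a true theorem
p_pass_if_not = 0.05       # false-positive rate of the proof process

belief = prior
for proof in range(3):     # three independent successful proofs
    numerator = p_pass_if_theorem * belief
    belief = numerator / (numerator + p_pass_if_not * (1 - belief))
    print(f"after proof {proof + 1}: P(theorem) = {belief:.4f}")
```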


ExplanationLP: Abductive Reasoning for Explainable Science Question Answering

arXiv.org Artificial Intelligence

We propose a novel approach for answering and explaining multiple-choice science questions by reasoning over grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem, constructing plausible explanations for each choice and then selecting the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graphs' weighting function is composed of a set of parameters, which we fine-tune to optimize answer selection performance. We carry out our experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) Grounding-Abstract inference chains provide the semantic control needed for explainable abductive reasoning; (2) the model learns efficiently and robustly with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; (3) it generalises, outperforming SOTA explainable approaches on general science question sets.
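
The abstract does not spell out the full formalism, but the flavour of the optimisation can be sketched as a small integer linear program. The example below, using the PuLP library with hypothetical facts, weights, and a single size constraint, selects the subset of facts that maximises total relevance; the paper's actual weighting function and structural constraints are richer and are tuned for answer selection.

```python
# Toy ILP for explanation selection (hypothetical facts and weights).
import pulp

facts = ["gravity pulls objects down",
         "a ball is an object",
         "friction produces heat"]
relevance = {0: 0.9, 1: 0.7, 2: 0.1}   # assumed relevance to the question
MAX_FACTS = 2                          # toy structural constraint on size

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
x = {i: pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(facts))}
prob += pulp.lpSum(relevance[i] * x[i] for i in x)   # maximise total relevance
prob += pulp.lpSum(x[i] for i in x) <= MAX_FACTS     # limit explanation size
prob.solve()

print([facts[i] for i in x if x[i].value() == 1])
```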


Abductive Knowledge Induction From Raw Data

arXiv.org Artificial Intelligence

For many reasoning-heavy tasks, it is challenging to find an appropriate end-to-end differentiable approximation to domain-specific inference mechanisms. Neural-Symbolic (NeSy) AI divides the end-to-end pipeline into neural perception and symbolic reasoning, which can directly exploit general domain knowledge such as algorithms and logic rules. However, it suffers from the exponential computational complexity caused by the interface between the two components, where the neural model lacks direct supervision and the symbolic model lacks accurate input facts. As a result, such systems usually focus on learning the neural model against a sound and complete symbolic knowledge base while avoiding a crucial problem: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning ($Meta_{Abd}$), which unites abduction and induction to learn a perceptual neural network and first-order logic theories simultaneously from raw data. Given the same amount of domain knowledge, we demonstrate that $Meta_{Abd}$ not only outperforms the compared end-to-end models in predictive accuracy and data efficiency but also induces logic programs that can be reused as background knowledge in subsequent learning tasks. To the best of our knowledge, $Meta_{Abd}$ is the first system that can jointly learn neural networks and recursive first-order logic theories with predicate invention.
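
The abductive step at the neural-symbolic interface can be illustrated on a toy digit-addition task. The sketch below (hypothetical softmax outputs, not the $Meta_{Abd}$ implementation, which also induces the logic program itself) abduces the most probable pseudo-labels consistent with the only available supervision, the observed sum.

```python
# Toy abduction at the neural-symbolic interface: pick the label pair that
# is consistent with the background knowledge x + y == observed_sum and
# has the highest joint probability under the perception model.
import itertools

p_img1 = {3: 0.6, 5: 0.3, 8: 0.1}   # hypothetical softmax outputs, image 1
p_img2 = {2: 0.5, 4: 0.4, 0: 0.1}   # hypothetical softmax outputs, image 2
observed_sum = 7                     # the only supervision available

candidates = [(x, y, p_img1[x] * p_img2[y])
              for x, y in itertools.product(p_img1, p_img2)
              if x + y == observed_sum]
best = max(candidates, key=lambda t: t[2])
print(f"abduced labels: {best[0]} + {best[1]} (joint prob {best[2]:.2f})")
```

The abduced labels would then supervise the perception network, closing the loop between the neural and symbolic components.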


Explaining AI as an Exploratory Process: The Peircean Abduction Model

arXiv.org Artificial Intelligence

Current discussions of "Explainable AI" (XAI) give little consideration to the role of abduction in explanatory reasoning (see Mueller et al., 2018). It might be worthwhile to pursue this, to develop intelligent systems that allow for the observation and analysis of abductive reasoning, and for its assessment as a learnable skill. Abductive inference has been defined in many ways. For example, it has been defined as the achievement of insight. Most often, abduction is taken as a single, punctuated act of syllogistic reasoning, like making a deductive or inductive inference from given premises. In contrast, the originator of the concept of abduction, the American scientist/philosopher Charles Sanders Peirce, regarded abduction as an exploratory activity. In this regard, Peirce's insights about reasoning align with conclusions from modern psychological research. Since abduction is often defined as "inferring the best explanation," the challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked. We explore these linkages in this report. This analysis provides a theoretical framework for understanding what XAI researchers are already doing; it explains why some XAI projects are succeeding (or might succeed); and it leads to design advice.


Explainable Natural Language Reasoning via Conceptual Unification

arXiv.org Artificial Intelligence

This paper presents an abductive framework for multi-hop and interpretable textual inference. The reasoning process is guided by the notions of the unification power and plausibility of an explanation, computed through the interaction of two major architectural components: (a) an analogical reasoning model that ranks explanatory facts by leveraging unification patterns in a corpus of explanations; (b) an abductive reasoning model that performs a search for the best explanation, realised via conceptual abstraction and subsequent unification. We demonstrate that Step-wise Conceptual Unification can be effective for unsupervised question answering, and as an explanation extractor in combination with state-of-the-art Transformers. An empirical evaluation on the Worldtree corpus and the ARC Challenge yields the following conclusions: (1) the question answering model outperforms competitive neural and multi-hop baselines without requiring any explicit training on answer prediction; (2) when used as an explanation extractor, the proposed model significantly improves the performance of Transformers, leading to state-of-the-art results on the Worldtree corpus; (3) analogical and abductive reasoning are highly complementary for achieving sound explanatory inference, a feature that demonstrates the impact of the unification patterns on performance and interpretability.
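
As a loose stand-in for the unification-based ranking described above, the sketch below scores hypothetical explanatory facts against a question by lexical overlap; the actual model learns unification patterns from a corpus of explanations rather than relying on surface overlap.

```python
# Toy fact ranking by lexical overlap (a crude proxy for unification scoring).

def overlap(a, b):
    """Jaccard similarity between two token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

question = "why does a metal spoon get hot in soup"
facts = ["metal is a thermal conductor",
         "a spoon is a kind of utensil",
         "heat flows from hot objects to cold objects"]

# Rank candidate explanatory facts by their overlap with the question.
ranked = sorted(facts, key=lambda f: overlap(question, f), reverse=True)
for f in ranked:
    print(f"{overlap(question, f):.2f}  {f}")
```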


Tabling Optimization for Contextual Abduction

arXiv.org Artificial Intelligence

The requirement for artificial intelligence (AI) to provide explanations when making critical decisions is becoming increasingly important due to concerns about accountability, trust, and ethics. Such explainable AI is expected to be capable of providing human-understandable justifications. A form of reasoning that provides explanations for an observation, known as abduction, has been well studied in AI, particularly in knowledge representation and reasoning. It extends to logic programming, dubbed abductive logic programming [3], and has a wide variety of uses, e.g., in planning, scheduling, the reasoning of rational agents, security protocol verification, biological systems, and machine ethics.
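
A minimal sketch of propositional abduction over Horn rules, with a toy knowledge base (not the tabled Prolog machinery the paper optimises), shows the core idea: enumerate the sets of abducible facts that would entail an observation.

```python
# Toy abduction over Horn rules: which abducible facts explain an observation?
RULES = {                      # head: list of alternative bodies
    "grass_wet": [["rained"], ["sprinkler_on"]],
}
ABDUCIBLES = {"rained", "sprinkler_on"}

def explain(goal):
    """Return the sets of abducible facts that would entail `goal`."""
    if goal in ABDUCIBLES:
        return [{goal}]
    explanations = []
    for body in RULES.get(goal, []):
        # Conjoin the explanations of every literal in the body.
        partial = [set()]
        for literal in body:
            partial = [e | s for e in partial for s in explain(literal)]
        explanations.extend(partial)
    return explanations

print(explain("grass_wet"))   # [{'rained'}, {'sprinkler_on'}]
```

Tabling, the focus of the paper, would cache intermediate explanations so that repeated subgoals across contexts are not recomputed.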


Machine Reasoning Explainability

arXiv.org Artificial Intelligence

As a field of AI, Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning. Studies in early MR notably started inquiries into Explainable AI (XAI), arguably one of the biggest concerns today for the AI community. Work on explainable MR, as well as on MR approaches to explainability in other areas of AI, has continued ever since. It is especially potent in modern MR branches such as argumentation, constraint programming, logic programming, and planning. We aim to provide a selective overview of MR explainability techniques and studies, in the hope that insights from this long line of research will complement the current XAI landscape well. This document reports our work in progress on MR explainability.


Hundreds of astronomers warn Elon Musk's Starlink satellites could limit scientific discoveries

The Independent - Tech

Hundreds of astronomers have warned that satellite constellations like Elon Musk's Starlink network could prove "extremely impactful" to astronomy and scientific progress. A report by the Satellite Constellations 1 (Satcon1) workshop found that constellations of bright satellites will fundamentally change ground-based optical and infrared astronomy and could alter the appearance of the night sky for stargazers around the world. The research brought together more than 250 astronomers, satellite operators and dark-sky advocates to better understand the astronomical impact of large satellite constellations. "We find that the worst-case constellation designs prove extremely impactful to the most severely affected science programs," stated the report, which was published on Tuesday. Elon Musk's SpaceX plans to launch more than 30,000 Starlink satellites in order to beam high-speed internet down to Earth.