Abductive Reasoning


Suga urged to take up abduction issue at summit with Biden

The Japan Times

Liberal Democratic Party lawmakers have urged Prime Minister Yoshihide Suga to take up the issue of Japanese nationals abducted by North Korea decades ago when he holds talks with U.S. President Joe Biden later this month. Eriko Yamatani, chairwoman of the LDP Headquarters for North Korean Abductions, met with Suga on Friday and handed him a resolution including the request. Suga said he will make efforts to gain U.S. cooperation on the abduction issue at the summit meeting, planned for April 16 at the White House. The resolution said a direct approach by Biden to North Korean leader Kim Jong Un would be effective in bringing abduction victims back to Japan. It urged Suga to ask Biden to put great value on North Korean issues, including the abduction problem, in his administration's strategy toward China, which has close ties with North Korea. The resolution also called for continued economic sanctions against North Korea and stricter crackdowns on ship-to-ship cargo transfers to smuggle supplies to the reclusive state.


AI 4 Proteins 2021 Sponsors : AI 4 Scientific Discovery

#artificialintelligence

If you are interested in sponsoring our event series, please contact Dr Samantha Kanza. Arctoris is an Oxford-based research company that is transforming drug discovery for biotech and AI-driven drug discovery companies, pharmaceutical corporations and academia. Arctoris developed and operates Ulysses, the world's first fully automated drug discovery platform. Accessible remotely, the platform enables researchers worldwide to perform their research rapidly, with greater accuracy and transparency, and with full reproducibility. Arctoris accelerates drug discovery programmes from idea to clinical testing, combining human ingenuity with the power of robotics.


Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences

arXiv.org Artificial Intelligence

In social settings, much of human behavior is governed by unspoken rules of conduct. For artificial systems to be fully integrated into social environments, adherence to such norms is a central prerequisite. We investigate whether contemporary NLG models can function as behavioral priors for systems deployed in social settings by generating action hypotheses that achieve predefined goals under moral constraints. Moreover, we examine if models can anticipate likely consequences of (im)moral actions, or explain why certain actions are preferable by generating relevant norms. For this purpose, we introduce 'Moral Stories', a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning. Finally, we propose decoding strategies that effectively combine multiple expert models to significantly improve the quality of generated actions, consequences, and norms compared to strong baselines, e.g., through abductive reasoning.
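
The expert-combined decoding the abstract alludes to can be sketched in miniature. This is an illustration, not the paper's actual decoding strategy: the scoring scheme and both "experts" below are invented stand-ins. Candidate action generations are reranked by a weighted combination of expert log-scores, e.g. a fluency score and a norm-compliance score.

```python
import math

# Minimal sketch of expert-combined reranking (an assumption, not the
# paper's implementation): candidates from a base generator are
# rescored by a weighted sum of expert log-probabilities.
def combine_experts(candidates, experts, weights):
    """Rank candidate strings by a weighted sum of expert log-scores.

    experts: functions mapping a string to a probability in (0, 1].
    weights: one non-negative weight per expert.
    """
    def score(text):
        return sum(w * math.log(e(text)) for e, w in zip(experts, weights))
    return sorted(candidates, key=score, reverse=True)

# Hypothetical experts: a toy fluency model and a toy norm classifier.
fluency = lambda t: 0.9 if len(t.split()) > 3 else 0.4
norm_ok = lambda t: 0.1 if "steal" in t else 0.95

ranked = combine_experts(
    ["He steals the wallet to pay the bill.",
     "He asks the waiter to split the bill."],
    experts=[fluency, norm_ok],
    weights=[1.0, 2.0],
)
print(ranked[0])  # the norm-compliant action wins
```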


Interpretable machine learning as a tool for scientific discovery in chemistry

#artificialintelligence

There has been an upsurge of interest in applying machine-learning (ML) techniques to chemistry, and a number of these applications have achieved impressive predictive accuracies; however, they have done so without providing any insight into what has been learnt from the training data. The interpretation of ML systems (i.e., a statement of what an ML system has learnt from data) is still in its infancy, but interpretation can lead to scientific discovery, and examples of this are given in the areas of drug discovery and quantum chemistry. It is proposed that a research programme be designed that systematically compares the various model-agnostic and model-specific approaches to interpretable ML within a range of chemical scenarios.
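
As a concrete instance of the model-agnostic approaches the article refers to, the sketch below computes permutation feature importance for a fitted regressor: shuffling a feature and measuring how much predictive performance degrades. The dataset and descriptor names are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Toy data standing in for a chemical property-prediction task;
# the three "descriptors" are invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic interpretation: how much does shuffling each feature
# degrade the model's predictive performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["descriptor_A", "descriptor_B", "descriptor_C"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```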


Scientific discovery must be redefined. Quantum and AI can help

#artificialintelligence

Industry partners are often rivals, but not in the current coronavirus vaccine endeavour. Every member of the Consortium is united by a common goal: to accelerate our search for a new treatment or vaccine against COVID-19. The benefits of collaboration are greater speed and accuracy; a freer exchange of ideas and data; and full access to cutting-edge technology. In sum, it supercharges innovation and hopefully means the pandemic will be halted faster than otherwise.


Noisy Deductive Reasoning: How Humans Construct Math, and How Math Constructs Universes

arXiv.org Artificial Intelligence

We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process. That is, on our model, whether or not a given formula is deemed a theorem in some axiomatic system is not a matter of certainty, but is instead governed by a probability distribution. We then show that this framework gives a compelling account of several aspects of mathematical practice. These include: 1) the way in which mathematicians generate research programs, 2) the applicability of Bayesian models of mathematical heuristics, 3) the role of abductive reasoning in mathematics, 4) the way in which multiple proofs of a proposition can strengthen our degree of belief in that proposition, and 5) the nature of the hypothesis that there are multiple formal systems that are isomorphic to physically possible universes. Thus, by embracing a model of mathematics as not perfectly predictable, we generate a new and fruitful perspective on the epistemology and practice of mathematics.
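
Point (4) can be made concrete with a toy Bayesian calculation (our illustration, not the paper's model): if each independent proof check has a small probability of accepting a flawed proof, then every additional proof of the same proposition multiplies the odds of theoremhood. The prior and error rates below are invented.

```python
# Toy Bayesian account of why multiple proofs strengthen belief in a
# proposition (our illustration; prior and error rates are invented).
def posterior_theorem(prior, error_rates):
    """P(theorem | all proofs reported valid), assuming each checker
    reports 'valid' with probability 1 if the proposition is a theorem,
    and with probability e (its error rate) if it is not."""
    p_data_given_not = 1.0
    for e in error_rates:
        p_data_given_not *= e  # a flawed proof slips through with prob. e
    num = 1.0 * prior
    return num / (num + p_data_given_not * (1.0 - prior))

prior = 0.5
for n in range(1, 5):
    print(n, "proof(s):", round(posterior_theorem(prior, [0.05] * n), 6))
# Each additional independent proof multiplies the odds by 1/0.05 = 20.
```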


ExplanationLP: Abductive Reasoning for Explainable Science Question Answering

arXiv.org Artificial Intelligence

We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem, constructing plausible explanations for each choice and then selecting the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graphs' weighting function is composed of a set of parameters, which we fine-tune to optimize answer selection performance. We carry out our experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) Grounding-Abstract inference chains provide the semantic control needed to perform explainable abductive reasoning; (2) the approach learns efficiently and robustly with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; (3) it generalises well, outperforming SOTA explainable approaches on general science question sets.
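
The fact-selection step can be illustrated with a small integer linear program. The sketch below uses the PuLP library; the facts, their weights, and the single size constraint are invented stand-ins for the structural and semantic constraints described in the abstract.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Toy version of explanation selection as an ILP (an illustration; the
# weights and the single size constraint stand in for the paper's
# structural/semantic constraints).
facts = {
    "friction produces heat": 0.9,
    "rubbing hands together causes friction": 0.8,
    "heat is a form of energy": 0.3,
    "the sun is a star": 0.1,
}

prob = LpProblem("explanation_selection", LpMaximize)
pick = {f: LpVariable(f"pick_{i}", cat="Binary")
        for i, f in enumerate(facts)}

# Objective: total relevance weight of the selected facts.
prob += lpSum(facts[f] * pick[f] for f in facts)
# Constraint: an explanation contains at most two facts.
prob += lpSum(pick[f] for f in facts) <= 2

prob.solve()
explanation = [f for f in facts if pick[f].value() == 1]
print(explanation)
```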


Abductive Knowledge Induction From Raw Data

arXiv.org Artificial Intelligence

For many reasoning-heavy tasks, it is challenging to find an appropriate end-to-end differentiable approximation to domain-specific inference mechanisms. Neural-Symbolic (NeSy) AI divides the end-to-end pipeline into neural perception and symbolic reasoning, which can directly exploit general domain knowledge such as algorithms and logic rules. However, it suffers from the exponential computational complexity caused by the interface between the two components, where the neural model lacks direct supervision and the symbolic model lacks accurate input facts. As a result, such systems usually focus on learning the neural model with a sound and complete symbolic knowledge base while avoiding a crucial problem: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning ($Meta_{Abd}$), which unites abduction and induction to learn perceptual neural networks and first-order logic theories simultaneously from raw data. Given the same amount of domain knowledge, we demonstrate that $Meta_{Abd}$ not only outperforms the compared end-to-end models in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. To the best of our knowledge, $Meta_{Abd}$ is the first system that can jointly learn neural networks and recursive first-order logic theories with predicate invention.
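
The abductive step that bridges neural perception and symbolic reasoning can be sketched as follows. This is our toy illustration, not the $Meta_{Abd}$ system: given a classifier's (hypothetical) probabilities for two unlabeled digit images and the background knowledge that their sum is 7, we abduce the most probable label pair consistent with that constraint, yielding pseudo-labels for the perception model.

```python
from itertools import product

# Toy abduction step bridging perception and logic (an illustration,
# not the Meta_Abd system). Hypothetical softmax outputs of a digit
# classifier for two images:
probs_img1 = [0.01, 0.02, 0.05, 0.70, 0.10, 0.03, 0.03, 0.02, 0.02, 0.02]
probs_img2 = [0.02, 0.02, 0.03, 0.05, 0.60, 0.15, 0.05, 0.04, 0.02, 0.02]

# Symbolic background knowledge: the two digits sum to 7.
def consistent(a, b):
    return a + b == 7

# Abduce the most probable label assignment satisfying the constraint.
best = max(
    ((a, b) for a, b in product(range(10), repeat=2) if consistent(a, b)),
    key=lambda ab: probs_img1[ab[0]] * probs_img2[ab[1]],
)
print(best)  # (3, 4): pseudo-labels used to retrain the perception model
```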


Explaining AI as an Exploratory Process: The Peircean Abduction Model

arXiv.org Artificial Intelligence

Current discussions of "Explainable AI" (XAI) give little consideration to the role of abduction in explanatory reasoning (see Mueller et al., 2018). It might be worthwhile to pursue this, to develop intelligent systems that allow for the observation and analysis of abductive reasoning and the assessment of abductive reasoning as a learnable skill. Abductive inference has been defined in many ways. For example, it has been defined as the achievement of insight. Most often, abduction is taken as a single, punctuated act of syllogistic reasoning, like making a deductive or inductive inference from given premises. In contrast, the originator of the concept of abduction, the American scientist and philosopher Charles Sanders Peirce, regarded abduction as an exploratory activity. In this regard, Peirce's insights about reasoning align with conclusions from modern psychological research. Since abduction is often defined as "inferring the best explanation," the challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked. We explore these linkages in this report. This analysis provides a theoretical framework for understanding what XAI researchers are already doing; it explains why some XAI projects are succeeding (or might succeed), and it leads to design advice.


Explainable Natural Language Reasoning via Conceptual Unification

arXiv.org Artificial Intelligence

This paper presents an abductive framework for multi-hop and interpretable textual inference. The reasoning process is guided by the notions of unification power and plausibility of an explanation, computed through the interaction of two major architectural components: (a) an analogical reasoning model that ranks explanatory facts by leveraging unification patterns in a corpus of explanations; (b) an abductive reasoning model that performs a search for the best explanation, realised via conceptual abstraction and subsequent unification. We demonstrate that Step-wise Conceptual Unification can be effective for unsupervised question answering and as an explanation extractor in combination with state-of-the-art Transformers. An empirical evaluation on the Worldtree corpus and the ARC Challenge led to the following conclusions: (1) the question answering model outperforms competitive neural and multi-hop baselines without requiring any explicit training on answer prediction; (2) when used as an explanation extractor, the proposed model significantly improves the performance of Transformers, leading to state-of-the-art results on the Worldtree corpus; (3) analogical and abductive reasoning are highly complementary for achieving sound explanatory inference, a feature that demonstrates the impact of the unification patterns on performance and interpretability.
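
A crude stand-in for the unification-power ranking can make the idea tangible. This is our simplification, not the paper's model (which ranks facts by analogical unification patterns learned from a corpus of explanations): here each candidate fact is simply scored by its term overlap with the question and answer.

```python
import string

# Crude stand-in for ranking explanatory facts by "unification" with a
# question/answer pair (our simplification; the paper ranks facts by
# unification patterns learned from a corpus of explanations).
def tokens(text):
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def overlap_score(fact, question, answer):
    fact_terms = tokens(fact)
    qa_terms = tokens(question) | tokens(answer)
    return len(fact_terms & qa_terms) / len(fact_terms)

question = "What happens when you rub your hands together?"
answer = "friction between your hands makes them warm"
facts = [
    "friction produces heat",
    "rubbing two surfaces together causes friction",
    "the sun is a star",
]
# Relevant facts outrank the distractor.
for fact in sorted(facts, key=lambda f: overlap_score(f, question, answer),
                   reverse=True):
    print(round(overlap_score(fact, question, answer), 2), fact)
```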