Abduction, or inference to the best explanation, is a form of inference that goes from data describing something to a hypothesis that best explains or accounts for the data.
D is a collection of data (facts, observations, givens).
H explains D (would, if true, explain D).
No other hypothesis can explain D as well as H does.
... Therefore, H is probably true.
– Josephson & Josephson, Abductive Inference
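The schema above can be rendered as a toy program: among candidate hypotheses, pick the one that best explains the data D. The hypotheses, data, and scoring function here are all invented for illustration; real abductive systems use far richer explanatory criteria.

```python
# Toy rendering of the Josephson & Josephson schema: choose the hypothesis
# that explains the largest share of the data D (a defeasible, not
# deductive, conclusion).

def explains(hypothesis, datum):
    """A hypothesis 'explains' a datum here if the datum appears among
    the hypothesis's predicted consequences."""
    return datum in hypothesis["predicts"]

def best_explanation(hypotheses, data):
    """Return the hypothesis that explains the most data points."""
    def coverage(h):
        return sum(explains(h, d) for d in data)
    return max(hypotheses, key=coverage)

D = ["wet_grass", "wet_street"]
H = [
    {"name": "rain",      "predicts": ["wet_grass", "wet_street"]},
    {"name": "sprinkler", "predicts": ["wet_grass"]},
]

print(best_explanation(H, D)["name"])  # -> rain
```

Note that the schema's third premise (no rival explains D as well) is what licenses the probabilistic conclusion; the code mirrors this by taking the argmax over all candidates rather than accepting the first hypothesis that covers some datum.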
Abductive reasoning starts from some observations and aims at finding the most plausible explanation for these observations. To perform abduction, humans often make use of temporal and causal inferences, and knowledge about how some hypothetical situation can result in different outcomes. This work offers the first study of how such knowledge impacts the Abductive NLI task -- which consists in choosing the more likely explanation for given observations. We train a specialized language model (LMI) tasked with generating what could happen next from a hypothetical scenario that evolves from a given event. We then propose a multi-task model (MTL) to solve the Abductive NLI task, which predicts a plausible explanation by a) considering different possible events emerging from candidate hypotheses -- events generated by LMI -- and b) selecting the one that is most similar to the observed outcome. We show that our MTL model improves over prior vanilla pre-trained LMs fine-tuned on Abductive NLI. Our manual evaluation and analysis suggest that learning about possible next events from different hypothetical scenarios supports abductive inference.
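The selection step described above can be sketched in a few lines: each candidate hypothesis yields possible next events, and the hypothesis whose events best match the observed outcome wins. The generator below is a hand-written stub, and word-overlap similarity stands in for the learned representations a real model would use; none of these names come from the paper.

```python
# Minimal sketch of hypothesis selection via generated next events.
# In the work described above, a trained LM plays the generator's role
# and similarity comes from learned embeddings, not word overlap.

def similarity(a, b):
    """Crude bag-of-words Jaccard similarity as a stand-in for a
    learned sentence-similarity score."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def choose_hypothesis(outcome, candidates, generate_next_events):
    """Score each hypothesis by the best similarity between any of its
    generated next events and the observed outcome."""
    def score(h):
        return max(similarity(e, outcome) for e in generate_next_events(h))
    return max(candidates, key=score)

def stub_generator(hypothesis):
    # Hand-written continuations, illustration only.
    return {
        "she forgot her umbrella": ["she got soaked in the rain"],
        "she took the bus": ["she arrived dry and on time"],
    }[hypothesis]

o2 = "she came home soaked from the rain"
picked = choose_hypothesis(
    o2, ["she forgot her umbrella", "she took the bus"], stub_generator)
print(picked)  # -> she forgot her umbrella
```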
Given a knowledge base and an observation as a set of facts, ABox abduction aims at computing a hypothesis that, when added to the knowledge base, is sufficient to entail the observation. In signature-based ABox abduction, the hypothesis is further required to use only names from a given set. This form of abduction has applications such as diagnosis, KB repair, or explaining missing entailments. It is possible that hypotheses for a given observation only exist if we admit the use of fresh individuals and/or complex concepts built from the given signature, something most approaches for ABox abduction so far do not support or only support with restrictions. In this paper, we investigate the computational complexity of this form of abduction -- allowing either fresh individuals, complex concepts, or both -- for various description logics, and give size bounds on the hypotheses if they exist.
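The defining condition of ABox abduction -- find a hypothesis H such that KB ∪ H entails the observation -- can be illustrated with plain propositional facts and Horn rules in place of a description logic. All rules and fact names below are invented; real ABox abduction operates over DL ABoxes and TBoxes and requires a DL reasoner.

```python
# Toy rendering of the abduction condition KB ∪ H ⊨ O, with a fixed
# signature of candidate facts (signature-based abduction, in spirit).

from itertools import combinations

RULES = [({"Flu(x)"}, "Fever(x)"), ({"Fever(x)"}, "Tired(x)")]

def entails(facts, goal, rules=RULES):
    """Forward-chain Horn rules to closure, then test membership."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= closure and head not in closure:
                closure.add(head)
                changed = True
    return goal in closure

def abduce(kb, observation, signature):
    """Return a smallest hypothesis over `signature` whose addition to
    the KB entails the observation, or None if none exists."""
    for size in range(len(signature) + 1):
        for hyp in combinations(signature, size):
            if entails(kb | set(hyp), observation):
                return set(hyp)
    return None

kb = set()  # empty ABox
print(abduce(kb, "Tired(x)", ["Flu(x)", "Fever(x)"]))  # -> {'Flu(x)'}
```

Iterating over hypothesis sizes in ascending order yields a size-minimal hypothesis; the complexity results in the paper concern exactly how hard this search becomes when fresh individuals or complex concepts are admitted.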
Humans, on the other hand, need none of this. On the basis of very limited or incomplete data, we nonetheless come to the right conclusion about many things (yes, we are fallible, but the miracle is that we are right so often). Noam Chomsky's entire claim to fame in linguistics really amounts to exploring this underdetermination problem, which he referred to as "the poverty of the stimulus." Humans pick up language despite very varied experiences with other human language speakers. Babies born in abusive and sensory-deprived environments pick up language.
In social settings, much of human behavior is governed by unspoken rules of conduct. For artificial systems to be fully integrated into social environments, adherence to such norms is a central prerequisite. We investigate whether contemporary NLG models can function as behavioral priors for systems deployed in social settings by generating action hypotheses that achieve predefined goals under moral constraints. Moreover, we examine if models can anticipate likely consequences of (im)moral actions, or explain why certain actions are preferable by generating relevant norms. For this purpose, we introduce 'Moral Stories', a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning. Finally, we propose decoding strategies that effectively combine multiple expert models, e.g. through abductive reasoning, to significantly improve the quality of generated actions, consequences, and norms compared to strong baselines.
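Combining expert models at decoding time can be sketched as a log-linear reranker: each expert scores a candidate, and candidates are ranked by the weighted product of those scores. The expert stubs below are invented; in practice they would be separately trained models scoring, say, fluency and norm adherence.

```python
# Sketch of expert-guided reranking (a simple product-of-experts).
# Experts return pseudo-probabilities in (0, 1]; stubs are for
# illustration only.

import math

def combine_experts(candidates, experts, weights):
    """Rerank candidates by a weighted log-linear combination of
    expert scores."""
    def score(c):
        return sum(w * math.log(e(c)) for e, w in zip(experts, weights))
    return max(candidates, key=score)

fluency = lambda c: 0.9 if "the" in c else 0.5   # invented stub
moral   = lambda c: 0.9 if "returns" in c else 0.2  # invented stub

actions = ["he keeps the wallet", "he returns the wallet"]
print(combine_experts(actions, [fluency, moral], [1.0, 1.0]))
# -> he returns the wallet
```

The weights let one trade off experts against each other; setting a weight to zero silences that expert entirely.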
There has been an upsurge of interest in applying machine-learning (ML) techniques to chemistry, and a number of these applications have achieved impressive predictive accuracies; however, they have done so without providing any insight into what has been learnt from the training data. The interpretation of ML systems (i.e., a statement of what an ML system has learnt from data) is still in its infancy, but interpretation can lead to scientific discovery, and examples of this are given in the areas of drug discovery and quantum chemistry. It is proposed that a research programme be designed that systematically compares the various model-agnostic and model-specific approaches to interpretable ML within a range of chemical scenarios.
We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process. That is, on our model, whether or not a given formula is deemed a theorem in some axiomatic system is not a matter of certainty, but is instead governed by a probability distribution. We then show that this framework gives a compelling account of several aspects of mathematical practice. These include: 1) the way in which mathematicians generate research programs, 2) the applicability of Bayesian models of mathematical heuristics, 3) the role of abductive reasoning in mathematics, 4) the way in which multiple proofs of a proposition can strengthen our degree of belief in that proposition, and 5) the nature of the hypothesis that there are multiple formal systems that are isomorphic to physically possible universes. Thus, by embracing a model of mathematics as not perfectly predictable, we generate a new and fruitful perspective on the epistemology and practice of mathematics.
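Point (4) above, that multiple proofs strengthen belief in a proposition, is just repeated Bayesian updating once proofs are treated as fallible evidence. The likelihoods below are invented numbers chosen only to make the mechanism visible.

```python
# Bayesian illustration: each independent (fallible) purported proof of
# a formula raises the probability that it is a theorem. Likelihood
# values are assumptions for illustration.

def update(prior, p_proof_given_theorem=0.9, p_proof_given_not=0.05):
    """One application of Bayes' rule on observing a purported proof."""
    num = p_proof_given_theorem * prior
    return num / (num + p_proof_given_not * (1 - prior))

belief = 0.5          # initial credence that the formula is a theorem
for _ in range(3):    # three independent purported proofs
    belief = update(belief)
print(round(belief, 4))
```

Even with a modest false-positive rate for proofs, a handful of independent proofs drives the credence close to 1, matching the intuition the stochastic model is meant to capture.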
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains. This paper frames question answering as an abductive reasoning problem, constructing plausible explanations for each choice and then selecting the candidate with the best explanation as the final answer. Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer and extracting the facts that satisfy certain structural and semantic constraints. To extract the explanations, we employ a linear programming formalism designed to select the optimal subgraph. The graphs' weighting function is composed of a set of parameters, which we fine-tune to optimize answer selection performance. We carry out our experiments on the WorldTree and ARC-Challenge corpora to empirically demonstrate the following conclusions: (1) Grounding-Abstract inference chains provide the semantic control to perform explainable abductive reasoning; (2) efficiency and robustness in learning with fewer parameters, outperforming contemporary explainable and transformer-based approaches in a similar setting; (3) generalisability, outperforming SOTA explainable approaches on general science question sets.
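The fact-selection step can be caricatured as constrained weighted subset selection: pick the facts that maximise total relevance subject to a budget. ExplanationLP solves this with a linear-programming formalism; the brute-force search below merely plays that role, and the facts and weights are invented.

```python
# Toy stand-in for explanation-subgraph selection: choose at most
# `budget` facts maximising total weight. A real system would encode
# structural/semantic constraints and solve an LP instead.

from itertools import combinations

def select_facts(facts, weights, budget):
    """Exhaustively pick the highest-weight subset of size <= budget."""
    best, best_w = (), float("-inf")
    for k in range(budget + 1):
        for subset in combinations(range(len(facts)), k):
            w = sum(weights[i] for i in subset)
            if w > best_w:
                best, best_w = subset, w
    return [facts[i] for i in best]

facts = ["metal conducts electricity", "copper is a metal", "wood floats"]
weights = [0.8, 0.7, 0.1]
print(select_facts(facts, weights, budget=2))
# -> ['metal conducts electricity', 'copper is a metal']
```

In the full system the weights themselves are the tuned parameters mentioned above, so better answer selection is learned by reshaping which subgraphs score highly.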