Abduction, or inference to the best explanation, is a form of inference that goes from data describing something to a hypothesis that best explains or accounts for those data.
D is a collection of data (facts, observations, givens).
H explains D (would, if true, explain D).
No other hypothesis can explain D as well as H does.
... Therefore, H is probably true.
– Josephson & Josephson, Abductive Inference
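The schema above can be rendered as a minimal code sketch. Everything here is illustrative: `abduce` and the `explains` scoring function are hypothetical names, and the scoring-by-sum scheme is just one simple way to operationalize "explains D as well as H does".

```python
def abduce(data, hypotheses, explains):
    """Return the hypothesis that best explains the data, or None on a tie.

    `explains(h, d)` is an assumed scoring function giving how well
    hypothesis h accounts for datum d (higher is better).
    """
    scored = sorted(((sum(explains(h, d) for d in data), h)
                     for h in hypotheses), reverse=True)
    # "No other hypothesis can explain D as well as H does."
    if len(scored) > 1 and scored[0][0] == scored[1][0]:
        return None  # tie: no single best explanation
    return scored[0][1]

# Toy usage: rain explains both observations, the sprinkler only one.
data = ["grass_wet", "street_wet"]
coverage = {("rain", "grass_wet"): 1, ("rain", "street_wet"): 1,
            ("sprinkler", "grass_wet"): 1}
best = abduce(data, ["rain", "sprinkler"],
              lambda h, d: coverage.get((h, d), 0))  # -> "rain"
```

Note that the schema only licenses a *probable* conclusion, which the sketch mirrors by returning `None` whenever no hypothesis clearly beats its rivals.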
Juba recently proposed a formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations and which explanations are valid are learned directly from data. The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information. In this work we extend the formulation to utilize partially specified examples, along with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also show how to use knowledge in the form of graphical causal models to refine the proposed hypotheses. Finally, we observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting. Such small, human-understandable explanations are of particular interest for potential applications of the task.
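As a rough illustration of learning the relative plausibility of explanations from fully specified examples, consider the following naive frequency estimate. This is emphatically not Juba's actual algorithm; the function and data below are illustrative only.

```python
def rank_explanations(examples, query, candidates):
    """Rank candidate explanations by empirical P(query | candidate),
    estimated from fully specified boolean examples (dicts var -> bool)."""
    def cond_prob(c):
        hits = [ex for ex in examples if ex.get(c)]
        return (sum(1 for ex in hits if ex.get(query)) / len(hits)
                if hits else 0.0)
    return sorted(candidates, key=cond_prob, reverse=True)

# Made-up examples: rain always co-occurs with a wet lawn,
# the sprinkler only half the time.
examples = [
    {"rain": True,  "sprinkler": False, "wet": True},
    {"rain": True,  "sprinkler": False, "wet": True},
    {"rain": False, "sprinkler": True,  "wet": True},
    {"rain": False, "sprinkler": True,  "wet": False},
    {"rain": False, "sprinkler": False, "wet": False},
]
ranked = rank_explanations(examples, "wet", ["sprinkler", "rain"])
```

A naive estimator like this silently treats missing variables as false, which is exactly where it breaks down on partially specified examples; handling such examples soundly is what motivates the declarative background knowledge discussed above.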
The work described in my Ph.D. dissertation (Fischer 1991) is the outcome of seven years of research focusing on abductive explanation generation, involving the departments of computer and information science, industrial and systems engineering, pathology, and allied medical professions at The Ohio State University. In the first phase of my work, I characterized abductive problem solving and performed a comparative analysis of two abductive problem solvers (Smith and Fischer 1990). I then implemented two cognitively plausible heuristics to tackle the complexity of abductive reasoning and experimented with them successfully. This work, originally applied to the domain of alloantibody identification, was generalized to domain-independent abductive problem solving. Abduction, that is, inference to a hypothesis that best explains a set of data, appears to be ubiquitous in cognition.
There are, however, a variety of different approaches that claim to capture the true nature of this concept. One reason for this diversity is that abductive reasoning occurs in a multitude of contexts, ranging from the simple selection among already existing hypotheses to the generation of new concepts in science. It also covers cases where the observation is puzzling because it is novel, as well as cases where the surprise concerns an anomalous observation. For example, if we wake up and the lawn is wet, we might explain this observation by assuming that it must have rained or that the sprinklers have been on.
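The wet-lawn example can be made concrete with a small Bayesian-style calculation. The priors and likelihoods below are made-up illustrative numbers, chosen only to show how competing explanations are weighed.

```python
# P(H): rain is assumed more common a priori than a sprinkler left on.
priors = {"rain": 0.3, "sprinkler": 0.1}
# P(lawn wet | H): either cause makes a wet lawn very likely.
likelihoods = {"rain": 0.9, "sprinkler": 0.95}

# Score each hypothesis by prior * likelihood (proportional to the
# posterior, since the evidence term is the same for both).
scores = {h: priors[h] * likelihoods[h] for h in priors}
best = max(scores, key=scores.get)  # -> "rain" (0.27 vs 0.095)
```

Even though the sprinkler would explain the wet lawn slightly better on its own, the higher prior for rain makes rain the better overall explanation, which is the characteristic trade-off in selecting among existing hypotheses.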
Time series interpretation aims to provide an explanation of what is observed in terms of its underlying processes. The present work is based on the assumption that common classification-based approaches to time series interpretation suffer from a set of inherent weaknesses whose ultimate cause lies in the monotonic nature of the deductive reasoning paradigm. In this document we propose a new approach to this problem based on the initial hypothesis that abductive reasoning properly accounts for the human ability to identify and characterize patterns appearing in a time series. The result of the interpretation is a set of conjectures in the form of observations, organized into an abstraction hierarchy, and explaining what has been observed. A knowledge-based framework and a set of algorithms for the interpretation task are provided, implementing a hypothesize-and-test cycle guided by an attentional mechanism. As a representative application domain, the interpretation of the electrocardiogram allows us to highlight the strengths of the proposed approach in comparison with traditional classification-based approaches.
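A generic hypothesize-and-test cycle of the kind described can be sketched as follows. This is a schematic, not the authors' actual framework: `interpret`, `generate`, `test`, and `refine` are assumed names, and the attentional mechanism and abstraction hierarchy are abstracted away into the callbacks.

```python
def interpret(observations, generate, test, refine, max_iters=10):
    """Propose conjectures, test them against the observations,
    and refine until every observation is explained (or we give up)."""
    hypotheses = generate(observations)
    for _ in range(max_iters):
        unexplained = test(hypotheses, observations)
        if not unexplained:
            return hypotheses        # everything is accounted for
        hypotheses = refine(hypotheses, unexplained)
    return hypotheses                # best effort after max_iters

# Toy usage: conjecture parity predicates until all numbers are covered.
is_even = lambda x: x % 2 == 0
is_odd = lambda x: x % 2 == 1

def refine(hyps, unexplained):
    # add whichever conjecture covers the first unexplained observation
    return hyps + [is_even if unexplained[0] % 2 == 0 else is_odd]

result = interpret([2, 4, 7],
                   lambda obs: [],                                     # start empty
                   lambda hs, obs: [x for x in obs
                                    if not any(h(x) for h in hs)],     # unexplained
                   refine)
```

The loop terminates as soon as the conjectures jointly explain every observation, mirroring the non-monotonic character the document contrasts with classification-based approaches: earlier hypotheses can be revised or supplemented as new unexplained observations come into focus.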