Abduction, or inference to the best explanation, is a form of inference that goes from data describing something to a hypothesis that best explains or accounts for the data.
D is a collection of data (facts, observations, givens).
H explains D (would, if true, explain D).
No other hypothesis can explain D as well as H does.
... Therefore, H is probably true.
– Josephson & Josephson, Abductive Inference
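The schema above can be sketched as a naive procedure: score each candidate hypothesis by how much of the data it would account for, and return one only if it beats every rival. The diagnosis example and the `explains` relation below are illustrative assumptions, not from the source.

```python
# Minimal sketch of the abductive schema: pick the hypothesis H that
# best explains D, provided no rival explains D as well.

def best_explanation(data, hypotheses, explains):
    """data: observations; hypotheses: candidates;
    explains(h, d) -> True if h would, if true, account for d."""
    def coverage(h):
        # Fraction of the data that h accounts for.
        return sum(1 for d in data if explains(h, d)) / len(data)

    scored = sorted(hypotheses, key=coverage, reverse=True)
    best = scored[0]
    runner_up = scored[1] if len(scored) > 1 else None
    # Third premise of the schema: no rival explains D as well as H.
    if runner_up is not None and coverage(best) == coverage(runner_up):
        return None  # no single best explanation; abduction is inconclusive
    return best

# Toy example: symptoms explained by candidate diagnoses.
data = ["fever", "cough"]
rules = {"flu": {"fever", "cough", "aches"}, "allergy": {"cough"}}
explains = lambda h, d: d in rules[h]
print(best_explanation(data, ["flu", "allergy"], explains))  # -> flu
```

Note that the procedure abstains when two hypotheses tie, mirroring the schema's requirement that H be strictly the best available explanation.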
Abductive reasoning generates explanatory hypotheses for new observations using prior knowledge. This paper investigates the use of forgetting, also known as uniform interpolation, to perform ABox abduction in description logic (ALC) ontologies. Non-abducibles are specified by a forgetting signature, which can contain concept symbols but not role symbols. The resulting hypotheses are semantically minimal, and each consists of a set of disjuncts. These disjuncts are each independent explanations and are not redundant with respect to the background ontology or the other disjuncts, representing a form of hypothesis space. The observations and hypotheses handled by the method can contain atomic or complex ALC concepts, excluding role assertions, and are not restricted to Horn clauses. Two approaches to redundancy elimination are explored for practical use: full and approximate. Using a prototype implementation, experiments were performed over a corpus of real-world ontologies to investigate the practicality of both approaches across several settings.
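The disjunct redundancy idea described above can be illustrated in a toy propositional setting (the paper works in the description logic ALC; representing each disjunct as a conjunction of atoms is an assumption of this sketch). A disjunct A is redundant if it entails another disjunct B, since then A-or-B collapses to B; for conjunctions of atoms, A entails B exactly when B's atoms are a subset of A's.

```python
# Toy propositional analogue of eliminating redundant disjuncts from a
# hypothesis. Each disjunct is a frozenset of atoms read as a conjunction.

def remove_redundant_disjuncts(disjuncts):
    """Drop any disjunct A that entails another disjunct B
    (here: B is a proper subset of A), since A-or-B collapses to B."""
    kept = []
    for a in disjuncts:
        if not any(b < a for b in disjuncts):  # b < a: proper subset
            kept.append(a)
    return kept

hypothesis = [frozenset({"Rain"}),
              frozenset({"Rain", "Wind"}),   # entails {"Rain"}: redundant
              frozenset({"Sprinkler"})]
print(remove_redundant_disjuncts(hypothesis))
```

The full approach in the paper additionally checks redundancy with respect to the background ontology, which this sketch omits.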
Dr. Christine Blasey Ford responds to a question from Sen. Dianne Feinstein during testimony before the Senate Judiciary Committee on her sexual assault allegations against Supreme Court nominee Brett Kavanaugh. Christine Blasey Ford gave a detailed scientific explanation for her memory of the alleged incident involving Supreme Court nominee Judge Brett Kavanaugh at her highly anticipated Senate testimony Thursday. Senate Judiciary Committee Ranking Member Dianne Feinstein, D-Calif., pressed Ford over her level of certainty that it was, in fact, Kavanaugh who allegedly pinned her down 36 years ago, while in high school, and attempted to remove her clothing. "How are you so sure that it was he?" Feinstein asked. Ford, a California-based psychology professor, laid out a detailed scientific explanation.
We have seen significant recent progress in pattern analysis and machine intelligence applied to images, audio and video signals, and natural language text, but not as much applied to another artifact produced by people: computer program source code. In a paper to be presented at the FEED Workshop at KDD 2018, we showcase a system that makes progress towards the semantic analysis of code. By doing so, we provide the foundation for machines to truly reason about program code and learn from it. The work, also recently demonstrated at IJCAI 2018, is conceived and led by IBM Science for Social Good fellow Evan Patterson and focuses specifically on data science software. Data science programs are a special kind of computer code, often fairly short, but full of semantically rich content that specifies a sequence of data transformation, analysis, modeling, and interpretation operations.
WASHINGTON – Japanese Prime Minister Shinzo Abe said Thursday he is willing to talk directly with North Korea in a bid to resolve the festering issue of abductions of Japanese citizens and foster better ties with Pyongyang. "I wish to directly face North Korea and talk with them so that the abduction problem can be resolved quickly," Abe said at a joint press conference with President Donald Trump. The U.S. leader promised to raise the highly sensitive issue of the Japanese nationals kidnapped by Pyongyang in the 1970s and 1980s with Kim Jong Un at next week's high-stakes summit in Singapore. Abe added there was no change in Japan's policy to pursue "real peace in Northeast Asia" and that if North Korea "is willing to take steps" in the right direction, it will have a "bright future."
Juba recently proposed a formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations and which explanations are valid are learned directly from data. The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information. In this work we extend the formulation to utilize partially specified examples, along with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also show how to use knowledge in the form of graphical causal models to refine the proposed hypotheses. Finally, we observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting. Such small, human-understandable explanations are of particular interest for potential applications of the task.
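A hedged sketch of the setting described above: examples with unspecified attributes, declarative background rules that constrain the missing values, and a check of which candidate explanations remain valid across the completed examples. All attribute names and rules here are illustrative assumptions, not from the paper.

```python
# Sketch: abduction over partially specified examples, where declarative
# background rules (premise atoms -> concluded atom) fill in missing data.

def complete(example, rules):
    """Apply background rules to fill in attributes that the partially
    observed example left unspecified (None)."""
    filled = dict(example)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if (all(filled.get(p) is True for p in premises)
                    and filled.get(conclusion) is None):
                filled[conclusion] = True
                changed = True
    return filled

def valid_explanations(examples, rules, candidates):
    """Keep a candidate atom only if it held in every rule-completed
    example in which the observation held."""
    return [c for c in candidates
            if all(complete(e, rules).get(c) is True
                   for e in examples if e.get("obs") is True)]

examples = [{"obs": True, "rain": True, "wet": None},   # "wet" unobserved
            {"obs": True, "rain": True, "wet": True}]
rules = [(("rain",), "wet")]                            # background: rain -> wet
print(valid_explanations(examples, rules, ["rain", "wet"]))
```

Without the background rule, the first example's missing "wet" value would block "wet" from being supported; the declarative knowledge is what lets the partially specified example still contribute.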
It's a type of fish, this one several feet long, with rows of sharp, serrated teeth that it uses to clamp down on prey. Sharks, specifically great whites, were catapulted into the public eye with the release of the film Jaws in the summer of 1975. The film is the story of a massive great white that terrorizes a seaside community, and the image of the cover alone--the exposed jaws of a massive shark rising upward in murky water--is enough to inject fear into the hearts of would-be swimmers. Other thrillers have perpetuated the theme of sharks as villains. We're going to need a bigger boat: Take a look at the design history of Jaws and its iconic cover https://t.co/dRdRPILF7L
The work described in my Ph.D. dissertation (Fischer 1991) is the outcome of seven years of research focusing on abductive explanation generation and involving the departments of computer and information science, industrial and systems engineering, pathology, and allied medical professions at The Ohio State University. In the first phase of my work, I characterized abductive problem solving and performed a comparative analysis of two abductive problem solvers (Smith and Fischer 1990). I then implemented two cognitively plausible heuristics to tackle the complexity of abductive reasoning and experimented with them successfully. This work, originally applied to the domain of alloantibody identification, was generalized to domain-independent abductive problem solving. Abduction, that is, inference to a hypothesis that best explains a set of data, appears to be ubiquitous in cognition.
There is, however, a variety of different approaches that claim to capture the true nature of this concept. One reason for this diversity lies in the fact that abductive reasoning occurs in a multitude of contexts. It concerns cases ranging from the simple selection of already existing hypotheses to the generation of new concepts in science. It also concerns cases where the observation is puzzling because it is novel, as well as cases in which the surprise concerns an anomalous observation. For example, if we wake up and the lawn is wet, we might explain this observation by assuming that it must have rained or that the sprinklers have been on.
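The wet-lawn example can be sketched as plausibility-weighted abduction: among the candidate causes that would explain the observation, pick the one with the highest prior plausibility. The candidates and the prior values below are made-up illustrative numbers, not from the source.

```python
# Wet-lawn abduction: choose the most plausible explaining cause.

priors = {"rain": 0.3, "sprinkler": 0.2, "dew": 0.05}
explains_wet_lawn = {"rain", "sprinkler"}  # assume dew alone would not soak the lawn

best = max((h for h in priors if h in explains_wet_lawn), key=priors.get)
print(best)  # -> rain
```

This mirrors the simplest case mentioned above, selection among already existing hypotheses; generating genuinely new hypotheses, as in scientific concept formation, is not captured by such a fixed candidate set.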