Learning Abduction Under Partial Observability

AAAI Conferences

Our work extends Juba's formulation of learning abductive reasoning from examples, in which both the relative plausibility of candidate explanations and the validity of those explanations are learned directly from data. We extend the formulation to partially observed examples, together with declarative background knowledge about the missing data. We show that implicitly learned rules can be combined with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also observe that when a small explanation exists, a much stronger guarantee is obtainable in the challenging exception-tolerant setting.
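
To make the partially observed setting concrete, the sketch below shows one way declarative background knowledge could fill in masked attributes of an example before candidate explanations are tested. The representation (missing attributes as None, background knowledge as simple implications) and the function `complete` are illustrative assumptions, not the paper's actual formalism.

```python
def complete(example, background):
    """Propagate declarative background rules over a partially observed
    example (missing attributes are None) until a fixed point is reached.

    background: iterable of simple implications
        (premise_attr, premise_val, conclusion_attr, conclusion_val),
    a deliberately minimal stand-in for richer declarative knowledge.
    """
    example = dict(example)
    changed = True
    while changed:
        changed = False
        for p_attr, p_val, c_attr, c_val in background:
            if example.get(p_attr) == p_val and example.get(c_attr) is None:
                example[c_attr] = c_val  # deduce the masked value
                changed = True
    return example

# A masked attribute deduced from an observed one:
# complete({"rain": True, "wet": None}, [("rain", True, "wet", True)])
# -> {"rain": True, "wet": True}
```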


Learning Abductive Reasoning Using Random Examples

AAAI Conferences

We consider a new formulation of abduction in which degrees of "plausibility" of explanations, along with the rules of the domain, are learned from concrete examples (settings of attributes). Our version of abduction thus falls in the "learning to reason" framework of Khardon and Roth. Such approaches enable us to capture a natural notion of "plausibility" in a domain while avoiding the extremely difficult problem of specifying an explicit representation of what is "plausible." We specifically consider the question of which syntactic classes of formulas have efficient algorithms for abduction. We find that explanations from the class of k-DNF formulas can be found in polynomial time for any fixed k; but we also find evidence that even weak versions of our abduction task are intractable for the usual class of conjunctions. This evidence is provided by a connection to the usual, inductive PAC-learning model proposed by Valiant. We also consider an exception-tolerant variant of abduction. We observe that it is possible for polynomial-time algorithms to tolerate a few adversarially chosen exceptions, again for the class of k-DNF explanations. All of the algorithms we study are particularly simple, and indeed are variants of a rule proposed by Mill.
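
As a rough illustration of the elimination-style algorithms described here, the following is a minimal sketch in the spirit of Mill's rule: enumerate all terms of width at most k and discard any term that fires on an example where the condition to be explained is false. The interface (`examples` as attribute dicts, `condition` as a predicate) is a hypothetical choice for the sketch, not the paper's exact procedure.

```python
from itertools import combinations

def kdnf_abduce(examples, condition, k):
    """Candidate-elimination sketch for k-DNF abduction.

    examples : list of dicts mapping attribute name -> bool
    condition: predicate on examples (the observation to be explained)
    Returns the terms of a k-DNF that never fires when the condition
    fails, so its disjunction covers only condition-examples.
    """
    attrs = sorted(examples[0])
    literals = [(a, pol) for a in attrs for pol in (True, False)]

    def fires(term, ex):
        return all(ex[a] == pol for a, pol in term)

    # All terms of width <= k over distinct attributes.
    candidates = [t for r in range(1, k + 1)
                  for t in combinations(literals, r)
                  if len({a for a, _ in t}) == r]

    # Mill-style elimination: discard any term observed to fire on an
    # example where the condition is false.
    return [t for t in candidates
            if not any(fires(t, ex) for ex in examples if not condition(ex))]
```

Since there are O((2n)^k) candidate terms over n attributes, this runs in polynomial time for any fixed k, matching the tractability claim above.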


An Improved Algorithm for Learning to Perform Exception-Tolerant Abduction

AAAI Conferences

Inference from an observed or hypothesized condition to a plausible cause or explanation for this condition is known as abduction. For many tasks, the acquisition of the necessary knowledge by machine learning has been widely found to be highly effective. However, the semantics of learned knowledge are weaker than the usual classical semantics, and this necessitates new formulations of many tasks. We focus on a recently introduced formulation of the abductive inference task that is thus adapted to the semantics of machine learning. A key problem is that we cannot expect our causes or explanations to be perfect; they must tolerate some error due to the world being more complicated than our formalization allows. This is a version of the qualification problem, and in machine learning it is known as agnostic learning. In the work by Juba that introduced the task of learning to make abductive inferences, an algorithm is given for producing k-DNF explanations that tolerates such exceptions: if the best possible k-DNF explanation fails to justify the condition with probability ε, then the algorithm is guaranteed to find a k-DNF explanation that fails to justify the condition with probability at most O(n^k ε), where n is the number of propositional attributes used to describe the domain. Here, we present an improved algorithm for this task. When the best k-DNF fails with probability ε, our algorithm finds a k-DNF that fails with probability at most Õ(n^{k/2} ε) (where Õ suppresses logarithmic factors in n and 1/ε). We also examine the empirical advantage of the new algorithm over the previous one in two test domains: one of explaining conditions generated by a “noisy” k-DNF rule, and another of explaining conditions that are actually generated by a linear threshold rule.
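
One simplified way to see the exception-tolerant relaxation: rather than eliminating a candidate term on its first counterexample, a term is kept as long as it fires on at most a small budget of examples where the condition fails. The thresholding sketch below is illustrative only; it does not reproduce the algorithm or analysis behind the Õ(n^{k/2} ε) bound, and `budget` is a hypothetical parameter.

```python
def tolerant_filter(candidates, examples, condition, budget):
    """Keep a term unless it fires on more than `budget` examples where
    the condition fails -- tolerating a few adversarially chosen
    exceptions instead of eliminating on the first counterexample."""
    def fires(term, ex):
        return all(ex[a] == pol for a, pol in term)

    negatives = [ex for ex in examples if not condition(ex)]
    return [t for t in candidates
            if sum(fires(t, ex) for ex in negatives) <= budget]
```

The failure probability of the surviving disjunction can then be estimated empirically as the fraction of covered examples on which the condition fails.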


Implicit Learning of Common Sense for Reasoning

AAAI Conferences

We consider the problem of how enormous databases of "common sense" knowledge can be both learned and utilized in reasoning in a computationally efficient manner. We propose that this is possible if the learning only occurs implicitly, i.e., without generating an explicit representation. We show that it is feasible to invoke such implicitly learned knowledge in essentially all natural tractable reasoning problems. This implicit learning also turns out to be provably robust to occasional counterexamples, as appropriate for such common sense knowledge.
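
One way to picture the "implicit" use of learned knowledge (a minimal sketch under assumed interfaces; `derives` stands in for whatever tractable reasoning fragment is being invoked): instead of materializing an explicit rule base, a query is accepted when it is supported on all but a small fraction of the examples, which also gives the robustness to occasional counterexamples noted above.

```python
def implicitly_entailed(examples, derives, query, epsilon=0.05):
    """Accept `query` if it can be derived on at least a (1 - epsilon)
    fraction of the examples; no explicit rule representation is ever
    built, and rare counterexamples are tolerated by construction."""
    failures = sum(1 for ex in examples if not derives(ex, query))
    return failures <= epsilon * len(examples)
```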


A Connectionist Framework for Reasoning: Reasoning with Examples

AAAI Conferences

We present a connectionist architecture that supports almost instantaneous deductive and abductive reasoning. The deduction algorithm responds in a few steps for single-rule queries and, in general, takes time linear in the number of rules in the query. The abduction algorithm produces an explanation in a few steps and the best explanation in time linear in the size of the assumption set. The size of the network is polynomially related to the size of other representations of the domain, and may even be smaller. We base our connectionist model on Valiant's Neuroidal model (Val94) and thus make minimal assumptions about the computing elements, which are assumed to be classical threshold elements with states. Within this model we develop a reasoning framework that utilizes a model-based approach to reasoning (KKS93; KR94b). In particular, we suggest interpreting the connectionist architecture as encoding examples of the domain we reason about, and we show how to perform various reasoning tasks under this interpretation. We then show that the representations used can be acquired efficiently from interactions with the environment and discuss how this learning process influences the reasoning performance of the network.
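
The model-based approach to reasoning that the architecture encodes can be summarized in a few lines (a sketch of the Khardon-Roth idea with hypothetical names, not of the neuroidal circuitry itself): queries are evaluated directly against a stored set of example models of the domain.

```python
def entails(stored_models, query):
    # Model-based deduction: the knowledge base is represented by a set
    # of stored example models (dicts of attribute -> bool); a query is
    # accepted iff it holds in every stored model.
    return all(query(model) for model in stored_models)

def abduce(stored_models, query, assumables):
    # Model-based abduction sketch: return an assumable attribute whose
    # truth co-occurs with the query across the stored models.
    for attr in assumables:
        supporting = [m for m in stored_models if m.get(attr)]
        if supporting and all(query(m) for m in supporting):
            return attr
    return None
```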