A Plausibility-Based Approach to Incremental Inference

AAAI Conferences

Inference techniques play a central role in many cognitive systems. They transform low-level observations of the environment into high-level, actionable knowledge, which is then used by mechanisms that drive action, problem-solving, and learning. This paper presents an initial effort at combining results from AI and psychology into a pragmatic and scalable computational reasoning system. Our approach combines a numeric notion of plausibility with first-order logic to produce an incremental inference engine guided by heuristics derived from the psychological literature. We illustrate the core ideas with detailed examples and discuss the advantages of the approach for cognitive systems.
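
To make the combination of numeric plausibility with rule-based inference concrete, here is a minimal Python sketch of a plausibility-weighted forward chainer. It works over ground atoms rather than full first-order formulas, and the scoring rule (rule plausibility times the weakest antecedent) and the acceptance threshold are assumptions for illustration, not the heuristics the paper derives from the psychological literature.

    # Hypothetical sketch: plausibility-weighted forward chaining.
    # Facts are ground atoms mapped to a plausibility in [0, 1];
    # each rule carries its own plausibility. The combination rule
    # below is an assumption, not the paper's actual heuristic.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        antecedents: tuple      # ground atoms that must already be believed
        consequent: str         # atom derived when all antecedents hold
        plausibility: float     # confidence attached to the rule itself

    def infer(facts, rules, threshold=0.5):
        """Incrementally derive facts until no rule fires above the threshold."""
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if all(a in facts for a in rule.antecedents):
                    score = rule.plausibility * min(facts[a] for a in rule.antecedents)
                    if score > facts.get(rule.consequent, 0.0) and score >= threshold:
                        facts[rule.consequent] = score
                        changed = True
        return facts

    facts = {"bird(tweety)": 0.9}
    rules = [Rule(("bird(tweety)",), "flies(tweety)", 0.8)]
    print(infer(facts, rules))   # {'bird(tweety)': 0.9, 'flies(tweety)': 0.72}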


Beyond ISA: Structures for Plausible Inference in Semantic Networks

AAAI Conferences

We present a method for automatically deriving plausible inference rules from relations in a knowledge base. We describe two empirical studies of these rules. First, we derived approximately 300 plausible inference rules, generated over 3000 specific inferences, and presented them to human subjects to discover which rules were plausible. The second study tested the hypothesis that the plausibility of these rules can be predicted by whether they obey a kind of transitivity. The paper discusses four sources of variance in subjects' judgments, and concludes that relatively little knowledge is needed to achieve moderately accurate predictions of these judgments.
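
A rough Python sketch of the kind of rule derivation described above: candidate inference rules are formed by composing pairs of relations that share a middle term, and compositions of a relation with itself are flagged as the transitivity-like cases hypothesised to be more plausible. The relation names and the toy knowledge base are invented for illustration.

    # Hypothetical sketch: derive candidate plausible inference rules by
    # composing pairs of relation triples that share a middle term, and
    # flag the self-compositions (same relation twice) as the
    # transitivity-based cases. The knowledge base is illustrative only.

    kb = {("part-of", "engine", "car"),
          ("located-in", "car", "garage"),
          ("located-in", "garage", "suburb")}

    def candidate_rules(kb):
        rules = []
        for (r1, a, b) in kb:
            for (r2, b2, c) in kb:
                if b == b2:
                    rules.append(((r1, r2), f"{r1}(x,y) & {r2}(y,z) => {r2}(x,z)"))
        return rules

    for pair, rule in candidate_rules(kb):
        transitive_like = pair[0] == pair[1]
        print(f"{rule}   transitivity-based: {transitive_like}")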


Learning Probabilities: Towards a Logic of Statistical Learning

arXiv.org Artificial Intelligence

We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating to each measure a plausibility number, as a way to go beyond what is known with certainty and represent the agent's beliefs about probability. There are a number of standard choices for such a map: Shannon entropy, centre of mass, etc. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
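
A small numeric Python sketch of the two update types on the marble example. The grid of candidate biases, the entropy-based initial plausibility map, and the multiplicative 'plausibilistic Bayes' update are illustrative assumptions rather than the paper's exact definitions.

    # Hypothetical numeric sketch of the two update types described above.
    import math

    # Candidate probabilities of drawing a red marble (imprecise-probability set).
    candidates = [i / 10 for i in range(1, 10)]

    # Initial plausibility map, e.g. binary entropy of each candidate measure.
    def entropy(p):
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))

    plaus = {p: entropy(p) for p in candidates}

    # (1) Learning by sampling: observing red/not-red rescales plausibility
    #     by the likelihood but leaves the candidate set untouched.
    def observe(plaus, red):
        return {p: w * (p if red else 1 - p) for p, w in plaus.items()}

    # (2) Higher-order information: a linear constraint (e.g. "more red than
    #     non-red", i.e. p > 1/2) shrinks the set, plausibilities unchanged.
    def constrain(plaus, pred):
        return {p: w for p, w in plaus.items() if pred(p)}

    for red in [True, True, False, True, True]:   # sampled draws
        plaus = observe(plaus, red)
    plaus = constrain(plaus, lambda p: p > 0.5)

    belief = max(plaus, key=plaus.get)            # most plausible measure
    print(f"believed bias toward red: {belief}")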


Conditional Plausibility Measures and Bayesian Networks

arXiv.org Artificial Intelligence

A general notion of algebraic conditional plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional plausibility measures. It is shown that the technology of Bayesian networks can be applied to algebraic conditional plausibility measures.
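
A toy sketch of the algebraic idea: a plausibility structure supplies one operation for combining disjoint alternatives and one for chaining conditionals, and different choices recover probability or possibility; a Bayesian-network-style chain rule then uses the chaining operation in place of ordinary multiplication. The class and the two-node example below are illustrative only.

    # Hypothetical sketch: a plausibility algebra pairs oplus (combine
    # disjoint events) with otimes (chain conditionals). Probability and
    # possibility are two instances; a chain-rule style factorisation uses
    # otimes where a Bayesian network would use multiplication.

    from dataclasses import dataclass
    from typing import Callable
    from functools import reduce

    @dataclass
    class PlausibilityAlgebra:
        oplus: Callable[[float, float], float]    # combine disjoint alternatives
        otimes: Callable[[float, float], float]   # chain a conditional with its parent

    probability = PlausibilityAlgebra(oplus=lambda a, b: a + b,
                                      otimes=lambda a, b: a * b)
    possibility = PlausibilityAlgebra(oplus=max,
                                      otimes=min)

    def chain(algebra, conditionals):
        """Chain-rule combination Pl(x1,...,xn) = otimes_i Pl(xi | parents)."""
        return reduce(algebra.otimes, conditionals)

    # Same two-node "network" (rain -> wet grass) under both instances.
    print(chain(probability, [0.5, 0.4]))   # 0.2
    print(chain(possibility, [0.5, 0.4]))   # 0.4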


A Default-Logic Framework for Legal Reasoning in Multiagent Systems

AAAI Conferences

Using law and evidence to achieve fair and accurate decisions in numerous legal cases requires a complex multiagent system. This paper discusses a framework based on many-valued, predicate, default logic that successfully captures legal knowledge, integrates and evaluates expert and non-expert evidence, coordinates agents working on different legal problems, and evolves the knowledge model over time. The graphical syntax and the semantics of this framework allow the automation of key tasks, and the emergence of dynamic structures for integrating human and nonhuman agents. The logical basis of the framework ensures its applicability to knowledge and problem domains of similar complexity to law.
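
A hypothetical Python sketch of a single many-valued default rule in the spirit of the framework: the rule fires when its prerequisite holds to some degree and no exception blocks it, and the conclusion inherits the weakest prerequisite degree. The value scale, predicates, and combination rule are invented for illustration and are not the framework's actual definitions.

    # Hypothetical sketch of one many-valued default rule over legal evidence.
    SCALE = {"no": 0.0, "weak": 0.25, "moderate": 0.5, "strong": 0.75, "certain": 1.0}

    def apply_default(evidence, prerequisite, exception, conclusion):
        """Fire one default rule unless its exception holds; return updated evidence."""
        degree = min(SCALE[evidence.get(p, "no")] for p in prerequisite)
        blocked = SCALE[evidence.get(exception, "no")] > 0.5
        if degree > 0.0 and not blocked:
            # Conclusion takes the scale value nearest the weakest prerequisite.
            evidence[conclusion] = min(SCALE, key=lambda k: abs(SCALE[k] - degree))
        return evidence

    evidence = {"witness_testimony(defendant_present)": "strong",
                "forensic_report(matches_defendant)": "moderate"}
    evidence = apply_default(evidence,
                             prerequisite=("witness_testimony(defendant_present)",
                                           "forensic_report(matches_defendant)"),
                             exception="witness_unreliable",
                             conclusion="fact(defendant_present)")
    print(evidence["fact(defendant_present)"])   # moderate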