A Plausibility-Based Approach to Incremental Inference

AAAI Conferences

Inference techniques play a central role in many cognitive systems. They transform low-level observations of the environment into high-level, actionable knowledge, which is then used by mechanisms that drive action, problem solving, and learning. This paper presents an initial effort at combining results from AI and psychology into a pragmatic and scalable computational reasoning system. Our approach combines a numeric notion of plausibility with first-order logic to produce an incremental inference engine that is guided by heuristics derived from the psychological literature. We illustrate the core ideas with detailed examples and discuss the advantages of the approach with respect to cognitive systems.
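
As a rough illustration of the idea described above (not the authors' engine), the sketch below attaches numeric plausibility scores to facts and rules and propagates them during forward chaining; the single-variable rule format, the product/min combination, and the 0.5 acceptance threshold are all assumptions made for the example.

    # Hypothetical sketch (not the paper's engine): forward chaining over ground
    # facts, where each fact and rule carries a numeric plausibility in [0, 1].
    # A derived fact receives rule_plausibility * min(premise plausibilities);
    # the combination rule and the acceptance threshold are assumptions.

    facts = {("bird", "tweety"): 1.0, ("penguin", "tweety"): 0.7}

    # Each rule: (premise predicates, conclusion predicate, rule plausibility).
    # All premises share one implicit variable X, for simplicity.
    rules = [
        (["bird"], "flies", 0.9),
        (["penguin"], "flies_not", 0.95),
    ]

    THRESHOLD = 0.5

    def step(facts):
        """One incremental pass; returns newly derived (fact, score) pairs."""
        new = {}
        constants = {arg for (_, arg) in facts}
        for premises, conclusion, rule_pl in rules:
            for c in constants:
                scores = [facts.get((p, c)) for p in premises]
                if all(s is not None for s in scores):
                    score = rule_pl * min(scores)
                    key = (conclusion, c)
                    if score >= THRESHOLD and score > facts.get(key, 0.0):
                        new[key] = score
        return new

    while True:
        derived = step(facts)
        if not derived:
            break
        facts.update(derived)

    print(facts)  # adds ('flies', 'tweety') and ('flies_not', 'tweety') with their scores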


Beyond ISA: Structures for Plausible Inference in Semantic Networks

AAAI Conferences

We present a method for automatically deriving plausible inference rules from relations in a knowledge base. We describe two empirical studies of these rules. In the first, we derived approximately 300 plausible inference rules, generated over 3,000 specific inferences from them, and presented these inferences to human subjects to determine which rules were plausible. The second study tested the hypothesis that the plausibility of these rules can be predicted by whether they obey a kind of transitivity. The paper discusses four sources of variance in subjects' judgments, and concludes that relatively little knowledge is needed to achieve moderately accurate predictions of these judgments.
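
A minimal sketch of the rule-derivation step, under assumed data: it composes pairs of relations in a toy knowledge base into candidate rules of the form r1(x,y) & r2(y,z) => r3(x,z), and scores each candidate by how often its conclusion already holds among matching instances, a crude stand-in for the human plausibility judgments collected in the studies.

    # Hypothetical sketch (relations and scoring are illustrative, not the
    # paper's data): propose candidate rules  r1(x, y) & r2(y, z) => r3(x, z)
    # by composing relation pairs, and score each rule by how often its
    # conclusion already holds for the instances that match it.

    from itertools import product

    kb = {
        "part_of":    {("engine", "car"), ("wheel", "car")},
        "located_in": {("car", "garage"), ("engine", "garage"), ("wheel", "garage")},
        "owned_by":   {("car", "alice")},
    }

    def candidate_rules(kb):
        """Yield (r1, r2, r3, support, coverage) for composed relation pairs."""
        for r1, r2, r3 in product(kb, repeat=3):
            matches = [(x, z) for (x, y) in kb[r1] for (y2, z) in kb[r2] if y == y2]
            if not matches:
                continue
            support = sum((x, z) in kb[r3] for (x, z) in matches)
            yield r1, r2, r3, support, len(matches)

    for r1, r2, r3, support, coverage in candidate_rules(kb):
        plausibility = support / coverage
        if plausibility > 0:
            print(f"{r1}(x,y) & {r2}(y,z) => {r3}(x,z)  "
                  f"plausibility ~ {plausibility:.2f} ({support}/{coverage})")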


Conditional Plausibility Measures and Bayesian Networks

arXiv.org Artificial Intelligence

A general notion of algebraic conditional plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional plausibility measures. It is shown that the technology of Bayesian networks can be applied to algebraic conditional plausibility measures.
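
The following sketch illustrates the algebraic reading with invented numbers: probability, possibility measures, and ranking functions each supply an "addition" for combining disjoint alternatives and a "multiplication" for chaining a conditional onto its condition, so a Bayesian-network-style computation of Pl(b) from Pl(a) and Pl(b | a) takes the same form under all three.

    # Hypothetical sketch of the algebraic view (all numbers are made up):
    # each measure supplies oplus (disjoint alternatives) and otimes
    # (chaining conditionals), so the network computation
    #   Pl(b) = oplus over a of  Pl(a) otimes Pl(b | a)
    # has the same shape under every instantiation.

    from dataclasses import dataclass
    from functools import reduce
    from typing import Callable

    @dataclass
    class PlausibilityAlgebra:
        name: str
        oplus: Callable   # combines plausibilities of disjoint alternatives
        otimes: Callable  # chains a conditional onto its condition

    algebras = [
        PlausibilityAlgebra("probability", lambda x, y: x + y, lambda x, y: x * y),
        PlausibilityAlgebra("possibility", max, min),
        PlausibilityAlgebra("ranking",     min, lambda x, y: x + y),
    ]

    # Toy two-node network A -> B: pairs (Pl(a_i), Pl(b | a_i)) over the two
    # values of A, given as probabilities, possibility degrees, and ranks.
    inputs = {
        "probability": [(0.25, 0.5), (0.75, 0.25)],
        "possibility": [(0.5, 0.25), (1.0, 0.75)],
        "ranking":     [(2, 1), (0, 3)],
    }

    for alg in algebras:
        terms = [alg.otimes(pl_a, pl_b_given_a) for pl_a, pl_b_given_a in inputs[alg.name]]
        pl_b = reduce(alg.oplus, terms)
        print(f"{alg.name:12s} Pl(b) = {pl_b}")
    # probability  Pl(b) = 0.3125
    # possibility  Pl(b) = 0.75
    # ranking      Pl(b) = 3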


A Default-Logic Framework for Legal Reasoning in Multiagent Systems

AAAI Conferences

Using law and evidence to achieve fair and accurate decisions in numerous legal cases requires a complex multiagent system. This paper discusses a framework based on many-valued, predicate, default logic that successfully captures legal knowledge, integrates and evaluates expert and non-expert evidence, coordinates agents working on different legal problems, and evolves the knowledge model over time. The graphical syntax and the semantics of this framework allow the automation of key tasks, and the emergence of dynamic structures for integrating human and nonhuman agents. The logical basis of the framework ensures its applicability to knowledge and problem domains of similar complexity to law.
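
As a hedged sketch of how one step of such default reasoning might look (the predicates, rules, and naive consistency check are illustrative, not the paper's framework): a default of the form prerequisite : justification / conclusion fires when its prerequisite is established and its justification is not contradicted by what is already known, so evidence of bias blocks the presumption that expert testimony is reliable.

    # Hypothetical sketch of default-rule evaluation in a legal setting.
    facts = {"expert_testimony(w1)", "contract_signed(d1)", "biased(w1)"}

    # (prerequisite, justification, conclusion)
    defaults = [
        ("expert_testimony(w1)", "not biased(w1)", "reliable(w1)"),
        ("contract_signed(d1)",  "not forged(d1)", "enforceable(d1)"),
    ]

    def consistent(justification, known):
        """Naive consistency check: 'not p' is consistent unless p is known."""
        assert justification.startswith("not ")
        return justification[4:] not in known

    conclusions = set(facts)
    changed = True
    while changed:
        changed = False
        for prereq, justif, concl in defaults:
            if prereq in conclusions and consistent(justif, conclusions) and concl not in conclusions:
                conclusions.add(concl)
                changed = True

    print(conclusions - facts)   # {'enforceable(d1)'}: the biased-expert default is blocked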


First-Order Conditional Logic Revisited

AAAI Conferences

Conditional logics play an important role in recent attempts to investigate default reasoning. This paper investigates first-order conditional logic. We show that, as in first-order probabilistic logic, it is important not to confound statistical conditionals over the domain (such as "most birds fly") with subjective conditionals over possible worlds (such as "I believe that Tweety is unlikely to fly"). We then address the issue of ascribing semantics to first-order conditional logic. As in the propositional case, there are many possible semantics.
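
To make the distinction concrete (with an invented domain and invented weights): a statistical conditional is a proportion over individuals within a single world, while a subjective conditional is a weight over possible worlds concerning a single individual; the sketch below computes one of each.

    # Hypothetical illustration of the two readings of a conditional.

    # One world, many individuals: "most birds fly" as a domain proportion.
    birds = {"tweety": False, "robin1": True, "robin2": True, "sparrow": True}
    statistical = sum(birds.values()) / len(birds)
    print(f"proportion of birds that fly = {statistical:.2f}")   # 0.75: most birds fly

    # Many possible worlds, one individual: belief that Tweety flies, as a
    # weight over worlds; every world may agree on the statistics above.
    worlds = [
        {"weight": 0.2, "tweety_flies": True},
        {"weight": 0.8, "tweety_flies": False},
    ]
    subjective = sum(w["weight"] for w in worlds if w["tweety_flies"])
    print(f"belief that Tweety flies = {subjective:.2f}")        # 0.20: Tweety is unlikely to fly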