Loglinear models for first-order probabilistic reasoning

arXiv.org Artificial Intelligence

Recent work on loglinear models in probabilistic constraint logic programming is applied to first-order probabilistic reasoning. Probabilities are defined directly on the proofs of atomic formulae, and by marginalisation on the atomic formulae themselves. We use Stochastic Logic Programs (SLPs) composed of labelled and unlabelled definite clauses to define the proof probabilities. The result is a conservative extension of first-order reasoning in which, for example, there is a one-to-one mapping between logical and random variables. We show how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data. We also compare the presented framework with other approaches to first-order probabilistic reasoning.
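
As a rough sketch of the construction this abstract describes (the clause names, feature counts, and weights below are invented for illustration, not taken from the paper), a loglinear model over proofs assigns each proof an unnormalised score exp(sum_i lambda_i * f_i(proof)), and an atom's probability is the marginal over its proofs:

```python
import math

# Hypothetical data: each proof of an atom is summarised by feature
# counts f_i(proof), e.g. how often each labelled clause is used.
proofs = {
    "p(a)": [{"c1": 2, "c2": 1}, {"c1": 1}],  # two proofs of p(a)
    "p(b)": [{"c2": 3}],                       # one proof of p(b)
}
weights = {"c1": 0.5, "c2": -0.2}  # illustrative clause labels lambda_i

def proof_score(features):
    # Unnormalised loglinear score: exp(sum_i lambda_i * f_i(proof)).
    return math.exp(sum(weights[f] * n for f, n in features.items()))

# Partition function Z sums scores over all proofs.
Z = sum(proof_score(f) for fs in proofs.values() for f in fs)

# Marginalisation: an atom's probability is the total probability
# mass of its proofs.
atom_prob = {atom: sum(proof_score(f) for f in fs) / Z
             for atom, fs in proofs.items()}
print(atom_prob)
```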


Probability Logic

#artificialintelligence

This chapter presents probability logic as a rationality framework for human reasoning under uncertainty. Selected formal-normative aspects of probability logic are discussed in the light of experimental evidence. Specifically, probability logic is characterized as a generalization of bivalent truth-functional propositional logic ("logic" for short), as connexive, and as nonmonotonic. The chapter discusses selected argument forms and associated uncertainty propagation rules. Throughout the chapter, the descriptive validity of probability logic is compared with that of logic, which served as the gold standard for assessing the rationality of human reasoning in the 20th century.
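
One standard uncertainty propagation rule of the kind mentioned above is probabilistic modus ponens: from P(A) = x and P(B|A) = y, coherence constrains P(B) to the interval [xy, xy + (1 - x)]. A minimal sketch of that rule (our illustration; the function name is invented):

```python
def modus_ponens_bounds(p_a, p_b_given_a):
    """Coherent bounds on P(B) given P(A) and P(B|A).

    Lower bound: B holds at least when A holds and B follows from A.
    Upper bound: add the mass of the worlds where A fails, since
    B is unconstrained there.
    """
    lower = p_a * p_b_given_a
    upper = p_a * p_b_given_a + (1.0 - p_a)
    return lower, upper

# Example: P(A) = 0.9, P(B|A) = 0.8  ->  P(B) in [0.72, 0.82]
print(modus_ponens_bounds(0.9, 0.8))
```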


Towards High-Level Probabilistic Reasoning with Lifted Inference

AAAI Conferences

High-level representations of uncertainty, such as probabilistic logics and programs, have been around for decades. Lifted inference was initially motivated by the need to make reasoning algorithms high-level as well. While the lifted inference community focused on machine learning applications, the high-level reasoning goal has received less attention recently. We revisit the idea and look at the capabilities of the latest techniques in lifted inference. This lets us conclude that lifted inference is strictly more powerful than propositional inference on high-level reasoning tasks.
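
A toy illustration of the gap between lifted and propositional reasoning (our own example, not from the paper): for n exchangeable individuals, the query "at least one is happy" takes one closed-form step at the lifted level, while the propositional route enumerates 2^n ground worlds.

```python
from itertools import product

def prob_at_least_one_propositional(n, p):
    # Ground to n Boolean variables and sum over all 2^n worlds.
    total = 0.0
    for world in product([True, False], repeat=n):
        weight = 1.0
        for v in world:
            weight *= p if v else (1.0 - p)
        if any(world):
            total += weight
    return total

def prob_at_least_one_lifted(n, p):
    # Exploit exchangeability: one formula, constant-time arithmetic.
    return 1.0 - (1.0 - p) ** n

print(prob_at_least_one_propositional(10, 0.1))  # sums 2**10 worlds
print(prob_at_least_one_lifted(10, 0.1))         # same answer, no grounding
```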


Probabilistic and Non-Monotonic Inference

arXiv.org Artificial Intelligence

(1) I have enough evidence to render the sentence S probable.
(1a) So, relative to what I know, it is rational of me to believe S.
(2) Now that I have more evidence, S may no longer be probable.
(2a) So now, relative to what I know, it is not rational of me to believe S.

These seem a perfectly ordinary, common-sense pair of situations. Generally and vaguely, I take them to embody what I shall call probabilistic inference. This form of inference is clearly non-monotonic. Relatively few people have taken this form of inference, based on high probability, to serve as a foundation for non-monotonic logic or for logical or defeasible inference. There are exceptions: Jane Nutter [16] thinks that sometimes probability has something to do with non-monotonic reasoning. Judea Pearl [17] has recently been exploring the possibility. There are any number of people whom one might call probability enthusiasts who feel that probability provides all the answers by itself, with no need of help from logic. Cheeseman [1], Henrion [5] and others think it useful to look at a distribution of probabilities over a whole algebra of statements, to update that distribution in the light of new evidence, and to use the latest updated distribution over the algebra as a basis for planning and decision making. A slightly weaker form of this approach is captured by Nilsson [15], where one assumes certain probabilities for certain statements, and infers the probabilities of, or constraints on the probabilities of, other statements. None of this corresponds to what I call probabilistic inference. All of the inference that is taking place, either in Bayesian updating or in probabilistic logic, is strictly deductive. Deductive inference, particularly that concerned with the distribution of classical probabilities or chances, is of great importance. But this is not to say that there is no important role for what earlier logicians have called "ampliative" or "inductive" or "scientific" inference, in which the conclusion goes beyond the premises and asserts more than the premises do. This depends on what David Israel [6] has called "real rules of inference". It is characteristic of any such logic or inference procedure that it can go wrong: statements accepted at one point may be rejected at a later point.
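
The Nilsson-style probabilistic logic mentioned above can be phrased as linear programming over possible worlds: the assumed probabilities become linear constraints on the world distribution, and the entailed probability of a query is bounded by minimising and maximising its mass. A sketch under that reading (the encoding below is our illustration, not Nilsson's notation):

```python
from scipy.optimize import linprog

# Worlds over atoms A, B: (A,B) in {TT, TF, FT, FF}, probs p1..p4.
# Assumed: P(A) = 0.9, P(A -> B) = P(not-A or B) = 0.8, probs sum to 1.
A_eq = [
    [1, 1, 0, 0],  # P(A)      = p1 + p2
    [1, 0, 1, 1],  # P(A -> B) = p1 + p3 + p4
    [1, 1, 1, 1],  # total probability
]
b_eq = [0.9, 0.8, 1.0]
query = [1, 0, 1, 0]  # P(B) = p1 + p3

lo = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
hi = linprog([-q for q in query], A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print(lo.fun, -hi.fun)  # entailed bounds on P(B): [0.7, 0.8]
```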


Commonsense Interpretation of Triangle Behavior

AAAI Conferences

The ability to infer intentions, emotions, and other unobservable psychological states from people's behavior is a hallmark of human social cognition, and an essential capability for future Artificial Intelligence systems. The commonsense theories of psychology and sociology necessary for such inferences have been a focus of logic-based knowledge representation research, but have been difficult to employ in robust automated reasoning architectures. In this paper we model behavior interpretation as a process of logical abduction, where the reasoning task is to identify the most probable set of assumptions that logically entail the observable behavior of others, given commonsense theories of psychology and sociology. We evaluate our approach using Triangle-COPA, a benchmark suite of 100 challenge problems based on an early social psychology experiment by Fritz Heider and Marianne Simmel. Commonsense knowledge of actions, social relationships, intentions, and emotions is encoded as defeasible axioms in first-order logic. We identify sets of assumptions that logically entail observed behaviors by backchaining with these axioms to a given depth, and order these sets by their joint probability assuming conditional independence. Our approach solves almost all (91) of the 100 questions in Triangle-COPA, and demonstrates a promising approach to robust behavior interpretation that integrates both logical and probabilistic reasoning.
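
A minimal sketch of the scoring step described above (our illustration; the predicate names and priors are invented, not from Triangle-COPA): candidate assumption sets produced by backchaining are ranked by joint probability under conditional independence, i.e. the product of the assumptions' prior probabilities.

```python
import math

# Hypothetical candidate assumption sets, each of which entails the
# observed behaviour after backchaining through defeasible axioms.
candidates = [
    {"angry(BT)", "argueWith(BT, LT)"},
    {"afraid(LT)", "fleeFrom(LT, BT)"},
]

# Hypothetical prior probabilities of the individual assumptions.
prior = {
    "angry(BT)": 0.3, "argueWith(BT, LT)": 0.4,
    "afraid(LT)": 0.2, "fleeFrom(LT, BT)": 0.5,
}

def joint_log_prob(assumptions):
    # Conditional independence: joint probability is the product of
    # priors; summing logs avoids numerical underflow.
    return sum(math.log(prior[a]) for a in assumptions)

best = max(candidates, key=joint_log_prob)
print(best)  # most probable explanation of the observed behaviour
```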