### Technical Perspective: Combining Logic and Probability

A goal of research in artificial intelligence and machine learning since the early days of expert systems has been to develop automated reasoning methods that combine logic and probability. Why combine the two? Probability theory allows one to quantify uncertainty over a set of propositions (ground facts about the world), and a probabilistic reasoning system allows one to infer the probability of unknown (hidden) propositions conditioned on knowledge of other propositions. However, probability theory alone says nothing about how propositions are constructed from relationships over entities or tuples of entities, nor about how general knowledge at the level of relationships is to be represented and applied.
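The kind of propositional inference described above can be sketched by enumeration over a tiny joint distribution. The model below (rain, sprinkler, wet grass, with made-up probabilities) is purely illustrative and not drawn from any system discussed here:

```python
# Minimal sketch: inferring the probability of a hidden proposition
# (rain) conditioned on an observed one (wet grass) by enumerating
# the joint distribution. All numbers are illustrative assumptions.
from itertools import product

P_RAIN = 0.2       # prior probability of the proposition "rain"
P_SPRINKLER = 0.1  # prior probability of the proposition "sprinkler"

def p_wet(rain, sprinkler):
    """Conditional probability of "wet grass" given its causes."""
    if rain and sprinkler:
        return 0.99
    if rain:
        return 0.9
    if sprinkler:
        return 0.8
    return 0.0

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment to all propositions."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER if sprinkler else 1 - P_SPRINKLER
    pw = p_wet(rain, sprinkler)
    return p * (pw if wet else 1 - pw)

# P(rain | wet) = P(rain, wet) / P(wet), summing out the sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)
```

Note that the joint distribution is over ground propositions only; nothing in the calculation captures the general rule "wet things tend to have a cause," which is exactly the relational knowledge that probability theory alone does not represent.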

### SS96-02-019.pdf

Can uncertainty management be realized in a finite totally ordered probability algebra?

### Combinatorics and Probability Coursera

About this course: Counting is one of the basic mathematical tasks we encounter on a day-to-day basis. The main question is the following: if we need to count something, can we do better than listing all the objects one by one? Do we need to create a list of all phone numbers to ensure that there are enough for everyone? Is there a way to tell that our algorithm will run in a reasonable time before implementing and actually running it?
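The phone-number question above illustrates the basic idea: rather than enumerating, multiply the number of choices per digit (the product rule). The format below (seven digits, nonzero first digit) is an assumed example, not one from the course:

```python
# Product-rule counting: count phone numbers without listing them.
# Assumed format: 7 digits, first digit nonzero.
first_digit_choices = 9        # digits 1-9
other_digit_choices = 10       # digits 0-9
n_numbers = first_digit_choices * other_digit_choices ** 6
print(n_numbers)  # 9000000 -- no enumeration needed
```

A constant-time multiplication answers a question that naive enumeration would take millions of steps to settle, which is the point of the course's opening question.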

### Knowledge Engineering Within A Generalized Bayesian Framework

During the ongoing debate over the representation of uncertainty in Artificial Intelligence, Cheeseman, Lemmer, Pearl, and others have argued that probability theory, and in particular Bayesian theory, should be used as the basis for the inference mechanisms of Expert Systems dealing with uncertainty. In order to pursue the issue in a practical setting, sophisticated tools for knowledge engineering are needed that allow flexible and understandable interaction with the underlying knowledge representation schemes. This paper describes a Generalized Bayesian framework for building expert systems that operate in uncertain domains, using algorithms proposed by Lemmer. It is neither rule-based nor frame-based, and so requires a new system of knowledge engineering tools. The framework we describe provides a knowledge-based system architecture with an inference engine, explanation capability, and a unique aid for building consistent knowledge bases.

### Tuning a Bayesian Knowledge Base

For a knowledge-based system that fails to provide the correct answer, it is important to be able to tune the system while minimizing the overall change to the knowledge base. There are a variety of reasons why an answer may be incorrect, ranging from incorrect knowledge to vague or incomplete information. Still, in all these situations, most of the knowledge in the system is typically correct as specified by the expert(s) and/or knowledge engineer(s). In this paper, we propose a method to identify possible changes by understanding the contribution of parameters to the outputs of concern. Our approach is based on Bayesian Knowledge Bases for modeling uncertainty. We start with single-parameter changes and then extend to multiple parameters. To identify the optimal solution that minimizes the change to the model as specified by the domain experts, we define and evaluate the sensitivity values of the results with respect to the parameters. We discuss the computational complexity of determining the solution and show that the problem of multiple-parameter changes can be transformed into linear programming problems, and is thus efficiently solvable. Our work can also be applied to validating the knowledge base, so that the updated model satisfies all test cases collected from the domain experts.
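A rough sketch of the single-parameter case (not the paper's algorithm) can be given for a toy two-variable model: when only one parameter theta varies, the query probability is a linear function a*theta + b, so the sensitivity coefficient and the parameter value needed to hit a target output follow from just two evaluations. All probabilities and the target below are illustrative assumptions:

```python
# Single-parameter sensitivity sketch for a hypothetical two-node model:
#   P(cause) = theta,  P(out | cause) = 0.9,  P(out | ~cause) = 0.2.
# The query P(out) is linear in theta, so P(out) = a*theta + b.
def p_out(theta):
    return theta * 0.9 + (1 - theta) * 0.2

# Recover the linear coefficients from two evaluations.
a = p_out(1.0) - p_out(0.0)   # sensitivity: dP(out)/dtheta
b = p_out(0.0)                # intercept: P(out) when theta = 0

# Smallest change to theta that makes the output hit a target value
# (e.g., a failing test case that should have answered 0.55).
target = 0.55
theta_needed = (target - b) / a
print(a, theta_needed)
```

With multiple parameters this closed form no longer suffices, which is where the paper's reduction to linear programming comes in.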