Knowledge and Uncertainty

arXiv.org Artificial Intelligence

One purpose -- quite a few thinkers would say the main purpose -- of seeking knowledge about the world is to enhance our ability to make good decisions. An item of knowledge that can make no conceivable difference with regard to anything we might do would strike many as frivolous. Whether or not we want to be philosophical pragmatists in this strong sense with regard to everything we might want to enquire about, it seems a perfectly appropriate attitude to adopt toward artificial knowledge systems. If it is granted that we are ultimately concerned with decisions, then some constraints are imposed on our measures of uncertainty at the level of decision making. If our measure of uncertainty is real-valued, it is not hard to show that it must satisfy the classical probability axioms. For example, suppose an act has a real-valued utility U(E) if the event E obtains and the same real-valued utility if the denial of E obtains, so that U(E) = U(¬E). Then the expected utility of that act must be U(E), and that must equal the uncertainty-weighted average of the act's returns, p·U(E) + q·U(¬E), where p and q represent the uncertainty of E and of ¬E respectively. But then we must have p + q = 1.
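Spelling out the closing step (a worked restatement of the argument just given, writing u for the act's constant return):

```latex
% Worked restatement of the closing argument. The act returns utility
% U(E) if E obtains and U(\neg E) otherwise, with U(E) = U(\neg E) = u;
% p and q are the uncertainties of E and \neg E.
\[
  p \cdot U(E) + q \cdot U(\neg E) = p\,u + q\,u = (p + q)\,u.
\]
% The act returns u however the world turns out, so its expected utility
% must itself equal u: (p + q)\,u = u, and hence p + q = 1.
```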



Do We Need Higher-Order Probabilities and, If So, What Do They Mean?

arXiv.org Artificial Intelligence

The apparent failure of individual probabilistic expressions to distinguish uncertainty about truths from uncertainty about probabilistic assessments has prompted researchers to seek formalisms in which the two types of uncertainty are given notational distinction. This paper demonstrates that the desired distinction is already a built-in feature of classical probabilistic models; specialized notations are therefore unnecessary.
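As a concrete illustration of that claim (the coin scenario and all numbers below are my own invention, not from the paper), a classical model can carry the "second-order" uncertainty in an ordinary auxiliary variable:

```python
# A minimal sketch: uncertainty about a probability assessment encoded
# as a classical joint model over (Theta, outcome), with Theta an
# auxiliary variable for the unknown assessment.

# Hypothetical scenario: a coin whose bias Theta is itself uncertain.
# P(Theta = theta) is our uncertainty about the assessment;
# P(Heads | Theta = theta) = theta is the assessment itself.
prior_over_bias = {0.3: 0.5, 0.7: 0.5}  # two candidate biases, equally likely

# The "flat" probability of heads is an ordinary marginalization,
# requiring no special higher-order notation.
p_heads = sum(p_theta * theta for theta, p_theta in prior_over_bias.items())
print(p_heads)  # ~0.5

# Observing heads revises the distribution over Theta by ordinary
# conditioning -- which is where the second-order uncertainty lives.
posterior = {theta: p_theta * theta / p_heads
             for theta, p_theta in prior_over_bias.items()}
print(posterior)  # approximately {0.3: 0.3, 0.7: 0.7}
```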


The Probability of a Possibility: Adding Uncertainty to Default Rules

arXiv.org Artificial Intelligence

We present a semantics for adding uncertainty to conditional logics for default reasoning and belief revision. We are able to treat conditional sentences as statements of conditional probability and to express revision rules such as "If A were believed, then B would be believed to degree p." This method of revision extends conditionalization by allowing meaningful revision by sentences whose probability is zero, achieved through the use of counterfactual probabilities. Thus, our system combines the best properties of qualitative methods of update (in particular, the AGM theory of revision) and of probabilistic methods. We also show how our system can be viewed as a unification of probability theory and possibility theory, highlighting their orthogonality and providing a means of expressing the probability of a possibility. Finally, we demonstrate the connection to Lewis's method of imaging.
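As a rough illustration of the imaging idea mentioned at the end (the worlds, prior, and similarity ordering below are invented for this example; only the mass-shifting rule is Lewis's), note how the update stays well defined even when the revising sentence has probability zero:

```python
# A minimal sketch of Lewis-style imaging on a toy finite model.

worlds = ["w1", "w2", "w3", "w4"]
prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
A = {"w3", "w4"}                      # worlds where the sentence A holds

# closest[w] names the A-world most similar to w (a hypothetical,
# hand-picked similarity ordering for this example).
closest = {"w1": "w3", "w2": "w4", "w3": "w3", "w4": "w4"}

def image_on(prior, closest, A):
    """Shift each world's probability mass to its closest A-world.

    Unlike conditioning, this is well defined even if the prior
    probability of A is zero, which is the point of using imaging
    (or counterfactual probabilities) for revision.
    """
    assert all(c in A for c in closest.values())  # mass must land in A
    posterior = {w: 0.0 for w in prior}
    for w, mass in prior.items():
        posterior[closest[w]] += mass
    return posterior

print(image_on(prior, closest, A))
# approximately {'w1': 0.0, 'w2': 0.0, 'w3': 0.6, 'w4': 0.4}
```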


Learning Probabilities: Towards a Logic of Statistical Learning

arXiv.org Artificial Intelligence

We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating with each measure a plausibility number, as a way to go beyond what is known with certainty and to represent the agent's beliefs about probability. Standard examples of such maps include Shannon entropy and centre of mass. We then consider the learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the form of linear inequalities, e.g. being told that there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule) but leaves the given set of measures unchanged; the second shrinks the set of measures without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
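A minimal sketch of the two kinds of learning described above, with invented numbers (the candidate measures, the multiplicative likelihood reweighting standing in for the paper's 'plausibilistic' Bayes' Rule, and the sample size are all assumptions for illustration):

```python
import random

# Candidate measures for a bag of red/green marbles: each is P(red).
# The plausibility map attaches a number to each measure (uniform start).
measures = [0.2, 0.4, 0.6, 0.8]
plausibility = {m: 1.0 for m in measures}

def learn_sample(colour):
    """Type (1): repeated sampling reweights plausibility by likelihood
    (an analogue of the paper's 'plausibilistic' Bayes' Rule); the set
    of measures itself is unchanged."""
    for m in plausibility:
        plausibility[m] *= m if colour == "red" else (1 - m)

def learn_inequality(pred):
    """Type (2): higher-order information (e.g. 'more red than green',
    i.e. P(red) > 0.5) shrinks the set of measures without altering
    the plausibility of the survivors."""
    for m in [m for m in plausibility if not pred(m)]:
        del plausibility[m]

def belief():
    """Belief: what holds under the most plausible measure."""
    return max(plausibility, key=plausibility.get)

random.seed(0)
true_p = 0.6
for _ in range(200):                        # sample from the true bag
    learn_sample("red" if random.random() < true_p else "green")
print(belief())                             # with high probability: 0.6

learn_inequality(lambda m: m > 0.5)         # "more red than green"
print(sorted(plausibility))                 # [0.6, 0.8]
```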