Qualitative Decision Theory with Sugeno Integrals

arXiv.org Artificial Intelligence

This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called the Sugeno (or fuzzy) integral. Technically speaking, the Sugeno integral is a median, which is indeed a qualitative counterpart to the averaging operation underlying expected utility. The axiomatic justification of Sugeno-integral-based utility is expressed in terms of preference between acts, as in Savage's decision theory. Pessimistic and optimistic qualitative utilities, based on necessity and possibility measures and previously introduced by two of the authors, can be retrieved in this setting by adding appropriate axioms.
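As a concrete illustration (not from the paper), the sup-min form of the Sugeno integral is easy to compute on a finite state set. The Python sketch below sorts states by decreasing utility and takes the max-min of each utility level against the capacity of the top-i states; the possibility-measure capacity, state names, and numeric values are illustrative assumptions.

    def sugeno_integral(f, mu, states):
        # Sugeno integral of act f w.r.t. capacity mu:
        #   max_i min( f(s_(i)), mu({s_(1), ..., s_(i)}) )
        # where states s_(1), s_(2), ... are ordered by decreasing utility f.
        ordered = sorted(states, key=lambda s: f[s], reverse=True)
        best, prefix = 0.0, set()
        for s in ordered:
            prefix.add(s)
            best = max(best, min(f[s], mu(frozenset(prefix))))
        return best

    pi = {"s1": 1.0, "s2": 0.6, "s3": 0.3}               # possibility distribution
    mu = lambda A: max((pi[s] for s in A), default=0.0)  # possibility measure as capacity
    f = {"s1": 0.4, "s2": 0.9, "s3": 0.2}                # act's utilities, same ordinal scale
    print(sugeno_integral(f, mu, pi))                    # -> 0.6

With a possibility measure as the capacity, the result coincides with the optimistic qualitative utility max_s min(pi(s), f(s)), consistent with the paper's claim that this criterion is recovered as a special case.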


On the Foundations of Qualitative Decision Theory

AAAI Conferences

Our main result is a constructive representation theorem in the spirit of Savage's result for expected utility maximization, which uses two choice axioms to characterize the maximin criterion. These axioms characterize agent behaviors that can be modeled compactly using the maximin model and, with some reservations, indicate that maximin is a reasonable decision criterion.
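For readers unfamiliar with it, the maximin criterion evaluates each act by its worst-case utility across states and picks an act whose worst case is best. A minimal Python sketch (act names and payoffs are invented for illustration):

    def maximin_choice(acts):
        # acts: act name -> {state: utility}; return the act whose
        # worst-case utility over states is highest.
        worst = {a: min(u.values()) for a, u in acts.items()}
        return max(worst, key=worst.get)

    acts = {
        "umbrella":    {"rain": 0.7, "sun": 0.5},
        "no_umbrella": {"rain": 0.1, "sun": 1.0},
    }
    print(maximin_choice(acts))   # -> 'umbrella' (worst case 0.5 vs 0.1)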


von Neumann-Morgenstern and Savage Theorems for Causal Decision Making

arXiv.org Artificial Intelligence

Decision making under uncertainty has been well studied when uncertainty can only be considered at the associative level of information. The classical theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rational choice using associative information. Here we take a previous result from Pearl and show that it can be considered a causal version of the von Neumann-Morgenstern theorem; furthermore, we consider the case where the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of Savage's theorem. As applications, we show how previous optimal-action learning methods for causal environments fit within the Causal Savage Theorem we present, demonstrating the utility of our result for the justification and design of learning algorithms. Finally, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our causal decision-making theorem.
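The ranking criterion behind such causal variants can be sketched numerically: an action is scored by its expected utility under the interventional distribution P(outcome | do(action)) rather than the observational conditional. A toy Python example (the distributions and utilities are hypothetical, not taken from the paper):

    def causal_expected_utility(p_do, utility):
        # Expected utility of an action under P(outcome | do(action)).
        return sum(p * utility[y] for y, p in p_do.items())

    utility = {"cured": 1.0, "not_cured": 0.0}
    p_do = {                      # interventional distribution per action
        "treat":    {"cured": 0.8, "not_cured": 0.2},
        "no_treat": {"cured": 0.5, "not_cured": 0.5},
    }
    best = max(p_do, key=lambda a: causal_expected_utility(p_do[a], utility))
    print(best)                   # -> 'treat'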


Decision Principles to Justify Carnap's Updating Method and to Suggest Corrections of Probability Judgments (Invited Talks)

arXiv.org Artificial Intelligence

This paper uses decision-theoretic principles to obtain new insights into the assessment and updating of probabilities. First, a new foundation of Bayesianism is given. It does not require the infinite atomless uncertainties of Savage's classical result, and can therefore be applied to any finite Bayesian network. Nor does it require linear utility, as de Finetti's classical result did, and it therefore allows for empirically and normatively desirable risk aversion. Further, by identifying and fixing utility in an elementary manner, our result can readily be applied to identify methods of probability updating. Thus, a decision-theoretic foundation is given to the computationally efficient method of inductive reasoning developed by Rudolf Carnap. Finally, recent empirical findings on probability assessments are discussed, leading to suggestions for correcting biases in probability assessments, and for an alternative to Dempster-Shafer belief functions that avoids the reduction to degeneracy after multiple updatings.
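For reference, the Carnapian updating the abstract alludes to is the lambda-continuum of inductive methods: with k categories and counts n_i after n observations, the predictive probability of category i is (n_i + lambda/k) / (n + lambda). A small Python sketch (parameter names are ours; lambda = k recovers Laplace's rule of succession):

    def carnap_update(counts, k, lam):
        # Predictive probability of each of k categories after
        # observing counts[i] instances of category i.
        n = sum(counts.values())
        return {i: (counts.get(i, 0) + lam / k) / (n + lam) for i in range(k)}

    # 6 red draws (category 0) and 2 blue draws (category 1), lam = 2:
    print(carnap_update({0: 6, 1: 2}, k=2, lam=2.0))
    # -> {0: 0.7, 1: 0.3}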


Learning Probabilities: Towards a Logic of Statistical Learning

arXiv.org Artificial Intelligence

We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating to each measure a plausibility number, as a way to go beyond what is known with certainty and to represent the agent's beliefs about probability. Standard examples of such plausibility maps include Shannon entropy and centre of mass. We then consider learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule), but leaves the given set of measures unchanged; the second shrinks the set of measures, without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability). We end by sketching the contours of a dynamic doxastic logic for statistical learning.
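A toy Python sketch of the two update types may help fix ideas; the coin-bias grid, the max-normalization of plausibilities, and the particular constraint are our assumptions, not the paper's formalism:

    from math import prod

    # Candidate values of the unknown probability of heads, each with
    # initial plausibility 1:
    plaus = {0.2: 1.0, 0.5: 1.0, 0.8: 1.0}

    def likelihood(theta, outcome):
        return theta if outcome == "H" else 1.0 - theta

    def sample_update(plaus, sample):
        # 'Plausibilistic' Bayes: re-weight each candidate measure's
        # plausibility by the sample's likelihood; the set of measures
        # itself is unchanged.
        new = {t: pl * prod(likelihood(t, o) for o in sample)
               for t, pl in plaus.items()}
        top = max(new.values())
        return {t: v / top for t, v in new.items()}

    def constraint_update(plaus, ok):
        # Higher-order information: shrink the set of measures to those
        # satisfying the constraint, leaving plausibilities unchanged.
        return {t: v for t, v in plaus.items() if ok(t)}

    plaus = sample_update(plaus, "HHHTHHHH")             # (1) repeated sampling
    plaus = constraint_update(plaus, lambda t: t > 0.5)  # (2) higher-order info
    print(max(plaus, key=plaus.get))                     # -> 0.8, the most plausible measure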