
 Wilson, Nic


Towards Fast Algorithms for the Preference Consistency Problem Based on Hierarchical Models

arXiv.org Artificial Intelligence

In this paper, we construct and compare algorithmic approaches to solve the Preference Consistency Problem for preference statements based on hierarchical models. Instances of this problem contain a set of preference statements that are direct comparisons (strict and non-strict) between some alternatives, and a set of evaluation functions by which all alternatives can be rated. An instance is consistent based on hierarchical preference models if there exists a hierarchical model on the evaluation functions that induces an order relation on the alternatives under which all given preference statements are satisfied. Such order relations can be, e.g., comparing alternatives by the values of the evaluation functions lexicographically [15], by Pareto order, by weighted sums [6], based on hierarchical models [16], or by conditional preference structures such as CP-nets [2] and partial lexicographic preference trees [11]. Here, the choice of the order relation can lead to stronger or weaker inferences and can make solving PDP computationally more or less challenging. In a recommender system or in a multi-objective decision making scenario, the user should only be presented with a relatively small number of solutions; hence, a strong order relation is required.
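As a concrete illustration of the kinds of order relations listed above, the following Python sketch compares two alternatives rated by a shared list of evaluation functions, lexicographically, by Pareto order, and by a weighted sum. The function names, the dictionary representation of alternatives, and the assumption that larger evaluations are better are illustrative choices, not taken from the paper.

```python
# Illustrative sketch (not the paper's algorithms): comparing two alternatives
# under three of the order relations mentioned above. We assume each alternative
# is rated by the same list of evaluation functions and that larger values are better.

def ratings(alternative, evaluation_functions):
    """Vector of evaluations of one alternative."""
    return [f(alternative) for f in evaluation_functions]

def lex_prefers(a, b, evaluation_functions):
    """Lexicographic order: the first differing evaluation decides."""
    for fa, fb in zip(ratings(a, evaluation_functions), ratings(b, evaluation_functions)):
        if fa != fb:
            return fa > fb
    return False  # equal on all functions: not strictly preferred

def pareto_prefers(a, b, evaluation_functions):
    """Pareto order: at least as good everywhere, strictly better somewhere."""
    ra, rb = ratings(a, evaluation_functions), ratings(b, evaluation_functions)
    return all(x >= y for x, y in zip(ra, rb)) and any(x > y for x, y in zip(ra, rb))

def weighted_sum_prefers(a, b, evaluation_functions, weights):
    """Weighted-sum order for a fixed (hypothetical) weight vector."""
    ra, rb = ratings(a, evaluation_functions), ratings(b, evaluation_functions)
    return sum(w * x for w, x in zip(weights, ra)) > sum(w * x for w, x in zip(weights, rb))

# Example: alternatives as dictionaries, evaluation functions as projections.
price = lambda alt: -alt["price"]      # cheaper is better
quality = lambda alt: alt["quality"]
fs = [quality, price]
a = {"price": 10, "quality": 8}
b = {"price": 7, "quality": 8}
print(lex_prefers(a, b, fs), pareto_prefers(b, a, fs), weighted_sum_prefers(a, b, fs, [0.5, 0.5]))
```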


Efficient Inference and Computation of Optimal Alternatives for Preference Languages Based On Lexicographic Models

arXiv.org Artificial Intelligence

We analyse preference inference, through consistency, for general preference languages based on lexicographic models. We identify a property, which we call strong compositionality, that applies for many natural kinds of preference statement, and that allows a greedy algorithm for determining consistency of a set of preference statements. We also consider different natural definitions of optimality, and their relations to each other, for general preference languages based on lexicographic models. Based on our framework, we show that testing consistency, and thus inference, is polynomial for a specific preference language LpqT, which allows strict and non-strict statements, comparisons between outcomes and between partial tuples, both ceteris paribus and strong statements, and their combination. Computing different kinds of optimal sets is also shown to be polynomial; this is backed up by our experimental results.
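To make the notion of a lexicographic model concrete, the sketch below represents a model as an ordered list of (variable, local value order) pairs and checks whether it satisfies strict and non-strict comparisons between complete outcomes; a brute-force consistency test then searches over all such models for a tiny instance. This is an assumption-laden illustration of the semantics only; it covers neither partial tuples nor ceteris paribus statements, and it is not the polynomial greedy algorithm of the paper.

```python
# Illustrative sketch only: lexicographic models as an ordered list of
# (variable, preferred-value-order) pairs, and a brute-force consistency check.
# The paper's greedy, polynomial-time algorithm is not reproduced here.
from itertools import permutations, product

def lex_compare(model, o1, o2):
    """Return '>', '<' or '=' between outcomes o1, o2 (dicts variable -> value)."""
    for var, value_order in model:
        r1, r2 = value_order.index(o1[var]), value_order.index(o2[var])
        if r1 != r2:
            return '>' if r1 < r2 else '<'   # earlier in value_order = more preferred
    return '='

def satisfies(model, statements):
    """statements: list of (o1, o2, strict?) meaning o1 is preferred to o2."""
    for o1, o2, strict in statements:
        c = lex_compare(model, o1, o2)
        if strict and c != '>':
            return False
        if not strict and c == '<':
            return False
    return True

def consistent(variables, domains, statements):
    """Brute force: return a lexicographic model satisfying all statements, if any."""
    for var_order in permutations(variables):
        per_var = [list(permutations(domains[v])) for v in var_order]
        for value_orders in product(*per_var):
            model = list(zip(var_order, value_orders))
            if satisfies(model, statements):
                return model
    return None

# Tiny example with two binary variables.
variables, domains = ["X", "Y"], {"X": (0, 1), "Y": (0, 1)}
statements = [({"X": 1, "Y": 0}, {"X": 0, "Y": 1}, True),
              ({"X": 0, "Y": 1}, {"X": 0, "Y": 0}, False)]
print(consistent(variables, domains, statements))
```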


Multi-Objective Influence Diagrams with Possibly Optimal Policies

AAAI Conferences

The formalism of multi-objective influence diagrams has recently been developed for modeling and solving sequential decision problems under uncertainty and multiple objectives. Since utility values representing the decision maker's preferences are only partially ordered (e.g., by the Pareto order), we no longer have a unique maximal value of expected utility, but a set of them. Computing the set of maximal values of expected utility and the corresponding policies can be computationally very challenging. In this paper, we consider alternative notions of optimality, one of the most important being the notion of possibly optimal, namely optimal in at least one scenario compatible with the inter-objective tradeoffs. We develop a variable elimination algorithm for computing the set of possibly optimal expected utility values, prove formally its correctness, and compare variants of the algorithm experimentally.
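Under the Pareto order, the set of maximal values mentioned here is simply the set of undominated expected-utility vectors. A minimal sketch of that filtering step is given below; it is only the "keep the maximal values" operation, not the paper's variable elimination algorithm.

```python
# Minimal sketch: Pareto-undominated filtering of expected-utility vectors.
# This is only the "keep the maximal values" step, not the paper's
# variable elimination algorithm for multi-objective influence diagrams.

def pareto_dominates(u, v):
    """u dominates v: at least as good in every objective, strictly better in one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    """Return the undominated subset of a list of utility vectors."""
    return [u for u in vectors
            if not any(pareto_dominates(v, u) for v in vectors if v != u)]

print(pareto_maximal([(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]))
# -> [(3, 1), (2, 2), (1, 3)]
```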


Computing Possibly Optimal Solutions for Multi-Objective Constraint Optimisation with Tradeoffs

AAAI Conferences

Computing the set of optimal solutions for a multi-objective constraint optimisation problem can be computationally very challenging. Also, when solutions are only partially ordered, there can be a number of different natural notions of optimality, one of the most important being the notion of Possibly Optimal, i.e., optimal in at least one scenario compatible with the inter-objective tradeoffs. We develop an AND/OR Branch-and-Bound algorithm for computing the set of Possibly Optimal solutions, and compare variants of the algorithm experimentally.
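To illustrate the notion of Possibly Optimal, as opposed to merely Pareto-maximal, the hypothetical sketch below treats each "scenario compatible with the tradeoffs" as a weight vector over the objectives and keeps a solution if it maximises the weighted sum in at least one of a finite set of such scenarios. The finite scenario set and the linear scalarisation are simplifying assumptions; the paper works with tradeoff-induced preference relations and an AND/OR Branch-and-Bound search, which this does not reproduce.

```python
# Hypothetical illustration of "possibly optimal": keep a solution if it is best
# in at least one scenario, where each scenario is modelled here as a weight
# vector over the objectives (a simplifying assumption, not the paper's setup).

def possibly_optimal(solutions, scenarios):
    """solutions: dict name -> utility vector; scenarios: list of weight vectors."""
    kept = set()
    for w in scenarios:
        best = max(solutions, key=lambda s: sum(wi * ui for wi, ui in zip(w, solutions[s])))
        kept.add(best)
    return kept

solutions = {"A": (3, 1), "B": (2.5, 2.5), "C": (1, 3), "D": (2, 1)}
scenarios = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]   # three sampled tradeoff scenarios
print(possibly_optimal(solutions, scenarios))      # D is never best in any scenario
```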


Quality of Geographic Information: Ontological approach and Artificial Intelligence Tools

arXiv.org Artificial Intelligence

The objective is to present one important aspect of the European IST-FET project "REV!GIS": the methodology that has been developed for translating (interpreting) the quality of the data into "fitness for use" information that can be confronted with the user's needs in their application. This methodology is based upon the notion of "ontologies" as a conceptual framework able to capture the explicit and implicit knowledge involved in the application. We do not address the general problem of formalizing such ontologies; instead, we illustrate the methodology with three applications that are particular cases of the more general "data fusion" problem. In each application, we show how to deploy our methodology by comparing several possible solutions, and we try to highlight where the quality issues lie and what kind of solution to favour, even at the expense of a highly complex computational approach. The expectation of the REV!GIS project is that computationally tractable solutions will be available among the next generation of AI tools.


The Computational Complexity of Dominance and Consistency in CP-Nets

arXiv.org Artificial Intelligence

We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance has been determined for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show here that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concept of strong dominance, dominance equivalence and dominance incomparability, and several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas.
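For readers unfamiliar with CP-nets, the sketch below encodes a small binary CP-net as conditional preference tables and decides dominance by breadth-first search over improving flipping sequences. The naive search is exponential in general, which is consistent with the PSPACE-completeness result above; the example CP-net and representation are assumptions for illustration only.

```python
# Illustrative sketch: dominance in a small binary CP-net via breadth-first
# search over improving flipping sequences. Exponential in general (the paper
# shows the decision problem is PSPACE-complete); meant only to show semantics.
from collections import deque

# A CP-net: for each variable, its parents and a conditional preference table
# mapping parent assignments to the preferred value (binary domains {0, 1}).
cpnet = {
    "A": {"parents": (), "cpt": {(): 1}},                 # A=1 preferred unconditionally
    "B": {"parents": ("A",), "cpt": {(1,): 1, (0,): 0}},  # prefer B to match A
}

def improving_flips(outcome):
    """All outcomes reachable from `outcome` by one improving flip."""
    for var, spec in cpnet.items():
        context = tuple(outcome[p] for p in spec["parents"])
        preferred = spec["cpt"][context]
        if outcome[var] != preferred:
            better = dict(outcome)
            better[var] = preferred
            yield better

def dominates(o1, o2):
    """True iff o1 is reachable from o2 by a sequence of improving flips."""
    start, goal = tuple(sorted(o2.items())), tuple(sorted(o1.items()))
    seen, queue = {start}, deque([start])
    while queue:
        current = dict(queue.popleft())
        for nxt in improving_flips(current):
            key = tuple(sorted(nxt.items()))
            if key == goal:
                return True
            if key not in seen:
                seen.add(key)
                queue.append(key)
    return False

print(dominates({"A": 1, "B": 1}, {"A": 0, "B": 0}))   # True: flip A, then B
```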


Rules, Belief Functions and Default Logic

arXiv.org Artificial Intelligence

This paper describes a natural framework for rules, based on belief functions, which includes a representation of numerical rules, default rules, and rules that do and do not allow contraposition. In particular, it justifies the use of Dempster-Shafer theory for representing a particular class of rules, the belief calculated being a lower probability given certain independence assumptions on an underlying space. It shows how a belief function framework can be generalised to other logics, including a general Monte-Carlo algorithm for calculating belief, and how a version of Reiter's Default Logic can be seen as a limiting case of a belief function formalism.


A Monte-Carlo Algorithm for Dempster-Shafer Belief

arXiv.org Artificial Intelligence

A very computationally efficient Monte-Carlo algorithm for the calculation of Dempster-Shafer belief is described. If Bel is the combination using Dempster's Rule of belief functions Bel_1, ..., Bel_m, then, for a subset b of the frame Θ, Bel(b) can be calculated in time linear in |Θ| and m (given that the weight of conflict is bounded). The algorithm can also be used to improve the complexity of the Shenoy-Shafer algorithms on Markov trees, and be generalised to calculate Dempster-Shafer belief over other logics.
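The following is a minimal sketch of the kind of Monte-Carlo estimation referred to above: sample one focal element from each belief function according to its mass, intersect them, reject trials with an empty intersection (conflict), and return the fraction of surviving trials whose intersection lies inside b. The rejection step is why the running time depends on the weight of conflict being bounded. The data representation (focal sets as Python frozensets) and the simple rejection loop are assumptions made for illustration.

```python
import random

# Minimal sketch of Monte-Carlo estimation of combined Dempster-Shafer belief.
# Each belief function is given by its mass function: {focal set (frozenset): mass}.

def sample_focal(mass_function):
    """Sample one focal element with probability proportional to its mass."""
    focals, masses = zip(*mass_function.items())
    return random.choices(focals, weights=masses, k=1)[0]

def mc_belief(mass_functions, b, trials=10000):
    """Estimate Bel(b) of the Dempster combination of the given belief functions."""
    hits = accepted = 0
    while accepted < trials:
        intersection = frozenset.intersection(*(sample_focal(m) for m in mass_functions))
        if not intersection:          # conflicting sample: reject and resample
            continue
        accepted += 1
        if intersection <= b:
            hits += 1
    return hits / trials

# Example on frame {x, y, z} with two simple belief functions.
m1 = {frozenset({"x", "y"}): 0.7, frozenset({"x", "y", "z"}): 0.3}
m2 = {frozenset({"x"}): 0.6, frozenset({"x", "y", "z"}): 0.4}
print(mc_belief([m1, m2], frozenset({"x", "y"})))   # close to the exact value 0.88
```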


The Assumptions Behind Dempster's Rule

arXiv.org Artificial Intelligence

This paper examines the concept of a combination rule for belief functions. It is shown that two fairly simple and apparently reasonable assumptions determine Dempster's rule, giving a new justification for it.
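For reference, here is a minimal sketch of the rule itself, in the same mass-function representation as in the previous sketch: combine two mass functions by intersecting focal elements, discard the mass assigned to the empty set (the conflict), and renormalise. The representation is an assumption for illustration.

```python
# Minimal sketch of Dempster's rule for two mass functions given as
# {focal set (frozenset): mass}. Conflicting mass (empty intersections) is
# removed and the remainder renormalised.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}

m1 = {frozenset({"x", "y"}): 0.7, frozenset({"x", "y", "z"}): 0.3}
m2 = {frozenset({"y", "z"}): 0.5, frozenset({"x", "y", "z"}): 0.5}
print(dempster_combine(m1, m2))
```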


Order-of-Magnitude Influence Diagrams

arXiv.org Artificial Intelligence

In this paper, we develop a qualitative theory of influence diagrams that can be used to model and solve sequential decision making tasks when only qualitative (or imprecise) information is available. Our approach is based on an order-of-magnitude approximation of both probabilities and utilities and allows for specifying partially ordered preferences via sets of utility values. We also propose a dedicated variable elimination algorithm that can be applied for solving order-of-magnitude influence diagrams.
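The order-of-magnitude approximation alluded to here is in the spirit of kappa-calculus, where an order of magnitude is an integer exponent, combination of independent pieces of information adds exponents, and marginalisation takes the minimum. The toy sketch below shows these two operations on small kappa tables; it is a generic kappa-style illustration under those assumptions, not the paper's variable elimination algorithm for order-of-magnitude influence diagrams.

```python
# Toy kappa-style (order-of-magnitude) operations: combination adds integer
# "degrees of surprise" and marginalisation takes the minimum. Generic
# illustration only, not the paper's algorithm.

# Kappa tables over variables, keyed by assignments (tuples of (var, value) pairs).
def combine(k1, k2):
    """Pointwise combination of two kappa tables over disjoint variables: add kappas."""
    return {a1 + a2: v1 + v2 for a1, v1 in k1.items() for a2, v2 in k2.items()}

def marginalise_out(kappa, var):
    """Eliminate `var` by taking the minimum kappa over its values."""
    out = {}
    for assignment, value in kappa.items():
        reduced = tuple(pair for pair in assignment if pair[0] != var)
        out[reduced] = min(out.get(reduced, float("inf")), value)
    return out

k_x = {(("X", 0),): 0, (("X", 1),): 2}            # X=0 unsurprising, X=1 surprising
k_y = {(("Y", 0),): 1, (("Y", 1),): 0}
joint = combine(k_x, k_y)
print(marginalise_out(joint, "X"))                 # {(('Y', 0),): 1, (('Y', 1),): 0}
```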