Goto



Mechanism Design for Scheduling with Uncertain Execution Time

AAAI Conferences

We study the problem where a task (or multiple unrelated tasks) must be executed, there are multiple machines/agents that can potentially perform the task, and our objective is to minimize the expected sum of the agents' processing times. Each agent does not know exactly how long it will take him to finish the task; he only knows the distribution from which this time is drawn. These times are independent across agents and the distributions fulfill the monotone hazard rate condition. Agents are selfish and will lie about their distributions if this increases their expected utility. We study different variations of the Vickrey mechanism that take as input the agents' reported distributions and the players' realized running times and that output a schedule that minimizes the expected sum of processing times, as well as payments that make it an ex-post equilibrium for the agents to both truthfully report their distributions and exert full effort to complete the task. We devise the ChPE mechanism, which is uniquely tailored to our problem and has many desirable properties, including not rewarding agents that fail to finish the task and guaranteeing non-negative payments.
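
The ChPE mechanism itself is more involved than the abstract can convey. As a loose illustration of the general shape of such Vickrey-style mechanisms, the hypothetical sketch below assigns the task to the agent with the lowest expected processing time under the reported distributions and pins its payment to the runner-up's expectation; the exponential reports, sample sizes, and the `vickrey_style_assignment` helper are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def expected_time(samples):
    """Estimate an agent's expected processing time from samples of its reported distribution."""
    return float(np.mean(samples))

def vickrey_style_assignment(reported):
    """Toy second-price rule: give the task to the agent with the lowest expected
    processing time and base its payment on the runner-up's expectation, so that
    misreporting one's own distribution cannot improve expected utility."""
    expectations = {agent: expected_time(s) for agent, s in reported.items()}
    ranked = sorted(expectations, key=expectations.get)
    winner, runner_up = ranked[0], ranked[1]
    return winner, expectations[runner_up]

# Exponential running times satisfy the monotone hazard rate condition assumed in the paper.
rng = np.random.default_rng(0)
reports = {name: rng.exponential(scale=mean, size=10_000)
           for name, mean in {"A": 2.0, "B": 3.5, "C": 1.5}.items()}
print(vickrey_style_assignment(reports))  # agent 'C' wins; payment tied to A's expectation (~2.0)
```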


Controlling the Precision-Recall Tradeoff in Differential Dependency Network Analysis

arXiv.org Machine Learning

Graphical models have gained a lot of attention recently as a tool for learning and representing dependencies among variables in multivariate data. Often, domain scientists are looking specifically for differences among the dependency networks of different conditions or populations (e.g. differences between regulatory networks of different species, or differences between dependency networks of diseased versus healthy populations). The standard method for finding these differences is to learn the dependency networks for each condition independently and compare them. We show that this approach is prone to high false discovery rates (low precision) that can render the analysis useless. We then show that by imposing a bias towards learning similar dependency networks for each condition the false discovery rates can be reduced to acceptable levels, at the cost of finding a reduced number of differences. Algorithms developed in the transfer learning literature can be used to vary the strength of the imposed similarity bias and provide a natural mechanism to smoothly adjust this differential precision-recall tradeoff to cater to the requirements of the analysis conducted. We present real case studies (oncological and neurological) where domain experts use the proposed technique to extract useful differential networks that shed light on the biological processes involved in cancer and brain function.
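
As a point of reference for the pipeline the abstract critiques, the sketch below implements the naive "learn each condition independently and compare" baseline with scikit-learn's GraphicalLasso; the paper's remedy would replace this step with jointly estimated, similarity-biased networks. The `alpha` value, the threshold, the synthetic data, and the helper names are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def network_edges(X, alpha=0.1, tol=1e-4):
    """Sparse dependency network for one condition: non-zero off-diagonal entries
    of the estimated precision matrix mark conditional dependencies."""
    prec = GraphicalLasso(alpha=alpha).fit(X).precision_
    edges = np.abs(prec) > tol
    np.fill_diagonal(edges, False)
    return edges

def naive_differential_edges(X_case, X_control, alpha=0.1):
    """The independent-estimation baseline: edges present in one condition's
    network but not the other. The paper replaces this step with similarity-biased
    joint estimation to control false discoveries."""
    return network_edges(X_case, alpha) ^ network_edges(X_control, alpha)

# Both conditions share the same true network; any flagged difference is a false discovery.
rng = np.random.default_rng(0)
latent, mix = rng.normal(size=(200, 3)), rng.normal(size=(3, 8))
X1 = latent @ mix + 0.5 * rng.normal(size=(200, 8))
X2 = latent @ mix + 0.5 * rng.normal(size=(200, 8))
print(int(naive_differential_edges(X1, X2).sum()), "edges flagged by the naive comparison")
```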


The Role of Tuning Uncertain Inference Systems

arXiv.org Artificial Intelligence

This study examined the effects of "tuning" the parameters of the incremental function of MYCIN, the independent function of PROSPECTOR, a probability model that assumes independence, and a simple additive linear equation. The parameters of each of these models were optimized to provide solutions which most nearly approximated those from a full probability model for a large set of simple networks. Surprisingly, MYCIN, PROSPECTOR, and the linear equation performed equivalently; the independence model was clearly more accurate on the networks studied.
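
For concreteness, here is a minimal sketch of the two combining functions being tuned, assuming the standard positive-certainty-factor form of MYCIN's incremental function and the odds-likelihood form of PROSPECTOR's independence update; the specific numbers are made up.

```python
def mycin_combine(cf1, cf2):
    """MYCIN's incremental combining function for two non-negative certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

def prospector_update(prior_odds, likelihood_ratios):
    """PROSPECTOR-style update under independence: multiply the prior odds by the
    likelihood ratio of each piece of evidence, then convert back to a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

print(mycin_combine(0.6, 0.4))          # 0.76
print(prospector_update(0.25, [3, 2]))  # prior odds 0.25 (P = 0.2) -> posterior P = 0.6
```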


Objection-Based Causal Networks

arXiv.org Artificial Intelligence

This paper introduces the notion of objection-based causal networks, which resemble probabilistic causal networks except that they are quantified using objections. An objection is a logical sentence and denotes a condition under which a causal dependency does not exist. Objection-based causal networks enjoy almost all the properties that make probabilistic causal networks popular, with the added advantage that objections are arguably more intuitive than probabilities.
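
A hypothetical encoding of the idea, assuming objections are represented as predicates over world assignments attached to causal edges; the edge names and predicates below are invented for illustration.

```python
# Each causal edge carries an objection: a condition under which the dependency does not exist.
edges = {
    ("rain", "wet_grass"): lambda w: w.get("grass_covered", False),    # objection: the grass is covered
    ("sprinkler", "wet_grass"): lambda w: w.get("water_cut_off", False),
}

def active_dependencies(world):
    """Return the causal dependencies whose objections do not hold in this world."""
    return [edge for edge, objection in edges.items() if not objection(world)]

print(active_dependencies({"grass_covered": True}))  # only sprinkler -> wet_grass remains active
```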


Causal Modeling

arXiv.org Artificial Intelligence

Causal Models are like Dependency Graphs and Belief Nets in that they provide a structure and a set of assumptions from which a joint distribution can, in principle, be computed. Unlike Dependency Graphs, Causal Models are models of hierarchical and/or parallel processes, rather than models of distributions (partially) known to a model builder through some sort of gestalt. As such, Causal Models are more modular, easier to build, more intuitive, and easier to understand than Dependency Graph Models. Causal Models are formally defined and Dependency Graph Models are shown to be a special case of them. Algorithms supporting inference are presented. Parsimonious methods for eliciting dependent probabilities are presented.
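
A minimal sketch of what "a structure and a set of assumptions from which a joint distribution can, in principle, be computed" can look like operationally: each variable is a function of its parents plus local randomness, and repeated forward sampling induces the joint distribution. The toy sprinkler-style variables and probabilities are assumptions for illustration, not the paper's formal definition.

```python
import random

def sample_causal_model():
    """One forward pass through a toy causal model: each variable is computed from
    its parents plus local randomness, so repeated sampling yields the joint distribution."""
    season_is_dry = random.random() < 0.5
    sprinkler_on = season_is_dry and random.random() < 0.7
    rain = (not season_is_dry) and random.random() < 0.6
    wet_grass = sprinkler_on or rain
    return {"dry": season_is_dry, "sprinkler": sprinkler_on, "rain": rain, "wet": wet_grass}

samples = [sample_causal_model() for _ in range(10_000)]
print(sum(s["wet"] for s in samples) / len(samples))  # Monte Carlo estimate of P(wet grass)
```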


Three Approaches to Probability Model Selection

arXiv.org Artificial Intelligence

This paper compares three approaches to the problem of selecting among probability models to fit data: (1) use of statistical criteria such as Akaike's information criterion and Schwarz's "Bayesian information criterion," (2) maximization of the posterior probability of the model, and (3) maximization of an effectiveness ratio trading off accuracy and computational cost. The unifying characteristic of the approaches is that all can be viewed as maximizing a penalized likelihood function. The second approach with suitable prior distributions has been shown to reduce to the first. This paper shows that the third approach reduces to the second for a particular form of the effectiveness ratio, and illustrates all three approaches with the problem of selecting the number of components in a mixture of Gaussian distributions. Unlike the first two approaches, the third can be used even when the candidate models are chosen for computational efficiency, without regard to physical interpretation, so that the likelihood and the prior distribution over models cannot be interpreted literally. As the most general and computationally oriented of the approaches, it is especially useful for artificial intelligence applications.
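
A minimal sketch of approach (1) on the paper's running example, assuming scikit-learn's GaussianMixture and synthetic two-component data; the AIC/BIC scores are the penalized likelihoods that approaches (2) and (3) reinterpret with different penalty terms.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (300, 1)), rng.normal(2, 0.5, (300, 1))])  # two true components

# Approach (1): penalized-likelihood criteria; lower AIC/BIC indicates a better model.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.aic(X), 1), round(gm.bic(X), 1))
```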


Support and Plausibility Degrees in Generalized Functional Models

arXiv.org Artificial Intelligence

By discussing several examples, the theory of generalized functional models is shown to be very natural for modeling some situations of reasoning under uncertainty. A generalized functional model is a pair (f, P) where f is a function describing the interactions between a parameter variable, an observation variable and a random source, and P is a probability distribution for the random source. Unlike traditional functional models, generalized functional models do not require that there is only one value of the parameter variable that is compatible with an observation and a realization of the random source. As a consequence, the results of the analysis of a generalized functional model are not expressed in terms of probability distributions but rather by support and plausibility functions. The analysis of a generalized functional model is very logical and is inspired by ideas already put forward by R.A. Fisher in his theory of fiducial probability.
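
A toy sketch of the analysis, assuming a deliberately chosen f so that an observation and a source value can be compatible with several parameter values; the support of a hypothesis counts source values that force it, while plausibility counts those merely compatible with it. The function, parameter space, and source distribution are invented for illustration.

```python
# Toy generalized functional model: observation = f(parameter, source), where a
# single observation can be compatible with more than one parameter value.
def f(theta, omega):
    return 1 if theta + omega >= 2 else 0

thetas = [0, 1, 2]                  # parameter space
source_probs = {0: 0.6, 1: 0.4}     # distribution P of the random source

def support_and_plausibility(observation, hypothesis):
    """Support: mass of source values that force theta into the hypothesis.
    Plausibility: mass of source values merely compatible with it.
    Both are renormalised over source values not ruled out by the observation."""
    support = plausibility = total = 0.0
    for omega, p in source_probs.items():
        compatible = {t for t in thetas if f(t, omega) == observation}
        if not compatible:          # this source value contradicts the observation
            continue
        total += p
        if compatible <= hypothesis:
            support += p
        if compatible & hypothesis:
            plausibility += p
    return support / total, plausibility / total

print(support_and_plausibility(observation=1, hypothesis={2}))  # (0.6, 1.0)
```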


Developing Parallel Dependency Graph In Improving Game Balancing

arXiv.org Artificial Intelligence

The dependency graph is a data architecture that models all the dependencies between the different types of assets in the game. It depicts the dependency-based relationships between the assets of a game. For example, a player must construct an arsenal before he can build weapons. It is vital that the dependency graph of a game is designed logically to ensure a logical sequence of game play. However, a merely logical dependency graph is not sufficient to sustain the players' enduring interest in a game, which brings the problem of game balancing into the picture. The issue of game balancing arises when players feel they have no chance of winning the game against AI opponents who are more skillful in game play. In the current state of research, the architecture of the dependency graph is monolithic for all players. The sequence of asset possession is always foreseeable because there is only a single dependency graph. Game balancing is impossible when the assets of AI players overwhelmingly outnumber those of human players. This paper proposes a parallel architecture of dependency graphs for AI players and human players. Instead of a single dependency graph, a parallel architecture is proposed in which the dependency graph of the AI player can be adjusted relative to that of the human player using a support dependency as a game-balancing mechanism. This paper shows that the parallel dependency graph helps to improve game balancing.
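
A hypothetical sketch of the parallel-graph idea, with each dependency graph encoded as an asset-to-prerequisites map and a support dependency used as the balancing knob that slows the stronger side down; the asset names and the `add_support_dependency` helper are assumptions, not the paper's implementation.

```python
# Parallel dependency graphs: one per side. An asset is buildable once all of its
# prerequisites are owned.
human_graph = {"arsenal": set(), "weapon": {"arsenal"}}
ai_graph    = {"arsenal": set(), "weapon": {"arsenal"}}

def add_support_dependency(graph, asset, extra_prereq):
    """Balancing knob: attach an extra prerequisite to an asset in one side's graph only."""
    graph.setdefault(extra_prereq, set())
    graph[asset] = graph[asset] | {extra_prereq}

def buildable(graph, owned):
    """Assets that can be built next given the assets already owned."""
    return {a for a, prereqs in graph.items() if prereqs <= owned and a not in owned}

add_support_dependency(ai_graph, "weapon", "supply_depot")  # handicap the stronger AI player
print(buildable(human_graph, {"arsenal"}))  # {'weapon'}
print(buildable(ai_graph, {"arsenal"}))     # {'supply_depot'}: the AI must build a support asset first
```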


A Calculus for Causal Relevance

arXiv.org Artificial Intelligence

This paper presents a sound and complete calculus for causal relevance, based on Pearl's functional models semantics. The calculus consists of axioms and rules of inference for reasoning about causal relevance relationships. We extend the set of known axioms for causal relevance with three new axioms, and introduce two new rules of inference for reasoning about specific subclasses of models. These subclasses give a more refined characterization of causal models than the one given in Halpern's axiomatization of counterfactual reasoning. Finally, we show how the calculus for causal relevance can be used in the task of identifying causal structure from non-observational data.
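
As a concrete anchor for the underlying semantics (not the paper's axiomatic calculus), the sketch below brute-forces causal irrelevance in a tiny functional model: x is causally irrelevant to a target variable if no intervention on x changes the target in any context. The model and variable names are invented for illustration.

```python
# Tiny functional (structural) model over binary variables.
def model(x, z):
    y = z            # y is a function of z only
    w = x and z      # w is a function of both x and z
    return {"y": y, "w": w}

def x_irrelevant_to(target):
    """Brute-force check in the spirit of functional-model semantics: x is causally
    irrelevant to `target` if intervening on x never changes `target`, in any context z."""
    for z in (False, True):
        if len({model(x, z)[target] for x in (False, True)}) > 1:
            return False
    return True

print(x_irrelevant_to("y"))  # True: no intervention on x ever moves y
print(x_irrelevant_to("w"))  # False: when z is true, setting x changes w
```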


Bayesian Mixture Models for Frequent Itemset Discovery

arXiv.org Machine Learning

In binary-transaction data-mining, traditional frequent itemset mining often produces results which are not straightforward to interpret. To overcome this problem, probability models are often used to produce more compact and conclusive results, albeit with some loss of accuracy. Bayesian statistics have been widely used in the development of probability models in machine learning in recent years and these methods have many advantages, including their abilities to avoid overfitting. In this paper, we develop two Bayesian mixture models with the Dirichlet distribution prior and the Dirichlet process (DP) prior to improve the previous non-Bayesian mixture model developed for transaction dataset mining. We implement the inference of both mixture models using two methods: a collapsed Gibbs sampling scheme and a variational approximation algorithm. Experiments on several benchmark problems have shown that both mixture models achieve better performance than a non-Bayesian mixture model. The variational algorithm is the faster of the two approaches, while the Gibbs sampling method achieves more accurate results. The Dirichlet process mixture model can automatically grow to a proper complexity for a better approximation. Once the model is built, it can be very fast to query and run analysis on (typically 10 times faster than Eclat, as we will show in the experiment section). However, these approaches also show that mixture models underestimate the probabilities of frequent itemsets. Consequently, these models have a higher sensitivity but a lower specificity.
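
For the "fast to query" claim, here is a minimal sketch of how a fitted mixture answers itemset queries, assuming items are independent within each component so an itemset's support has a closed form; the mixture weights and item probabilities below stand in for parameters that the paper's Gibbs or variational inference would produce.

```python
import numpy as np

# Hypothetical parameters of a fitted Bernoulli mixture over 5 items:
# mixture weights pi[k] and per-component item probabilities theta[k, i].
pi = np.array([0.6, 0.4])
theta = np.array([[0.9, 0.8, 0.1, 0.2, 0.3],
                  [0.1, 0.2, 0.7, 0.9, 0.4]])

def itemset_probability(items):
    """Estimated support of an itemset: items are independent within a component,
    so P(itemset) = sum_k pi_k * prod_{i in itemset} theta_{k,i}.
    This closed form is why querying the fitted model is fast."""
    return float(pi @ theta[:, list(items)].prod(axis=1))

print(itemset_probability({0, 1}))  # estimated support of the itemset {0, 1}: 0.44
```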