SS99-01-023.pdf

AAAI Conferences

Rule induction methods are classified into two categories: induction of deterministic rules and induction of probabilistic ones (Michalski 1986; Pawlak 1991; Tsumoto and Tanaka 1996). While deterministic rules are supported only by positive examples, probabilistic ones are supported by a large number of positive examples and a small number of negative examples. That is, both kinds of rules positively select one decision if a case satisfies their conditional parts. However, domain experts use not only positive reasoning but also negative reasoning, since a domain is not always deterministic. For example, when a patient does not have a headache, migraine should not be suspected: negative reasoning plays an important role in cutting the search space of a differential diagnosis (Tsumoto and Tanaka 1996).[1] Therefore, negative rules should be induced from databases in order to obtain rules which will be easier for domain experts to interpret.

[1] The essential point is that if extracted patterns do not reflect experts' reasoning process, domain experts have difficulties in interpreting them. Without interpretation by domain experts, a discovery procedure would not proceed, which also means that the interaction between human experts and computers is indispensable to computer-assisted discovery.
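To make the two kinds of rule support concrete, here is a minimal Python sketch using the accuracy and coverage measures common in this line of rule-induction work; the toy dataset, attribute names, and function signatures are purely illustrative assumptions, not taken from the paper.

```python
def accuracy(cases, condition, decision):
    """Fraction of cases matching the condition that have the decision
    (support for a positive rule: condition => decision)."""
    matched = [c for c in cases if condition(c)]
    if not matched:
        return 0.0
    return sum(c["diagnosis"] == decision for c in matched) / len(matched)

def coverage(cases, condition, decision):
    """Fraction of cases with the decision that match the condition.
    Coverage 1.0 supports a negative (exclusion) rule:
    not condition => not decision."""
    positives = [c for c in cases if c["diagnosis"] == decision]
    if not positives:
        return 0.0
    return sum(condition(c) for c in positives) / len(positives)

# Hypothetical toy database of cases.
cases = [
    {"headache": True,  "diagnosis": "migraine"},
    {"headache": True,  "diagnosis": "migraine"},
    {"headache": True,  "diagnosis": "tension"},
    {"headache": False, "diagnosis": "common_cold"},
]
has_headache = lambda c: c["headache"]

# Probabilistic positive rule: headache => migraine (accuracy 2/3).
print(accuracy(cases, has_headache, "migraine"))

# Every migraine case has a headache (coverage 1.0), so absence of
# headache excludes migraine -- the negative reasoning in the text.
print(coverage(cases, has_headache, "migraine"))
```

The example mirrors the abstract's migraine case: the positive rule is only probabilistically supported, while the coverage of 1.0 is what licenses the exclusion rule used to prune a differential diagnosis.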


Overview of Udacity Artificial Intelligence Engineer Nanodegree, Term 1

#artificialintelligence

After finishing the Udacity Deep Learning Foundation, I felt that I had gotten a good introduction to Deep Learning, but to really understand things, I had to dig deeper. Besides, I had guaranteed admission to the Self-Driving Car Engineer, Artificial Intelligence, or Robotics Nanodegree programs.


A Generalization of the Chow-Liu Algorithm and its Application to Statistical Learning

arXiv.org Artificial Intelligence

We extend the Chow-Liu algorithm to general random variables, whereas previous versions considered only the finite case. In particular, this paper applies the generalization to Suzuki's learning algorithm, which generates forests rather than trees from data based on the minimum description length principle, balancing the fitness of the forest to the data against the simplicity of the forest. As a result, we obtain an algorithm that works when both Gaussian and finite random variables are present.
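As a rough illustration of the underlying idea, the following Python sketch builds a Chow-Liu forest over finite-valued data: Kruskal's algorithm on empirical mutual-information edge weights, where an edge is kept only if its weight exceeds a description-length penalty. The penalty form used here is a generic MDL-style assumption for illustration, not the paper's exact criterion, and the sketch covers only the finite case, not the Gaussian one.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete columns."""
    mi = 0.0
    for vx in np.unique(x):
        for vy in np.unique(y):
            pxy = np.mean((x == vx) & (y == vy))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == vx) * np.mean(y == vy)))
    return mi

def chow_liu_forest(data, penalty=0.0):
    """Kruskal's algorithm on MI weights; with penalty > 0, an edge is
    added only if its MI gain exceeds the penalty, yielding a forest
    (the Suzuki-style MDL trade-off) instead of a spanning tree."""
    n_samples, n_vars = data.shape
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):                       # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    forest = []
    for w, i, j in edges:
        if w <= penalty:
            break                      # weaker edges cannot pay their description cost
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j, w))
    return forest

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 500)
b = a ^ (rng.random(500) < 0.1)        # strongly dependent on a
c = rng.integers(0, 2, 500)            # independent noise
data = np.column_stack([a, b, c])
# Generic MDL-style penalty per binary-binary edge: log(n) / (2n).
print(chow_liu_forest(data, penalty=np.log(500) / (2 * 500)))
```

With the penalty set to zero this reduces to the classical Chow-Liu spanning tree; the positive penalty is what lets the dependent pair (a, b) survive while the edges to the noise variable are dropped.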



Using More Reasoning to Improve #SAT Solving

AAAI Conferences

Many real-world problems, including inference in Bayes Nets, can be reduced to #SAT, the problem of counting the number of models of a propositional theory. This has motivated the need for efficient #SAT solvers. Currently, such solvers utilize a modified version of DPLL that employs decomposition and caching, techniques that significantly increase the time it takes to process each node in the search space. In addition, the search space is significantly larger than when solving SAT, since we must continue searching even after the first solution has been found. It has previously been demonstrated that the size of a DPLL search tree can be significantly reduced by doing more reasoning at each node. However, for SAT the reductions gained are often not worth the extra time required. In this paper we verify the hypothesis that for #SAT this balance changes. In particular, we show that additional reasoning can reduce the size of a #SAT solver's search space, that this reduction cannot always be achieved by the already utilized technique of clause learning, and that this additional reasoning can be cost-effective.
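For readers unfamiliar with the setting, here is a toy Python sketch of DPLL-style model counting with unit propagation and component decomposition, two of the ingredients the abstract mentions; caching and clause learning are omitted for brevity, and the clause representation (tuples of DIMACS-style signed integers) is an illustrative choice of ours, not the paper's.

```python
def simplify(clauses, lit):
    """Assign lit True: drop clauses containing lit, remove -lit from
    the rest. Returns None if some clause becomes empty (conflict)."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = tuple(l for l in c if l != -lit)
        if not reduced:
            return None
        out.append(reduced)
    return out

def components(clauses):
    """Group clauses into variable-disjoint components; the model
    counts of the components multiply (the decomposition step)."""
    groups = []
    for c in clauses:
        cvars = {abs(l) for l in c}
        merged, kept = [c], []
        for g_clauses, g_vars in groups:
            if g_vars & cvars:
                merged += g_clauses
                cvars |= g_vars
            else:
                kept.append((g_clauses, g_vars))
        groups = kept + [(merged, cvars)]
    return [g for g, _ in groups]

def count(clauses, variables):
    """Number of assignments to `variables` satisfying all clauses."""
    if clauses is None:
        return 0
    if not clauses:
        return 2 ** len(variables)           # remaining vars are free
    for c in clauses:                        # unit propagation
        if len(c) == 1:
            return count(simplify(clauses, c[0]), variables - {abs(c[0])})
    comps = components(clauses)
    if len(comps) > 1:                       # decomposition
        total, used = 1, set()
        for comp in comps:
            cvars = {abs(l) for c in comp for l in c}
            used |= cvars
            total *= count(comp, cvars)
        return total * 2 ** len(variables - used)
    v = abs(clauses[0][0])                   # branch on both values
    return (count(simplify(clauses, v), variables - {v})
            + count(simplify(clauses, -v), variables - {v}))

# (x1 or x2) and (x3 or not x4): two independent components,
# 3 * 3 = 9 models out of 2**4 = 16 assignments.
print(count([(1, 2), (3, -4)], {1, 2, 3, 4}))
```

The sketch also shows why counting is harder than SAT: where a SAT solver could stop at the first satisfying branch, `count` must explore both branches at every decision, which is exactly what makes the per-node reasoning trade-off the paper studies play out differently for #SAT.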