FS94-02-044.pdf

AAAI Conferences

Even though most diagnosticians perform and record only a small fraction of the possible tests when reaching a classification, most learning systems are designed to work best when their training data consists of completely specified attribute-value tuples. This abstract presents learning algorithms designed to handle exactly the partially specified instances that such diagnosticians produce. We show, in particular, that it is easy to "PAC learn" decision trees in this model. We also present extensions to the underlying model, and suggest corresponding algorithms, to incrementally modify a given initial decision tree, and provide a framework for discussing various types of "noise": cases where the blocker can stochastically omit required values or include superfluous values.
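As a rough sketch of this instance model (ours, not the authors' algorithm), the Python fragment below simulates a blocker that stochastically omits attribute values from a fully specified instance and occasionally injects superfluous ones. The probabilities and names such as omit_prob are illustrative assumptions.

    import random

    def block(instance, omit_prob=0.3, noise_prob=0.05, noise_pool=None):
        """Simulate a blocker: hide some attribute values and
        occasionally add superfluous attribute-value pairs."""
        blocked = {}
        for attr, value in instance.items():
            if random.random() >= omit_prob:   # keep value with prob 1 - omit_prob
                blocked[attr] = value
        for attr, value in (noise_pool or []):
            if random.random() < noise_prob:   # superfluous entry
                blocked[attr] = value
        return blocked

    # A fully specified instance; the learner only ever sees blocked views.
    example = {"fever": "high", "cough": "yes", "rash": "no"}
    print(block(example, noise_pool=[("nausea", "yes")]))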


Nonmonotonic Logic for Analogical Reasoning

AAAI Conferences

Ascribing the capability of analogical reasoning to computers is one of the long-term goals of artificial intelligence research. The method of reasoning by analogy has been widely used in theorem proving, problem solving, machine learning, planning, pattern recognition, and other fields; see Hall [6] for a survey.


FS94-02-042.pdf

AAAI Conferences

Eric Neufeld, Department of Computer Science, University of Saskatchewan, Saskatoon, SK, S7N 0W0, eric@spr.usask.ca

1 Overview

In "Selecting relevant information and delaying irrelevant data for objects recognition", Pun et al. use a notion of comparative relevance to obtain a fast practical solution to the computer vision problem of recognizing objects from a set of image primitives.


Selecting relevant information and delaying irrelevant data for objects recognition

AAAI Conferences

Purposive grouping for recovering missing data

The recognition phase operates on images containing new instances of learnt objects. To access the indexing table, the presence of the complex tokens T_k must be hypothesized and their relevance p(T_k) measured.
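One plausible reading of this scheme, sketched below: hypothesized complex tokens index candidate objects, and each token's vote is weighted by its measured relevance p(T_k). The indexing table, token names, and weighting rule here are illustrative assumptions, not the paper's actual implementation.

    from collections import defaultdict

    # Hypothetical indexing table: complex token -> candidate objects it indexes.
    index = {
        "parallel-pair": {"book", "monitor"},
        "ellipse": {"cup", "clock"},
    }

    def recognize(hypothesized_tokens, relevance):
        """Rank candidate objects by relevance-weighted token votes."""
        votes = defaultdict(float)
        for token in hypothesized_tokens:
            for obj in index.get(token, ()):
                votes[obj] += relevance.get(token, 0.0)  # weight vote by p(T_k)
        return sorted(votes, key=votes.get, reverse=True)

    # New image: an ellipse token was hypothesized with measured relevance 0.8.
    print(recognize(["ellipse"], {"ellipse": 0.8}))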


Dynamic-Bias Induction

AAAI Conferences

Strong bias reduces the complexity of induction by reducing the number of alternative hypotheses considered. Bias that applies to a narrow class of learning tasks is free to include a narrow space of inductive hypotheses. Bias associated with a narrow space of hypotheses is, by definition, strong bias, and thus results in high learning performance. In addition to high learning performance, it is important for an inductive learner to apply to a broad class of learning tasks. To be an effective tool in a given domain, an induction system must be capable of solving a range of learning tasks within that domain.
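A back-of-the-envelope illustration of how much a strong bias can shrink the hypothesis space (our own example, not from the paper): over n boolean attributes an unbiased learner faces 2^(2^n) candidate concepts, while a strong bias admitting only pure conjunctions leaves just 3^n, since each attribute is either required true, required false, or ignored.

    def hypothesis_space_sizes(n):
        """Compare an unbiased space of boolean concepts over n attributes
        with the strongly biased space of pure conjunctions."""
        unbiased = 2 ** (2 ** n)      # every boolean function is a hypothesis
        conjunctions = 3 ** n         # each attribute: required, forbidden, ignored
        return unbiased, conjunctions

    for n in (3, 5, 8):
        u, c = hypothesis_space_sizes(n)
        print(f"n={n}: unbiased={u:.3e} vs conjunctions={c}")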


Discarding irrelevant parameters in hidden Markov model based part-of-speech taggers

AAAI Conferences

A binary comparative definition of relevance, suggested by empirical results, gives a performance theory of relevance for hidden Markov models (HMMs) that makes it possible to reduce the total number of parameters in the model while improving its overall performance in a specific application domain. Generalizations of this view of relevance are meaningful in many AI subareas. Another view of this result is that there are at least two kinds of relevance: knowledge of high quality is more relevant to a conclusion than low-quality knowledge, and specific knowledge is more relevant than general knowledge. This work argues that one kind can only be had at the expense of the other.

1 A relevance typology

When is knowledge "relevant"?
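The abstract does not spell out the comparative relevance test itself, so the sketch below substitutes a simple probability threshold as a stand-in for it: transition parameters judged irrelevant are zeroed out and each row of the transition matrix is renormalized so it remains a distribution. The threshold and matrix are illustrative assumptions.

    import numpy as np

    def prune_transitions(A, threshold=1e-3):
        """Discard transition parameters judged irrelevant (here: below a
        probability threshold, a stand-in for the paper's comparative test),
        then renormalize each row so it remains a distribution."""
        A = np.where(A < threshold, 0.0, A)
        row_sums = A.sum(axis=1, keepdims=True)
        return np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)

    A = np.array([[0.9000, 0.0005, 0.0995],
                  [0.3000, 0.6500, 0.0500],
                  [0.0008, 0.4992, 0.5000]])
    print(prune_transitions(A))   # tiny entries removed, rows still sum to 1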


Discussion of: Forget It!

AAAI Conferences

The authors present an intuitive and well-motivated model-theoretic definition of forgetting a sentence (or set of sentences). They give a syntactic method for constructing the theory that results from forgetting a sentence, and describe several properties of their definition.

Forgetting

There are two questions about the authors' treatment of forgetting:

1. Although the definition given for the notion of forgetting is rather intuitive, are there any desiderata for a definition of forgetting? How does the notion of forgetting differ from other notions investigated in the context of belief revision? In particular, how does forgetting differ from belief erasure [Katsuno and Mendelzon, 1991] and belief contraction [Alchourron et al., 1985]?

2. Clearly, one motivation for investigating the notion of forgetting is to obtain a theory with which it is easier to reason.


Forget It!

AAAI Conferences

Introduction

We encountered the need for a theory of forgetting in our work on Cognitive Robotics. In our effort to control an autonomous robot, we imagine the robot has a knowledge base representing the initial world state. As the robot performs actions, she needs to bring the knowledge base up to date. To do so, she needs to forget all the facts that are no longer true, and in such a way that this will not affect any of her possible future actions (Lin and Reiter [3; 3]). This paper describes in general terms the particular forms of forgetting used in (Lin and Reiter [3; 3]). Specifically, we propose a logical theory to account for: forgetting about a fact (forget that John is a student), and forgetting about a relation (forget the student relation). We then apply our notion of forgetting in defining various notions of relevance.
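For forgetting a ground atom p in the propositional case, the standard syntactic rendering of this idea (stated here for concreteness; the abstract itself gives no formulas) is

    \mathit{forget}(T, p) \;\equiv\; T[p/\top] \,\vee\, T[p/\bot]

that is, the disjunction of the theory with p replaced by true and with p replaced by false. The models of forget(T, p) are then exactly the interpretations that agree with some model of T everywhere except possibly on p.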


A Proof-Theoretic Foundation

AAAI Conferences

Control of reasoning is a major issue in scaling up problem solvers that use declarative representations, since inference slows down significantly as the size of the knowledge base (KB) increases. A key factor in this slowdown is the inference engine's search through parts of the KB that are irrelevant to the query at hand. The ability of a system to ignore irrelevant information is therefore key to scaling up AI systems to large and complex domains. To address this problem we have developed a general framework for analyzing irrelevance, and specific algorithms for efficiently detecting irrelevant portions of a knowledge base [Levy, 1993].
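As a minimal sketch of this idea (not Levy's actual algorithms), one can restrict inference to the clauses whose head predicates are reachable from the query in a head-to-body dependency graph; clauses outside that set cannot participate in any derivation of the query and can be ignored. The clause representation and names below are illustrative assumptions.

    from collections import defaultdict

    def relevant_clauses(clauses, query_pred):
        """Keep only clauses whose head predicate is reachable from the
        query predicate in the head -> body dependency graph; the rest
        are irrelevant to this query."""
        depends = defaultdict(set)            # head predicate -> body predicates
        for head, body in clauses:
            depends[head].update(body)
        reachable, frontier = set(), {query_pred}
        while frontier:
            pred = frontier.pop()
            reachable.add(pred)
            frontier |= depends[pred] - reachable
        return [(h, b) for h, b in clauses if h in reachable]

    kb = [("ancestor", ["parent"]), ("ancestor", ["parent", "ancestor"]),
          ("parent", []), ("sibling", ["parent"])]
    print(relevant_clauses(kb, "ancestor"))   # the sibling clause is dropped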