Most models of categorization learn categories defined by characteristic features, but some categories are described more naturally in terms of relations. We present a generative model that helps to explain how relational categories are learned and used. Our model learns abstract schemata that specify the relational similarities shared by instances of a category, and our emphasis on abstraction departs from previous theoretical proposals that focus instead on comparison of concrete instances. Our first experiment suggests that abstraction can help to explain some of the findings that have previously been used to support comparison-based approaches. Our second experiment focuses on one-shot schema learning, a problem that raises challenges for comparison-based approaches but is handled naturally by our abstraction-based account.
There is an increasing need to derive semantics from real-world observations to facilitate natural information sharing between machine and human. Conceptual spaces theory is a possible approach and has been proposed as a mid-level representation between symbolic and sub-symbolic representations, whereby concepts are represented in a geometrical space that is characterised by a number of quality dimensions. Currently, much of the work has demonstrated how conceptual spaces are created in a knowledge-driven manner, relying on prior knowledge to form concepts and identify quality dimensions. This paper presents a method to create semantic representations using data-driven conceptual spaces, which are then used to derive linguistic descriptions of numerical data. Our contribution is a principled approach to automatically construct a conceptual space from a set of known observations wherein the quality dimensions and domains are not known a priori. The novelty of the approach is the ability to select and group semantic features to discriminate between concepts in a data-driven manner while preserving the semantic interpretation that is needed to infer linguistic descriptions for interaction with humans. Two data sets representing leaf images and time series signals are used to evaluate the method. An empirical evaluation for each case study assesses how well linguistic descriptions generated from the conceptual spaces identify unknown observations. Furthermore, comparisons are made with descriptions derived from alternative approaches for generating semantic models.
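The core geometric idea above — concepts as regions in a space of quality dimensions, with unknown observations described by the nearest concept — can be illustrated with a minimal sketch. The prototype (centroid) representation, the toy leaf dimensions, and all names here are our own illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def learn_prototypes(observations, labels):
    """Summarise each concept's region by the centroid of its labelled points."""
    prototypes = {}
    for concept in set(labels):
        points = np.array([o for o, l in zip(observations, labels) if l == concept])
        prototypes[concept] = points.mean(axis=0)
    return prototypes

def describe(observation, prototypes):
    """Describe an unknown observation by its nearest concept (Euclidean distance)."""
    obs = np.asarray(observation)
    return min(prototypes, key=lambda c: np.linalg.norm(obs - prototypes[c]))

# Toy quality dimensions: (length, width) of a leaf.
obs = [(8.0, 2.0), (7.5, 2.2), (3.0, 2.8), (3.2, 3.0)]
labels = ["elongated", "elongated", "round", "round"]
protos = learn_prototypes(obs, labels)
print(describe((7.8, 2.1), protos))  # nearest concept region
```

A data-driven construction would additionally have to discover which measured features form coherent quality dimensions; this sketch assumes they are given.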
Rather than to the whole world, *discovery systems* apply to limited application domains, with the intent to discover useful *domain models* and *domain theory*. In empirical discovery, the application domain becomes known by *data*, from which a *discovery process* attempts to generate *new knowledge*.

Page 464, AAAI-94 Workshop on Knowledge Discovery in Databases (KDD-94)

OBJECT (entity, unit, case) is a member or a part of an *application domain* (*universe*). Objects can belong to different classes of similar objects, such as persons, transactions, locations, events, and processes. Objects possess *attributes* and *relationships* to other objects. ATTRIBUTE (field, variable) characterizes a single aspect of *objects* of an object class.
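The OBJECT and ATTRIBUTE definitions above can be rendered as a minimal data-structure sketch. The class and field names are ours, chosen only to mirror the glossary wording; they are not part of the glossary itself.

```python
from dataclasses import dataclass, field

@dataclass
class DomainObject:
    """An OBJECT: a member of an application domain, belonging to a class of
    similar objects, possessing attributes and relationships to other objects."""
    object_class: str                                 # e.g. "person", "transaction"
    attributes: dict = field(default_factory=dict)    # ATTRIBUTE name -> value
    relations: list = field(default_factory=list)     # (relation name, other object)

alice = DomainObject("person", {"age": 34})
acme = DomainObject("company", {"sector": "retail"})
alice.relations.append(("works_for", acme))

print(alice.attributes["age"])   # an attribute characterises a single aspect
print(alice.relations[0][0])     # a relationship to another object
```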
We present an implemented approach for domain-restricted question answering from structured knowledge sources, based on robust semantic analysis in a hybrid NLP system architecture. We build on a lexical-semantic conceptual structure for question interpretation, which is interfaced with domain-specific concepts and properties in a structured knowledge base. Question interpretation involves a limited amount of domain-specific inferences and accounts for quantificational questions. We extract so-called proto queries from the linguistic representation, which provide partial constraints for answer extraction from the underlying knowledge sources. The search queries we construct from proto queries effectively constitute minimum spanning trees that restrict the possible answer candidates. Our approach naturally extends to multilingual question answering and has been developed as a prototype system for two application domains: the domain of Nobel prize winners and the domain of Language Technology, on the basis of the large ontology underlying the information portal LT World.
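The minimum-spanning-tree idea above can be sketched as follows: given the concept nodes a proto query mentions and weighted links between them in the knowledge base, a spanning tree of minimum total weight connects all required concepts and thereby restricts the answer candidates. The graph, node names, and weights below are invented for illustration; the tree is computed with Kruskal's algorithm, which the paper does not necessarily use.

```python
def minimum_spanning_tree(nodes, edges):
    """Kruskal's algorithm. edges: (weight, u, v) triples; returns tree edges."""
    parent = {n: n for n in nodes}

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):        # cheapest links first
        ru, rv = find(u), find(v)
        if ru != rv:                     # keep the edge only if it joins components
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Hypothetical concepts from a Nobel-prize proto query, with invented link costs.
nodes = ["laureate", "prize", "year", "field"]
edges = [(1, "laureate", "prize"), (2, "prize", "year"),
         (3, "laureate", "year"), (1, "prize", "field")]
print(minimum_spanning_tree(nodes, edges))  # three edges span the four concepts
```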