
Techniques and Methodology

AI Magazine

Editor's Note: AI workers have claimed for some time [...] A partial evaluator is an interpreter that, given only partial information about a program's inputs, produces a specialized version of the program that exploits that partial information. A similar example is described in more detail in Kahn (1982b). Programming methodology in AI shares much with general programming methodology but differs in significant ways. An AI researcher typically does not understand the problem being programmed very well; an essential aspect of a very common style of doing AI research is to write programs in order to understand something better.
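To make the idea concrete, here is a minimal Python sketch of partial evaluation (not taken from the article): a specializer that, given the `power` function's exponent in advance, emits a residual function with the known input folded away. The names `power` and `specialize_power` are illustrative only.

    def power(base, exponent):
        """General program: computes base ** exponent by repeated multiplication."""
        result = 1
        for _ in range(exponent):
            result *= base
        return result

    def specialize_power(exponent):
        """Toy partial evaluator for `power`, given only the exponent.

        The loop over the known input is unrolled at specialization time,
        leaving a residual program that depends only on the unknown input.
        """
        body = " * ".join(["base"] * exponent) if exponent > 0 else "1"
        source = f"def power_specialized(base):\n    return {body}\n"
        namespace = {}
        exec(source, namespace)          # materialize the specialized function
        return namespace["power_specialized"]

    cube = specialize_power(3)           # residual program: base * base * base
    assert cube(5) == power(5, 3) == 125

The residual function does the same work as the general one for that exponent, but with the interpretation of the known input already performed, which is exactly the payoff the abstract describes.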






Concurrent Logic Programming, Metaprogramming, and Open Systems

AI Magazine

An informal workshop on concurrent logic programming, metaprogramming, and open systems was held at Xerox Palo Alto Research Center (PARC) on 8-9 September 1987 with support from the American Association for Artificial Intelligence. The 50 workshop participants came from the Japanese Fifth Generation Project (ICOT), the Weizmann Institute of Science in Israel, Imperial College in London, the Swedish Institute of Computer Science, Stanford University, the Massachusetts Institute of Technology (MIT), Carnegie Mellon University (CMU), Caltech, Science University of Tokyo, Melbourne University, Calgary University, University of Wisconsin, Case Western Reserve, University of Oregon, Korea Advanced Institute of Science and Technology (KAIST), Quintus, Symbolics, IBM, and Xerox PARC. No proceedings were generated; instead, participants distributed copies of drafts, slides, and recent papers. A shared vision emerged from the morning session, with concurrent logic programming fulfilling the role that C and assembly language do today. Languages such as Flat Concurrent Prolog and Guarded Horn Clauses are seen as general-purpose parallel machine languages and as interface languages between hardware and software, not, as a newcomer to the field might expect, as high-level AI problem-solving languages.
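For readers unfamiliar with these languages, the canonical small example is nondeterministic stream merge (merge/3 in Flat Concurrent Prolog or Guarded Horn Clauses). The sketch below is only a rough Python analogy using threads and a shared queue, not the actual clause notation; the names `producer` and `merge_consumer` are invented for illustration.

    import threading, queue

    def producer(name, items, out):
        # A process that emits a stream of messages, then closes its end.
        for x in items:
            out.put((name, x))
        out.put((name, None))              # end-of-stream marker

    def merge_consumer(out, n_streams):
        # Committed-choice merge: consume whichever message arrives first,
        # regardless of which stream produced it (order is nondeterministic).
        finished, merged = 0, []
        while finished < n_streams:
            name, x = out.get()
            if x is None:
                finished += 1
            else:
                merged.append((name, x))
        return merged

    channel = queue.Queue()                # shared communication channel
    threads = [
        threading.Thread(target=producer, args=("evens", [0, 2, 4], channel)),
        threading.Thread(target=producer, args=("odds", [1, 3, 5], channel)),
    ]
    for t in threads:
        t.start()
    result = merge_consumer(channel, n_streams=2)
    for t in threads:
        t.join()
    print(result)                          # interleaving may differ run to run

The point of the analogy is the level of abstraction: like C or assembler, the concurrent logic languages describe communicating processes and synchronization directly, rather than high-level problem solving.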


FOCA: A Methodology for Ontology Evaluation

arXiv.org Artificial Intelligence

Modeling an ontology is a hard and time-consuming task. Although methodologies help ontologists create good ontologies, they do not help with the task of evaluating the quality of an ontology to be reused. For these reasons, it is imperative to evaluate the quality of an ontology after constructing it or before reusing it. The few existing studies usually present only a set of criteria and questions, with no guidelines for evaluating the ontology. The effort required to evaluate an ontology is very high, since there is a heavy dependence on the evaluator's expertise to understand the criteria and questions in depth; moreover, the evaluation remains very subjective. This study presents a novel methodology for ontology evaluation, based on three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodology are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to its type. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) a step-by-step approach to evaluating the quality of an ontology; ii) an evaluation based on the roles of knowledge representations; iii) an explicit differentiation of the evaluation according to the type of ontology; iv) a questionnaire to evaluate the ontologies; v) a statistical model that automatically calculates the quality of the ontologies.
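To give a flavor of how a Goal, Question, Metric evaluation can be operationalized, here is a small Python sketch. The goals, questions, 0-100 scale, and plain averaging below are placeholders chosen for illustration; they are not FOCA's actual questionnaire or its statistical quality model.

    from statistics import mean

    # Illustrative GQM-style structure: each goal is refined into questions,
    # and question-level answers are aggregated into metrics per goal.
    GOALS = {
        "consistency":  ["Are the axioms free of contradictions?",
                         "Do class definitions follow the modeling conventions?"],
        "completeness": ["Does the ontology cover the competency questions?",
                         "Are all relevant domain terms represented?"],
        "conciseness":  ["Is the ontology free of redundant axioms?"],
    }

    def evaluate(answers):
        """Aggregate per-question answers (0-100) into per-goal and overall scores.

        `answers` maps each question to the evaluator's score.
        """
        goal_scores = {
            goal: mean(answers[q] for q in questions)
            for goal, questions in GOALS.items()
        }
        overall = mean(goal_scores.values())
        return goal_scores, overall

    example_answers = {q: 80 for qs in GOALS.values() for q in qs}
    per_goal, total = evaluate(example_answers)
    print(per_goal, total)

The value of making the aggregation explicit, as FOCA's guidelines and statistical model aim to do, is that two evaluators answering the same questionnaire obtain comparable scores instead of purely subjective judgments.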