Information Technology
Artificial Intelligence and Brain-Theory Research at Computer and Information Science Department, University of Massachusetts
Our program in AI is part of the larger departmental focal area of cybernetics, which integrates both AI and brain theory (BT). Our research also draws upon a new and expanding interdepartmental program in cognitive science that brings together researchers in cybernetics, linguistics, philosophy, and psychology. This interdisciplinary approach to AI has already led to a number of fruitful collaborations in the areas of cooperative computation, learning, natural language parsing, and vision.
Reflections on the ARPA Experience
When I returned to Stanford last summer after a two-year leave of absence, serving as a program manager at the Defense Advanced Research Projects Agency, I was frequently asked about that experience. It was a superb experience, for many reasons. As a program manager I had a near-perfect vantage point from which to view the entire field of Artificial Intelligence. Not only did I become better acquainted with the most creative and active people in the field, I was also personally kept up to date on their latest research. ARPA is not just a place to go to provide a public service, but is really a central node in the research network for collecting and integrating results and disseminating them to the broader community: government, industry, and the public at large. Moreover, it was my responsibility to identify new avenues of research and/or applications of research, coupled with the resources (limited, but real) to make these new activities happen -- a unique opportunity.
What Is the Well-Dressed AI Educator Wearing Now?
A funny thing happened to me at IJCAI-81. I went to a panel on "Education in AI" and stepped back into an argument that I had thought settled several years ago. The debate was between the "scruffies," led by Roger Schank and Ed Feigenbaum, and the "neats," led by Nils Nilsson. The neats argued that no education in AI was complete without a strong theoretical component, containing, for instance, courses on predicate logic and automata theory. The scruffies maintained that such a theoretical component was not only unnecessary, but harmful.
Artificial Intelligence: Engineering, Science, or Slogan?
This paper presents the view that artificial intelligence (AI) is primarily concerned with propositional languages for representing knowledge and with techniques for manipulating these representations. In this respect, AI is analogous to applied mathematics, whose representations and techniques are applied in a variety of other subject areas. Typically, AI research is (or should be) more concerned with the general form and properties of representational languages and methods than with the content being described by these languages. Notable exceptions involve "commonsense" knowledge about the everyday world (no other specialty claims this subject area as its own) and metaknowledge (knowledge about the properties of knowledge itself). In these areas AI is concerned with content as well as form. We also observe that the technology that seems to underlie peripheral sensory and motor activities (analogous to low-level animal or human vision and muscle control) appears to be quite different from the technology underlying cognitive reasoning and problem solving. Some definitions of AI would include peripheral as well as cognitive processes; here we argue against including the peripheral processes.
Editorial
This issue of AI Magazine is the first for which I have the privilege and the responsibility of serving as Editor. Lee Erman (whom you can blame for this event) asked me to serve as Editor under the assumptions that I was interested in the dissemination of interesting and informative material and that, having just left the hectic world of research management at ARPA, I would have copious amounts of free time on my hands (false). I accepted the offer mainly because it's ... extent with the Newsletter, particularly with respect to technical articles, which one now finds in both publications. As the magazine matures, however, I expect to see it develop its own distinctive style. We'll see. A condition under which I accepted Lee's offer was that I could enlist a group of Associate Editors who could help dig out interesting and informative material, and keep the magazine broad in scope.
High-Road and Low-Road Programs
Consider a class of computing problem for which all sufficiently short programs are too slow and all sufficiently fast programs are too large. Problems of this kind were left strictly alone for the first twenty years or so of the computing era. There were two good reasons. First, the above definition rules out both the algorithmic and the database type of solution. Second, in a pinch, a human expert could usually be found who was able at least to compute acceptable solutions for the allocation, inventory optimisation, or whatever large combinatorial domain might happen to be involved. ... The extension to bananas is left as an exercise for the reader, or the monkey. When it has been possible to couple causal models with various kinds and combinations of search, mathematical programming, and analytic methods, then evaluation ... has been taken as the basis for "high road" ... "low road" representations ... may be represented directly in machine memory as a set ... A recent pattern-directed heuristic model used for industrial monitoring and control ...
Knowledge-based programming self-applied
A knowledge-based programming system can utilize a very-high-level self-description to rewrite and improve itself. This paper presents a specification, in the very-high-level language V, of the rule compiler component of the CHI knowledge-based programming system. From this specification of part of itself, CHI produces an efficient program satisfying the specification. This represents a modest application of a machine intelligence system to a real programming problem, namely improving one of the programming environment's tools — the rule compiler. The high-level description and the use of a programming knowledge base provide potential for system performance to improve with added knowledge.
In Hayes, J. E., Michie, D., and Pao, Y.-H. (Eds.), Machine Intelligence 10. Ellis Horwood.
Logic for Natural Language Analysis
Pereira, Fernando Carlos Neves
This work investigates the use of formal logic as a practical tool for describing the syntax and semantics of a subset of English, and for building a computer program to answer data base queries expressed in that subset. To achieve an intimate connection between logical descriptions and computer programs, all the descriptions given are in the definite clause subset of the predicate calculus, which is the basis of the programming language Prolog. The logical descriptions run directly as efficient Prolog programs. Three aspects of the use of logic in natural language analysis are covered: formal representation of syntactic rules by means of a grammar formalism based on logic (extraposition grammars); formal semantics for the chosen English subset, appropriate for data base queries; and informal semantic and pragmatic rules to translate analysed sentences into their formal semantics. On these three aspects, the work improves and extends earlier work by Colmerauer and others, in which the use of computational logic in language analysis was first introduced.
The University of Edinburgh
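The central idea above — that grammar rules written as definite clauses run directly as parsing programs — can be illustrated with a small sketch. The following Python fragment (an illustrative analogy, not Pereira's actual Prolog grammar; all rule names here are invented) mimics how a definite clause grammar threads a difference list of remaining words through the goals of each rule:

```python
# Minimal sketch of definite-clause-grammar-style parsing (hypothetical
# toy grammar, not the English subset of the thesis). Each "rule" maps
# a list of remaining words to the list of possible remainders, the way
# a DCG threads a difference list through Prolog goals.

def word(w):
    """Terminal symbol: consume one expected word."""
    def parse(s):
        return [s[1:]] if s and s[0] == w else []
    return parse

def seq(*parsers):
    """Conjunction of goals: thread the remainder through each in turn."""
    def parse(s):
        rests = [s]
        for p in parsers:
            rests = [r2 for r in rests for r2 in p(r)]
        return rests
    return parse

def alt(*parsers):
    """Disjunction: alternative clauses for the same nonterminal."""
    def parse(s):
        return [r for p in parsers for r in p(s)]
    return parse

# DCG-style rules:  sentence --> noun_phrase, verb_phrase.
det = alt(word("the"), word("a"))
noun = alt(word("dog"), word("cat"))
noun_phrase = seq(det, noun)
verb_phrase = seq(word("chases"), noun_phrase)
sentence = seq(noun_phrase, verb_phrase)

# A parse succeeds when the empty remainder [] is among the results.
print(sentence("the dog chases a cat".split()))  # -> [[]]
print(sentence("dog the".split()))               # -> []
```

In real Prolog the difference-list threading is generated automatically from rules such as `sentence --> noun_phrase, verb_phrase.`, which is what lets the same logical description serve as both specification and efficient program.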