
 Shapiro, Stuart C.


Inference Graphs: A New Kind of Hybrid Reasoning System

AAAI Conferences

Hybrid reasoners combine multiple types of reasoning, usually subsumption and Prolog-style resolution. We outline a system which combines natural deduction and subsumption reasoning using Inference Graphs implementing a Logic of Arbitrary and Indefinite Objects.


Concurrent Reasoning with Inference Graphs

AAAI Conferences

Since the mid-2000s, multi-core and multi-processor computers have become increasingly common, but knowledge representation systems using logical inference have been slow to embrace this new technology. We present the concept of inference graphs, a natural deduction inference system which scales well on multi-core and multi-processor machines. Inference graphs enhance propositional graphs by treating propositional nodes as tasks which can be scheduled to operate upon messages sent between nodes via the arcs that already exist as part of the propositional graph representation. The use of scheduling heuristics within a prioritized message-passing architecture allows inference graphs to perform very well in forward, backward, bi-directional, and focused reasoning. Tests demonstrate the usefulness of our scheduling heuristics, and show significant speedup in both best-case and worst-case inference scenarios as the number of processors increases.
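
The node-as-task, message-passing idea can be sketched in single-threaded Python. This is a toy model, not the authors' implementation: the class names, the priority scheme, and the "fire once all antecedents have reported" rule are illustrative assumptions, and the real system runs these node tasks concurrently across cores.

```python
# Toy sketch of inference-graph message passing: a node "fires"
# (asserts itself and notifies its out-arcs) once it has received
# messages from all of its antecedents.
import heapq
import itertools

class Node:
    def __init__(self, name, needed=1):
        self.name = name
        self.needed = needed          # messages required before firing
        self.received = 0
        self.out_arcs = []            # downstream Nodes
        self.asserted = False

    def process(self, graph):
        # node-as-task: consume one message, fire when enough have arrived
        self.received += 1
        if self.received >= self.needed and not self.asserted:
            self.asserted = True
            for target in self.out_arcs:
                graph.send(target, priority=1)

class Graph:
    def __init__(self):
        self.queue = []
        self.tiebreak = itertools.count()

    def send(self, node, priority=0):
        # lower number = higher priority; the paper's heuristics would
        # set this to favor messages likely to complete a derivation
        heapq.heappush(self.queue, (priority, next(self.tiebreak), node))

    def run(self):
        while self.queue:
            _, _, node = heapq.heappop(self.queue)
            node.process(self)

# forward inference: asserting A and B derives "A and B"
a, b = Node("A"), Node("B")
conj = Node("A and B", needed=2)
a.out_arcs.append(conj)
b.out_arcs.append(conj)
g = Graph()
g.send(a)          # assert A
g.send(b)          # assert B
g.run()
```

The paper's contribution lies in making this scheme safe and fast when many such tasks run concurrently under prioritized scheduling; the sketch only shows the flow of messages along existing arcs.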


Mapping the Landscape of Human-Level Artificial General Intelligence

AI Magazine

Of course, this is far from the first attempt to plot a course toward human-level AGI: arguably this was the goal of the founders of the field of artificial intelligence in the 1950s, and it has been pursued by a steady stream of AI researchers since, even as the majority of the AI field has focused its attention on narrower, more specific subgoals. The ideas presented here build on the ideas of others in innumerable ways, but to review the history of AI and situate the current effort in the context of its predecessors would require a much longer article than this one. Thus we have chosen to focus on the results of our AGI roadmap discussions, acknowledging in a broad way the many debts owed to many prior researchers. References to the prior literature on evaluation of advanced AI systems are given by Laird (Laird et al. 2009) and Goertzel and Bugaj (2009), which may in a limited sense be considered prequels to this article. We begin by discussing AGI in general and adopt a pragmatic goal for measuring progress toward its attainment. An initial capability landscape for AGI will be presented, drawing on major themes from developmental psychology and illuminated by mathematical, physiological, and information-processing perspectives. The challenge of identifying appropriate tasks and environments for measuring AGI will be taken up. Several scenarios will be presented as milestones outlining a roadmap. The heterogeneity of general intelligence in humans makes it practically impossible to develop a comprehensive, fine-grained measurement system for AGI. While we encourage research in defining such high-fidelity metrics for specific capabilities, we feel that at this stage of AGI development a pragmatic, high-level goal is the best we can agree upon. I advocate beginning with a system that has minimal, although extensive, built-in capabilities.
Many variant approaches have been proposed for achieving such a goal, and both the AI and AGI communities have been working for decades on the myriad subgoals that would have to be achieved and integrated to deliver a comprehensive AGI system. A classic example of the narrow AI approach was IBM's Deep Blue system (Campbell, Hoane, and Hsu 2002), which successfully defeated world chess champion Garry Kasparov but could not readily apply that skill to any other problem domain.


The Jobs Puzzle: A Challenge for Logical Expressibility and Automated Reasoning

AAAI Conferences

The Jobs Puzzle, introduced in a book about automated reasoning, is a logic puzzle solvable by some "intelligent sixth graders," but the formalization of the puzzle by the authors was, according to them, "sometimes difficult and sometimes tedious." The puzzle thus presents a triple challenge: 1) formalize it in a non-difficult, non-tedious way; 2) formalize it in a way that adheres closely to the English statement of the puzzle; 3) have an automated general-purpose commonsense reasoner that can accept that formalization and solve the puzzle quickly. In this paper, I present and discuss three formalizations that are less difficult and less tedious than the original. However, none satisfy all three requirements as well as might be desired, and there are a significant number of automated reasoners that cannot solve the puzzle using any of the formalizations. So the Jobs Puzzle remains an interesting challenge.
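
For readers unfamiliar with the puzzle, its constraints are small enough to check by brute force. The sketch below is not one of the paper's formalizations; it simply enumerates job assignments in Python, encoding the usual reading of the clues from Wos et al.'s statement (the nurse, the actor, and the chef's husband, the telephone operator, are male; the chef is female; Roberta, the chef, and the police officer golfed together; Roberta is not a boxer; Pete's ninth-grade education rules out nurse, police officer, and teacher).

```python
# Brute-force check of the Jobs Puzzle: four people, eight jobs,
# each person holds exactly two jobs.
from itertools import permutations

people = ["Roberta", "Thelma", "Steve", "Pete"]
female = {"Roberta", "Thelma"}
jobs = ["chef", "guard", "nurse", "telephone operator",
        "police officer", "teacher", "actor", "boxer"]

def consistent(holder):
    # the nurse, the actor, and the telephone operator (the chef's
    # husband) are male; the chef (who has a husband) is female
    if holder["nurse"] in female: return False
    if holder["actor"] in female: return False
    if holder["telephone operator"] in female: return False
    if holder["chef"] not in female: return False
    # Roberta, the chef, and the police officer went golfing together,
    # so they are three different people
    if holder["chef"] == "Roberta": return False
    if holder["police officer"] == "Roberta": return False
    if holder["chef"] == holder["police officer"]: return False
    # Roberta is not a boxer
    if holder["boxer"] == "Roberta": return False
    # Pete has no education past the ninth grade
    if holder["nurse"] == "Pete": return False
    if holder["police officer"] == "Pete": return False
    if holder["teacher"] == "Pete": return False
    return True

solutions = set()
for perm in permutations(jobs):
    # positions 0-1 go to people[0], 2-3 to people[1], and so on
    holder = {job: people[i // 2] for i, job in enumerate(perm)}
    if consistent(holder):
        solutions.add(frozenset(holder.items()))
```

Enumerating all 40,320 orderings takes well under a second and yields exactly one solution, which illustrates the paper's point: the puzzle is computationally trivial; the challenge is expressing the English clues naturally.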


Set-Oriented Logical Connectives: Syntax and Semantics

AAAI Conferences

Of the common commutative binary logical connectives, only and and or may be used as operators that take arbitrary numbers of arguments with order and multiplicity being irrelevant, that is, as connectives that take sets of arguments. This is especially evident in the Common Logic Interchange Format, in which it is easy for operators to be given arbitrary numbers of arguments. The reason is that and and or are associative and idempotent, as well as commutative. We extend the ability of taking sets of arguments to the other common commutative connectives by defining generalized versions of nand, nor, xor, and iff, as well as the additional, parameterized connectives andor and thresh. We prove that andor is expressively complete: all the other connectives may be considered abbreviations of it.
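
A counting-based sketch shows how the parameterized connectives subsume the rest. The semantics here is my reading of the abstract, not the paper's formal definitions: andor(i, j) is assumed to hold iff at least i and at most j of its arguments are true, with thresh(i, j) as its negation, and the generalized xor (exactly one true) and iff (all or none true) readings are likewise assumptions.

```python
# Assumed counting semantics for the parameterized connectives.
def andor(i, j, args):
    true_count = sum(bool(a) for a in args)
    return i <= true_count <= j

def thresh(i, j, args):
    return not andor(i, j, args)

# The familiar connectives then become abbreviations (n = number of
# arguments; uppercase names avoid Python's `and`/`or` keywords):
def AND(args):  return andor(len(args), len(args), args)  # all true
def OR(args):   return andor(1, len(args), args)          # at least one true
def NOR(args):  return andor(0, 0, args)                  # none true
def NAND(args): return andor(0, len(args) - 1, args)      # not all true
def XOR(args):  return andor(1, 1, args)                  # exactly one true
def IFF(args):  return thresh(1, len(args) - 1, args)     # all or none true
```

Because only the count of true arguments matters, order and multiplicity are irrelevant by construction, which is exactly the set-oriented property the paper is after.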


The GLAIR Cognitive Architecture

AAAI Conferences

GLAIR (Grounded Layered Architecture with Integrated Reasoning) is a multi-layered cognitive architecture for embodied agents operating in real, virtual, or simulated environments containing other agents. The highest layer of the GLAIR Architecture, the Knowledge Layer (KL), contains the beliefs of the agent, and is the layer in which conscious reasoning, planning, and act selection are performed. The lowest layer of the GLAIR Architecture, the Sensori-Actuator Layer (SAL), contains the controllers of the sensors and effectors of the hardware or software robot. Between the KL and the SAL is the Perceptuo-Motor Layer (PML), which grounds the KL symbols in perceptual structures and subconscious actions, contains various registers for providing the agent's sense of situatedness in the environment, and handles translation and communication between the KL and the SAL. The motivation for the development of GLAIR has been "Computational Philosophy": the computational understanding and implementation of human-level intelligent behavior without necessarily being bound by the actual implementation of the human mind. Nevertheless, the approach has been inspired by human psychology and biology.
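
The three-layer organization can be sketched structurally. This is a toy, single-agent model; all class and method names are illustrative assumptions, not the architecture's actual API, and the environment is faked by a single canned sensation.

```python
# Structural sketch of GLAIR's KL / PML / SAL layering.
class SensoriActuatorLayer:
    """SAL: controls the (here, simulated) sensors and effectors."""
    def __init__(self):
        self.log = []
    def actuate(self, command):
        self.log.append(command)
    def sense(self):
        return "obstacle-ahead"

class PerceptuoMotorLayer:
    """PML: grounds KL symbols; translates between KL and SAL."""
    def __init__(self, sal):
        self.sal = sal
    def perform(self, act):
        # conscious act -> subconscious motor command
        self.sal.actuate(f"motor-program-for-{act}")
    def perceive(self):
        # raw sensation -> perceptual symbol for the KL
        return {"obstacle-ahead": "Obstacle"}[self.sal.sense()]

class KnowledgeLayer:
    """KL: holds beliefs; does reasoning and act selection."""
    def __init__(self, pml):
        self.pml = pml
        self.beliefs = set()
    def step(self):
        self.beliefs.add(self.pml.perceive())   # perception yields beliefs
        if "Obstacle" in self.beliefs:          # (trivial) act selection
            self.pml.perform("avoid")

sal = SensoriActuatorLayer()
kl = KnowledgeLayer(PerceptuoMotorLayer(sal))
kl.step()
```

Note how the KL never touches the SAL directly: all grounding and translation passes through the PML, which is the layering discipline the abstract describes.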


Metacognition in SNePS

AI Magazine

The SNePS knowledge representation, reasoning, and acting system has several features that facilitate metacognition in SNePS-based agents. The most prominent is the fact that propositions are represented in SNePS as terms rather than as sentences, so that propositions can occur as arguments of propositions and other expressions without leaving first-order logic. The SNePS acting subsystem is integrated with the SNePS reasoning subsystem in such a way that: there are acts that affect what an agent believes; there are acts that specify knowledge-contingent acts and lack-of-knowledge acts; there are policies that serve as "daemons," triggering acts when certain propositions are believed or wondered about. The GLAIR agent architecture supports metacognition by specifying a location for the source of self-awareness and of a sense of situatedness in the world. Several SNePS-based agents have taken advantage of these facilities to engage in self-awareness and metacognition.
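
The key representational move, propositions as terms, can be illustrated with a minimal sketch. The Prop class and the Cassie examples below are illustrative assumptions, not SNePS syntax.

```python
# Sketch of "propositions as terms": a proposition is a data object
# that may itself appear as an argument of another proposition, to any
# depth, without leaving first-order logic.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Prop:
    functor: str
    args: Tuple[Any, ...]

likes = Prop("Likes", ("Stu", "coffee"))
belief = Prop("Believes", ("Cassie", likes))   # a proposition about a proposition
wonder = Prop("Wonders", ("Cassie", belief))   # ...about that one, and so on
```

Because a Prop is just a term (here, a hashable value), the same machinery that stores and reasons over first-order beliefs can store and reason over beliefs about beliefs, which is what makes metacognition cheap in this design.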


A net structure for semantic information storage, deduction and retrieval

Classics

This paper describes a data structure, MENS (MEmory Net Structure), that is useful for storing semantic information stemming from a natural language, and a system, MENTAL (MEmory Net That Answers and Learns), that interacts with a user (human or program), stores information into and retrieves information from MENS, and interprets some information in MENS as rules telling it how to deduce new information from what is already stored. MENTAL can be used as a question-answering system with formatted input/output, as a vehicle for experimenting with various theories of semantic structures, or as the memory management portion of a natural language question-answering system. See also the U. Wisconsin Technical Report 109 version and the scanned, non-OCR version. In IJCAI-71: International Joint Conference on Artificial Intelligence, British Computer Society, London, pp. 512-523.
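
In the spirit of MENS/MENTAL, a toy net of labeled triples with pattern retrieval and one hard-coded deduction rule might look as follows. This is purely illustrative: MENS's actual net structure is richer than subject-relation-object triples, and MENTAL reads its rules from the net rather than having them built in.

```python
# Toy semantic-net store: facts are (node, relation, node) triples;
# retrieval matches patterns with None as a wildcard; deduce() applies
# one rule, the transitivity of "isa", until no new facts appear.
class Net:
    def __init__(self):
        self.triples = set()

    def store(self, s, r, o):
        self.triples.add((s, r, o))

    def retrieve(self, s=None, r=None, o=None):
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (r is None or t[1] == r)
                and (o is None or t[2] == o)]

    def deduce(self):
        changed = True
        while changed:
            changed = False
            new = {(x, "isa", z)
                   for (x, r1, y) in self.triples if r1 == "isa"
                   for (y2, r2, z) in self.triples
                   if r2 == "isa" and y2 == y}
            if not new <= self.triples:
                self.triples |= new
                changed = True

net = Net()
net.store("Fido", "isa", "dog")
net.store("dog", "isa", "mammal")
net.deduce()
```

After deduce(), asking the net about Fido also surfaces the derived fact that Fido is a mammal, mirroring MENTAL's question-answering-plus-deduction role over MENS.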