
Preface

AAAI Conferences

This symposium brings together connectionist and nonconnectionist researchers to discuss and debate a topic of central concern in AI and cognitive science: the nature of compositionality. The open-ended productivity of the human capabilities aspired to by AI (e.g., perception, cognition, and language) is generally taken to be a consequence of compositionality (the ability to recursively combine constituents). Given that these capabilities are implemented by real neural networks in the brain, it is important to understand feasible connectionist implementations of compositionality. The aim of this symposium is to expose connectionist researchers to the broadest possible range of conceptions of composition while simultaneously alerting other researchers to the range of possibilities for connectionist implementation of composition. The issue of compositionality has been recognized as important since the early days of the connectionist renaissance.


Implementing the (De-)Composition of Concepts: Oscillatory Networks, Coherency Chains and Hierarchical Binding

AAAI Conferences

This paper introduces oscillatory networks as a model of how lexical and non-lexical complex concepts are realized in the cortex. The network has both perceptual and semantic capabilities, and it satisfies three adequacy conditions: compositionality of meaning, compositionality of content, and covariation with content. Coherency chains and hierarchical binding mechanisms are discussed.
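
The paper's oscillatory machinery is not reproduced here, but the generic idea of binding-by-synchrony that it builds on can be sketched. In the toy Kuramoto-style simulation below (an illustrative stand-in, not the authors' network; all names and parameters are assumptions), phase oscillators coding features of the same object are coupled and phase-lock, while the two object groups remain uncoupled and drift apart:

    import numpy as np

    rng = np.random.default_rng(0)
    n, dt, steps, k = 8, 0.05, 2000, 2.0
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # two objects, four features each
    omega = rng.normal(1.0, 0.05, n)                   # similar natural frequencies
    K = k * (group[:, None] == group[None, :])         # coupling only within a group
    theta = rng.uniform(0, 2 * np.pi, n)

    for _ in range(steps):
        theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).mean(axis=1))

    # Features of the same object phase-lock ("bind"); the groups stay desynchronized.
    for g in (0, 1):
        r = np.abs(np.exp(1j * theta[group == g]).mean())
        print(f"group {g} coherence: {r:.2f}")         # both near 1.00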


Geometric Ordering of Concepts, Logical Disjunction, and Learning by Induction

AAAI Conferences

In many of the abstract geometric models which have been used to represent concepts and their relationships, regions possessing some cohesive property such as convexity or linearity have played a significant role. When the implication or containment relationship is used as an ordering relationship in such models, this gives rise to logical operators for which the disjunction of two concepts is often larger than the set union obtained in Boolean models. This paper describes some of the characteristic properties of such broad non-distributive composition operations and their applications to learning algorithms and classification structures. As an example we describe a quad-tree representation which we have used to provide a structure for indexing objects and composing regions in a spatial database; the quad-tree combines logical, algebraic, and geometric properties in a naturally non-distributive fashion. The lattice of subspaces of a vector space is presented as a special example, which strikes a middle way between 'noninductive' Boolean logic and 'overinductive' tree structures. This gives rise to composition operations that are already used as models in physics and cognitive science.
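
A minimal sketch may help fix ideas. Assuming a region quad-tree over a 2^k x 2^k grid (an illustrative reconstruction, not the paper's implementation), a quadrant is a path from the root, containment is path-prefixing, and the disjunction (join) of two quadrants is their lowest common ancestor: a region generally larger than the set union of its arguments, which is exactly the non-distributive behavior described above.

    def contains(a, b):
        """Quadrant a contains quadrant b iff a's path is a prefix of b's."""
        return len(a) <= len(b) and b[:len(a)] == a

    def join(a, b):
        """Least quadrant containing both a and b: the longest common path prefix."""
        prefix = []
        for x, y in zip(a, b):
            if x != y:
                break
            prefix.append(x)
        return tuple(prefix)

    def meet(a, b):
        """Greatest quadrant contained in both, or None (bottom) if disjoint."""
        if contains(a, b):
            return b
        if contains(b, a):
            return a
        return None

    # The join of two small cells in opposite corners is the whole grid,
    # so join(a, b) is generally larger than the set union of a and b:
    print(join((0, 0), (3, 3)))          # () -- the root quadrant

    # Non-distributivity, with sibling quadrants a, b, c:
    a, b, c = (0,), (1,), (2,)
    print(meet(a, join(b, c)))           # (0,)      -- a AND (b OR c) = a
    print(meet(a, b), meet(a, c))        # None None -- (a AND b) OR (a AND c) = bottom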


A Neural Model of Compositional Sentence Structures

AAAI Conferences

A neural architecture for compositional sentence structures is presented. The architecture solves the 'four challenges for cognitive neuroscience' described by Jackendoff (2002). Sentence structures are encoded in this neural architecture by temporarily binding word representations with structure representations in a manner that preserves sentence structure. The architecture can store different sentence structures simultaneously. Answers to specific 'who does what to whom' questions can be produced by means of a selective activation process within the architecture. The architecture can account for effects of sentence complexity.
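
The paper's own neural binding mechanism is not reproduced here. As a hedged stand-in, the sketch below uses tensor-product role-filler binding (in the style of Smolensky) to illustrate the generic idea: word representations are temporarily bound to structural roles, several sentence structures are stored at once, and a 'who does what to whom' query is answered by probing with a role vector. The vocabulary, dimensionality, and binding scheme are all assumptions of the sketch, not the authors' architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 256                                   # dimensionality of the representations

    def vec():
        v = rng.standard_normal(d)
        return v / np.linalg.norm(v)          # random unit vectors are ~orthogonal

    roles = {r: vec() for r in ("agent", "action", "patient")}
    words = {w: vec() for w in ("cat", "chases", "mouse", "dog", "bites")}

    def encode(bindings):
        """Superpose outer-product role-filler bindings into one memory."""
        return sum(np.outer(roles[r], words[w]) for r, w in bindings.items())

    def query(memory, role):
        """Selective activation: probe with a role vector, decode the filler."""
        filler = roles[role] @ memory
        return max(words, key=lambda w: filler @ words[w])

    s1 = encode({"agent": "cat", "action": "chases", "patient": "mouse"})
    s2 = encode({"agent": "dog", "action": "bites", "patient": "cat"})

    print(query(s1, "agent"))     # cat -- who chases the mouse?
    print(query(s2, "patient"))   # cat -- whom does the dog bite?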


On early stages of learning in connectionist models with feedback connections

AAAI Conferences

We have recently shown that when initialized with "small" weights, many connectionist models with feedback connections are inherently biased towards Markov models; i.e., even prior to any training, the dynamics of the models can readily be used to extract finite-memory machines (Tiňo, Čerňanský, & Beňušková 2004; Hammer & Tiňo 2003). In this study we briefly outline the core arguments for these claims and generalize the results to recursive neural networks capable of processing ordered trees. In the early stages of learning, the compositional organization of recursive activations has a Markovian structure: trees sharing a top subtree are mapped close to each other, and the deeper the shared subtree, the closer the trees are mapped.
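
The sequence case from the cited earlier work is easy to demonstrate numerically. The sketch below is an illustrative demo under assumed parameters, not the authors' experiments: an untrained tanh recurrent network with small random weights is driven by one-hot inputs, and sequences sharing a recent suffix (the sequence analogue of a shared top subtree) end up close together in state space.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sym, n_hid, scale = 3, 20, 0.05    # alphabet size, hidden units, "small" weights

    W_in = scale * rng.standard_normal((n_hid, n_sym))
    W_rec = scale * rng.standard_normal((n_hid, n_hid))

    def state(seq):
        """Final hidden state of the untrained tanh network driven by a sequence."""
        h = np.zeros(n_hid)
        for s in seq:
            h = np.tanh(W_in @ np.eye(n_sym)[s] + W_rec @ h)
        return h

    a = state([2, 0, 1, 0, 2, 1, 1, 0])  # different pasts...
    b = state([1, 2, 0, 0, 2, 1, 1, 0])  # ...but the same suffix (0, 2, 1, 1, 0)
    c = state([2, 0, 1, 0, 1, 0, 2, 2])  # a different suffix

    print(np.linalg.norm(a - b))   # small: shared suffix -> nearby states
    print(np.linalg.norm(a - c))   # larger: the untrained map is suffix-driven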


On the relationship between symbolic and neural computation

AAAI Conferences

There is a need to clarify the relationship between traditional symbolic computation and neural network computation. We suggest that traditional context-free grammars are best understood as a special case of neural network computation; the special case derives its power from the presence of certain kinds of symmetries in the weight values. We describe a simple class of stochastic neural networks, Stochastic Linear Dynamical Automata (SLDAs), define Lyapunov exponents for these networks, and show that they exhibit a significant range of dynamical behaviors--contractive and chaotic, with context-free grammars at the boundary between these regimes. Placing context-free languages in this more general context has allowed us, in previous work, to make headway on the challenging problem of designing neural mechanisms that can learn them.
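
The paper's SLDA definition is not reproduced here. As a hedged illustration of the quantity involved, the sketch below estimates the Lyapunov exponent of a one-dimensional stochastic piecewise-affine system: one of two affine maps is applied at random at each step, the trajectory is folded mod 1 to stay bounded, and the exponent is the long-run average of log|slope|. Negative values mean contractive dynamics, positive values chaotic, and zero sits at the boundary between the regimes, where the paper locates context-free grammars. The specific maps and probabilities are illustrative assumptions.

    import math, random

    random.seed(0)

    def lyapunov(maps, probs, steps=100_000):
        """Average log|slope| along a trajectory of a random piecewise-affine system."""
        x, total = 0.3, 0.0
        for _ in range(steps):
            a, b = random.choices(maps, weights=probs)[0]
            x = (a * x + b) % 1.0        # fold mod 1 to keep the orbit bounded
            total += math.log(abs(a))
        return total / steps

    contractive = [(0.5, 0.1), (0.4, 0.3)]    # both slopes < 1: lambda < 0
    expanding   = [(2.0, 0.0), (2.0, 0.5)]    # doubling-style slopes: lambda = log 2 > 0
    balanced    = [(2.0, 0.0), (0.5, 0.25)]   # expansion offsets contraction: lambda ~ 0

    print(lyapunov(contractive, [0.5, 0.5]))  # ~ -0.80 (contractive regime)
    print(lyapunov(expanding,   [0.5, 0.5]))  # ~ +0.69 (chaotic regime)
    print(lyapunov(balanced,    [0.5, 0.5]))  # ~  0.00 (the boundary)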


FS04-03-013.pdf

AAAI Conferences

This paper builds on the insight of Lashley (1951) and Miller, Galanter, & Pribram (1960) that action and motor planning mechanisms provide a basis for all serially ordered compositional systems, including language and reasoning. It reinterprets this observation in terms of modern AI formalisms for planning, showing that both the syntactic apparatus that projects lexical meanings onto sentences and the neural mechanisms that are known to be implicated in both language behavior and motor planning reflect exactly the same primitive combinatory operations. The paper then considers some neurocomputational mechanisms that can be applied to modeling this system, and the relation of the compositionality property to such mechanisms.
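
The paper's formal apparatus is not reproduced here. As a rough illustration of what "the same primitive combinatory operations" can mean, the sketch below applies the two operations most associated with this line of work, function application and function composition (the B combinator), first to a lexical meaning and then to a motor plan. The toy meanings and plans are illustrative stand-ins, not the paper's formalism.

    # Application and composition (the B combinator), the two primitive
    # operations most associated with combinatory categorial grammar.
    def apply_(f, x):
        return f(x)

    def compose(f, g):                    # B f g = lambda x: f(g(x))
        return lambda x: f(g(x))

    # Linguistic use: a transitive verb as a function to a predicate.
    chases = lambda obj: (lambda subj: f"{subj} chases {obj}")
    print(apply_(apply_(chases, "the mouse"), "the cat"))   # the cat chases the mouse

    # Motor-planning use: the same operations sequence primitive actions.
    reach = lambda plan: plan + ["reach"]
    grasp = lambda plan: plan + ["grasp"]
    reach_then_grasp = compose(grasp, reach)                # B grasp reach
    print(apply_(reach_then_grasp, []))                     # ['reach', 'grasp']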


Compositionality in a Knowledge-based Constructive Learner

AAAI Conferences

A knowledge-based constructive learning algorithm, KBCC, simplifies and accelerates the learning of parity and chessboard problems. Previously learned knowledge of simpler versions of these problems is recruited in the service of learning more complex versions. A learned solution can be viewed as a composition in which the components are not altered, showing that concatenative compositionality can be achieved in neural terms.
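
A minimal sketch of the compositional claim (not of the KBCC algorithm itself, which learns the recruitment and the connecting weights by cascade-correlation): a previously learned parity-2 module is recruited, unaltered, as a component of a parity-4 solution. The function names are illustrative stand-ins for learned networks.

    from itertools import product

    def parity2(a, b):
        """Stand-in for a previously learned XOR (parity-2) network."""
        return a ^ b

    def parity4(a, b, c, d):
        """Recruits the parity-2 component twice, without altering it."""
        return parity2(parity2(a, b), parity2(c, d))

    assert all(parity4(*bits) == sum(bits) % 2
               for bits in product([0, 1], repeat=4))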


On-Line Learning of Predictive Compositional Hierarchies by Hebbian Chunking

AAAI Conferences

I have investigated systems for online, cumulative learning of compositional hierarchies embedded within predictive probabilistic models. The hierarchies are learned unsupervised from unsegmented data streams. Such learning is critical for long-lived intelligent agents in complex worlds. Learned patterns enable prediction of unseen data and serve as building blocks for higher-level knowledge representation. These systems are examples of a rare combination: unsupervised, online structure learning (specifically, structure growth). The system described here embeds a compositional hierarchy within an undirected graphical model based directly on Boltzmann machines, extended to handle categorical variables. A novel online chunking rule creates new nodes corresponding to frequently occurring patterns that are combinations of existing known patterns. This work can be viewed as a direct (and long overdue) attempt to explain how the hierarchical compositional structure of classic models, such as McClelland and Rumelhart's Interactive Activation model of context effects in letter perception, can be learned automatically.
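
The Boltzmann-machine model itself is not reproduced here; the sketch below is a schematic of the chunking rule alone: parse the incoming stream greedily with the currently known patterns, count adjacent pattern pairs, and promote a pair to a new node once its count crosses a threshold. The threshold, the toy stream, and the string-based pattern representation are illustrative assumptions.

    from collections import Counter

    THRESHOLD = 3   # illustrative: a pair becomes a chunk after 3 co-occurrences

    def longest_match(stream, i, chunks):
        """Greedy longest-match of a known pattern starting at position i."""
        for size in range(max(map(len, chunks)), 0, -1):
            if stream[i:i + size] in chunks:
                return stream[i:i + size]

    def chunker(stream):
        chunks = set(stream)             # single symbols are the initial patterns
        counts, prev, i = Counter(), None, 0
        while i < len(stream):
            pat = longest_match(stream, i, chunks)
            i += len(pat)
            if prev is not None:
                counts[prev + pat] += 1
                if counts[prev + pat] == THRESHOLD:
                    chunks.add(prev + pat)   # create a node for the frequent pair
            prev = pat
        return chunks

    stream = "abxabyabzabwab" * 2
    print(sorted(c for c in chunker(stream) if len(c) > 1))   # ['ab'] emerges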