Scaling Connectionist Compositional Representations

AAAI Conferences

The Recursive Auto-Associative Memory (RAAM) has come to dominate connectionist investigations into representing compositional structure. Although an adequate model when dealing with limited data, the capacity of RAAM to scale up to real-world tasks has frequently been questioned. RAAM networks are difficult to train (due to the moving-target effect) and as such training times can be lengthy. Investigations into RAAM have produced many variants in an attempt to overcome such limitations. We outline how one such model, (S)RAAM, is able to quickly produce context-sensitive representations that may be used to aid a deterministic parsing process. By substituting (S)RAAM for the symbolic stack in an existing hybrid parser, we show that it is more than capable of encoding the real-world data sets employed. We conclude by suggesting that models such as (S)RAAM offer valuable insights into the features of connectionist compositional representations.
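
As a concrete illustration of the auto-associative idea (not the authors' implementation), here is a minimal numpy sketch of a binary RAAM: an encoder compresses two child vectors into one parent vector, a decoder reconstructs them, and internal node representations are recomputed from the current encoder on every pass, which is the moving-target effect mentioned above. The vector width, learning rate, and toy tree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16        # width of every node representation (assumed)
LR = 0.1

# A binary RAAM is a 2D -> D -> 2D autoencoder applied recursively.
W_enc = rng.normal(0, 0.1, (D, 2 * D))
W_dec = rng.normal(0, 0.1, (2 * D, D))

def encode(left, right):
    """Compress two child vectors into one parent vector."""
    return np.tanh(W_enc @ np.concatenate([left, right]))

def train_step(left, right):
    """One auto-associative gradient step on a single (left, right) pair."""
    x = np.concatenate([left, right])
    h = np.tanh(W_enc @ x)          # parent representation
    y = np.tanh(W_dec @ h)          # reconstructed children
    err = y - x
    # Backpropagate through the two tanh layers.
    d_y = err * (1 - y ** 2)
    d_h = (W_dec.T @ d_y) * (1 - h ** 2)
    W_dec -= LR * np.outer(d_y, h)
    W_enc -= LR * np.outer(d_h, x)
    return 0.5 * float(err @ err)

# Leaves are fixed random codes; the internal representation AB is re-derived
# from the current encoder every epoch, so the decoder's target keeps moving.
A, B, C = (rng.uniform(-1, 1, D) for _ in range(3))
for epoch in range(500):
    AB = encode(A, B)                               # parent of (A, B)
    loss = train_step(A, B) + train_step(AB, C)     # tree ((A B) C)
print("final reconstruction loss:", round(loss, 4))
```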


Embeddings and Representation Learning for Structured Data

arXiv.org Machine Learning

Performing machine learning on structured data is complicated by the fact that such data does not have vectorial form. Therefore, multiple approaches have emerged to construct vectorial representations of structured data, from kernel and distance approaches to recurrent, recursive, and convolutional neural networks. Recent years have seen heightened attention in this demanding field of research, and several new approaches have emerged, such as metric learning on structured data, graph convolutional neural networks, and recurrent decoder networks for structured data. In this contribution, we provide a high-level overview of the state of the art in representation learning and embeddings for structured data across a wide range of machine learning fields.
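
As one concrete (and purely illustrative) example of turning structured data into vectors, the sketch below applies a single graph-convolution step in the style of Kipf and Welling to a toy four-node graph; the adjacency matrix, node features, and weights are assumptions for illustration, not anything taken from the survey.

```python
import numpy as np

# A graph is one kind of non-vectorial, structured data. One common way to
# give its nodes vectorial form is a single graph-convolution layer:
#   H' = relu( D^{-1/2} (A + I) D^{-1/2} @ H @ W )
A = np.array([[0, 1, 0, 0],           # adjacency of a small 4-node path graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                          # one-hot node features (assumed)
W = np.random.default_rng(0).normal(0, 0.5, (4, 3))   # untrained weights

A_hat = A + np.eye(4)                  # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt

embeddings = np.maximum(A_norm @ H @ W, 0.0)   # 3-dimensional node vectors
print(embeddings)
```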


Using Simple Recurrent Networks to Learn Fixed-Length Representations of Variable-Length Strings

AAAI Conferences

Four connectionist models are reported that learn static representations of variable-length strings using a novel autosequencer architecture. These representations were learned as plans for a simple recurrent network to regenerate a given input sequence. Results showed that the autosequencer can be used to address the dispersion problem, because the positions and identities of letters in a string were integrated over learning into the plan representations. Results also revealed a moderate degree of componentiality in the plan representations.
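
The paper reports trained models; the untrained numpy sketch below only illustrates the decoding direction of such an autosequencer under assumed sizes: a fixed-length plan vector is fed into a simple recurrent network at every step, and the network unrolls it into a letter string. The alphabet, layer widths, and weights are hypothetical, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHABET = list("abcd")
V, H, P = len(ALPHABET), 12, 8      # vocabulary, hidden, and plan sizes (assumed)

# Illustrative weights: plan + previous hidden state -> hidden -> letter logits.
W_ph = rng.normal(0, 0.3, (H, P))   # plan into hidden units
W_hh = rng.normal(0, 0.3, (H, H))   # recurrent (context) connections
W_ho = rng.normal(0, 0.3, (V, H))   # hidden units into letter logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def regenerate(plan, length):
    """Unroll the SRN decoder: one fixed-length plan drives the whole string."""
    h = np.zeros(H)                  # SRN context units start empty
    letters = []
    for _ in range(length):
        h = np.tanh(W_ph @ plan + W_hh @ h)
        probs = softmax(W_ho @ h)
        letters.append(ALPHABET[int(probs.argmax())])
    return "".join(letters)

# Two different plan vectors of the same fixed length P unroll into
# (generally) different variable-length strings.
print(regenerate(rng.uniform(-1, 1, P), length=5))
print(regenerate(rng.uniform(-1, 1, P), length=7))
```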


Graded Grammaticality in Prediction Fractal Machines

Neural Information Processing Systems

We introduce a novel method of constructing language models, which avoids some of the problems associated with recurrent neural networks. The method of creating a Prediction Fractal Machine (PFM) [1] is briefly described, and some experiments are presented which demonstrate the suitability of PFMs for language modeling. PFMs distinguish reliably between minimal pairs, and their behavior is consistent with the hypothesis [4] that well-formedness is 'graded' rather than absolute. A discussion of their potential to offer fresh insights into language acquisition and processing follows.
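
The abstract only references the construction in [1]; as an illustration, the sketch below shows the chaos-game style sequence-to-point encoding that fractal prediction machines are typically built on: each symbol owns a corner of the unit square, and reading a string contracts a point toward those corners, so strings with similar recent histories land near one another. The corner assignment and contraction ratio are assumed for illustration; the clustering and next-symbol statistics of a full PFM are not shown.

```python
import numpy as np

# Each symbol owns one corner of the unit square; reading a string drives a
# point through repeated contractions toward those corners.
CORNERS = {"a": np.array([0.0, 0.0]), "b": np.array([0.0, 1.0]),
           "c": np.array([1.0, 0.0]), "d": np.array([1.0, 1.0])}
K = 0.5   # contraction ratio (an illustrative choice)

def encode(string, start=np.array([0.5, 0.5])):
    """Map a symbol string to a point in the unit square."""
    x = start
    for s in string:
        x = K * x + (1 - K) * CORNERS[s]
    return x

# Strings sharing a recent suffix land in the same region of the square,
# which is what lets a PFM group points and predict the next symbol.
print(encode("abcd"), encode("dbcd"))   # close together (shared suffix "bcd")
print(encode("abca"))                   # a different region
```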


Context-free versus context-dependent constituency relations: A false dichotomy

AAAI Conferences

In this paper I articulate a notion of constituency orthogonal to both classical and connectionist approaches. I shall consider three "structure-in-time" connectionist networks (Simple Recurrent Networks, Long Short-Term Memory models, and non-classical connectionist parsers). I shall argue that explaining compositionality by means of any of these models leads us into an information-processing blind alley. In my view, human combinatorial behaviour must be grounded in sensorimotor activity and the parameter of time. A dynamical notion of constituency will thus be offered.