Goto


Natural Language State Representation for Reinforcement Learning

arXiv.org Artificial Intelligence

Recent advances in Reinforcement Learning have highlighted the difficulties in learning within complex, high-dimensional domains. We argue that one of the main reasons current approaches do not perform well is that the information is represented sub-optimally. A natural way to describe what we observe is through natural language. In this paper, we implement a natural language state representation to learn and complete tasks. Our experiments suggest that natural-language-based agents are more robust, converge faster and perform better than vision-based agents, showing the benefit of using natural language representations for Reinforcement Learning.
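The sketch below is a minimal, hypothetical illustration of the idea described in this abstract: exposing an environment state as a natural-language description and mapping it to a fixed-size vector that a standard agent could consume. The hashing bag-of-words featurizer and the example description are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the paper's implementation): a natural-language state
# description converted into a fixed-size feature vector for an RL agent.
import hashlib
import numpy as np


def text_state_features(description: str, dim: int = 64) -> np.ndarray:
    """Hash each token of a natural-language state description into a
    fixed-size bag-of-words vector (a stand-in for a learned text encoder)."""
    vec = np.zeros(dim, dtype=np.float32)
    for token in description.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    return vec


# Example: the same underlying state, described in language instead of pixels.
state_text = "you are in a corridor, a locked door is to the north, a key lies to the east"
features = text_state_features(state_text)
print(features.shape)  # (64,) -- ready to feed into a policy or value network
```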


McFate

AAAI Conferences

The naturalness of qualitative reasoning suggests that qualitative representations might be an important component of the semantics of natural language. Prior work showed that frame-based representations of qualitative process theory constructs could indeed be extracted from natural language texts. That technique relied on the parser recognizing specific syntactic constructions, which had limited coverage. This paper describes a new approach, using narrative function to represent the higher-order relationships between the constituents of a sentence and between sentences in a discourse. We outline how narrative function combined with query-driven abduction enables the same kinds of information to be extracted from natural language texts. Moreover, we show how the same technique can be used to extract type-level qualitative representations from text and to improve performance in playing a strategy game.
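For readers unfamiliar with the target output, here is an illustrative sketch of the kind of frame-based qualitative process (QP) representation the abstract refers to, written as a plain data structure. The sentence, slot names, and relation notation are assumptions for illustration; they are not the paper's extraction pipeline, which relies on narrative functions and query-driven abduction.

```python
# Illustrative only: a QP-style frame that might be extracted from a sentence.
sentence = "Heat flows from the stove to the pot."

qp_frame = {
    "process": "heat-flow",
    "participants": {"source": "stove", "destination": "pot"},
    "conditions": ["greater(temperature(stove), temperature(pot))"],
    "consequences": [
        "i+(heat(pot), flow-rate)",    # heat of the pot qualitatively increases
        "i-(heat(stove), flow-rate)",  # heat of the stove qualitatively decreases
    ],
}

for slot, value in qp_frame.items():
    print(f"{slot}: {value}")
```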


Allen

AAAI Conferences

Current semantic parsers either compute shallow representations over a wide range of input, or deeper representations in very limited domains. We describe a system that provides broad-coverage, deep semantic parsing designed to work in any domain using a core domain-general lexicon, ontology and grammar. This paper discusses how this core system can be customized for a particularly challenging domain, namely reading research papers in biology.
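The following is a minimal, hypothetical sketch of the customization idea in this abstract: a domain-general core lexicon and ontology extended with domain-specific senses (here, biology terms). The entry names and ontology types are invented for illustration and do not reflect the system's actual lexicon format.

```python
# Hypothetical sketch: extending a core lexicon with domain-specific senses.
core_lexicon = {
    "activate": {"pos": "verb", "onto_type": "CAUSE-EFFECT"},
    "bind":     {"pos": "verb", "onto_type": "ATTACH"},
}

# Domain customization: add or refine senses without changing the core grammar.
biology_lexicon = dict(core_lexicon)
biology_lexicon.update({
    "phosphorylate": {"pos": "verb", "onto_type": "PROTEIN-MODIFICATION"},
    "bind":          {"pos": "verb", "onto_type": "PROTEIN-BINDING"},  # narrowed sense
})


def lookup(word: str, domain: bool = True) -> dict:
    """Return the most specific sense available: domain entry first, then core."""
    lex = biology_lexicon if domain else core_lexicon
    return lex.get(word, {"pos": "unknown", "onto_type": "REFERENTIAL-SEM"})


print(lookup("phosphorylate"))       # domain-specific sense
print(lookup("bind", domain=False))  # domain-general sense
```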


Sauzay

AAAI Conferences

On the one hand, the meaning of natural languages is often described with basic semantic features and a Boolean composition of these features. However, this approach is not sufficient to describe the meaning of linguistic units in greater depth. On the other hand, the semantics of computer languages often starts from Church's λ-calculus and works its way up to more abstract levels. In this communication, we introduce a new general computational approach to the representation of meaning for high-level languages (natural languages and programming languages), working from the compilation paradigm of computer science. In this compilation paradigm, the expressions of a high-level symbolic language are transformed into representations through intermediate levels, until they reach formal representations directly compatible with the material structures of a machine or of a brain. By this analogy, expressions of a natural language can be analyzed at different metalinguistic levels of representation, linked to each other by representation-changing processes. The communication gives examples of such representation changes between expressions of English and representations of meaning expressed as lattice ("treillis") algebras. A toy sketch of the compilation analogy follows.
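The sketch below illustrates the compilation analogy in miniature: a surface expression is lowered through successive intermediate representations until it reaches a machine-oriented form. The stages and the tiny arithmetic language are illustrative assumptions, not the author's metalinguistic levels.

```python
# Toy "compilation" pipeline: surface string -> tokens -> tree IR -> flat code.
def tokenize(expr: str) -> list[str]:
    """Surface level -> token level."""
    return expr.replace("(", " ( ").replace(")", " ) ").split()


def parse(tokens: list[str]):
    """Token level -> tree-structured intermediate representation."""
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop ")"
        return node
    return int(token) if token.isdigit() else token


def lower(tree) -> list[tuple]:
    """Intermediate representation -> flat 'machine-level' instruction list."""
    if isinstance(tree, int):
        return [("PUSH", tree)]
    op, *args = tree
    code = [instr for arg in args for instr in lower(arg)]
    return code + [("APPLY", op, len(args))]


expr = "(plus 1 (times 2 3))"
print(lower(parse(tokenize(expr))))
# [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('APPLY', 'times', 2), ('APPLY', 'plus', 2)]
```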