An Unsupervised Character-Aware Neural Approach to Word and Context Representation Learning

Giuseppe Marra, Andrea Zugarini, Stefano Melacci, Marco Maggini

arXiv.org Machine Learning 

This is a post-peer-review, pre-copyedit version of an article published in LNCS, volume 11141.

Abstract. In the last few years, neural networks have been intensively used to develop meaningful distributed representations of words and of the contexts around them. When these representations, also known as "embeddings", are learned from large unsupervised corpora, they can be transferred to different tasks with positive effects on performance, especially when only a few labeled examples are available. In this work, we further extend this concept and present an unsupervised neural architecture that jointly learns word and context embeddings, processing words as sequences of characters. This allows the model to capture regularities due to word morphology and to avoid the need for a fixed-size input vocabulary of words. We show that we can learn compact encoders that, despite their relatively small number of parameters, reach high-level performance on downstream tasks, comparing them with related state-of-the-art approaches and with fully supervised methods.

Keywords: Recurrent Neural Networks, Unsupervised Learning, Word and Context Embeddings, Natural Language Processing, Deep Learning

1 Introduction

Recent advances in Natural Language Processing (NLP) are characterized by the development of techniques that compute powerful word embeddings and by the extensive use of neural language models. Word Embeddings (WEs) aim at representing individual words in a low-dimensional continuous space, in order to exploit its topological properties to model semantic or grammatical relationships between words. In particular, they are based on the assumption that functionally or semantically related words appear in similar contexts. Although the idea of continuous word representations was proposed several years ago [4], word embeddings became widely popular only with more recent work.
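The core mechanism described above, computing a word embedding from the sequence of its characters so that morphology is exploited and no fixed word vocabulary is required, can be illustrated with a minimal sketch. The snippet below is an illustration in PyTorch, not the authors' implementation: the class name, the use of a bidirectional LSTM over character embeddings, and all hyperparameters are hypothetical choices consistent with the description in the abstract.

```python
# Minimal sketch (NOT the authors' code) of a character-aware word encoder:
# each word is a sequence of character embeddings processed by a BiLSTM,
# whose final hidden states are concatenated into the word embedding.
# char_vocab_size, char_dim, and word_dim are illustrative assumptions.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, char_vocab_size=128, char_dim=16, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        # Bidirectional LSTM over characters; word_dim // 2 units per direction.
        self.bilstm = nn.LSTM(char_dim, word_dim // 2,
                              batch_first=True, bidirectional=True)

    def forward(self, char_ids):
        # char_ids: (batch, max_word_len) integer character indices.
        x = self.char_emb(char_ids)               # (batch, len, char_dim)
        _, (h, _) = self.bilstm(x)                # h: (2, batch, word_dim // 2)
        # Concatenate the final forward and backward hidden states.
        return torch.cat([h[0], h[1]], dim=-1)    # (batch, word_dim)

# Usage: encode the word "cat" from its character codes; no word vocabulary needed.
word = torch.tensor([[ord(c) for c in "cat"]])    # shape (1, 3)
embedding = CharWordEncoder()(word)               # shape (1, 64)
```

Because the encoder consumes raw characters, even out-of-vocabulary words receive an embedding, and morphologically related words (e.g., "play" and "playing") share parameters through their common character subsequences.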
