Context Vectors are Reflections of Word Vectors in Half the Dimensions

arXiv.org Machine Learning

This paper takes a step towards theoretical analysis of the relationship between word embeddings and context embeddings in models such as word2vec. We start from basic probabilistic assumptions on the nature of word vectors, context vectors, and text generation. These assumptions are well supported either empirically or theoretically by the existing literature. Next, we show that under these assumptions the widely-used word-word PMI matrix is approximately a random symmetric Gaussian ensemble. This, in turn, implies that context vectors are reflections of word vectors in approximately half the dimensions. As a direct application of our result, we suggest a theoretically grounded way of tying weights in the SGNS model.
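To make the weight-tying suggestion concrete, here is a minimal sketch (assuming, purely for illustration, that the reflection flips the sign of a fixed half of the coordinates; the paper derives the precise statement) of parameterizing context vectors directly from word vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim = 10_000, 200          # illustrative sizes, not from the paper

    # One trainable matrix of word vectors.
    W = rng.normal(scale=1.0 / np.sqrt(dim), size=(vocab_size, dim))

    # Reflection in half the dimensions: flip the sign of the last dim // 2 coordinates.
    # (Which half is flipped is an arbitrary choice in this sketch.)
    reflection = np.ones(dim)
    reflection[dim // 2:] = -1.0

    C = W * reflection                      # tied context vectors, no extra parameters

    # An SGNS-style score for a word-context pair is then the dot product <w, c>.
    score = W[42] @ C[7]

With this tying, the context matrix introduces no parameters beyond the word matrix itself.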


Context Vectors are Reflections of Word Vectors in Half the Dimensions

Journal of Artificial Intelligence Research

Journal of Artificial Intelligence Research 66 (2019) 225-242. Submitted 02/2019; published 09/2019. Zhenisbek Assylbekov, Nazarbayev University, Department of Mathematics, Astana, Kazakhstan. Abstract: This paper takes a step towards the theoretical analysis of the relationship between word embeddings and context embeddings in models such as word2vec. We start from basic probabilistic assumptions on the nature of word vectors, context vectors, and text generation. These assumptions are supported either empirically or theoretically by the existing literature. Next, we show that under these assumptions the widely-used word-word PMI matrix is approximately a random symmetric Gaussian ensemble. This, in turn, implies that context vectors are reflections of word vectors in approximately half the dimensions. As a direct application of our result, we suggest a theoretically grounded way of tying weights in the SGNS model. 1. Introduction and Main Result: Today word embeddings play an important role in many natural language processing tasks, from predictive language models and machine translation to image annotation and question answering, where they are usually plugged into a larger model.
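For reference, the word-word PMI matrix the analysis builds on has entries PMI(w, c) = log p(w, c) − log p(w) − log p(c); a minimal estimate from a co-occurrence count matrix (function and variable names here are illustrative, not taken from the paper) might look like:

    import numpy as np

    def pmi_matrix(counts: np.ndarray) -> np.ndarray:
        """PMI from a word-word co-occurrence count matrix (zero counts map to -inf)."""
        total = counts.sum()
        p_wc = counts / total                      # joint probabilities
        p_w = p_wc.sum(axis=1, keepdims=True)      # marginal over contexts
        p_c = p_wc.sum(axis=0, keepdims=True)      # marginal over words
        with np.errstate(divide="ignore"):
            return np.log(p_wc) - np.log(p_w) - np.log(p_c)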


Language Models, Word2Vec, and Efficient Softmax Approximations

@machinelearnbot

The Word2Vec model has become a standard method for representing words as dense vectors. This is typically done as a preprocessing step, after which the learned vectors are fed into a discriminative model (typically an RNN) to perform tasks such as predicting movie review sentiment, translating text, or even generating text character by character. Previously, the bag-of-words model was commonly used to represent words and sentences as numerical vectors, which could then be fed into a classifier (for example Naive Bayes) to produce output predictions. Given a vocabulary of V words and a document of N words, a V-dimensional vector would be created to represent the document, where index i holds the number of times the i-th word in the vocabulary occurred in the document. This model represented words as atomic units, assuming that all words were independent of each other.
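A minimal sketch of such a count vector (the toy vocabulary and document below are invented for illustration):

    from collections import Counter

    vocabulary = ["the", "movie", "was", "great", "terrible"]   # toy vocabulary
    document = "the movie was great , the acting was great".split()

    counts = Counter(document)
    # V-dimensional vector: entry i is how often the i-th vocabulary word occurs.
    bow_vector = [counts[word] for word in vocabulary]
    print(bow_vector)   # [2, 1, 2, 2, 0]

Word2Vec replaces these sparse, independence-assuming counts with dense vectors learned from the contexts in which words appear.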


Multi-View Learning of Word Embeddings via CCA

Neural Information Processing Systems

Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL), which uses a fast spectral method to estimate low-dimensional context-specific word representations from unlabeled data. These representation features can then be used with any supervised learner. LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.
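The paper's fast spectral procedure is specific to LR-MVL, but the underlying multi-view idea can be loosely sketched as a CCA between two context views of each token (scikit-learn's generic CCA below is a stand-in for the authors' solver, and the data is synthetic):

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_tokens, vocab_size, k = 1000, 50, 10     # toy sizes for illustration

    # Two "views" of each token position: one-hot left neighbour and right neighbour.
    left_view = np.eye(vocab_size)[rng.integers(vocab_size, size=n_tokens)]
    right_view = np.eye(vocab_size)[rng.integers(vocab_size, size=n_tokens)]

    # CCA finds k-dimensional projections of the two views with maximal correlation;
    # such projections can serve as low-dimensional word/context features.
    cca = CCA(n_components=k)
    left_proj, right_proj = cca.fit_transform(left_view, right_view)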


An introduction to representation learning

#artificialintelligence

Although many companies today possess massive amounts of data, the vast majority of that data is often unstructured and unlabeled. In fact, the amount of data that is appropriately labeled for a specific business need is typically quite small (possibly even zero), and acquiring new labels is usually a slow, expensive endeavor. As a result, algorithms that can extract features from unlabeled data to improve the performance of data-limited tasks are quite valuable. Most machine learning practitioners are first exposed to feature extraction techniques through unsupervised learning. In unsupervised learning, an algorithm attempts to discover the latent features that describe a data set's "structure" under certain (either explicit or implicit) assumptions.
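A classic instance of such unsupervised feature extraction (chosen here only as an illustration; the article itself is more general) is principal component analysis, whose implicit assumption is that the data's structure lies along directions of maximal variance:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Unlabeled data: 500 samples with 20 raw features (synthetic for illustration).
    X = rng.normal(size=(500, 20))

    # Extract 5 latent features that capture the most variance in X.
    pca = PCA(n_components=5)
    features = pca.fit_transform(X)     # shape (500, 5), usable by any downstream model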