Reviews: On the Dimensionality of Word Embedding

Neural Information Processing Systems 

This work develops a Pairwise Inner Product (PIP) loss, motivated by the unitary invariance of word embeddings. It then theoretically investigates the relationship between word embedding dimensionality and robustness under different singular exponents, relating it to the bias-variance tradeoff. The discovered relationships are used to define a criterion for a word embedding dimensionality selection procedure, which is empirically validated on three intrinsic evaluation tasks. The PIP loss technique is well motivated, clear, and easy to understand; it would be interesting to see it applied in other contexts and to other NLP tasks. The paper is clearly written, well motivated, and its sections follow naturally.
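For concreteness, a minimal NumPy sketch of the PIP loss as described: compare the pairwise inner product matrices of two embeddings in Frobenius norm, which makes the measure invariant to unitary (rotation) transformations of the embedding space. The function name and test matrices are illustrative, not from the paper.

```python
import numpy as np

def pip_loss(E1: np.ndarray, E2: np.ndarray) -> float:
    # PIP matrix of an embedding E is E @ E.T (all pairwise inner products);
    # the PIP loss is the Frobenius norm of the difference of the two PIP matrices.
    return float(np.linalg.norm(E1 @ E1.T - E2 @ E2.T, "fro"))

rng = np.random.default_rng(0)
E = rng.standard_normal((5, 3))          # toy embedding: 5 words, dimension 3
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix

# Unitary invariance: rotating the embedding leaves the PIP loss unchanged,
# since (E @ Q) @ (E @ Q).T == E @ Q @ Q.T @ E.T == E @ E.T.
print(pip_loss(E @ Q, E))  # ~0.0 up to floating-point error
```

The invariance is exactly why comparing PIP matrices, rather than the embedding matrices directly, is a sensible distance between embeddings trained in different runs or at different dimensionalities.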