Factorial LDA: Sparse Multi-Dimensional Text Models

Michael Paul, Mark Dredze

Neural Information Processing Systems

Multi-dimensional latent variable models can capture the many latent factors in a text corpus, such as topic, author perspective, and sentiment. We introduce factorial LDA, a multi-dimensional latent variable model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus. Our modeling improvements reduce test perplexity and improve the human interpretability of the discovered factors.
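To make the product-of-factors idea concrete, here is a minimal illustrative sketch of a factorial generative process: each token draws a K-dimensional tuple of latent components, and its word distribution is a normalized product of per-factor word weights. The vocabulary size, factor sizes, and flat Dirichlet priors are hypothetical choices for illustration, not the paper's exact parameterization or priors.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 6            # vocabulary size (hypothetical)
levels = [3, 2]  # K = 2 factors, e.g. 3 topics x 2 perspectives (hypothetical)

# Per-factor word weights: one positive weight vector per factor component.
factor_weights = [rng.gamma(1.0, 1.0, size=(L, V)) for L in levels]

def word_distribution(tuple_z):
    # Product of the chosen components' word weights, renormalized --
    # an illustrative stand-in for the paper's product of factors.
    w = np.ones(V)
    for k, z in enumerate(tuple_z):
        w *= factor_weights[k][z]
    return w / w.sum()

def generate_document(n_tokens=10):
    # Document-level distribution over all K-dimensional tuples
    # (a flat Dirichlet here, purely for illustration).
    n_tuples = int(np.prod(levels))
    theta = rng.dirichlet(np.ones(n_tuples))
    tokens = []
    for _ in range(n_tokens):
        t = rng.choice(n_tuples, p=theta)     # pick a factor tuple
        tuple_z = np.unravel_index(t, levels) # e.g. (topic, perspective)
        phi = word_distribution(tuple_z)      # token-level word distribution
        tokens.append(rng.choice(V, p=phi))   # emit a word
    return tokens

doc = generate_document()
```

Inference (and the structured sparsity-inducing priors the abstract mentions) is a separate matter; this sketch only shows how a token's distribution can depend on a K-dimensional vector of latent variables rather than a single topic.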