Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems 

The authors present an intriguing take on factor analysis: namely, the use of Ganchev et al.'s posterior regularization to enforce non-negativity constraints on the posterior. They present proofs of convergence and correctness, along with a scalable method for inference and learning in stacked constrained factor analysis models. The method appears sound and the direction is interesting overall: a refreshing departure from much of the existing literature, and the first instance I am aware of in which posterior regularization has been applied in a deep learning/unsupervised feature learning context. While I did not review it in detail, the thoroughness demonstrated by the supplementary material is truly impressive. My main concern with this paper is that the empirical evaluation is somewhat underwhelming relative to the norms of the community.
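For context, the constrained E-step that posterior regularization implies here can be sketched as follows. This is a minimal illustration for a single-layer linear-Gaussian factor model, not the authors' implementation: the function names, the restriction to Gaussians sharing the unconstrained posterior covariance, and the projected-gradient solver are all my own assumptions.

```python
import numpy as np

def fa_posterior(x, W, psi):
    """Exact Gaussian posterior p(z|x) for a linear-Gaussian factor model:
    x = W z + eps, eps ~ N(0, diag(psi)), z ~ N(0, I)."""
    k = W.shape[1]
    prec = np.eye(k) + W.T @ (W / psi[:, None])   # posterior precision
    Sigma = np.linalg.inv(prec)
    mu = Sigma @ (W.T @ (x / psi))
    return mu, Sigma

def constrained_mean(mu, Sigma, n_iter=500):
    """Posterior-regularized E-step over the constraint set
    Q = {Gaussians with covariance Sigma and non-negative mean}.
    Minimizing KL(q || p) over Q reduces to the quadratic program
        min_m 0.5 (m - mu)^T Sigma^{-1} (m - mu)  s.t.  m >= 0,
    solved here by projected gradient descent."""
    P = np.linalg.inv(Sigma)
    step = 1.0 / np.linalg.eigvalsh(P).max()      # safe step size
    m = np.maximum(mu, 0.0)
    for _ in range(n_iter):
        m = np.maximum(m - step * (P @ (m - mu)), 0.0)
    return m
```

With a diagonal posterior covariance the solution is just element-wise clipping of the unconstrained mean; the projected-gradient loop handles the general correlated case.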