Rectified Factor Networks

Djork-Arné Clevert, Andreas Mayr, Thomas Unterthiner, Sepp Hochreiter

Neural Information Processing Systems 

We propose rectified factor networks (RFNs) to efficiently construct very sparse, non-linear, high-dimensional representations of the input. RFN models identify rare and small events in the input, have low interference between code units, achieve a small reconstruction error, and explain the data covariance structure. RFN learning is a generalized alternating-minimization algorithm derived from the posterior regularization method, which enforces non-negative and normalized posterior means.
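To illustrate the flavor of such an alternating-minimization scheme, the sketch below runs an E-step that computes Gaussian-factor-model posterior means and then projects them onto the constraint set (non-negative, normalized per hidden unit), followed by a simplified least-squares M-step for the loadings and noise variances. The variable names (W, Psi, Mu), the projection order, and the M-step update are illustrative assumptions for this sketch, not the paper's exact update equations.

```python
# Minimal NumPy sketch of alternating minimization with rectified,
# normalized posterior means (illustrative, not the paper's exact updates).
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 500, 20, 50                  # samples, visible units, hidden units
V = rng.standard_normal((n, m))
V -= V.mean(axis=0)                    # assume centered data

W = 0.1 * rng.standard_normal((m, l))  # factor loadings
Psi = np.ones(m)                       # diagonal noise variances

for it in range(100):
    # E-step: posterior means of a Gaussian factor model,
    # mu = (W^T Psi^{-1} W + I)^{-1} W^T Psi^{-1} v
    WtPinv = W.T / Psi                            # l x m
    S = np.linalg.inv(WtPinv @ W + np.eye(l))     # posterior covariance
    Mu = V @ (WtPinv.T @ S.T)                     # n x l posterior means

    # Projection enforcing the posterior constraints: rectify the means
    # to be non-negative, then rescale each hidden unit to unit second moment.
    Mu = np.maximum(Mu, 0.0)
    scale = np.sqrt((Mu ** 2).mean(axis=0)) + 1e-8
    Mu /= scale

    # M-step: least-squares update of W and the diagonal noise Psi
    # (a simplified stand-in for the generalized M-step).
    W = np.linalg.lstsq(Mu, V, rcond=None)[0].T   # m x l
    R = V - Mu @ W.T
    Psi = (R ** 2).mean(axis=0) + 1e-8

print("fraction of zero code units:", (Mu == 0).mean())
```

Under these assumptions, the rectification step yields the sparse, non-negative codes and the per-unit rescaling keeps all hidden units on a comparable scale, which is the role the abstract ascribes to the non-negativity and normalization constraints on the posterior means.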