Reviews: Parsimonious Bayesian deep networks

Neural Information Processing Systems 

The paper introduces a new type of (deep) neural network for binary classification. Each layer is in principle infinitely wide, but in practice a finite number of units is used. The layers are trained greedily: one layer is trained to convergence, then the next layer is trained on top of it, and so on. The main claim is that the proposed model achieves results comparable to alternative approaches while using fewer hyperplanes, which yields faster out-of-sample predictions. The approach seems somewhat novel, and the results support the claim to some extent.
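The greedy layer-wise scheme described above can be sketched roughly as follows. This is a toy illustration only, not the paper's actual model: the tanh nonlinearity, the fixed random hidden weights, the logistic readout, and the layer widths are all assumptions made for the sketch; the paper's layers are in principle infinitely wide and Bayesian.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_layer(H, y, width, steps=500, lr=0.5):
    """Fit one layer on a fixed input representation H.

    Hypothetical stand-in for a layer: random hidden weights W plus a
    logistic readout v trained by gradient descent on cross-entropy.
    """
    n, d = H.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, width))  # hidden weights (fixed)
    v = np.zeros(width)                                      # readout weights (trained)
    for _ in range(steps):
        A = np.tanh(H @ W)             # layer activations
        p = sigmoid(A @ v)             # predicted class-1 probabilities
        v -= lr * (A.T @ (p - y) / n)  # gradient step on the readout only
    return W, v

# toy binary classification data
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# greedy layer-wise training: each new layer is fit on the frozen
# representation produced by all previously trained layers
H = X
layers = []
for width in (8, 8):
    W, v = train_layer(H, y, width)
    layers.append((W, v))
    H = np.tanh(H @ W)  # freeze this layer and pass its output forward

# predictions come from the last layer's readout
p = sigmoid(H @ layers[-1][1])
accuracy = float(((p > 0.5) == y).mean())
```

The key point the sketch captures is that no layer's parameters are revisited once the next layer begins training, which is what distinguishes this sequential scheme from end-to-end backpropagation.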