Reviews: Random deep neural networks are biased towards simple functions

Neural Information Processing Systems 

Summary: Under the assumption that the inputs of DNNs are binary, the paper's theoretical result suggests that random DNNs produce functions with low Kolmogorov complexity, which is useful for studying generalization bounds of DNNs. Some experiments on random data and on tiny networks on MNIST (in the supplement) are presented, empirically verifying the bounds.

I tend to weakly reject this paper due to the weakness of the empirical and theoretical results, and due to the organization of the paper (why are the MNIST results not in the main text?). Given result 2), I would strongly suggest that the authors think about what might be possible in an adversarial setting.
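For concreteness, the kind of simplicity bias claimed here can be probed with a small experiment (a minimal sketch, not the authors' code): evaluate randomly initialized ReLU networks on all binary inputs and use zlib-compressed length of the resulting 0/1 output string as a crude, computable upper-bound proxy for Kolmogorov complexity. The architecture and sample sizes below are illustrative assumptions.

```python
import zlib
from itertools import product

import numpy as np

rng = np.random.default_rng(0)

def random_mlp_output(n_inputs=7, width=40, depth=2):
    """Threshold a randomly initialized bias-free ReLU MLP on every
    binary input, returning the function as a 0/1 string of length 2^n."""
    xs = np.array(list(product([0.0, 1.0], repeat=n_inputs)))
    h = xs
    for _ in range(depth):
        W = rng.normal(0.0, 1.0 / np.sqrt(h.shape[1]), (h.shape[1], width))
        h = np.maximum(h @ W, 0.0)  # ReLU hidden layer
    w_out = rng.normal(0.0, 1.0 / np.sqrt(width), width)
    return "".join(str(int(v > 0)) for v in h @ w_out)

def lz_complexity(bits):
    """Compressed length in bytes: a rough proxy for Kolmogorov complexity."""
    return len(zlib.compress(bits.encode()))

# Compare random networks against uniformly random Boolean strings.
net_c = [lz_complexity(random_mlp_output()) for _ in range(30)]
rand_c = [lz_complexity("".join(rng.choice(["0", "1"], 128))) for _ in range(30)]
print(np.mean(net_c), np.mean(rand_c))
```

If the bias holds, the network-induced strings should compress substantially better on average than the uniform baseline; compressed length is only an upper bound on true Kolmogorov complexity, which is uncomputable.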