Review for NeurIPS paper: Neural Sparse Representation for Image Restoration
Weaknesses: The paper is rather weak on the theoretical side of sparsity and on the existing literature. The introduction claims that "sparsity of hidden representation in deep neural networks cannot be solved by iterative optimization as sparse coding". I do not understand this claim, since algorithms such as LISTA do compute sparse codes with a few layers of a deep network. The fact that sparsity is needed for denoising, compression, or inverse problems is well understood independently of neural networks, and results from work carried out by many researchers, such as Donoho, between 1995 and 2005. I do not understand why the authors say that such sparsity cannot be implemented, given that a ReLU is the proximal operator of a nonnegative l1 sparse code, that many algorithms implement sparse coding with such architectures, and that such architectures with ReLUs achieve very good performance on denoising and inverse problems, as shown by "Convolutional Neural Networks for Inverse Problems in Imaging: A Review" (2017) and much subsequent work.
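To make the point concrete, here is a minimal sketch (my own illustration, not code from the paper) of why a ReLU is the proximal operator of a nonnegative l1 penalty: argmin_{x>=0} 0.5*||x - v||^2 + lam*sum(x) = ReLU(v - lam). Running ISTA with this prox, i.e. the iteration that LISTA unrolls into network layers, produces sparse nonnegative codes. The matrix sizes, step size, and sparsity level are arbitrary choices for the demo.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def prox_nonneg_l1(v, lam):
    # Proximal operator of lam*||x||_1 restricted to x >= 0:
    # a ReLU shifted by the threshold lam.
    return relu(v - lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # dictionary (arbitrary sizes for the demo)
x_true = relu(rng.standard_normal(50)) * (rng.random(50) < 0.1)  # sparse nonneg code
y = A @ x_true                             # observed signal

lam = 0.1
eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the Lipschitz constant
x = np.zeros(50)
for _ in range(200):
    # One ISTA step: gradient step on the data term, then the ReLU prox.
    # LISTA replaces A.T and eta by learned weights and unrolls a few such layers.
    x = prox_nonneg_l1(x - eta * A.T @ (A @ x - y), eta * lam)

print(np.count_nonzero(x), "nonzero coefficients out of 50")
```

The iterate stays nonnegative and sparse, which is exactly the hidden-representation sparsity that a ReLU network can express.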