Review for NeurIPS paper: Implicit Rank-Minimizing Autoencoder
Neural Information Processing Systems
This work received mixed reviews: R1 praised the potential impact of such a simple idea being shown to work remarkably well, but the other reviewers had significant concerns about the empirical evaluation, which is especially important when the main contribution of a paper is to show that an idea is effective in practice. The reviewers were ultimately unable to reach a consensus, but all agreed that the core idea is promising, and R2, R3, and R4 raised their scores in light of the discussion and the author feedback. While the resulting scores still make this a difficult decision overall, I have chosen to recommend acceptance.

The main point of discussion was whether the required changes to the manuscript warrant another review cycle. Indeed, the requested changes were quite broad:
- demonstrate the effect of the initial variance of the linear layers
- compare the model against modern autoencoder variants
- compare against vanilla autoencoders with varying latent dimension
- demonstrate the effect of the number of linear layers
- avoid overclaiming, e.g. about the proposed model working well "with all types of optimizers"
- etc.
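For context on the ablations requested above: as the review's references to "the initial variance of the linear layers" and "the number of linear layers" suggest, the paper's mechanism inserts a chain of extra linear layers at the autoencoder bottleneck. A minimal NumPy sketch of that structure (dimensions, variable names, and initialization scale are all hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 32   # hypothetical latent size
n_linear = 4      # number of extra linear layers (one requested ablation axis)
init_std = 0.1    # initial std of the linear layers (another requested ablation axis)

# The extra layers are a chain of square weight matrices applied to the latent code;
# training this chain with gradient descent is what implicitly penalizes latent rank.
weights = [rng.normal(0.0, init_std, (latent_dim, latent_dim))
           for _ in range(n_linear)]

def bottleneck(z):
    """Pass the encoder output z through the chain of linear layers."""
    for W in weights:
        z = z @ W
    return z

z = rng.normal(size=(8, latent_dim))  # a batch of 8 hypothetical latent codes
out = bottleneck(z)
```

This sketch only shows the architectural shape of the bottleneck; demonstrating the rank-minimizing effect would require training, which is exactly what the requested ablations probe.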
Jan-27-2025, 10:20:17 GMT