Review for NeurIPS paper: Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder


The key idea of this paper is to check how well a given VAE can be further trained on a given test input. The hope is that fine-tuning the encoder for additional iterations increases the likelihood of OOD samples more than that of inliers, which can then be exploited for detection. The authors quantify this improvement with a measure they coin likelihood regret. The authors provide no analysis of why this method should work, nor do they characterize the conditions under which it might fail. This is not a requirement per se, but the paper should then provide enough empirical evidence that the approach is noteworthy.
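The mechanism described above can be illustrated in a toy setting: refit a density model's parameters to a single test point and measure the resulting gain in log-likelihood. Below is a minimal sketch with a one-dimensional Gaussian standing in for the VAE; here the per-sample refit has a closed form (set the mean to the test point), whereas the paper re-optimizes the encoder by gradient descent. All names and the Gaussian setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data: in-distribution samples from N(0, 1),
# standing in for the VAE's training set.
train = rng.normal(0.0, 1.0, size=5000)
mu, sigma = train.mean(), train.std()

def log_likelihood(x, mu, sigma):
    """Log-density of x under a Gaussian with parameters (mu, sigma)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def likelihood_regret(x, mu, sigma):
    """Gain in log-likelihood from refitting the model to the single test
    point x. For a Gaussian with fixed variance, the per-sample optimum
    is mu* = x, so the refit step is exact rather than iterative."""
    refit = log_likelihood(x, x, sigma)       # best achievable for x alone
    original = log_likelihood(x, mu, sigma)   # likelihood under trained model
    return refit - original

inlier = 0.1    # typical in-distribution point
outlier = 6.0   # point far from the training distribution

# The outlier gains much more from per-sample refitting than the inlier,
# which is the signal likelihood regret is meant to capture.
print(likelihood_regret(inlier, mu, sigma))
print(likelihood_regret(outlier, mu, sigma))
```

In this simplified case the regret reduces to (x − μ)² / (2σ²), so it grows with distance from the training distribution, which is the intuition behind using the score for OOD detection.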