Review for NeurIPS paper: Reciprocal Adversarial Learning via Characteristic Functions
–Neural Information Processing Systems
Weaknesses:

My primary concern is that the paper seems to propose two ideas: 1) measuring the distance between distributions as the expected squared difference between empirical characteristic functions (CFs) evaluated at points sampled from an adversarially learned distribution T; 2) the reciprocal training of adversarial autoencoders, i.e., adversarially aligning the embeddings of X and Y while forcing these embeddings to follow a Gaussian distribution and minimizing the reconstruction loss. I wonder whether the impact of these two design choices can be evaluated independently: 1) how does direct minimization of C_T(X, g(Z)) w.r.t. g perform compared to the model with a dedicated encoder/critic? 2) what happens if C_T in Algorithm 1 is replaced with MMD, the Sliced Wasserstein Distance, or another statistical distance (note, moreover, that the distance to a Gaussian can often be estimated in closed form)? Does Lemma 4 hold for other statistical distances?

There are also some things I must have misunderstood. In general, the authors discuss possible interpretations of the phase and amplitude components of CFs in great detail, but cram much of the content critical to a proper understanding of the final model into the first half of page 6. For example, lines 214-215 state: "we further re-design the critic loss by finding an anchor as C(f(Y), Z) − C(f(X), Z)" — it is still not clear to me which "anchors" the authors are referring to.
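For concreteness, here is a minimal sketch of the CF-based distance as I understand idea 1): the expected squared modulus of the difference between the two empirical CFs, evaluated at frequency points t. The function names and the fixed Gaussian sampler standing in for the learned distribution T are my own illustration, not the authors' implementation.

```python
import numpy as np

def empirical_cf(x, t):
    """Empirical characteristic function phi_X(t) = E[exp(i<t, X>)].

    x: (n, d) samples; t: (m, d) frequency points.
    Returns an (m,) complex vector of sample-mean estimates.
    """
    proj = x @ t.T                       # (n, m) inner products <t_j, x_i>
    return np.exp(1j * proj).mean(axis=0)

def cf_distance(x, y, t):
    """Mean squared CF difference over the frequency points t ~ T."""
    return np.mean(np.abs(empirical_cf(x, t) - empirical_cf(y, t)) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(2000, 2))   # samples from P
y = rng.normal(0.0, 1.0, size=(2000, 2))   # independent samples from P
z = rng.normal(3.0, 1.0, size=(2000, 2))   # samples from a shifted Q
t = rng.normal(size=(64, 2))               # stand-in for the learned sampler T

# Same-distribution distance should be near zero; the shifted Gaussian larger.
print(cf_distance(x, y, t) < cf_distance(x, z, t))  # expected: True
```

Replacing `cf_distance` here with an MMD or Sliced Wasserstein estimate is exactly the kind of drop-in ablation I am asking about in point 2).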
Jan-21-2025, 03:23:19 GMT