Review for NeurIPS paper: Meta-learning from Tasks with Heterogeneous Attribute Spaces
–Neural Information Processing Systems
Weaknesses: (i) Missing references and comparisons: a comparison with (and citations of) CNAPs [1] and self-attention-based approaches [2] should be included. Self-attention is itself permutation-invariant (unless positional encodings are used). In a sense, self-attention "generalises" the summation operation, as it performs a weighted summation over the value vectors; by setting all keys and queries to 1.0, the attention weights become uniform and one effectively recovers the Deep Sets architecture. I also feel a comparison with Prototypical Networks [3] is needed for the few-shot classification setting, given its close resemblance to the way the latent attribute vectors are calculated. Finally, showing results only on synthetic data and OpenML data (both relatively easy) is not that compelling.
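The reduction claimed above can be checked directly. A minimal numpy sketch (not from the paper under review; a generic scaled dot-product attention is assumed): with all query and key entries set to 1.0, every attention score is identical, the softmax weights become uniform, and attention collapses to mean pooling of the values, i.e. the normalized sum pooling of Deep Sets.

```python
import numpy as np

def attention(Q, K, V):
    # Standard scaled dot-product attention over a set of n items.
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # softmax over the set dimension
    return w @ V

rng = np.random.default_rng(0)
n, d = 5, 4
V = rng.normal(size=(n, d))             # value vectors for a set of n items

# All-ones queries and keys -> every score equals sqrt(d), so the softmax
# weights are uniform (1/n) and attention degenerates to mean pooling of
# the values, matching the (normalized) sum pooling used in Deep Sets.
ones = np.ones((n, d))
out = attention(ones, ones, V)
deep_sets_pool = V.mean(axis=0)
print(np.allclose(out, np.tile(deep_sets_pool, (n, 1))))  # True
```

Note this recovers mean pooling rather than raw sum pooling; the two differ only by the constant factor 1/n, which a subsequent linear layer absorbs.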
May-29-2025, 03:58:02 GMT