Regularizing Towards Permutation Invariance in Recurrent Models
Neural Information Processing Systems
In many machine learning problems the output should not depend on the order of the input. Such "permutation invariant" functions have been studied extensively in recent years. Here we argue that temporal architectures such as RNNs are highly relevant for such problems, despite the inherent dependence of RNNs on input order. We show that RNNs can be regularized towards permutation invariance, and that this can result in more compact models than non-recurrent architectures. We implement this idea via a novel form of stochastic regularization. Existing solutions mostly restrict the learning problem to hypothesis classes that are permutation invariant by design [Zaheer et al., 2017, Lee et al., 2019, Murphy et al., 2018]. Our approach of enforcing permutation invariance via regularization instead gives rise to models which are semi permutation invariant (e.g. invariant to some permutations and not to others).
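
As a rough illustration of the idea (not the paper's exact regularizer), one simple form of stochastic regularization towards permutation invariance is to sample a random reordering of each training batch's sequences and penalize the distance between the RNN's final hidden states on the two orderings. The sketch below assumes a GRU encoder; the names `PermutationRegularizedRNN` and `permutation_penalty` are hypothetical.

```python
import torch
import torch.nn as nn


class PermutationRegularizedRNN(nn.Module):
    """GRU encoder whose training loss can include an order-sensitivity penalty."""

    def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) -> final hidden state (batch, hidden_dim)
        _, h_n = self.rnn(x)
        return h_n.squeeze(0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encode(x))


def permutation_penalty(model: PermutationRegularizedRNN,
                        x: torch.Tensor) -> torch.Tensor:
    """Stochastic invariance penalty: sample one random permutation of the
    time axis and penalize the gap between the two final hidden states."""
    perm = torch.randperm(x.size(1), device=x.device)
    h_orig = model.encode(x)
    h_perm = model.encode(x[:, perm, :])
    return ((h_orig - h_perm) ** 2).sum(dim=1).mean()


# Usage sketch: total loss = task loss + lambda * invariance penalty.
# The target here (sum of all inputs) is itself permutation invariant.
model = PermutationRegularizedRNN(input_dim=8, hidden_dim=32, output_dim=1)
x = torch.randn(16, 10, 8)                 # batch of 16 sequences, length 10
y = x.sum(dim=(1, 2)).unsqueeze(1)
lam = 1.0                                  # regularization strength (assumed)
loss = nn.functional.mse_loss(model(x), y) + lam * permutation_penalty(model, x)
loss.backward()
```

Sampling a fresh permutation per batch keeps the penalty cheap (one extra forward pass) while, in expectation, discouraging sensitivity to all orderings; tuning `lam` trades off task fit against how strictly invariance is enforced.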