Probabilistic Invariant Learning with Randomized Linear Classifiers
Neural Information Processing Systems
Designing models that are both expressive and preserve the known invariances of a task is an increasingly hard problem. In this work, we show how to leverage randomness to design models that are both expressive and invariant while using fewer resources. Inspired by randomized algorithms, our key insight is that accepting probabilistic notions of universal approximation and invariance can reduce our resource requirements. More specifically, we propose a class of binary classification models called Randomized Linear Classifiers (RLCs). We give parameter and sample-size conditions under which RLCs can, with high probability, approximate any (smooth) function while preserving invariance to compact group transformations.
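To make the idea concrete, below is a minimal, hypothetical sketch of a randomized linear classifier: the learned parameters define a distribution over linear classifiers, and each prediction is made by sampling one classifier and applying it, so guarantees hold only with high probability. The Gaussian parameterization, the class name, and the majority-vote helper are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class RandomizedLinearClassifier:
    """Illustrative RLC sketch: parameters specify a distribution
    over linear classifiers (here a diagonal Gaussian, an assumed
    choice); inference samples a classifier and applies it."""

    def __init__(self, mean, log_std, rng=None):
        self.mean = np.asarray(mean, dtype=float)        # mean weights; last entry is the bias
        self.log_std = np.asarray(log_std, dtype=float)  # per-coordinate log standard deviation
        self.rng = rng or np.random.default_rng(0)

    def predict(self, x):
        # Sample one linear classifier w ~ N(mean, diag(std^2)).
        w = self.rng.normal(self.mean, np.exp(self.log_std))
        # Append 1 to the input to absorb the bias term.
        z = np.append(np.asarray(x, dtype=float), 1.0)
        return 1 if w @ z >= 0 else 0

    def predict_majority(self, x, n_samples=101):
        # Majority vote over independent samples amplifies the
        # probability that the randomized prediction is correct.
        votes = sum(self.predict(x) for _ in range(n_samples))
        return 1 if 2 * votes > n_samples else 0
```

With a near-deterministic distribution (very negative `log_std`), the sampled classifier stays close to its mean, so predictions match the underlying linear rule almost surely; widening the distribution trades per-query certainty for the probabilistic guarantees the abstract describes.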