Approximation-Generalization Trade-offs under (Approximate) Group Equivariance
Neural Information Processing Systems
The explicit incorporation of task-specific inductive biases through symmetry has emerged as a general design precept in the development of high-performance machine learning models. For example, group equivariant neural networks have demonstrated impressive performance across various domains and applications, such as protein and drug design. A prevalent intuition about such models is that integrating the relevant symmetry leads to enhanced generalization. Moreover, it is posited that when the data and/or the model exhibit only approximate or partial symmetry, the best-performing model is one whose symmetry aligns with the symmetry of the data. In this paper, we conduct a formal, unified investigation of these intuitions.
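The notion of group equivariance discussed above can be made concrete with a small, self-contained sketch (not taken from the paper): a circular 1-D convolution is equivariant to the cyclic shift group, meaning that shifting the input and then applying the layer gives the same result as applying the layer and then shifting the output. The function names below (`group_conv`, `shift`) are illustrative, not from any specific library.

```python
import numpy as np

def group_conv(x, w):
    """Circular 1-D cross-correlation; equivariant to cyclic shifts of x."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -i)[:len(w)], w) for i in range(n)])

def shift(x, s):
    """Action of the cyclic group C_n on signals: cyclic shift by s positions."""
    return np.roll(x, s)

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # input signal
w = rng.normal(size=3)   # filter weights

# Equivariance check: act on the input, then convolve ...
lhs = group_conv(shift(x, 2), w)
# ... versus convolve, then act on the output.
rhs = shift(group_conv(x, w), 2)
assert np.allclose(lhs, rhs)
```

When the data are only approximately shift-symmetric (e.g., signals with a distinguished boundary), strictly enforcing this constraint may be too rigid, which is exactly the trade-off the paper formalizes.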