On the Utility of Equivariance and Symmetry Breaking in Deep Learning Architectures on Point Clouds

Sharvaree Vadgama, Mohammad Mohaiminul Islam, Domas Buracus, Christian Shewmake, Erik Bekkers

arXiv.org Artificial Intelligence 

This paper explores the key factors that influence the performance of models working with point clouds, across tasks of varying geometric complexity. In this work, we examine the trade-off between flexibility and weight sharing introduced by equivariant layers, assessing when equivariance boosts performance and when it detracts from it. It is often argued that providing more information as input improves a model's performance. However, if this additional information breaks certain properties, such as SE(3) equivariance, does it remain beneficial? We identify the key aspects of equivariant and non-equivariant architectures that drive success in different tasks by benchmarking them on segmentation, regression, and generation tasks across multiple datasets of increasing complexity. We observe a positive impact of equivariance, which becomes more pronounced with increasing task complexity, even when strict equivariance is not required.

The inductive bias of weight sharing in convolutions, as introduced in LeCun et al. (2010), traditionally refers to applying the same convolution kernel (a linear transformation) across all neighborhoods of an image. To extend this beyond translations, Cohen & Welling (2016) introduced Group Equivariant CNNs (G-CNNs), which generalize weight sharing to group actions by sharing weights across group-transformed convolution kernels. G-CNN layers are explicitly designed to be equivariant under group transformations, allowing the model to handle such transformations naturally without having to learn invariance to changes that preserve object identity.
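To make the weight-sharing idea concrete, the sketch below implements a minimal lifting group convolution over the four planar 90-degree rotations (the cyclic group C4), in the spirit of Cohen & Welling (2016). This is an illustrative NumPy/SciPy example under simplifying assumptions (a single-channel image and discrete rotations rather than the SE(3) point-cloud setting studied in the paper), and it is not the authors' code: the function name c4_lifting_conv and all variables are hypothetical. One kernel is shared across all four rotated copies, and rotating the input rotates each output feature map spatially and cyclically shifts the group axis, which is exactly the equivariance property described above.

```python
# Minimal sketch of a C4 lifting group convolution (in the spirit of
# Cohen & Welling, 2016). Illustrative only -- not the paper's code.
import numpy as np
from scipy.signal import correlate2d

def c4_lifting_conv(image, kernel):
    """Correlate `image` with all four 90-degree rotations of `kernel`.

    A single set of weights (`kernel`) is shared across the group: the output
    gains a "group" axis of size 4, one feature map per rotated kernel copy.
    """
    return np.stack(
        [correlate2d(image, np.rot90(kernel, i), mode="valid") for i in range(4)]
    )

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # toy single-channel "image"
k = rng.standard_normal((3, 3))   # one shared kernel

y = c4_lifting_conv(x, k)                  # features of the original input
y_rot = c4_lifting_conv(np.rot90(x), k)    # features of the rotated input

# Equivariance check: rotating the input rotates each feature map spatially
# and cyclically shifts the group axis -- no invariance has to be *learned*.
for g in range(4):
    assert np.allclose(y_rot[g], np.rot90(y[(g - 1) % 4]))
print("C4 lifting convolution is rotation-equivariant on this example.")
```

For the point clouds considered in the paper, the analogous property is SE(3) equivariance: rotating and translating the input points yields correspondingly transformed features, rather than requiring the network to learn that behavior from data.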
