VC dimensions of group convolutional neural networks

Petersen, Philipp Christian, Sepliarskaia, Anna

arXiv.org Artificial Intelligence 

Due to their impressive results in image recognition, convolutional neural networks (CNNs) have become one of the most widely used neural network architectures [12, 13]. One of the main reasons for the efficiency of CNNs is believed to be their ability to turn the translation symmetry of the data into a built-in translation-equivariance property of the network, so that the equivariance does not have to be learned from the data itself [4, 15]. Based on this intuition, other data symmetries have recently been incorporated into neural network architectures. Group convolutional neural networks (G-CNNs) are a natural generalization of CNNs that can be made equivariant with respect to rotation [5, 24, 23, 9], scale [21, 20, 1], and other symmetries defined by matrix groups [7]. Moreover, every neural network that is equivariant to the action of a group on its input is a G-CNN in which the convolutions are taken with respect to that group [11] (see Theorem 2.10 below).
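The translation-equivariance property mentioned above can be illustrated with a minimal numerical sketch (not taken from the paper; the periodic boundary condition and the specific kernel are illustrative assumptions): for a circular convolution, applying the convolution to a shifted input gives the same result as shifting the convolved output.

```python
import numpy as np

def circ_conv(x, k):
    """Circular (periodic) convolution of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def shift(v, s):
    """Cyclic translation of v by s positions (the group action of Z_n)."""
    return np.roll(v, s)

x = np.random.default_rng(0).normal(size=8)  # illustrative signal
k = np.array([1.0, -2.0, 0.5])               # illustrative kernel

# Equivariance: convolving a shifted input equals shifting the convolved output.
lhs = circ_conv(shift(x, 3), k)
rhs = shift(circ_conv(x, k), 3)
assert np.allclose(lhs, rhs)
```

G-CNNs generalize this identity by replacing the cyclic shifts with the action of a larger matrix group (e.g., rotations or scalings) and defining the convolution over that group.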
