TARGET-VAE
Supplementary Material
To study the accuracy of the rotation angles predicted by TARGET-VAE, we calculate the mean standard deviation of the predicted rotations, introduced in [1]. This metric measures the mean squared error between the rotation of the object in the input image and the predicted rotation for that object. We find that the model correctly identifies and reconstructs the objects (Figure 3). Each shape is rotated by one of 40 values linearly spaced in [0, 2π], translated across both the x and y dimensions, and scaled using one of six values linearly spaced in [0.5, 1]. We observe that, as expected, eliminating inference on the discretized rotation dimension has a significant negative effect on identifying transformation-invariant representations, and the clustering accuracy on MNIST(U) is only 33.8% (Table 2).
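The transformation grid described above (40 rotations in [0, 2π], six scales in [0.5, 1], plus translations) can be sketched as follows. This is a minimal illustration, not the paper's code: the translation bound `max_shift`, the uniform sampling, the dropped duplicate endpoint at 2π, and the wrap-around angular-error form are all assumptions; the paper's actual metric follows [1].

```python
import numpy as np

# Assumed grids: 40 rotations linearly spaced over [0, 2*pi) (endpoint dropped,
# since a rotation by 0 and by 2*pi coincide) and 6 scales over [0.5, 1].
ROTATIONS = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
SCALES = np.linspace(0.5, 1.0, 6)

def sample_transform(rng, max_shift=10):
    """Draw one (rotation, scale, tx, ty) tuple; max_shift is an assumed bound."""
    theta = rng.choice(ROTATIONS)
    scale = rng.choice(SCALES)
    tx, ty = rng.integers(-max_shift, max_shift + 1, size=2)
    return theta, scale, tx, ty

def angular_error(theta_pred, theta_true):
    """Smallest absolute difference between two angles, respecting 2*pi wrap-around.

    An assumed form of the per-object rotation-error term; errors like this
    would be aggregated across images to score predicted rotations.
    """
    d = np.abs(theta_pred - theta_true) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

rng = np.random.default_rng(0)
theta, scale, tx, ty = sample_transform(rng)
# Angles 0.1 and 2*pi - 0.1 are only 0.2 radians apart once wrap-around is respected.
print(angular_error(0.1, 2.0 * np.pi - 0.1))
```

The wrap-around in `angular_error` matters because a naive squared difference would heavily penalize a prediction of 2π − ε against a ground truth of ε, even though the two rotations are nearly identical.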
Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE
In many imaging modalities, objects of interest can occur in a variety of locations and poses (i.e. are subject to translations and rotations in 2d or 3d), but the location and pose of an object does not change its semantics (i.e. the object's essence). That is, the specific location and rotation of an airplane in satellite imagery, or the 3d rotation of a chair in a natural image, or the rotation of a particle in a cryo-electron micrograph, do not change the intrinsic nature of those objects. Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner. We address shortcomings in previous approaches to this problem by introducing TARGET-VAE, a translation and rotation group-equivariant variational autoencoder framework. In comprehensive experiments, we show that TARGET-VAE learns disentangled representations without supervision that significantly improve upon, and avoid the pathologies of, previous methods.