
Neural Information Processing Systems 

In what follows, we refer to each experiment by its corresponding figure or table number for convenience. For the rotated/shifted MNIST images (Figures 8 and 9), we use the Affine transformation function in the TorchVision library. In the experiments of Tables 2, 3, 4, and 5, we use either or both of the Large (L) and Small (S) datasets for the standard benchmark vision data: MNIST, FMNIST, KMNIST, Omniglot, SVHN, CIFAR10, CIFAR100, and CELEBA. For Figure 10 and Table 3, the regularization coefficients for CAE and WAE are searched around 0.01 and 0.001, the noise level used in DAE is searched around 0.1 and 0.01, and the regularization coefficient and λ for SPAE and NRAE are searched around 0.001. On the other hand, the runtimes of our algorithms are comparable with those of other existing methods.
