Glow: Generative Flow with Invertible 1x1 Convolutions

Neural Information Processing Systems

Flow-based generative models are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood and qualitative sample quality. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient synthesis of large and subjectively realistic-looking images.
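
The core operation named in the abstract can be sketched in a few lines. Below is a minimal NumPy illustration (not the authors' implementation; the function names, toy shapes, and the orthogonal/random-rotation initialization are used only for demonstration) of a single invertible 1x1 convolution step: it mixes channels at every spatial position with a square matrix W, contributes H x W x log|det(W)| to the log-likelihood under the change-of-variables formula, and is exactly inverted by applying W^-1.

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Apply a channel-mixing 1x1 convolution to x of shape (H, W_spatial, C).

    Returns the transformed tensor and the log-determinant term that the
    change-of-variables formula adds to the log-likelihood:
        H * W_spatial * log|det(W)|
    """
    h, w, c = x.shape
    z = x @ W.T                       # mixes channels at every spatial position
    logdet = h * w * np.log(np.abs(np.linalg.det(W)))
    return z, logdet

def invertible_1x1_conv_inverse(z, W):
    """Exactly invert the 1x1 convolution by applying W^-1 channel-wise."""
    return z @ np.linalg.inv(W).T

# Toy usage: an orthogonal matrix (random rotation) is trivially invertible.
rng = np.random.default_rng(0)
C = 4
W, _ = np.linalg.qr(rng.normal(size=(C, C)))   # orthogonal => |det(W)| = 1
x = rng.normal(size=(8, 8, C))
z, logdet = invertible_1x1_conv(x, W)
x_rec = invertible_1x1_conv_inverse(z, W)
assert np.allclose(x, x_rec)
```

In a full flow model, W is a learned parameter per layer and the logdet terms from all layers are summed into the exact log-likelihood objective; this sketch only shows the single-layer mechanics.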


Why do cats' eyes glow in the dark?

Popular Science

Why do cats' eyes glow in the dark? That eerie glow is actually a pair of built-in night-vision goggles. Cat eyes have even inspired some life-saving tech. One foggy night in 1933, a businessman named Percy Shaw was driving home from the pub in Yorkshire, England.



MaCow: Masked Convolutional Generative Flow

Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy

Neural Information Processing Systems

Unsupervised learning of probabilistic models is a central yet challenging problem. Deep generative models have shown promising results in modeling complex distributions such as natural images (Radford et al., 2015), audio (Van Den Oord et al., 2016), and text (Bowman et al., 2015).





Deer markings actually glow

Popular Science

The scrapes and rubs the mammals leave behind shine under UV light humans can't see. Animals see the world around them in ways that we humans can only imagine. Arctic reindeer's eyes change color with the season to help them find food, while giant squid have eyes the size of dinner plates. Many species take advantage of seeing ultraviolet (UV) light that's invisible to humans, including deer.



Keunseo Kim

Neural Information Processing Systems

Supplementary Material for the paper entitled "Locally Most Powerful Bayesian Test for Out-of-Distribution Detection Using Deep Generative Models". We present the implementation details for the VAE and Glow used in Section 5. First, the structure of the Glow is presented in Table 1, where "Level" refers to the number of scales that split the dimension of the latent space in the multi-scale architecture (Dinh et al., 2016), "Depth per level" refers to the number of flow layers repeated at each scale, "In-channels / hidden units" refers to the number of input channels and the number of hidden units of the neural network that defines the parameters of the flow layers, and "Coupling" refers to the type of coupling layer. For training, we used the Adam optimizer with a learning rate of 0.001 and a batch size of 64 for both datasets. Next, Table 2 presents the structure of the VAE, which is the same as that used in Xiao et al. (2020). A batch normalization layer (BN) and a ReLU activation layer were added after each convolutional layer. For training the VAE, we used the Adam optimizer with a learning rate of 0.0005, weight decay of 0.00003, and batch size of 64 for both datasets.
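
The training setup described above can be restated as a short PyTorch sketch. The conv_bn_relu helper, the channel widths, and the placeholder Glow module below are assumptions for illustration only; the optimizer hyperparameters (Adam, the two learning rates, the weight decay, and the batch size) follow the quoted text.

```python
import torch
import torch.nn as nn

# Hypothetical conv block matching the described pattern: each convolutional
# layer is followed by batch normalization (BN) and a ReLU activation.
def conv_bn_relu(in_ch, out_ch, kernel_size=3, stride=1, padding=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Illustrative VAE encoder stub; channel widths are assumptions, not from Table 2.
vae_encoder = nn.Sequential(
    conv_bn_relu(3, 32),
    conv_bn_relu(32, 64, stride=2),
    conv_bn_relu(64, 128, stride=2),
)

# Optimizers with the hyperparameters quoted in the supplementary text.
glow_model = nn.Linear(8, 8)  # placeholder standing in for the actual Glow network
glow_opt = torch.optim.Adam(glow_model.parameters(), lr=1e-3)

vae_opt = torch.optim.Adam(vae_encoder.parameters(), lr=5e-4, weight_decay=3e-5)

batch_size = 64  # used for both datasets, per the supplementary material
```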