Learning Disentangled Joint Continuous and Discrete Representations

Neural Information Processing Systems

We present a framework for learning disentangled and interpretable joint continuous and discrete representations in an unsupervised manner. By augmenting the continuous latent distribution of variational autoencoders with a relaxed discrete distribution and controlling the amount of information encoded in each latent unit, we show how continuous and categorical factors of variation can be discovered automatically from data. Experiments show that the framework disentangles continuous and discrete generative factors on various datasets and outperforms current disentangling methods when a discrete generative factor is prominent.
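To make the mechanism concrete, here is a minimal sketch of the core idea, assuming PyTorch; the layer sizes and the names JointEncoder and capacity_loss are invented for illustration, not taken from the paper's code. A Gaussian continuous latent is combined with a Gumbel-Softmax (relaxed categorical) latent, and each group's KL term is pushed toward a target capacity that controls how much information it encodes:

```python
# Hypothetical sketch of a joint continuous/discrete VAE latent (names and
# sizes assumed): Gaussian units plus a Gumbel-Softmax relaxed categorical,
# with capacity-controlled KL terms for each group.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEncoder(nn.Module):
    """Encodes x into continuous (Gaussian) and discrete (relaxed) latents."""
    def __init__(self, x_dim=784, h_dim=256, z_cont=6, z_disc=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_cont)
        self.logvar = nn.Linear(h_dim, z_cont)
        self.disc = nn.Linear(h_dim, z_disc)  # logits of the categorical factor

    def forward(self, x, tau=0.67):
        h = self.body(x)
        mu, logvar, logits = self.mu(h), self.logvar(h), self.disc(h)
        z_cont = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_disc = F.gumbel_softmax(logits, tau=tau)                 # relaxed one-hot
        return torch.cat([z_cont, z_disc], dim=-1), mu, logvar, logits

def capacity_loss(mu, logvar, logits, C_cont, C_disc, gamma=30.0):
    """Penalize deviation of each KL term from its target capacity."""
    # KL(q(z|x) || N(0, I)) for the Gaussian part ...
    kl_cont = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
    # ... and KL(q(c|x) || uniform) for the categorical part.
    q = F.softmax(logits, dim=-1)
    kl_disc = (q * q.clamp_min(1e-8).log()).sum(-1).mean() + math.log(logits.size(-1))
    return gamma * ((kl_cont - C_cont).abs() + (kl_disc - C_disc).abs())
```

In this kind of setup the capacities C_cont and C_disc are typically annealed upward during training, so each latent group is encouraged to carry a gradually increasing, controlled amount of information.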



Discrete Infomax Codes for Meta-Learning

arXiv.org Machine Learning

Learning compact discrete representations of data is a key task in its own right, in addition to facilitating subsequent processing. It is also relevant to meta-learning, since a latent representation shared across related tasks enables a model to adapt to new tasks quickly. In this paper, we present a method for learning a stochastic encoder that yields discrete p-way codes of length d by maximizing the mutual information between representations and labels. We show that previous loss functions for deep metric learning are approximations to this information-theoretic objective. Our model, Discrete InfoMax Codes (DIMCO), learns to produce a short representation of data that can be used to classify unseen classes from only a few labeled examples. Our analysis shows that using shorter codes reduces overfitting in few-shot classification. Experiments show that DIMCO requires less memory (i.e., shorter code length) to match the performance of previous methods, and that our method is particularly effective when the training dataset is small.
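As a rough illustration of the stated objective, the following sketch (assuming PyTorch; the shapes and the helper name code_label_mi are assumptions, not the authors' code) computes a plug-in estimate of the mutual information between each of the d p-way code positions and the batch labels, which can then be maximized:

```python
# Hypothetical sketch of a DIMCO-style objective: an encoder emits d
# independent p-way categorical distributions per example, and we maximize
# an empirical plug-in estimate of I(code position; label) over a batch.
import torch
import torch.nn.functional as F

def code_label_mi(logits, labels, num_classes):
    """logits: (batch, d, p) code logits; labels: (batch,) int64 class ids."""
    probs = F.softmax(logits, dim=-1)                   # q(c_j = k | x_i)
    y = F.one_hot(labels, num_classes).float()          # (batch, m)
    # Empirical joint p(c_j = k, y = m), averaged over the batch: (d, p, m)
    joint = torch.einsum('bdk,bm->dkm', probs, y) / probs.size(0)
    p_c = joint.sum(dim=2, keepdim=True)                # marginal over labels
    p_y = joint.sum(dim=1, keepdim=True)                # marginal over codes
    mi = (joint * (joint.clamp_min(1e-8).log()
                   - p_c.clamp_min(1e-8).log()
                   - p_y.clamp_min(1e-8).log())).sum(dim=(1, 2))
    return mi.sum()  # sum of per-position MI estimates

# Training would then minimize: loss = -code_label_mi(encoder(x), y, num_classes)
```

This is only one simple estimator of the mutual information; the point is that the objective ties each discrete code position directly to label information rather than to a metric-learning surrogate.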


Intel plans to release its first discrete GPU in 2020

Engadget

After Intel nabbed Raja Koduri last year from AMD, where he led Radeon development, it was only a matter of time until it entered the high-end GPU arena. That confirmation came in a short tweet today: Intel plans to release its first discrete GPU -- one that isn't integrated into a CPU like its current graphics -- in 2020. In a meeting with analysts last week, Intel executive Navin Shenoy noted that it's exploring both server and client (gaming and professional graphics) offerings, according to Ryan Shrout. There's no telling which market Intel is targeting first, but either way, both NVIDIA and AMD should take notice. As Shrout points out, even a 2020 release seems surprisingly fast for Intel.


Integer Discrete Flows and Lossless Compression

arXiv.org Machine Learning

Lossless compression methods shorten the expected representation size of data without loss of information, using a statistical model. Flow-based models are attractive in this setting because they admit exact likelihood optimization, which is equivalent to minimizing the expected number of bits per message. However, conventional flows assume continuous data, which may lead to reconstruction errors when quantized for compression. For that reason, we introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data. As building blocks for IDFs, we introduce a flexible transformation layer called integer discrete coupling. Our experiments show that IDFs are competitive with other flow-based generative models. Furthermore, we demonstrate that IDF-based compression achieves state-of-the-art lossless compression rates on CIFAR10, ImageNet32, and ImageNet64. To the best of our knowledge, this is the first lossless compression method that uses invertible neural networks.
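For intuition, here is a minimal sketch of an additive integer coupling layer in the spirit of the paper's integer discrete coupling, assuming PyTorch; the class name and the translation network t are illustrative. Half the dimensions shift the other half by a rounded network output, which is exactly invertible on the integers:

```python
# Hypothetical sketch of an integer discrete (additive) coupling layer:
# z is a float tensor holding integer values; rounding uses a straight-through
# gradient in training, and the map is exactly invertible on integers.
import torch
import torch.nn as nn

def round_ste(x):
    # Round in the forward pass, identity gradient in the backward pass.
    return x + (torch.round(x) - x).detach()

class IntegerCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.t = nn.Sequential(nn.Linear(half, 64), nn.ReLU(),
                               nn.Linear(64, dim - half))

    def forward(self, z):
        za, zb = z.chunk(2, dim=-1)
        return torch.cat([za, zb + round_ste(self.t(za))], dim=-1)

    def inverse(self, z):
        za, zb = z.chunk(2, dim=-1)
        return torch.cat([za, zb - torch.round(self.t(za))], dim=-1)
```

Because the shift is an integer and subtraction undoes it exactly, no quantization error is introduced, which is the property that makes this style of flow usable for lossless compression.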