Collaborating Authors

 Nazabal, Alfredo


Inference and Learning for Generative Capsule Models

arXiv.org Artificial Intelligence

Capsule networks (see e.g. Hinton et al., 2018) aim to encode knowledge of and reason about the relationship between an object and its parts. In this paper we specify a generative model for such data, and derive a variational algorithm for inferring the transformation of each model object in a scene, and the assignments of observed parts to the objects. We derive a learning algorithm for the object models, based on variational expectation maximization (Jordan et al., 1999). We also study an alternative inference algorithm based on the RANSAC method of Fischler and Bolles (1981). We apply these inference methods to (i) data generated from multiple geometric objects like squares and triangles ("constellations"), and (ii) data from a parts-based model of faces. Recent work by Kosiorek et al. (2019) has used amortized inference via stacked capsule autoencoders (SCAEs) to tackle this problem -- our results show that we significantly outperform them where we can make comparisons (on the constellations data).
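The "constellations" setting is easy to make concrete. Below is a minimal sketch, assuming 2-D parts, a random similarity transform (rotation, scale, translation) per object and Gaussian part noise; the template shapes, names and noise level are illustrative choices, not the paper's code. It produces scenes in which both the per-object transformations and the part-to-object assignments are hidden, which is exactly what the variational and RANSAC-based inference procedures must recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative object templates: part coordinates in a canonical pose.
TEMPLATES = {
    "square":   np.array([[-1, -1], [-1, 1], [1, 1], [1, -1]], dtype=float),
    "triangle": np.array([[0, 1], [-1, -1], [1, -1]], dtype=float),
}

def sample_scene(names, noise_std=0.05):
    """Generate one constellation scene: each object gets a random
    rotation, scale and translation; the observed parts are pooled and
    shuffled, hiding the part-to-object assignments."""
    parts, labels = [], []
    for k, name in enumerate(names):
        theta = rng.uniform(0, 2 * np.pi)            # rotation angle
        scale = rng.uniform(0.5, 2.0)                # isotropic scale
        shift = rng.uniform(-5, 5, size=2)           # translation
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        pts = scale * TEMPLATES[name] @ rot.T + shift
        pts += noise_std * rng.standard_normal(pts.shape)
        parts.append(pts)
        labels += [k] * len(pts)
    parts = np.vstack(parts)
    perm = rng.permutation(len(parts))               # shuffle the parts
    return parts[perm], np.array(labels)[perm]

points, true_assignment = sample_scene(["square", "triangle"])
```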


VAEs in the Presence of Missing Data

arXiv.org Machine Learning

Real world datasets often contain entries with missing elements, e.g. in a medical dataset a patient is unlikely to have taken all possible diagnostic tests. Variational Autoencoders (VAEs) are popular generative models often used for unsupervised learning. Despite their widespread use, it is unclear how best to apply VAEs to datasets with missing data. We develop a novel latent variable model of a corruption process which generates missing data, and derive a corresponding tractable evidence lower bound (ELBO). Our model is straightforward to implement, can handle both missing completely at random (MCAR) and missing not at random (MNAR) data, scales to high dimensional inputs, and gives both the VAE …

Existing approaches which adapt VAEs to datasets with missing data (Vedantam et al., 2017; Nazabal et al., 2018; Mattei & Frellsen, 2019; Ma et al., 2019) suffer from a number of significant disadvantages, including 1) not handling missing not at random (MNAR) data, 2) replacing missing elements with zeros, with no way to distinguish an observed data element with value zero from a missing element, 3) not scaling to high dimensional inputs, and/or 4) restricting the types of neural network architectures permitted; these issues are discussed in detail below. We aim to improve upon the handling of missing data by VAEs by addressing the disadvantages of the existing approaches. In particular, we propose a novel latent variable probabilistic model of missing data as the result of a corruption process, and derive a tractable ELBO for our proposed model.
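The masking idea behind disadvantage 2) is straightforward to sketch. The following is a minimal, hedged illustration (not the paper's exact model or architecture): the encoder sees the zero-imputed input concatenated with binary missingness indicators, so an observed zero is distinguishable from a missing entry, and the ELBO's reconstruction term is summed over observed entries only. All layer sizes and names are assumptions for the example, and constant terms of the Gaussian likelihood are dropped.

```python
import torch
import torch.nn as nn

class MaskedVAE(nn.Module):
    """Sketch of a VAE whose ELBO is evaluated on observed entries only."""

    def __init__(self, d, h=64, z=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * d, h), nn.ReLU(),
                                 nn.Linear(h, 2 * z))
        self.dec = nn.Sequential(nn.Linear(z, h), nn.ReLU(), nn.Linear(h, d))

    def elbo(self, x, mask):
        # Zero-impute the missing entries and append the indicators, so
        # the encoder can tell "observed zero" from "missing".
        x_in = torch.cat([x * mask, mask], dim=-1)
        mu, logvar = self.enc(x_in).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = self.dec(z)
        # Unit-variance Gaussian reconstruction term, observed entries only.
        rec = (-0.5 * (x - x_hat) ** 2 * mask).sum(-1)
        # Analytic KL divergence from N(mu, sigma^2) to the N(0, I) prior.
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar).sum(-1)
        return (rec - kl).mean()

vae = MaskedVAE(d=10)
x = torch.randn(32, 10)
mask = (torch.rand(32, 10) > 0.3).float()  # 1 = observed, 0 = missing
loss = -vae.elbo(x, mask)                  # maximize ELBO = minimize loss
```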


Handling Incomplete Heterogeneous Data using VAEs

arXiv.org Machine Learning

Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate at capturing the latent structure of vast amounts of complex high-dimensional data. However, existing VAEs still cannot directly handle data that are heterogeneous (mixed continuous and discrete) or incomplete (with missing data at random), which is common in real-world applications. In this paper, we propose a general framework for designing VAEs suitable for fitting incomplete heterogeneous data. The proposed HI-VAE includes likelihood models for real-valued, positive real-valued, interval, categorical, ordinal and count data, and allows accurate estimation (and potentially imputation) of missing data. Furthermore, HI-VAE presents competitive predictive performance in supervised tasks, outperforming supervised models when trained on incomplete data.
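To make the heterogeneous-likelihood idea concrete, here is a small sketch covering a subset of the listed data types (illustrative only, not the HI-VAE implementation; the function name, parameterizations and the Poisson choice for counts are assumptions). Each attribute gets a likelihood matched to its type, and missing entries are masked out of the log-likelihood sum, which is how a VAE of this kind can be fit to incomplete data.

```python
import torch
from torch.distributions import Normal, LogNormal, Categorical, Poisson

def log_lik(x, params, kind):
    """Per-attribute log-likelihood under a type-appropriate model."""
    if kind == "real":         # unconstrained real-valued data
        mu, log_sigma = params
        return Normal(mu, log_sigma.exp()).log_prob(x)
    if kind == "positive":     # positive real-valued data
        mu, log_sigma = params
        return LogNormal(mu, log_sigma.exp()).log_prob(x)
    if kind == "categorical":  # integer category codes
        return Categorical(logits=params).log_prob(x.long())
    if kind == "count":        # non-negative integer counts
        return Poisson(params.exp()).log_prob(x)
    raise ValueError(f"unknown attribute type: {kind}")

# Example: a real-valued column with its second entry missing. Missing
# entries contribute nothing to the log-likelihood, so decoder training
# is driven by observed values only.
x = torch.tensor([1.3, 4.0])
mask = torch.tensor([1.0, 0.0])                      # 1 = observed
ll = log_lik(x, (torch.zeros(2), torch.zeros(2)), "real")
total = (ll * mask).sum()
```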