Variational Autoencoder with Embedded Student-$t$ Mixture Model for Authorship Attribution

arXiv.org Machine Learning

Traditional computational authorship attribution describes a classification task in a closed-set scenario: given a finite set of candidate authors and corresponding labeled texts, the objective is to determine which of the authors wrote another set of anonymous or disputed texts. In this work, we propose a probabilistic autoencoding framework for this supervised classification task. More precisely, we extend a variational autoencoder (VAE) with an embedded Gaussian mixture model to a Student-$t$ mixture model. Autoencoders have had tremendous success in learning latent representations, but existing VAEs are still bound by the assumed Gaussianity of the underlying probability distributions in the latent space. The Student-$t$ model allows for independent control of the "heaviness" of the tails of the implied probability densities. Experiments on an Amazon review dataset indicate superior performance of the proposed method.
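For readers unfamiliar with the distribution, the univariate Student-$t$ density that generalizes the Gaussian mixture component is (standard textbook form, not taken from the paper):

$$ \mathrm{St}(x \mid \mu, \sigma^2, \nu) \;=\; \frac{\Gamma\!\left(\tfrac{\nu+1}{2}\right)}{\Gamma\!\left(\tfrac{\nu}{2}\right)\sqrt{\nu\pi}\,\sigma}\left(1 + \frac{(x-\mu)^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}} $$

The degrees-of-freedom parameter $\nu$ controls the tail heaviness: small $\nu$ yields heavy, outlier-tolerant tails, while $\nu \to \infty$ recovers the Gaussian $\mathcal{N}(\mu, \sigma^2)$, so the Gaussian-mixture VAE is a limiting case of the proposed model.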


Deep Generative Modeling (DGM)

#artificialintelligence

The main goal of DGM is to estimate the likelihood of each observation and to create new samples from the underlying distribution: take as input training samples drawn from some distribution and learn a model that represents that distribution. This is achieved through two tasks, density estimation and sample generation. Given data with some probability density, the trained model generates new samples that follow that same density.

Latent variables are variables that are not directly observed but are instead inferred from variables that are observed (the true explanatory factors). Common latent variable models include autoencoders, variational autoencoders (VAEs), and generative adversarial networks (GANs).

The autoencoder is a foundational generative model that builds a latent variable representation by self-encoding its inputs. The encoder learns a mapping from the data $x$ to a low-dimensional latent space $z$; the decoder maps back from the latent space $z$ to reconstruct the observation as $x'$. Two ingredients are central: the bottleneck layer, which forces the network to learn a compressed latent representation, and the reconstruction loss, which pushes that representation to capture (or encode) as much information about the data as possible. This loss function uses no labels; it simply compares output to input, $L(x) = \|x - x'\|^2$.

VAEs impose a stochastic, variational twist on this architecture that yields a smoother representation of the data. Instead of learning the latent variable $z$ directly, the VAE learns a mean and a variance for each latent dimension and samples $z$ from the resulting distribution. A regularization term (the KL divergence to a prior) helps avoid overfitting. Further, unlike in autoencoders, in VAEs we cannot directly back-propagate gradients through the sampling layer; to achieve back-propagation, the sampling layer must be reparameterized so that the model can be trained end to end (see the sketch below).
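To make the encoder/decoder split, the reconstruction-plus-KL loss, and the reparameterization trick concrete, here is a minimal VAE sketch in PyTorch. The layer sizes and the names (`VAE`, `reparameterize`, `vae_loss`) are illustrative choices, not drawn from any particular reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: the encoder outputs a mean and log-variance per latent
    dimension; the decoder reconstructs the input from a sampled z."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I). The randomness lives in
        # eps, so gradients flow through mu and sigma end to end.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)) as the regularizer.
    rec = F.binary_cross_entropy(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

Sampling new data is then just decoding draws from the prior: `model.decode(torch.randn(n, 16))`.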


Spike and Slab Gaussian Process Latent Variable Models

arXiv.org Machine Learning

The Gaussian process latent variable model (GP-LVM) is a popular approach to non-linear probabilistic dimensionality reduction. One design choice for the model is the number of latent variables. We present a spike and slab prior for the GP-LVM and propose an efficient variational inference procedure that gives a lower bound of the log marginal likelihood. The new model provides a more principled approach for selecting latent dimensions than the standard way of thresholding the length-scale parameters. The effectiveness of our approach is demonstrated through experiments on real and simulated data. Further, we extend multi-view Gaussian processes that rely on sharing latent dimensions (known as manifold relevance determination) with spike and slab priors. This allows a more principled approach for selecting a subset of the latent space for each view of data. The extended model outperforms the previous state-of-the-art when applied to a cross-modal multimedia retrieval task.
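As a point of reference, a spike and slab prior places, on each candidate latent dimension $d$, a mixture of a point mass at zero (the "spike") and a broad continuous density (the "slab"); one common form, with notation chosen here for illustration rather than taken from the paper, is

$$ \gamma_d \sim \mathrm{Bernoulli}(\pi), \qquad p(x_{n,d} \mid \gamma_d) = \gamma_d\,\mathcal{N}(x_{n,d} \mid 0, 1) + (1 - \gamma_d)\,\delta_0(x_{n,d}) $$

so the posterior over the binary indicators $\gamma_d$ directly expresses which latent dimensions are switched on, rather than relying on thresholding the length-scale parameters.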


Learning Disentangled Joint Continuous and Discrete Representations

Neural Information Processing Systems

We present a framework for learning disentangled and interpretable jointly continuous and discrete representations in an unsupervised manner. By augmenting the continuous latent distribution of variational autoencoders with a relaxed discrete distribution and controlling the amount of information encoded in each latent unit, we show how continuous and categorical factors of variation can be discovered automatically from data. Experiments show that the framework disentangles continuous and discrete generative factors on various datasets and outperforms current disentangling methods when a discrete generative factor is prominent.
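The "relaxed discrete distribution" mentioned above is commonly realized as a Gumbel-Softmax (Concrete) distribution, which makes sampling a categorical latent differentiable. A minimal sketch of that sampler in PyTorch, assuming nothing about the authors' actual code:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=0.67):
    """Draw a differentiable 'relaxed' one-hot sample from a categorical.

    Adding Gumbel noise to the logits makes argmax sampling exact; replacing
    the argmax with a temperature-controlled softmax makes the sample
    differentiable, so a discrete latent can be trained by back-propagation.
    """
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / temperature, dim=-1)

# Usage: 10 categories; as temperature -> 0, samples approach one-hot vectors.
logits = torch.zeros(1, 10)
print(gumbel_softmax_sample(logits, temperature=0.1))
```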


Probabilistic Neural Programmed Networks for Scene Generation

Neural Information Processing Systems

In this paper we address the text-to-scene image generation problem. Building generative models that capture the variability of complicated scenes containing rich semantics is a grand goal of image generation. Complicated scene images contain rich visual elements, compositional visual concepts, and complicated relations between objects. A generative model, viewed as an analysis-by-synthesis process, should encompass three core components: 1) the generation process that composes the scene; 2) the primitive visual elements and how they are composed; 3) the rendering of abstract concepts into their pixel-level realizations. We propose PNP-Net, a variational auto-encoder framework that addresses these three challenges: it flexibly composes images with a dynamic network structure, learns a set of distribution transformers that can compose distributions based on semantics, and decodes samples from these distributions into realistic images.
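To make the idea of "distribution transformers" concrete, here is a hypothetical sketch of one composition operator in PyTorch: it takes the Gaussian parameters of two concept latents and produces parameters for the composed concept. The class name `Combine` and its architecture are illustrative assumptions, not the modules defined in the paper:

```python
import torch
import torch.nn as nn

class Combine(nn.Module):
    """Hypothetical composition operator: merges the (mu, sigma) latent codes
    of two visual concepts into the parameters of a composed concept, which
    can then be sampled and decoded. Illustrative only."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Linear(4 * z_dim, 2 * z_dim)

    def forward(self, mu_a, sigma_a, mu_b, sigma_b):
        h = self.net(torch.cat([mu_a, sigma_a, mu_b, sigma_b], dim=-1))
        mu, log_sigma = h.chunk(2, dim=-1)
        return mu, log_sigma.exp()  # parameters of the composed distribution
```

Operators like this can be chained along the parse of a text description, so the network structure changes dynamically with the semantics of the scene.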