The Deep Generative Decoder: MAP estimation of representations improves modeling of single-cell RNA data
Viktoria Schuster, Anders Krogh
arXiv.org Artificial Intelligence
Learning low-dimensional representations of single-cell transcriptomics data has become instrumental to its downstream analysis. The state of the art is currently represented by neural network models such as variational autoencoders (VAEs), which use a variational approximation of the likelihood for inference. Here we present the Deep Generative Decoder (DGD), a simple generative model that computes model parameters and representations directly via maximum a posteriori (MAP) estimation. Unlike VAEs, which typically use a fixed Gaussian latent distribution because of the complexity of adding other types, the DGD naturally handles complex parameterized latent distributions. We first demonstrate its general functionality on a commonly used benchmark dataset, Fashion-MNIST. Second, we apply the model to multiple single-cell datasets. Here the DGD learns low-dimensional, meaningful, and well-structured latent representations, with sub-clustering beyond the provided labels. The advantages of this approach are its simplicity and its ability to provide representations of much smaller dimensionality than a comparable VAE.
Jul-12-2023
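The core idea described in the abstract — fitting the decoder parameters and the per-sample latent representations jointly by MAP estimation, with no amortized encoder — can be illustrated with a minimal sketch. The toy example below uses a linear decoder and a plain Gaussian prior on the representations for brevity; the paper's DGD uses a neural decoder and a complex parameterized latent distribution. All names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples with 20 features, generated from a 2-D latent space.
n, d, k = 100, 20, 2
Z_true = rng.normal(size=(n, k))
W_true = rng.normal(size=(k, d))
X = Z_true @ W_true + 0.1 * rng.normal(size=(n, d))

# MAP estimation: jointly optimize the decoder weights W and the
# per-sample representations Z by gradient descent on the negative
# log posterior, here squared reconstruction error (Gaussian noise
# model) plus a Gaussian prior penalty on Z.
Z = rng.normal(scale=0.1, size=(n, k))
W = rng.normal(scale=0.1, size=(k, d))
lam = 1.0   # prior precision on the representations (assumed value)
lr = 1e-3   # gradient-descent step size (assumed value)

def neg_log_posterior(Z, W):
    resid = X - Z @ W
    return np.sum(resid**2) + lam * np.sum(Z**2)

for _ in range(2000):
    resid = X - Z @ W
    grad_Z = -2 * resid @ W.T + 2 * lam * Z   # gradient w.r.t. representations
    grad_W = -2 * Z.T @ resid                 # gradient w.r.t. decoder weights
    Z -= lr * grad_Z
    W -= lr * grad_W
```

Because each sample owns its own representation vector, new points are embedded at inference time by running the same optimization over Z with the decoder fixed; this is the structural difference from a VAE, which instead trains an encoder network to produce representations in one pass.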