Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks

Neural Information Processing Systems

Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs). In the absence of a known analytical form for the posterior and likelihood expectation, VAEs resort to approximations, including (Amortized) Variational Inference (AVI) and Monte Carlo sampling. We exploit the Continuous Piecewise Affine (CPA) property of modern DGNs to derive their posterior and marginal distributions, as well as the latter's first two moments. These findings enable us to derive an analytical Expectation-Maximization (EM) algorithm for gradient-free DGN learning. We demonstrate empirically that EM training of DGNs produces higher likelihoods than VAE training. Our framework will guide the design of new VAE AVI schemes that better approximate the true posterior, and it opens new avenues for applying standard statistical tools to model comparison, anomaly detection, and missing-data imputation.
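
To make the Continuous Piecewise Affine property concrete, here is a minimal numpy sketch (our illustration, not the paper's code): within one ReLU activation region, the generator g(z) reduces to an affine map A z + c, and this per-region affine form is what the analytical posterior and EM derivations build on.

import numpy as np

rng = np.random.default_rng(0)

# Toy ReLU generator g: R^2 -> R^4 with one hidden layer.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(4, 8)), rng.normal(size=4)

def g(z):
    return W2 @ np.maximum(W1 @ z + b1, 0.0) + b2

# Gate states (the activation pattern) at a reference latent point z0.
z0 = rng.normal(size=2)
q = W1 @ z0 + b1 > 0

# While the pattern stays fixed, ReLU(h) = q * h, so g is affine: g(z) = A z + c.
A = W2 @ (q[:, None] * W1)
c = W2 @ (q * b1) + b2

# Check with a perturbation small enough to keep the same gates on/off.
z = z0 + 1e-4 * rng.normal(size=2)
assert np.array_equal(W1 @ z + b1 > 0, q)
assert np.allclose(g(z), A @ z + c)

Roughly, a Gaussian prior on z then induces a tractable (per-region) Gaussian form in the posterior, which is the intuition behind the analytical E-step.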




Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning

Chandrashekar Lakshminarayanan and Amit Vikram Singh

Neural Information Processing Systems

A deep neural network (DNN) with ReLU activations has many gates, and the on/off status of each gate changes across input examples as well as network weights. For a given input example, only a subset of the gates is active, i.e., on, and the sub-network of weights connected to these active gates is responsible for producing the output.
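
A minimal numpy sketch of this gating view (our illustration, not the authors' code): zeroing every weight attached to an inactive gate leaves the output for that input unchanged, so the active sub-network alone produces the output.

import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer ReLU network.
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def forward(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.normal(size=4)
gates = W1 @ x + b1 > 0                     # on/off status of each gate

# Keep only the weights connected to active gates (the active sub-network).
W1_sub = np.where(gates[:, None], W1, 0.0)
b1_sub = np.where(gates, b1, 0.0)
W2_sub = np.where(gates[None, :], W2, 0.0)

# The active sub-network reproduces the full network's output for this x.
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1_sub, b1_sub, W2_sub, b2))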


Towards Deeper Graph Neural Networks with Differentiable Group Normalization

Neural Information Processing Systems

Deep graph neural networks suffer from over-smoothing: as layers are stacked, node representations become indistinguishable. Several attempts have been made to tackle this issue by pulling linked node pairs close and pushing unlinked pairs apart. However, these methods often ignore the graph's intrinsic community structure and therefore yield sub-optimal performance.
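
A rough numpy sketch of the group-normalization idea (function names and the residual scaling here are our assumptions, not the paper's exact formulation): nodes are softly assigned to groups by a learned matrix, and embeddings are normalized within each group, so different communities keep distinguishable statistics even in deep stacks.

import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diff_group_norm(H, U, lam=0.1, eps=1e-5):
    """Softly cluster nodes into groups and normalize embeddings per group.

    H: (n, d) node embeddings; U: (d, G) learnable assignment weights.
    """
    S = softmax(H @ U, axis=1)              # (n, G) soft group assignments
    out = H.copy()
    for g in range(S.shape[1]):
        Hg = S[:, [g]] * H                  # embeddings weighted by group g
        out += lam * (Hg - Hg.mean(0)) / (Hg.std(0) + eps)
    return out

H = rng.normal(size=(10, 6))                # 10 nodes, 6-dim embeddings
U = rng.normal(size=(6, 3))                 # 3 illustrative groups
print(diff_group_norm(H, U).shape)          # (10, 6)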