Supplementary Materials for: Max-Sliced Mutual Information (Appendix A: Proofs)

Neural Information Processing Systems

A.1 Proof of Proposition 1

Part 1 is restated from [25, Appendix A.1], where it was proved. Part 2 (non-negativity) follows directly from the non-negativity of mutual information. Part 5 relies on the fact that functions of independent random variables are themselves independent. This concludes the proof.

A.2 Proof of Proposition 2

By translation invariance of mutual information, we may assume w.l.o.g. that the means are zero. Next, we show that we may equivalently optimize under an added unit-variance constraint. (Cf. Example 3.4.) We then obtain the stated identity for I(A; B), where the last equality uses the unit-variance property and Schur's determinant formula. Armed with Lemma 1, we are in place to prove Proposition 2; the argument uses the CCA solutions together with Theorem 2.2, which is restated next for completeness.
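Under the unit-variance normalization above, the Gaussian mutual-information step invoked via Schur's determinant formula can be written out explicitly. The following is a sketch of the standard computation for the scalar case (the notation, in particular the cross-correlation ρ, is illustrative and not taken from the paper):

```latex
% Jointly Gaussian (A, B), zero means, unit variances,
% cross-correlation \rho (scalar case for illustration).
\Sigma \;=\; \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix},
\qquad
\det \Sigma \;=\; \det(1)\,\det\!\bigl(1 - \rho \cdot 1^{-1} \cdot \rho\bigr)
\;=\; 1 - \rho^{2}
\quad \text{(Schur's determinant formula)}.

% Gaussian mutual information, using the unit-variance property:
I(A;B) \;=\; \tfrac{1}{2}\log\frac{\det(\Sigma_A)\,\det(\Sigma_B)}{\det \Sigma}
\;=\; -\tfrac{1}{2}\log\bigl(1 - \rho^{2}\bigr).
```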



On Convergence of Polynomial Approximations to the Gaussian Mixture Entropy

Neural Information Processing Systems

Gaussian mixture models (GMMs) are fundamental to machine learning due to their flexibility as approximating densities. However, uncertainty quantification of GMMs remains a challenge as differential entropy lacks a closed form.
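Because the differential entropy of a GMM has no closed form, it is typically estimated numerically. A minimal Monte Carlo sketch for a hypothetical two-component 1-D mixture (`gmm_pdf`, `mc_entropy`, and all parameters are illustrative names, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-component 1-D GMM for illustration.
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
stds = np.array([1.0, 1.0])

def gmm_pdf(x):
    # Mixture density: sum_k w_k * N(x; mu_k, sigma_k^2)
    comps = np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) \
        / (stds * np.sqrt(2 * np.pi))
    return comps @ weights

def mc_entropy(n=200_000):
    # Draw samples from the mixture, then average -log p(x);
    # this is an unbiased Monte Carlo estimate of the entropy.
    ks = rng.choice(len(weights), size=n, p=weights)
    xs = rng.normal(means[ks], stds[ks])
    return -np.mean(np.log(gmm_pdf(xs)))

print(mc_entropy())
```

For this well-separated mixture the estimate lands near the single-Gaussian entropy (≈ 1.42 nats) plus roughly log 2 for the component choice.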


Implicit Variational Inference for High-Dimensional Posteriors

Neural Information Processing Systems

In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution. We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors in high-dimensional spaces.
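An implicit distribution of this kind can be sketched as base noise pushed through a neural sampler: samples are cheap to draw even though the output density is intractable. A minimal sketch with fixed (untrained) weights; all names and dimensions are hypothetical and not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer sampler weights (random here; learned in practice).
W1 = rng.normal(size=(16, 4)); b1 = np.zeros(16)
W2 = rng.normal(size=(2, 16)); b2 = np.zeros(2)

def sample_posterior(n):
    # Implicit distribution: push z ~ N(0, I) through a small network.
    # The density of the output has no closed form, but sampling is cheap.
    z = rng.normal(size=(n, 4))
    h = np.tanh(z @ W1.T + b1)
    return h @ W2.T + b2

samples = sample_posterior(1000)
print(samples.shape)
```

Training such a sampler requires a likelihood-free objective (e.g., a density-ratio estimator), precisely because the implicit density cannot be evaluated.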





Figure 9: In experiments, we used a common feature extractor (F).

Neural Information Processing Systems

Here, we include implementation details omitted from the main paper for brevity. Upon acceptance, a deanonymized repository will be released.

The last layer's dimension depended upon the exact setting, and the feature extractors and decoders varied by domain. In particular, we found it important to apply this linear transformation rather than pass the raw encodings directly. For VQ-based methods, use a codebook large enough to have at least one element per class. Other differences simply reflected differences in architecture. For iNat, we trained all models with batch size 256, using the hyperparameters specified in Table 3.
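The shared-extractor-plus-linear-transformation pipeline described above can be sketched as follows. All dimensions, the `encode` helper, and the random stand-in weights are hypothetical, not the paper's actual architecture or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
D_IN, D_FEAT, D_OUT = 32, 64, 16

# Stand-in for the shared feature extractor F (a domain-specific
# network in the paper) and the linear transformation after it.
W_f = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)
W_l = rng.normal(size=(D_FEAT, D_OUT)) / np.sqrt(D_FEAT)

def encode(x):
    # Shared features from F, then the linear transformation applied
    # before the decoder (the ablation in the text would skip W_l and
    # pass the raw features instead).
    feats = np.maximum(x @ W_f, 0.0)  # ReLU features
    return feats @ W_l

x = rng.normal(size=(8, D_IN))
print(encode(x).shape)
```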