Appendix: Inverse Learning of Symmetries

1 Model
Neural Information Processing Systems
To do so, we describe the encoder term I(Z;X), which is calculated as the Kullback-Leibler divergence (D_KL) between p_φ(z|x) and p(z). Up to this point, however, we have only learned the parameters of the Gaussian distribution. The naive approach requires estimating the joint distribution of the variables. A number of methods for estimating lower bounds of mutual information exist [1, 11]. Such bounds, however, suffer from inherent statistical limitations [8].
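As a minimal sketch of the D_KL computation described above, assuming (as is standard in variational autoencoders, though not stated explicitly here) that the encoder p_φ(z|x) is a diagonal Gaussian parameterized by a mean and log-variance, and that the prior p(z) is a standard normal, the divergence has a closed form:

```python
import numpy as np


def kl_diag_gaussian_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    Per dimension: 0.5 * (mu^2 + sigma^2 - log(sigma^2) - 1),
    summed over the latent dimensions (last axis).
    """
    return 0.5 * np.sum(np.square(mu) + np.exp(log_var) - log_var - 1.0, axis=-1)


# When the encoder output matches the prior exactly, the KL term is zero.
print(kl_diag_gaussian_to_standard_normal(np.zeros(4), np.zeros(4)))  # → 0.0
```

Because this term is available in closed form, it needs no sampling, unlike the mutual-information bounds discussed above, which must be estimated from data and are subject to the statistical limitations noted in [8].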