Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification: Appendix
Neural Information Processing Systems
A.1 Assumptions

Assumption 1. We assume that the sample space 𝒳 belonging to the input RV X : Ω → 𝒳 is a compact domain in ℝ^D. The compactness of 𝒳 is the major aspect here. However, this is always fulfilled for image data, as the pixels can only take a certain range of values, and it is equally fulfilled for most other real-world datasets, due to the limits of data representations, measurement devices, etc.

Assumption 2. We assume gθ […]. This is a fairly mild set of assumptions, as it is fulfilled by construction with most existing INN architectures using standard multi-layer subnetworks.

A.2 Mutual Cross-Information as Estimator for MI

In our case, we only require CI(X, Z) […]. Note, however, that our estimator will likely not be particularly useful outside of our specific use case, and other methods (e.g. MINE) should be preferred there. Our approach has the specific advantage that we estimate the MI of the model using the model itself. With MINE, we would require three models: one generative model, and two models that only serve to estimate the MI.
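Assuming the definition of mutual cross-information used in the main paper, CI(X, Z) = E_{p(x,z)}[log q(x, z) − log q(x) − log q(z)], where p is the true joint density and q denotes model densities, a Monte-Carlo estimate only requires evaluating model log-densities on data samples. The following is a minimal sketch of such an estimator on a Gaussian toy model; the toy model, variable names, and the NumPy/SciPy usage are illustrative assumptions, not the paper's code. Because q factorizes as q(x, z) = q(z | x) q(x), the log q(x) terms cancel and the estimate reduces to an average of log q(z | x) − log q(z).

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Illustrative toy model (an assumption, not the paper's INN):
    # the model believes x ~ N(0, 1) and z = a*x + eps with eps ~ N(0, s^2).
    a, s = 2.0, 0.5

    # Model log-densities, all derived from the single generative model q:
    def log_q_z(z):             # marginal under q: z ~ N(0, a^2 + s^2)
        return norm.logpdf(z, loc=0.0, scale=np.sqrt(a**2 + s**2))

    def log_q_z_given_x(z, x):  # conditional under q: z | x ~ N(a*x, s^2)
        return norm.logpdf(z, loc=a * x, scale=s)

    # Samples from the true joint p(x, z); here p = q, so CI should match the MI.
    x = rng.normal(0.0, 1.0, size=100_000)
    z = a * x + rng.normal(0.0, s, size=x.shape)

    # CI(X, Z) = E_p[log q(x, z) - log q(x) - log q(z)]
    #          = E_p[log q(z | x) - log q(z)]   since q(x, z) = q(z | x) q(x)
    ci_estimate = np.mean(log_q_z_given_x(z, x) - log_q_z(z))

    # Closed-form MI of the toy model for comparison: 0.5 * log(1 + a^2 / s^2)
    print(ci_estimate, 0.5 * np.log(1.0 + a**2 / s**2))

In this sketch a single model supplies every density in the estimator, which illustrates the advantage stated above: with MINE, separate networks would have to be trained solely to estimate the MI alongside the generative model.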