Neural Conditional Probability for Uncertainty Quantification
We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and extract key statistics such as conditional quantiles, means, and covariances. It offers streamlined learning via a single unconditional training phase, allowing efficient inference without retraining even when the conditioning variable changes. By leveraging the approximation capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions. We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy. In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods. This demonstrates that a minimalistic architecture with a theoretically grounded loss can achieve competitive results, even against more complex architectures.
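The idea of a single unconditional training phase over paired embeddings can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the paper's reference implementation: the `TwoLayerEmbedding` and `operator_style_loss` names, the layer sizes, and the low-rank density-ratio style loss are placeholders chosen only to show the general shape of training embeddings u(x) and v(y) once on (x, y) pairs.

```python
# Minimal sketch (assumed, not the authors' code): a 2-hidden-layer embedding
# network for each variable and an operator-style loss that aligns paired
# embeddings while decorrelating them under the product of marginals.
import torch
import torch.nn as nn


class TwoLayerEmbedding(nn.Module):
    """2-hidden-layer MLP mapping inputs to a d-dimensional embedding."""

    def __init__(self, in_dim: int, hidden: int = 64, embed_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def operator_style_loss(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Illustrative loss (an assumption, not the NCP objective): reward
    alignment of paired embeddings <u(x_i), v(y_i)> and penalize squared
    correlation on mismatched pairs, which stand in for samples from the
    product of marginals p(x)p(y)."""
    n = u.shape[0]
    gram = u @ v.T                      # n x n matrix of <u(x_i), v(y_j)>
    joint = gram.diagonal().mean()      # pairs drawn from the joint
    mask = ~torch.eye(n, dtype=torch.bool, device=u.device)
    product = (gram[mask] ** 2).mean()  # mismatched (off-diagonal) pairs
    return -2.0 * joint + product


# Toy usage: one unconditional training step on (x, y) pairs.
x_enc, y_enc = TwoLayerEmbedding(3), TwoLayerEmbedding(2)
opt = torch.optim.Adam(list(x_enc.parameters()) + list(y_enc.parameters()), lr=1e-3)
x, y = torch.randn(256, 3), torch.randn(256, 2)
opt.zero_grad()
loss = operator_style_loss(x_enc(x), y_enc(y))
loss.backward()
opt.step()
```

In a setup like this, conditional statistics such as quantiles, means, and covariances would be read off from the trained embeddings at inference time, so changing the conditioning value does not require retraining.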
Neural Information Processing Systems
May-29-2025, 23:24:04 GMT
- Country:
  - Europe > Germany (0.14)
  - North America > United States (0.14)
- Genre:
  - Overview (0.67)
  - Research Report
    - Experimental Study (1.00)
    - New Finding (0.92)
- Industry:
  - Education (0.46)
  - Health & Medicine (0.67)
  - Information Technology (0.45)