How Regularization Terms Make Invertible Neural Networks Bayesian Point Estimators
arXiv.org Artificial Intelligence
Whenever a quantity of interest cannot be observed directly, but only through an indirect measurement process or in the presence of noise, one faces an inverse problem. To stabilize the reconstruction and mitigate the information loss inherent in the measurement, it is necessary to incorporate additional knowledge about the unknown data: its prior distribution, which encodes what one expects the reconstruction to resemble, such as the characteristic features of natural images. Yet our ability to describe natural images in an explicit, algorithmic form remains quite limited. Fortunately, recent years have seen the emergence of data-driven approaches that construct priors directly from collections of representative samples. While these approaches often surpass classical methods in reconstruction quality, many of them lack theoretical guarantees and remain difficult to interpret. A promising direction explored recently [3, 4, 5, 21] involves invertible neural networks. Thanks to their bidirectional structure, a single network can simultaneously approximate the forward operator and serve as a reconstruction method, with stability ensured by the architecture itself. This hybrid use makes it possible to assess deviations from a known forward operator, or even to replace it with a data-driven version, while the learned measurement model keeps the reconstruction process interpretable, and vice versa. This dual capability is particularly relevant in applications where both high-fidelity reconstructions and a faithful representation of the measurement process are critical, such as scientific and medical imaging.
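To illustrate the bidirectional structure mentioned above, the following is a minimal sketch (not the paper's actual architecture) of an additive coupling layer, a standard building block of invertible neural networks. The sub-network `shift_net` and all parameter names are hypothetical; the point is that the forward map has an exact, closed-form inverse by construction, so one network can be evaluated in both directions.

```python
import numpy as np

def shift_net(x, W, b):
    """A tiny hypothetical sub-network: a single tanh layer."""
    return np.tanh(x @ W + b)

def coupling_forward(x, W, b):
    # Split the input in two halves; shift one half by a function
    # of the other. This map is bijective regardless of shift_net.
    x1, x2 = np.split(x, 2, axis=-1)
    y1 = x1
    y2 = x2 + shift_net(x1, W, b)
    return np.concatenate([y1, y2], axis=-1)

def coupling_inverse(y, W, b):
    # The inverse only subtracts the same shift: exact, no iteration.
    y1, y2 = np.split(y, 2, axis=-1)
    x1 = y1
    x2 = y2 - shift_net(y1, W, b)
    return np.concatenate([x1, x2], axis=-1)

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))
b = rng.standard_normal(2)
x = rng.standard_normal((5, 4))

y = coupling_forward(x, W, b)
x_rec = coupling_inverse(y, W, b)
assert np.allclose(x_rec, x)  # bijectivity holds exactly
```

Stacking several such layers (permuting which half is shifted) yields an expressive network that remains invertible, which is what allows the same model to represent both the measurement process and the reconstruction.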
Oct-31-2025