hyperprior





Joint Autoregressive and Hierarchical Priors for Learned Image Compression

David Minnen, Johannes Ballé, George D. Toderici

Neural Information Processing Systems

Most recent methods for learning-based, lossy image compression adopt an approach based on transform coding [1]. In this approach, image compression is achieved by first mapping pixel data into a quantized latent representation and then losslessly compressing the latents.
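The pipeline the abstract describes can be illustrated with a minimal numpy sketch: a toy linear analysis transform, uniform scalar quantization, and a bit-cost estimate under a factorized entropy model. All function names are illustrative, the transform is a stand-in for the learned networks used in practice, and the logistic entropy model is a simplifying assumption.

```python
import numpy as np

def analysis_transform(pixels, W):
    # Toy linear "analysis transform"; learned codecs use a CNN here.
    return pixels @ W

def quantize(latents):
    # Uniform scalar quantization by rounding to the nearest integer.
    return np.round(latents)

def estimate_bits(q, s=1.0):
    # Bit cost -sum log2 P(q) under a factorized entropy model, with the
    # probability of each integer bin given by a logistic CDF difference.
    cdf = lambda x: 1.0 / (1.0 + np.exp(-x / s))
    p = cdf(q + 0.5) - cdf(q - 0.5)
    return -np.sum(np.log2(p))

rng = np.random.default_rng(0)
pixels = rng.standard_normal((8, 16))
W = np.linalg.qr(rng.standard_normal((16, 16)))[0]  # toy orthogonal transform
latents = analysis_transform(pixels, W)
q = quantize(latents)
bits = estimate_bits(q)
```

The hyperprior of the paper's title refers to a second latent that conditions the entropy model's parameters (here, the fixed scale `s`) on the image itself, tightening the bit estimate.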


Sparsity via Hyperpriors: A Theoretical and Algorithmic Study under Empirical Bayes Framework

Li, Zhitao, Dong, Yiqiu, Zeng, Xueying

arXiv.org Machine Learning

This paper presents a comprehensive analysis of hyperparameter estimation within the empirical Bayes framework (EBF) for sparse learning. By studying the influence of hyperpriors on the solution of EBF, we establish a theoretical connection between the choice of the hyperprior and both the sparsity and the local optimality of the resulting solutions. We show that some strictly increasing hyperpriors, such as half-Laplace and half-generalized Gaussian with the power in $(0,1)$, effectively promote sparsity and improve solution stability with respect to measurement noise. Based on this analysis, we adopt a proximal alternating linearized minimization (PALM) algorithm with convergence guarantees for both convex and concave hyperpriors. Extensive numerical tests on two-dimensional image deblurring problems demonstrate that introducing appropriate hyperpriors significantly promotes the sparsity of the solution and enhances restoration accuracy. Furthermore, we illustrate the influence of the noise level and the ill-posedness of inverse problems on EBF solutions.
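The alternating structure behind such EBF schemes can be sketched in numpy. Assuming a Gaussian prior $x_i \sim N(0, \gamma_i)$ and a half-Laplace hyperprior $p(\gamma_i) \propto \exp(-\lambda \gamma_i)$, the $\gamma$-update has a closed form and the $x$-update is a weighted ridge solve. This is a simplified alternating scheme under those assumptions, not the authors' PALM implementation.

```python
import numpy as np

def ebf_sparse(A, y, lam=1.0, sigma2=0.01, iters=50, eps=1e-10):
    """Alternating empirical-Bayes estimate with a half-Laplace hyperprior.

    Illustrative model: y = A x + noise, x_i ~ N(0, gamma_i),
    p(gamma_i) ~ exp(-lam * gamma_i). Small gamma_i drives x_i toward
    zero, which is how the hyperprior promotes sparsity.
    """
    n = A.shape[1]
    gamma = np.ones(n)
    for _ in range(iters):
        # x-update: solve (A^T A / sigma2 + diag(1/gamma)) x = A^T y / sigma2.
        H = A.T @ A / sigma2 + np.diag(1.0 / np.maximum(gamma, eps))
        x = np.linalg.solve(H, A.T @ y / sigma2)
        # gamma-update: closed-form minimizer of x^2/(2g) + (1/2) log g + lam*g,
        # i.e. the positive root of 2*lam*g^2 + g - x^2 = 0.
        gamma = (-1.0 + np.sqrt(1.0 + 8.0 * lam * x**2)) / (4.0 * lam)
    return x

# Tiny demo: recover a 2-sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = ebf_sparse(A, y)
```

For small $x_i$ the closed-form update gives $\gamma_i \approx x_i^2$, so the ridge weight $1/\gamma_i$ blows up and the coefficient is suppressed on the next solve; large coefficients are barely shrunk.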


Addressing prior dependence in hierarchical Bayesian modeling for PTA data analysis I: Methodology and implementation

D'Amico, Luigi, Villa, Eleonora, Bittordo, Fatima Modica, Barca, Aldo, Alì, Francesco, Meneghetti, Massimo, Naso, Luca

arXiv.org Machine Learning

Complex inference tasks, such as those encountered in Pulsar Timing Array (PTA) data analysis, rely on Bayesian frameworks. The high-dimensional parameter space and the strong interdependencies among astrophysical, pulsar noise, and nuisance parameters introduce significant challenges for efficient learning and robust inference. These challenges are emblematic of broader issues in decision science, where model over-parameterization and prior sensitivity can compromise both computational tractability and the reliability of the results. We address these issues in the framework of hierarchical Bayesian modeling by introducing a reparameterization strategy. Our approach employs Normalizing Flows (NFs) to decorrelate the parameters governing hierarchical priors from those of astrophysical interest. The use of NF-based mappings provides both the flexibility to realize the reparameterization and the tractability to preserve proper probability densities. We further adopt i-nessai, a flow-guided nested sampler, to accelerate exploration of complex posteriors. This unified use of NFs improves statistical robustness and computational efficiency, providing a principled methodology for addressing hierarchical Bayesian inference in PTA analysis.
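The decorrelation idea can be shown in its simplest linear form: Cholesky-based whitening of correlated parameters. An NF-based reparameterization generalizes this to invertible nonlinear maps for non-Gaussian dependence; the sketch below is a toy linear analogue, not the paper's NF construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated samples standing in for a hyperparameter coupled to a
# physical parameter in the hierarchical posterior.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
L = np.linalg.cholesky(cov)
theta = rng.standard_normal((5000, 2)) @ L.T  # samples with covariance `cov`

# Linear reparameterization u = L^{-1} theta decorrelates the parameters;
# a Normalizing Flow learns an analogous invertible map when the
# dependence is nonlinear or non-Gaussian.
u = np.linalg.solve(L, theta.T).T

emp_cov = np.cov(u.T)
```

In the whitened coordinates the sampler explores a (near-)independent parameter space, which is what makes the nested-sampling exploration cheaper.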


Addressing prior dependence in hierarchical Bayesian modeling for PTA data analysis II: Noise and SGWB inference through parameter decorrelation

Villa, Eleonora, D'Amico, Luigi, Barca, Aldo, Bittordo, Fatima Modica, Alì, Francesco, Meneghetti, Massimo, Naso, Luca

arXiv.org Artificial Intelligence

Pulsar Timing Arrays provide a powerful framework to measure low-frequency gravitational waves, but the accuracy and robustness of the results are challenged by complex noise processes that must be modeled accurately. Standard PTA analyses assign fixed uniform noise priors to each pulsar, an approach that can introduce systematic biases when combining the array. To overcome this limitation, we adopt a hierarchical Bayesian modeling strategy in which noise priors are parametrized by higher-level hyperparameters. We further address the challenge posed by the correlations between hyperparameters and physical noise parameters, focusing on those describing red noise and dispersion measure variations. To decorrelate these quantities, we introduce an orthogonal reparametrization of the hierarchical model implemented with Normalizing Flows. We also employ i-nessai, a flow-guided nested sampler, to efficiently explore the resulting higher-dimensional parameter space. We apply our method to a minimal 3-pulsar case study, performing a simultaneous inference of noise and SGWB parameters. Despite the limited dataset, the results consistently show that the hierarchical treatment constrains the noise parameters more tightly and partially alleviates the red-noise-SGWB degeneracy, while the orthogonal reparametrization further enhances parameter independence without affecting the correlations intrinsic to the power-law modeling of the physical processes involved.



Gaussian Process-Based Prediction and Control of Hammerstein-Wiener Systems

Yin, Mingzhou, Müller, Matthias A.

arXiv.org Artificial Intelligence

This work investigates data-driven prediction and control of Hammerstein-Wiener systems using physics-informed Gaussian process models. Data-driven prediction algorithms have been developed for structured nonlinear systems based on Willems' fundamental lemma. However, existing frameworks cannot treat output nonlinearities and require a dictionary of basis functions for Hammerstein systems. In this work, an implicit predictor structure is considered, leveraging the multi-step-ahead ARX structure for the linear part of the model. This implicit function is learned by Gaussian process regression with kernel functions designed from Gaussian process priors for the nonlinearities. The linear model parameters are estimated as hyperparameters by assuming a stable spline hyperprior. The implicit Gaussian process model provides explicit output prediction by optimizing selected optimality criteria. The model is also applied to receding horizon control with the expected control cost and chance constraint satisfaction guarantee. Numerical results demonstrate that the proposed prediction and control algorithms are superior to black-box Gaussian process models.
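The Gaussian process regression backbone of this approach can be sketched with a standard squared-exponential kernel. The paper instead designs kernels from GP priors on the Hammerstein-Wiener nonlinearities and estimates the linear-model parameters under a stable spline hyperprior; this toy version reproduces only the generic GP prediction step.

```python
import numpy as np

def se_kernel(x1, x2, ell=0.2, sf=1.0):
    # Squared-exponential kernel on scalar inputs; the paper builds its
    # kernels from GP priors on the system nonlinearities instead.
    d = x1[:, None] - x2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x, y, xs, noise=1e-4):
    # Standard GP posterior mean and pointwise variance.
    K = se_kernel(x, x) + noise * np.eye(len(x))
    Ks = se_kernel(xs, x)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = se_kernel(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy regression: fit one period of a sine from 20 noisy-free samples.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
mean, var = gp_predict(x, y, x)
```

In the paper's setting the kernel hyperparameters (here `ell`, `sf`, `noise`) are themselves estimated by maximizing the marginal likelihood under the hyperprior, rather than fixed by hand as in this sketch.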


More Options for Prelabor Rupture of Membranes, A Bayesian Analysis

Klein, Ashley, Raff, Edward, Seamon, Elisabeth, Foley, Lily, Bussert, Timothy

arXiv.org Artificial Intelligence

An obstetric goal for a laboring mother is to achieve a vaginal delivery, as it reduces the risks inherent in major abdominal surgery (i.e., a Cesarean section). Various medical interventions may be used by a physician to increase the likelihood of this occurring while minimizing maternal and fetal morbidity. However, patients with prelabor rupture of membranes (PROM) have only two commonly used options for cervical ripening, Pitocin and misoprostol. Little research exists on the benefits/risks of these two key drugs for PROM patients. A major limitation of most induction-of-labor research is the inability to account for differences in Bishop scores, which are commonly used in obstetrical practice to determine the next induction agent offered to the patient. This creates a confounding factor that biases the results but has not been recognized in the literature. In this work, we use a Bayesian model of the relationships between the relevant factors, informed by expert physicians, to separate the confounding variable from its actual impact. In doing so, we provide strong evidence that Pitocin and buccal misoprostol are equally effective and safe; thus, physicians have more choice in clinical care than previously realized. This is particularly important for developing countries where neither medication may be readily available, and prior guidelines may create an artificial barrier to needed medication.