Bayesian Learning via Q-Exponential Process
Li, Shuyi, O'Connor, Michael, Lan, Shiwei
Regularization is one of the most fundamental topics in optimization, statistics and machine learning. To obtain sparsity in estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probability distribution corresponding to such an $\ell_q$ penalty? What is the correct stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects, e.g. images, with a penalty that preserves certain properties, e.g. edges in the image. In this work, we generalize the $q$-exponential distribution (with density proportional to) $\exp{(- \frac{1}{2}|u|^q)}$ to a stochastic process named the $Q$-exponential (Q-EP) process that corresponds to the $L^q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to the Besov process, which is usually defined via a series expansion. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation and direct control on the correlation length. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty ($q<2$) than the commonly used Gaussian process (GP). We compare GP, Besov and Q-EP in modeling functional data, reconstructing images, and solving inverse problems, and demonstrate the advantage of our proposed methodology.
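The abstract's central object is the (unnormalized) $q$-exponential density $\exp(-\frac{1}{2}|u|^q)$, which recovers the Gaussian kernel at $q=2$ and imposes a sharper sparsity-inducing penalty for $q<2$. A minimal sketch of this one-dimensional density (function names are illustrative, not from the paper):

```python
import numpy as np

def q_exp_density(u, q):
    """Unnormalized 1-D q-exponential density f(u) proportional to exp(-0.5*|u|^q).

    q = 2 recovers the Gaussian kernel; q < 2 (e.g. q = 1, Laplace-like)
    has heavier tails and corresponds to a sharper l_q penalty.
    """
    return np.exp(-0.5 * np.abs(u) ** q)

u = np.linspace(-3.0, 3.0, 7)
gaussian_like = q_exp_density(u, 2.0)  # q = 2: Gaussian shape
laplace_like = q_exp_density(u, 1.0)   # q = 1: sharper peak at zero
```

Note that the negative log-density is exactly $\frac{1}{2}|u|^q$, so maximum a posteriori estimation under this prior reproduces $\ell_q$-penalized optimization.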
Nov-15-2023