Bayesian Learning via Q-Exponential Process
Neural Information Processing Systems
Regularization is one of the most fundamental topics in optimization, statistics, and machine learning. To obtain sparsity when estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probabilistic distribution corresponding to such an $\ell_q$ penalty? What is the \emph{correct} stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects such as images, with a penalty to preserve certain properties, e.g.
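To make the ℓ_q-penalized objective concrete, here is a minimal sketch (not from the paper; the function name, data, and parameter values are illustrative assumptions) of a least-squares loss with an added ℓ_q penalty, where q=2 recovers ridge regression, q=1 the lasso, and 0<q<1 a nonconvex penalty that promotes stronger sparsity:

```python
import numpy as np

def lq_penalized_objective(u, X, y, lam=0.1, q=1.0):
    """Least-squares loss plus an l_q penalty: 0.5*||y - Xu||^2 + lam*||u||_q^q.

    Hypothetical helper for illustration; q controls the penalty shape
    (q=2: ridge, q=1: lasso, 0<q<1: nonconvex sparsity-promoting).
    """
    residual = y - X @ u
    loss = 0.5 * np.sum(residual ** 2)
    penalty = lam * np.sum(np.abs(u) ** q)
    return loss + penalty

# Toy data: y is generated exactly by a sparse coefficient vector,
# so the loss term vanishes and only the penalty remains.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
u_sparse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ u_sparse
print(lq_penalized_objective(u_sparse, X, y, q=1.0))  # penalty only: 0.1
```

Minimizing this objective for smaller q drives more coordinates of u exactly to zero, which is the sparsity behavior the penalty is designed to induce.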