Perspectives on Sparse Bayesian Learning

Jason Palmer, Bhaskar D. Rao, David P. Wipf

Neural Information Processing Systems 

Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters. Despite its empirical success, however, no rigorous motivation for this particular approximation is currently available. To address this issue, we demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved in practice.
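As a concrete illustration of the framework the abstract describes (one variance hyperparameter per weight, iteratively re-estimated by evidence maximization), the standard SBL/RVM loop can be sketched in Python/NumPy. This is our own minimal sketch, not code from the paper: the function name `sbl_em`, the fixed noise variance `sigma2`, and the toy problem sizes are all illustrative assumptions, and the EM hyperparameter update shown is the classic Tipping-style rule.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-4, n_iter=500):
    """Sketch of sparse Bayesian learning via EM evidence maximization.

    Model: y = Phi @ w + noise, with a zero-mean Gaussian prior on each
    weight, w_i ~ N(0, gamma_i) -- one hyperparameter gamma_i per weight.
    Each iteration computes the Gaussian posterior over w, then re-estimates
    gamma_i = mu_i**2 + Sigma_ii (the EM update). Irrelevant gamma_i shrink
    toward zero, which is how sparsity emerges in practice.
    """
    n, m = Phi.shape
    gamma = np.ones(m)
    for _ in range(n_iter):
        # Work with B = sigma^2 I + Phi Gamma Phi^T to avoid inverting
        # Gamma as its entries approach zero.
        G = gamma[:, None] * Phi.T                     # Gamma Phi^T  (m x n)
        B_inv = np.linalg.inv(sigma2 * np.eye(n) + Phi @ G)
        mu = G @ B_inv @ y                             # posterior mean
        # diag(Sigma) = gamma - diag(Gamma Phi^T B^{-1} Phi Gamma)
        Sigma_diag = gamma - np.sum((G @ B_inv) * G, axis=1)
        gamma = mu**2 + Sigma_diag                     # EM hyperparameter update
    return mu, gamma

# Toy recovery problem: 3 nonzero weights out of 50, 30 measurements.
rng = np.random.default_rng(0)
n, m = 30, 50
Phi = rng.standard_normal((n, m))
w_true = np.zeros(m)
w_true[[5, 17, 33]] = [1.0, -1.0, 0.8]
y = Phi @ w_true
mu, gamma = sbl_em(Phi, y)
```

After the loop, the hyperparameters `gamma` of the irrelevant weights have shrunk, so the posterior mean `mu` concentrates on the true support, which is the sparsity behavior the paper sets out to explain.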
