Comparing the Effects of Different Weight Distributions on Finding Sparse Representations
Rao, Bhaskar D., Wipf, David P.
Neural Information Processing Systems
Given a redundant dictionary of basis vectors (or atoms), our goal is to find maximally sparse representations of signals. Previously, we have argued that a sparse Bayesian learning (SBL) framework is particularly well-suited for this task, showing that it has far fewer local minima than other Bayesian-inspired strategies. In this paper, we provide further evidence for this claim by proving a restricted equivalence condition, based on the distribution of the nonzero generating model weights, whereby the SBL solution will equal the maximally sparse representation. We also prove that if these nonzero weights are drawn from an approximate Jeffreys prior, then with probability approaching one, our equivalence condition is satisfied.
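To make the generative setup concrete, the following is a minimal sketch of how signals with maximally sparse representations might be synthesized under the abstract's assumptions: a redundant dictionary of unit-norm atoms, and nonzero weights drawn from an approximate (truncated) Jeffreys prior p(w) ∝ 1/w on an interval [a, b]. All dimensions, the truncation interval, and the variable names are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, d = 20, 50, 5   # signal dim, dictionary size, number of nonzeros (illustrative)
a, b = 1e-3, 1e3      # truncation interval for the approximate Jeffreys prior (assumed)

# Redundant dictionary: m > n unit-norm atoms as columns.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# Draw d nonzero magnitudes from p(w) ∝ 1/w on [a, b] via inverse-CDF sampling:
# the CDF is log(w/a)/log(b/a), so w = a * (b/a)**u with u ~ Uniform(0, 1).
u = rng.uniform(size=d)
w = a * (b / a) ** u

# Place the magnitudes, with random signs, at d random dictionary positions.
x = np.zeros(m)
idx = rng.choice(m, size=d, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=d) * w

# Generated signal: a maximally sparse representation exists by construction.
y = Phi @ x
```

The inverse-CDF step is what makes the magnitudes scale-invariant over [a, b], the property the equivalence condition exploits: each decade of magnitude is equally likely, so the nonzero weights are highly dispersed with high probability.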
Dec-31-2006