Capacity of the Hebbian-Hopfield network associative memory
In \cite{Hop82}, Hopfield introduced a neural network model based on a \emph{Hebbian} learning rule and suggested that it can operate efficiently as an associative memory. Studying random binary patterns, he also uncovered that, if a small fraction of errors is tolerated in the retrieval of the stored patterns, the capacity of the network (the maximal number of memorized patterns, $m$) scales linearly with the pattern size, $n$. Moreover, he famously predicted $\alpha_c=\lim_{n\rightarrow\infty}\frac{m}{n}\approx 0.14$. We study this same scenario under two well-known notions of the patterns' basins of attraction: \textbf{\emph{(i)}} the AGS one from \cite{AmiGutSom85}; and \textbf{\emph{(ii)}} the NLT one from \cite{Newman88,Louk94,Louk94a,Louk97,Tal98}. Relying on the \emph{fully lifted random duality theory} (fl RDT) from \cite{Stojnicflrdt23}, we obtain the following explicit capacity characterizations on the first level of lifting: \begin{equation} \alpha_c^{(AGS,1)} = \left ( \max_{\delta\in \left ( 0,\frac{1}{2}\right ) }\frac{1-2\delta}{\sqrt{2}\,\operatorname{erfinv} \left ( 1-2\delta\right )} - \frac{2}{\sqrt{2\pi}} e^{-\left ( \operatorname{erfinv}\left ( 1-2\delta \right )\right )^2}\right )^2 \approx \mathbf{0.137906} \end{equation} \begin{equation} \alpha_c^{(NLT,1)} = \frac{\operatorname{erf}(x)^2}{2x^2}-1+\operatorname{erf}(x)^2 \approx \mathbf{0.129490}, \quad 1-\operatorname{erf}(x)^2- \frac{2\operatorname{erf}(x)e^{-x^2}}{\sqrt{\pi}x}+\frac{2e^{-2x^2}}{\pi}=0. \end{equation} Substantial numerical work on the second level of lifting gives $\alpha_c^{(AGS,2)} \approx \mathbf{0.138186}$ and $\alpha_c^{(NLT,2)} \approx \mathbf{0.12979}$, effectively uncovering a remarkably fast lifting convergence. Moreover, the obtained AGS characterizations exactly match the replica symmetry based ones of \cite{AmiGutSom85} and the corresponding replica symmetry breaking ones of \cite{SteKuh94}.
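The two first-level characterizations above can be checked numerically. The sketch below (assuming SciPy is available; the bounded-search limits and the root bracket $[0.5, 3]$ are arbitrary numerical choices, not part of the paper) maximizes the AGS expression over $\delta\in(0,1/2)$ and solves the NLT scalar equation for $x$:

```python
import numpy as np
from scipy.special import erf, erfinv
from scipy.optimize import brentq, minimize_scalar

# AGS first level: capacity is the square of the maximum over delta in (0, 1/2)
def f_ags(delta):
    t = erfinv(1.0 - 2.0 * delta)
    return (1.0 - 2.0 * delta) / (np.sqrt(2.0) * t) \
        - 2.0 / np.sqrt(2.0 * np.pi) * np.exp(-t ** 2)

res = minimize_scalar(lambda d: -f_ags(d), bounds=(1e-8, 0.5 - 1e-8),
                      method="bounded")
alpha_ags = f_ags(res.x) ** 2

# NLT first level: x solves the stated scalar equation, then alpha follows
def g_nlt(x):
    return 1.0 - erf(x) ** 2 \
        - 2.0 * erf(x) * np.exp(-x ** 2) / (np.sqrt(np.pi) * x) \
        + 2.0 * np.exp(-2.0 * x ** 2) / np.pi

x_star = brentq(g_nlt, 0.5, 3.0)  # sign change brackets the root
alpha_nlt = erf(x_star) ** 2 / (2.0 * x_star ** 2) - 1.0 + erf(x_star) ** 2

# both values should land close to the paper's 0.137906 and 0.129490
print(round(alpha_ags, 6), round(alpha_nlt, 6))
```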
A problem dependent analysis of SOCP algorithms in noisy compressed sensing
Under-determined systems of linear equations with sparse solutions have been the subject of extensive research in the last several years, above all due to the results of \cite{CRT,CanRomTao06,DonohoPol}. In this paper we consider \emph{noisy} under-determined linear systems. In a breakthrough, \cite{CanRomTao06} established that in \emph{noisy} systems, for any linear level of under-determinedness there is a linear sparsity that can be \emph{approximately} recovered through an SOCP (second order cone programming) optimization algorithm, so that the approximate solution vector is (in the $\ell_2$-norm sense) guaranteed to be no further from the sparse unknown vector than a constant times the noise. In our recent work \cite{StojnicGenSocp10} we established an alternative framework for the statistical performance analysis of SOCP algorithms, and demonstrated how it can be used to precisely characterize the \emph{generic} (worst-case) performance of the SOCP. In this paper we present a different set of results obtainable through the framework of \cite{StojnicGenSocp10}, relating to the \emph{problem dependent} performance analysis of SOCPs. We consider specific types of unknown sparse vectors and characterize the SOCP performance when it is used for the recovery of such vectors. We also show that our theoretical predictions are in solid agreement with the results obtained through numerical simulations.
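For context, the SOCP recovery program discussed above is $\min \|x\|_1$ subject to $\|Ax-y\|_2 \le \epsilon$. The sketch below is a minimal illustration only, not the paper's analysis framework: it runs plain ISTA on the closely related Lagrangian (LASSO) form of that program, on a synthetic noisy system. All problem sizes, the noise level, and the regularization weight are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 10, 80                 # ambient dim, sparsity, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

# ISTA on the Lagrangian form: min 0.5*||Ax - y||_2^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L     # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true)
print(err)
```

With these sizes the recovery error stays on the order of the noise, far below the norm of the unknown vector itself, in line with the kind of noise-proportional guarantee described above.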
On the accuracy of l1-filtering of signals with block-sparse structure
Karzan, Fatma K., Nemirovski, Arkadi S., Polyak, Boris T., Juditsky, Anatoli
We discuss new methods for the recovery of signals with block-sparse structure, based on l1-minimization. Our emphasis is on efficiently computable error bounds for the recovery routines. We optimize these bounds with respect to the method parameters to construct estimators with improved statistical properties. We justify the proposed approach with an oracle inequality which links the properties of the recovery algorithms to the best achievable estimation performance.
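As a toy illustration of block-sparse l1-type recovery, the sketch below runs a generic group soft-thresholding (group-Lasso/ISTA) iteration on a synthetic signal with a few active blocks. This is not one of the estimators or error bounds constructed in the paper; all sizes and the regularization weight are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_blocks, b, m = 40, 5, 100           # number of blocks, block size, measurements
n = n_blocks * b
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
for j in rng.choice(n_blocks, 4, replace=False):   # 4 active blocks
    x_true[j * b:(j + 1) * b] = rng.standard_normal(b)
y = A @ x_true + 0.01 * rng.standard_normal(m)     # noisy measurements

# group ISTA: min 0.5*||Ax - y||_2^2 + lam * sum of block l2-norms
lam = 0.02
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - A.T @ (A @ x - y) / L     # gradient step
    Z = z.reshape(n_blocks, b)
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    # block soft-threshold: shrink each block's l2-norm by lam/L
    Z = np.maximum(1.0 - (lam / L) / np.maximum(norms, 1e-12), 0.0) * Z
    x = Z.ravel()

err = np.linalg.norm(x - x_true)
print(err)
```

The block-wise shrinkage zeroes out entire inactive blocks at once, which is what distinguishes block-sparse recovery from plain coordinate-wise l1-minimization.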