
Collaborating Authors: Montanari


Debiased Estimators in High-Dimensional Regression: A Review and Replication of Javanmard and Montanari (2014)

Smith, Benjamin

arXiv.org Machine Learning

High-dimensional statistical settings ($p \gg n$) pose fundamental challenges for classical inference, largely due to the bias introduced by regularized estimators such as the LASSO. To address this, Javanmard and Montanari (2014) propose a debiased estimator that enables valid hypothesis testing and confidence interval construction. This report examines their debiased LASSO framework, which yields asymptotically normal estimators in high-dimensional settings, and presents the key theoretical results underlying the approach: specifically, the construction of an optimized debiased estimator that restores asymptotic normality and thereby enables the computation of valid confidence intervals and $p$-values. To evaluate the claims of Javanmard and Montanari, a subset of the original simulation study and the real-data analysis is replicated. The original empirical analysis is further extended to the desparsified LASSO, which is referenced but not implemented in the original study. The results reveal a trade-off: while the debiased LASSO achieves reliable coverage and controls the Type I error, the LASSO projection estimator can offer improved power in idealized low-signal settings without compromising error rates, whereas Javanmard and Montanari's method is more robust to complex correlation structures, improving precision and signal detection in real data.
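The debiasing step described above takes the form $\hat{\theta}^d = \hat{\theta} + \frac{1}{n} M X^\top (y - X\hat{\theta})$, where $M$ approximates the inverse of the sample covariance. A minimal sketch of the construction and the resulting confidence intervals, using the pseudoinverse of the sample covariance as a stand-in for the optimized $M$ of Javanmard and Montanari (dimensions, sparsity, and noise level are illustrative assumptions, not values from the report):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5                      # illustrative sizes
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:s] = 2.0                           # s-sparse signal
sigma = 1.0                               # noise level assumed known here
y = X @ theta + sigma * rng.standard_normal(n)

# LASSO fit with a standard rate-driven penalty choice
lam = 2 * sigma * np.sqrt(np.log(p) / n)
theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# debiasing: theta_d = theta_hat + (1/n) M X^T (y - X theta_hat),
# with M = pinv of the sample covariance as a simple stand-in
Sigma_hat = X.T @ X / n
M = np.linalg.pinv(Sigma_hat)
theta_d = theta_hat + M @ X.T @ (y - X @ theta_hat) / n

# 95% confidence intervals from the asymptotic normal approximation
V = M @ Sigma_hat @ M.T
half = 1.96 * sigma * np.sqrt(np.diag(V) / n)
covered = (theta >= theta_d - half) & (theta <= theta_d + half)
coverage = covered.mean()
```

Coverage of the intervals should sit near the nominal 95% level in this well-conditioned regime; the paper's contribution is an optimized choice of $M$ that makes the same construction valid when $p \gg n$.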




Plug-in Estimation in High-Dimensional Linear Inverse Problems: A Rigorous Analysis

Alyson K. Fletcher, Parthe Pandit, Sundeep Rangan, Subrata Sarkar, Philip Schniter

Neural Information Processing Systems

Estimating a vector x from noisy linear measurements Ax + w often requires the use of prior knowledge or structural constraints on x for accurate reconstruction. Several recent works have considered combining linear least-squares estimation with a generic or "plug-in" denoiser function that can be designed in a modular manner based on the prior knowledge about x.
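One simple instance of the plug-in idea is an iteration that alternates a gradient step on the least-squares term with a generic denoiser applied component-wise. A minimal sketch with soft-thresholding as the plug-in denoiser (problem sizes, step size, and threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

def soft_threshold(v, t):
    # generic sparsity-promoting "plug-in" denoiser
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, p, k = 100, 300, 10                    # illustrative sizes
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n)

# plug-in iteration: least-squares gradient step, then denoise
gamma = 0.9 / np.linalg.norm(A, 2) ** 2   # step size below 1/||A||^2
x = np.zeros(p)
for _ in range(500):
    x = soft_threshold(x + gamma * A.T @ (y - A @ x), gamma * 0.05)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The modularity is the point: swapping `soft_threshold` for any other denoiser tailored to the prior on x leaves the rest of the iteration unchanged.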




becc353586042b6dbcc42c1b794c37b6-Paper.pdf

Neural Information Processing Systems

Here, the function $f(\cdot)$ is applied to $Ax$ in a component-wise manner. The above model arises in many applications of signal processing [13, 10, 41], communications [56, 9, 25], and machine learning [48, 40].


1 Model, contributions and related works

Random features model as a 2-layers neural network. Given $n$ observations $(x_1, y_1), \ldots, (x_n, y_n)$ with $x_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$ for each $i = 1, \ldots, n$, the object of study of this paper is the estimate $\hat{\alpha} = \arg\min$

Neural Information Processing Systems

We establish Central Limit Theorems (CLT) for the derivatives of 2-layers NN models in (2) when $n, p, d \to +\infty$ in the proportional asymptotic regime (6). A weighted average of the gradients of the trained NN, up to an explicit additive correction, is proved to be asymptotically normal, where the variance of the limit can be estimated explicitly.


All-or-nothing statistical and computational phase transitions in sparse spiked matrix estimation

Neural Information Processing Systems

Similarly, the ISOMAP face database consists of images (256 levels of gray) of size $64 \times 64$, i.e., vectors in $\mathbb{R}^{4096}$, whereas the correct intrinsic dimension is only 3 (for the vertical and horizontal pose and the lighting direction). The second approach is an average-case approach (in the spirit of the statistical-mechanics treatment of high-dimensional systems) that models feature vectors by a random ensemble, taken as a set of random vectors with independently identically distributed (i.i.d.) components and a small but fixed fraction of non-zero components.
