Score Combining for Contrastive OOD Detection
Reehorst, Edward T., Schniter, Philip
In out-of-distribution (OOD) detection, one is asked to classify whether a test sample comes from a known inlier distribution or not. We focus on the case where the inlier distribution is defined by a training dataset and there exists no additional knowledge about the novelties that one is likely to encounter. This problem is also referred to as novelty detection, one-class classification, and unsupervised anomaly detection. The current literature suggests that contrastive learning techniques are state-of-the-art for OOD detection. We aim to improve on those techniques by combining/ensembling their scores using the framework of null hypothesis testing and, in particular, a novel generalized likelihood ratio test (GLRT). We demonstrate that our proposed GLRT-based technique outperforms the state-of-the-art CSI and SupCSI techniques from Tack et al. (2020) in dataset-vs-dataset experiments with CIFAR-10, SVHN, LSUN, ImageNet, and CIFAR-100, as well as leave-one-class-out experiments with CIFAR-10. We also demonstrate that our GLRT outperforms the score-combining methods of Fisher, Bonferroni, Simes, Benjamini-Hochberg, and Stouffer in our application.
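As a concrete illustration, the following is a minimal sketch of one way to combine K per-sample scores with a one-sided GLRT, assuming each score has been standardized against held-out inlier statistics and is approximately Gaussian under the null; the Gaussian setup and all names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def glrt_combine(scores, null_mean, null_std):
    """Combine K per-sample OOD scores via a one-sided Gaussian GLRT.

    Under H0 (inlier), each standardized score z_k ~ N(0,1); under H1 we
    allow a nonnegative mean shift theta_k >= 0. Maximizing the likelihood
    ratio over theta gives the statistic sum_k max(z_k, 0)^2.
    """
    z = (scores - null_mean) / null_std               # standardize vs. inlier stats
    return np.sum(np.maximum(z, 0.0) ** 2, axis=-1)   # larger => more likely OOD

# Illustrative usage: null statistics estimated from held-out inlier scores.
rng = np.random.default_rng(0)
inlier_scores = rng.normal(size=(1000, 3))            # K = 3 hypothetical scores
mu, sd = inlier_scores.mean(0), inlier_scores.std(0)
test_scores = rng.normal(loc=[0.0, 2.0, 1.0], size=(5, 3))
print(glrt_combine(test_scores, mu, sd))
```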
pcaGAN: Improving Posterior-Sampling cGANs via Principal Component Regularization
Bendel, Matthew C., Ahmad, Rizwan, Schniter, Philip
In ill-posed imaging inverse problems, there can exist many hypotheses that fit both the observed measurements and prior knowledge of the true image. Rather than returning just one hypothesis of that image, posterior samplers aim to explore the full solution space by generating many probable hypotheses, which can later be used to quantify uncertainty or construct recoveries that appropriately navigate the perception/distortion trade-off. In this work, we propose a fast and accurate posterior-sampling conditional generative adversarial network (cGAN) that, through a novel form of regularization, aims for correctness in the posterior mean as well as the trace and K principal components of the posterior covariance matrix. Numerical experiments demonstrate that our method outperforms contemporary cGANs and diffusion models in imaging inverse problems like denoising, large-scale inpainting, and accelerated MRI recovery.
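To make the regularization target concrete, here is a minimal numpy sketch of how the posterior mean, covariance trace, and top-K principal components could be estimated from a batch of P generator samples, together with a hypothetical penalty form; the helper names and the penalty are illustrative stand-ins, not the exact pcaGAN loss.

```python
import numpy as np

def posterior_stats(samples, K):
    """Empirical posterior statistics from P generator samples of one image.

    samples: (P, N) array of vectorized posterior samples.
    Returns the sample mean, the trace of the sample covariance, the top-K
    eigenvalues, and the top-K principal components (via SVD of the
    centered samples, avoiding the N x N covariance matrix).
    """
    P, N = samples.shape
    mean = samples.mean(axis=0)
    centered = samples - mean
    trace = np.sum(centered ** 2) / (P - 1)           # tr(Cov) without forming Cov
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, trace, s[:K] ** 2 / (P - 1), Vt[:K]

# Hypothetical regularizer: encourage the generator's eigenvalues/PCs to match
# reference values (e.g., from a tractable Gaussian problem where they are known).
def pc_penalty(eigvals, pcs, ref_eigvals, ref_pcs):
    align = 1.0 - np.sum((pcs @ ref_pcs.T) ** 2, axis=1)   # 1 - cos^2 alignment
    return np.sum((eigvals - ref_eigvals) ** 2) + np.sum(align)
```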
Sketching Datasets for Large-Scale Learning (long version)
Gribonval, Rémi, Chatalic, Antoine, Keriven, Nicolas, Schellekens, Vincent, Jacques, Laurent, Schniter, Philip
This article considers "sketched learning," or "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first constructed by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. Parameters are then learned from the sketch, without access to the original dataset. This article surveys the current state-of-the-art in sketched learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees (on both information preservation and privacy preservation), and important open problems.
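As a minimal illustration of the sketching step, the code below averages random Fourier features over a dataset in one streaming pass; the Gaussian frequency distribution and its scale are illustrative assumptions.

```python
import numpy as np

def sketch_dataset(X, m, scale=1.0, seed=0):
    """Compress an (n, d) dataset into a length-m complex sketch.

    Each entry averages a random Fourier feature exp(j w_i^T x) over the
    whole dataset, so the sketch can be built in one streaming pass and
    the original data discarded before learning.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=scale, size=(m, d))     # random frequencies w_i
    return np.exp(1j * X @ W.T).mean(axis=0)     # z = (1/n) sum_x exp(j W x)

# Usage: a 10^5-point dataset in R^2 compressed to 500 complex numbers.
X = np.random.default_rng(1).normal(size=(100_000, 2))
z = sketch_dataset(X, m=500)
print(z.shape)  # (500,)
```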
Plug-in Estimation in High-Dimensional Linear Inverse Problems: A Rigorous Analysis
Fletcher, Alyson K., Pandit, Parthe, Rangan, Sundeep, Sarkar, Subrata, Schniter, Philip
Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax+w}$ often requires use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction. Several recent works have considered combining linear least-squares estimation with a generic or plug-in ``denoiser'' function that can be designed in a modular manner based on the prior knowledge about $\mathbf{x}$. While these methods have shown excellent performance, it has been difficult to obtain rigorous performance guarantees. This work considers plug-in denoising combined with the recently-developed Vector Approximate Message Passing (VAMP) algorithm, which is itself derived via Expectation Propagation techniques. It is shown that the mean squared error of this ``plug-in'' VAMP can be exactly predicted for a large class of high-dimensional random $\mathbf{A}$ and denoisers. The method is illustrated in image reconstruction and parametric bilinear estimation.
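The following numpy sketch shows the overall structure of VAMP with a plug-in denoiser, using a Monte Carlo divergence estimate for the Onsager-style correction; the initialization, the small-constant safeguards, and the soft-threshold denoiser in the usage line are simplifying assumptions, not the paper's exact algorithm.

```python
import numpy as np

def mc_divergence(denoiser, r, sigma, eps=1e-3, seed=0):
    """Monte Carlo estimate of the denoiser's average divergence."""
    p = np.random.default_rng(seed).choice([-1.0, 1.0], size=r.shape)
    return p @ (denoiser(r + eps * p, sigma) - denoiser(r, sigma)) / (eps * r.size)

def plugin_vamp(y, A, gamma_w, denoiser, iters=20):
    """Structural sketch of plug-in VAMP (safeguards largely omitted)."""
    M, N = A.shape
    AtA, Aty = A.T @ A, A.T @ y
    r1, gamma1 = Aty.copy(), 1.0                     # crude initialization
    for _ in range(iters):
        # Denoising stage: plug-in denoiser plus divergence-based correction
        sigma = 1.0 / np.sqrt(gamma1)
        x1 = denoiser(r1, sigma)
        alpha1 = max(mc_divergence(denoiser, r1, sigma), 1e-6)
        eta1 = gamma1 / alpha1
        gamma2 = eta1 - gamma1
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2
        # LMMSE stage: exact linear estimation under a Gaussian pseudo-prior
        C = np.linalg.inv(gamma_w * AtA + gamma2 * np.eye(N))
        x2 = C @ (gamma_w * Aty + gamma2 * r2)
        alpha2 = gamma2 * np.trace(C) / N
        eta2 = gamma2 / alpha2
        gamma1 = eta2 - gamma2
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1
    return x2

# Usage with a soft-threshold denoiser standing in for a learned one:
soft = lambda r, s: np.sign(r) * np.maximum(np.abs(r) - s, 0.0)
```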
Learning and Free Energies for Vector Approximate Message Passing
Fletcher, Alyson K., Schniter, Philip
Vector approximate message passing (VAMP) is a computationally simple approach to the recovery of a signal $\mathbf{x}$ from noisy linear measurements $\mathbf{y}=\mathbf{Ax}+\mathbf{w}$. Like the AMP algorithm proposed by Donoho, Maleki, and Montanari in 2009, VAMP is characterized by a rigorous state evolution (SE) that holds for certain classes of large random matrices and that matches the replica prediction of optimality. But while AMP's SE holds only for large i.i.d. sub-Gaussian $\mathbf{A}$, VAMP's SE holds for the much larger class of right-rotationally invariant $\mathbf{A}$. To run VAMP, however, one must specify the statistical parameters of the signal and noise. This work combines VAMP with Expectation-Maximization to yield an algorithm, EM-VAMP, that can jointly recover $\mathbf{x}$ while learning those statistical parameters. The fixed points of the proposed EM-VAMP algorithm are shown to be stationary points of a certain constrained free energy, providing a variational interpretation of the algorithm. Numerical simulations show that EM-VAMP is robust to highly ill-conditioned $\mathbf{A}$, with performance nearly matching oracle-parameter VAMP.
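As an illustration of the EM side, the sketch below shows one M-step update of the noise precision given VAMP's current posterior approximation; modeling the posterior covariance as isotropic, $\eta^{-1}\mathbf{I}$, is a simplifying assumption made here for brevity.

```python
import numpy as np

def em_update_noise_precision(y, A, x_hat, eta):
    """M-step update of the noise precision gamma_w, given the current
    posterior mean x_hat and (approximate) posterior precision eta.

    EM sets gamma_w^{-1} = E[||y - A x||^2] / M, with the expectation taken
    under the current posterior approximation N(x_hat, eta^{-1} I).
    """
    M = y.size
    resid = y - A @ x_hat
    expected_sq_err = resid @ resid + np.sum(A * A) / eta  # ||.||^2 + tr(A A^T)/eta
    return M / expected_sq_err
```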
prDeep: Robust Phase Retrieval with Flexible Deep Neural Networks
Metzler, Christopher A., Schniter, Philip, Veeraraghavan, Ashok, Baraniuk, Richard G.
Phase retrieval (PR) algorithms have become an important component in many modern computational imaging systems. For instance, in the contexts of ptychography and speckle correlation imaging, PR algorithms enable imaging past the diffraction limit and through scattering media, respectively. Unfortunately, traditional PR algorithms struggle in the presence of noise. Recently, PR algorithms have been developed that use priors to make themselves more robust. However, these algorithms often require unrealistic (Gaussian or coded diffraction pattern) measurement models and offer slow computation times. These drawbacks have hindered widespread adoption. In this work, we use convolutional neural networks, a powerful tool from machine learning, to regularize phase retrieval problems and improve recovery performance. We test our new algorithm, prDeep, in simulation and demonstrate that it is robust to noise, can handle a variety of system models, and operates fast enough for high-resolution applications.
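For intuition, here is a real-valued sketch in the spirit of the regularization-by-denoising (RED) framework that prDeep builds on, with a median filter standing in for the trained denoising network; the amplitude-loss model and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def red_phase_retrieval(y, A, x0, lam=0.5, step=1e-3, iters=200):
    """Sketch of RED-regularized phase retrieval (real-valued for simplicity).

    Gradient descent on sum_i (|a_i^T x| - y_i)^2 + (lam/2) x^T (x - D(x)),
    where D is a plug-in denoiser (here a median filter standing in for a
    trained network). Constant factors are absorbed into the step size.
    """
    x = x0.copy()
    n = int(np.sqrt(x.size))
    denoise = lambda v: median_filter(v.reshape(n, n), size=3).ravel()
    for _ in range(iters):
        z = A @ x
        grad_f = A.T @ ((np.abs(z) - y) * np.sign(z))     # amplitude-loss gradient
        x = x - step * (grad_f + lam * (x - denoise(x)))  # RED-style update
    return x
```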
Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems
Fletcher, Alyson K., Sahraee-Ardakan, Mojtaba, Rangan, Sundeep, Schniter, Philip
The problem of estimating a random vector x from noisy linear measurements y=Ax+w with unknown parameters on the distributions of x and w, which must also be learned, arises in a wide range of statistical learning and linear inverse problems. We show that a computationally simple iterative message-passing algorithm can provably obtain asymptotically consistent estimates in a certain high-dimensional large-system limit (LSL) under very general parameterizations. Previous message passing techniques have required i.i.d. sub-Gaussian A matrices and often fail when the matrix is ill-conditioned. The proposed algorithm, called adaptive vector approximate message passing (Adaptive VAMP) with auto-tuning, applies to all right-rotationally random A. Importantly, this class includes matrices with arbitrarily bad conditioning. We show that the parameter estimates and mean squared error (MSE) of x in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations. In addition, a simple testable condition is provided in which the MSE matches the Bayes-optimal value predicted by the replica method. The paper thus provides a computationally simple method with provable guarantees of optimality and consistency over a large class of linear inverse problems.
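To illustrate what a state-evolution prediction looks like, the sketch below evaluates one scalar SE map by Monte Carlo; the Bernoulli-Gaussian prior and soft-threshold denoiser are illustrative choices, and the paper's actual SE equations also track quantities tied to the spectrum of A.

```python
import numpy as np

def se_mse(denoiser, prior_sampler, tau, n_mc=100_000, seed=0):
    """Monte Carlo evaluation of one state-evolution map: given effective
    noise variance tau, return E[(g(X + sqrt(tau) Z) - X)^2], Z ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    x = prior_sampler(n_mc, rng)
    r = x + np.sqrt(tau) * rng.standard_normal(n_mc)
    return np.mean((denoiser(r, np.sqrt(tau)) - x) ** 2)

# Illustrative: Bernoulli-Gaussian prior with a soft-threshold denoiser.
bg = lambda n, rng: rng.standard_normal(n) * (rng.random(n) < 0.1)
soft = lambda r, s: np.sign(r) * np.maximum(np.abs(r) - s, 0.0)
print(se_mse(soft, bg, tau=0.05))
```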
A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm
Al-Shoukairi, Maher, Schniter, Philip, Rao, Bhaskar D.
In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL). In particular, GGAMP is used to implement the E-step in SBL in place of matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to arbitrary measurement matrices $\boldsymbol{A}$ than the standard damped GAMP algorithm, while having much lower complexity than the standard SBL algorithm. We then extend the approach from the single measurement vector (SMV) case to the temporally correlated multiple measurement vector (MMV) case, leading to the GGAMP-TSBL algorithm. We verify the robustness and computational advantages of the proposed algorithms through numerical experiments.
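For reference, the sketch below shows EM-based SBL with an exact E-step computed by matrix inversion; this inversion is precisely the step that GGAMP-SBL replaces with damped GAMP iterations. The uniform initialization and iteration count are illustrative.

```python
import numpy as np

def sbl_em(y, A, noise_var, iters=50):
    """Sketch of EM-based SBL with an exact E-step (matrix inversion).

    GGAMP-SBL replaces this E-step with damped GAMP message passing to
    avoid the costly inversion at every EM iteration."""
    M, N = A.shape
    gamma = np.ones(N)                                # per-coefficient variances
    for _ in range(iters):
        # E-step: posterior mean/covariance of x given current gamma
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / noise_var
        # M-step: update hyperparameters from posterior second moments
        gamma = mu ** 2 + np.diag(Sigma)
    return mu, gamma
```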