Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition
The plug-and-play priors (PnP) and regularization by denoising (RED) methods have become widely used for solving inverse problems by leveraging pre-trained deep denoisers as image priors. While the empirical imaging performance and the theoretical convergence properties of these algorithms have been widely investigated, their recovery properties have not previously been theoretically analyzed. We address this gap by showing how to establish theoretical recovery guarantees for PnP/RED under the assumption that the solutions of these methods lie near the fixed points of a deep neural network. We also present numerical results on compressive sensing comparing the recovery performance of PnP/RED against that of recent algorithms based on generative models. Our numerical results suggest that PnP with a pre-trained artifact-removal network provides significantly better results than the existing state-of-the-art methods.
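Concretely, PnP methods replace the proximal operator in a gradient scheme with a denoiser D, so the recovery question concerns the fixed points of the resulting map. Below is a minimal sketch of PnP proximal-gradient iterations on a toy compressive-sensing problem; soft-thresholding stands in for the pre-trained deep denoiser, and all names and parameters are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing problem: y = A @ x_true with a sparse x_true.
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x_true

def denoise(v, tau=0.02):
    # Stand-in "denoiser" (soft-thresholding); a real PnP method
    # would plug in a pre-trained deep denoiser here.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# PnP proximal-gradient iteration: x <- D(x - gamma * A^T (A x - y)).
gamma = 0.9 / np.linalg.norm(A, 2) ** 2  # step size below 1/L
x = np.zeros(n)
for _ in range(1000):
    x = denoise(x - gamma * A.T @ (A @ x - y))

# The recovery analysis assumes the solution lies near a fixed point of this map.
residual = np.linalg.norm(x - denoise(x - gamma * A.T @ (A @ x - y)))
```

The iterate settles at a fixed point of the denoiser-composed map, and in this well-conditioned toy regime it lands close to the true sparse signal.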
Reviewer #2: (I) Our algorithm can handle more than two protected groups: in our numerical results, there are up to five protected groups.
We sincerely thank all reviewers for the detailed, thoughtful, and constructive comments and feedback. We added a table of racial-composition data for all networks and incorporated all the recommendations. We improved the clarity of Th. 1 by adding "In this formulation, there are two sets of variables: a) ...". We will provide a head-to-head comparison with Table 1. We will release the code and a "readme" file with instructions detailing the sequence of the runs.
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper describes an approach for computing the L1-regularized Gaussian maximum likelihood estimator for the sparse inverse covariance estimation problem. The focus of the paper is scaling the previous QUIC algorithm to problems involving millions of variables. The authors describe three innovations in the new approach: inexact Hessians, better computation of the logdet function, and careful selection of the blocks updated in their block coordinate scheme via a smart clustering procedure. The numerical results test the new method against the previous QUIC algorithm, GLASSO, and ALM, showing improved performance on a few select problems.
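For reference, the objective being scaled here is the L1-regularized Gaussian maximum-likelihood (graphical lasso) problem. The sketch below, our own illustration rather than the paper's code, simply evaluates that objective and sanity-checks that, with the penalty switched off, the analytic minimizer S^{-1} beats a perturbed candidate.

```python
import numpy as np

def glasso_objective(Theta, S, lam):
    # L1-regularized Gaussian MLE objective minimized by QUIC-style methods:
    #   f(Theta) = -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1 (off-diagonal)
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    l1_offdiag = np.abs(Theta).sum() - np.abs(np.diag(Theta)).sum()
    return -logdet + np.trace(S @ Theta) + lam * l1_offdiag

# Sanity check on a small problem: with lam = 0 the minimizer is S^{-1}.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
S = np.cov(X, rowvar=False) + 0.1 * np.eye(5)  # regularized sample covariance
Theta_star = np.linalg.inv(S)
f_star = glasso_objective(Theta_star, S, 0.0)
f_pert = glasso_objective(Theta_star + 0.05 * np.eye(5), S, 0.0)
```

The unpenalized objective is strictly convex over positive-definite matrices, so any perturbation of S^{-1} strictly increases it.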
The authors present an unnamed algorithm for recovering a degree-three tensor from a noisy version, where the noise is due to (a) slice misalignment and (b) sparse noise. The authors present a loss function for the model, which they optimize by an ADMM-inspired gradient descent heuristic. They compare their method on real and synthetic image data against different implementations of RASL and an algorithm from [14], which they call Li's work, and show that their algorithm achieves lower recovery error. The paper is clearly written, and the idea of performing alignment and denoising on multiple images at once seems novel, though the reviewer, not being a full expert on tensor methods in image processing, cannot definitively settle the question of the originality of the application.
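For context, methods in the RASL family build on the standard low-rank + sparse split (robust PCA), typically solved with an augmented-Lagrangian/ADMM scheme. The sketch below is our own matrix-only illustration of that building block, handling the sparse-noise part (b) with no misalignment model; it is not the authors' tensor algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 30, 30, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = np.where(rng.random((m, n)) < 0.05,
                  5.0 * rng.standard_normal((m, n)), 0.0)
X = L_true + S_true  # observed data = low-rank + sparse corruption

def svt(M, tau):
    # Singular-value thresholding: prox of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft-thresholding: prox of the L1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# Inexact ALM iterations for: min ||L||_* + lam * ||S||_1  s.t.  X = L + S.
lam = 1.0 / np.sqrt(max(m, n))
mu = 1.25 / np.linalg.norm(X, 2)
L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
for _ in range(200):
    L = svt(X - S + Y / mu, 1.0 / mu)
    S = soft(X - L + Y / mu, lam / mu)
    Y += mu * (X - L - S)
    mu = min(mu * 1.2, 1e7)  # gradually tighten the constraint
```

In this benign regime (rank 2, 5% corruption) the split recovers the low-rank component essentially exactly.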
Summary: The authors re-explain regularization in optimization problems as a constraint of the type "the parameters ${\bf w}$ must belong to the convex set $O$", where the convex set $O$ is obtained as the convex hull of all points of the form $g.v$, where $v$ is some fixed vector, $g$ an element of a group, and $.$ is a (linear) group action of element $g$ on vector $v$. More concretely, their main contributions are as follows. (A) They show that familiar regularizers arise this way; for example, the ball associated with the L1 norm can be obtained as the convex hull of the points produced by flipping the signs and permuting the components of the vector $(1,0,0,\ldots,0)$. (B) They show that, given a seed $v$ and a group action associated to a group $G$, the statement "$w$ is a member of the convex set $O_G(v)$" can be seen as "$v$ is smaller than $w$" under a pre-order. (C) They show that if $-v$ belongs to the convex set $O$, then $O$ can be seen as the ball of an atomic norm (as defined in Chandrasekaran et al.). (D) They show that the sorted L1 norm equals the dual of the norm associated with the signed-permutation orbitope. (E) They show how to reinterpret the main steps of conditional and projected gradient algorithms in the language of orbitopes, and give a procedure to compute projections onto orbitopes. Quality: There are no technical mistakes in the paper.
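Contribution (A) is easy to check numerically: the atoms are the signed permutations of the seed $v=(1,0,\dots,0)$, i.e. the vectors $\pm e_i$, and any point of the L1 unit ball is a convex combination of them. A small sketch (our own illustration; the function names are ours, not the paper's):

```python
import numpy as np

# Atoms: all signed permutations of the seed (1, 0, ..., 0), i.e. +/- e_i.
# Their convex hull is exactly the L1 unit ball.
n = 4
atoms = np.vstack([np.eye(n), -np.eye(n)])  # rows: +e_1..+e_n, -e_1..-e_n

def as_convex_combination(w):
    # Express w with ||w||_1 <= 1 as a convex combination of the atoms:
    # weight |w_i| on sign(w_i) * e_i; split the leftover mass evenly
    # between +e_1 and -e_1, which cancel.
    coeffs = np.zeros(2 * n)
    for i, wi in enumerate(w):
        coeffs[i if wi >= 0 else n + i] += abs(wi)
    slack = 1.0 - np.abs(w).sum()
    coeffs[0] += slack / 2
    coeffs[n] += slack / 2
    return coeffs

w = np.array([0.3, -0.2, 0.1, 0.0])  # a point with ||w||_1 = 0.6 <= 1
c = as_convex_combination(w)
```

The coefficients are nonnegative, sum to one, and reproduce $w$ exactly, witnessing $w \in O_G(v)$.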
...convergence of several policy gradient methods, whose novelty is summarized in Lines 210-212 and further explained...
R1.1 "...these analyses mainly come from existing work...the novelty is very limited." Our proposed SRVR-NPG has a better complexity than SRVR-PG (Remark 4.13). We believe our theoretical contribution already has archival value. R1.3 Reproducibility: We believe that all of our theoretical claims have been proved. Please refer to [34] for a detailed proof.