
Collaborating Authors: Ratti, Luca


Revisiting $\Psi$DONet: microlocally inspired filters for incomplete-data tomographic reconstructions

arXiv.org Artificial Intelligence

In this paper, we revisit a supervised learning approach based on unrolling, known as $\Psi$DONet, by providing a deeper microlocal interpretation for its theoretical analysis, and extending its study to the case of sparse-angle tomography. Furthermore, we refine the implementation of the original $\Psi$DONet by considering special filters whose structure is specifically inspired by the streak artifact singularities characterizing tomographic reconstructions from incomplete data. This allows us to considerably reduce the number of (learnable) parameters while preserving (or even slightly improving) the reconstruction quality for limited-angle data, and provides a proof-of-concept for the case of sparse-angle tomographic data.
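As a rough illustration of the unrolling idea (not the authors' actual $\Psi$DONet architecture), a single ISTA-like layer in which a small learnable convolution plays the role of the learned filters might look as follows; the callables `A` and `At` and the class name `UnrolledLayer` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class UnrolledLayer(nn.Module):
    """One ISTA-like iteration with a learnable convolutional correction.

    Illustrative sketch only: `A` and `At` stand for the forward and
    back-projection operators (passed as callables), and the convolution
    plays the role of the learned filters acting on the current iterate.
    """
    def __init__(self, channels=1, step=1e-3, thresh=1e-4):
        super().__init__()
        self.step = step
        self.thresh = thresh
        self.conv_filters = nn.Conv2d(channels, channels, kernel_size=5, padding=2)

    def forward(self, x, y, A, At):
        # gradient step on the data-fidelity term ||A x - y||^2 / 2
        grad = At(A(x) - y)
        z = x - self.step * grad + self.conv_filters(x)
        # soft thresholding, promoting sparsity of the iterate
        return torch.sign(z) * torch.clamp(torch.abs(z) - self.thresh, min=0.0)
```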


Learning sparsity-promoting regularizers for linear inverse problems

arXiv.org Machine Learning

This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
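In loose notation (the exact formulation, norms and constraints in the paper may differ, and the parameter $\lambda$ is an assumption), the bilevel structure can be sketched as follows: the upper level selects the synthesis operator $B$ by minimizing the expected reconstruction error, while the lower level solves a sparse synthesis problem,

$$\min_{B}\ \mathbb{E}_{(x,y)}\,\big\| B\,\hat{w}_B(y) - x \big\|^2 \quad \text{s.t.} \quad \hat{w}_B(y) \in \arg\min_{w}\ \tfrac{1}{2}\,\| A B w - y \|^2 + \lambda\, \|w\|_1.$$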


Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems

arXiv.org Artificial Intelligence

In inverse problems, it is widely recognized that the incorporation of a sparsity prior yields a regularization effect on the solution. This approach is grounded on the a priori assumption that the unknown can be appropriately represented in a basis with a limited number of significant components, while most coefficients are close to zero. This occurrence is frequently observed in real-world scenarios, such as with piecewise smooth signals. In this study, we propose a probabilistic sparsity prior formulated as a mixture of degenerate Gaussians, capable of modeling sparsity with respect to a generic basis. Under this premise, we design a neural network that can be interpreted as the Bayes estimator for linear inverse problems. Additionally, we put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network. To evaluate the effectiveness of our approach, we conduct a numerical comparison with commonly employed sparsity-promoting regularization techniques, namely LASSO, group LASSO, iterative hard thresholding, and sparse coding/dictionary learning. Notably, our reconstructions consistently exhibit lower mean square error values across all $1$D datasets utilized for the comparisons, even in cases where the datasets significantly deviate from a Gaussian mixture model.
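For context (this is the generic Gaussian-mixture computation, not necessarily the parametrization used by the proposed network): for a linear model $y = Ax + \epsilon$ with Gaussian noise of covariance $\Sigma_\epsilon$ and a mixture prior $x \sim \sum_k w_k\, \mathcal{N}(\mu_k, \Sigma_k)$, the Bayes (MMSE) estimator is a $y$-dependent convex combination of componentwise affine estimators,

$$\hat{x}(y) = \sum_k \tilde{w}_k(y)\,\Big(\mu_k + \Sigma_k A^\top \big(A \Sigma_k A^\top + \Sigma_\epsilon\big)^{-1}(y - A\mu_k)\Big), \qquad \tilde{w}_k(y) \propto w_k\, \mathcal{N}\big(y;\, A\mu_k,\, A\Sigma_k A^\top + \Sigma_\epsilon\big),$$

which remains well defined for degenerate (singular) $\Sigma_k$ as long as the noise covariance is positive definite.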


Learned reconstruction methods for inverse problems: sample error estimates

arXiv.org Machine Learning

The mathematical treatment of inverse problems has proved to be a lively and attractive research field, driven and motivated by a wide variety of applications and by the theoretical challenges induced by their ill-posed nature. In order to provide more accurate and reliable strategies, especially for the reconstruction task, research in the field has shown a growing interest in learned reconstruction, or data-driven, methods, which combine classical, model-based approaches with valuable information of statistical nature. This represents a natural outcome and development of the analysis of inverse problems, both on the numerical and on the theoretical side: indeed, the idea of leveraging prior knowledge on the solution has traditionally been used to mitigate ill-posedness, as a regularization tool as much as a support for the reconstruction. We have now witnessed the emergence of several learning-based approaches to inverse problems, providing, in many cases, striking numerical results in terms of accuracy and efficiency. Moreover, significant interest has grown in the direction of theoretical guarantees for such techniques, ranging from the demand for interpretability and reliability to the issues of stability and convergence [8, 55]. Although several distinct avenues have emerged, which are now well established and are developing independently (to name a few: generative models, unrolled techniques, Plug-and-Play schemes), it is possible to provide a unifying overview of them from the point of view of statistical learning theory [20]. In this context, the goal pursued by this paper is twofold. On the one hand, it aims to provide a general theoretical framework in statistical learning that is able to encompass a large family of data-driven reconstruction methods.


Learning the optimal regularizer for inverse problems

arXiv.org Machine Learning

In this work, we consider the linear inverse problem $y=Ax+\epsilon$, where $A\colon X\to Y$ is a known linear operator between the separable Hilbert spaces $X$ and $Y$, $x$ is a random variable in $X$ and $\epsilon$ is a zero-mean random process in $Y$. This setting covers several inverse problems in imaging including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer, with respect to the mean squared error. We find that it is completely independent of the forward operator $A$ and depends only on the mean and covariance of $x$. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both $x$ and $y$, and one unsupervised, based only on samples of $x$. In both cases, we prove generalization bounds, under some weak assumptions on the distribution of $x$ and $\epsilon$, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
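A minimal finite-dimensional sketch of this idea (assuming white noise of unit variance, and using only the empirical mean and covariance of $x$, in the spirit of the characterization above; the function name and `eps` parameter are illustrative) could be:

```python
import numpy as np

def generalized_tikhonov(A, y, x_samples, eps=1e-6):
    """Data-driven generalized Tikhonov estimator (illustrative sketch).

    Minimizes ||A x - y||^2 + (x - mu)^T C^{-1} (x - mu), where mu and C
    are the empirical mean and covariance of the training samples of x.
    Assumes white noise of unit variance; `eps` regularizes C for stability.
    """
    mu = x_samples.mean(axis=0)
    C = np.cov(x_samples, rowvar=False) + eps * np.eye(A.shape[1])
    C_inv = np.linalg.inv(C)
    # normal equations: (A^T A + C^{-1}) x = A^T y + C^{-1} mu
    return np.linalg.solve(A.T @ A + C_inv, A.T @ y + C_inv @ mu)
```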


Convex regularization in statistical inverse learning problems

arXiv.org Machine Learning

We consider a statistical inverse learning problem, where the task is to estimate a function $f$ based on noisy point evaluations of $Af$, where $A$ is a linear operator. The function $Af$ is evaluated at i.i.d. random design points $u_n$, $n=1,...,N$ generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and $p$-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth measured in the symmetric Bregman distance induced by the penalty functional. We derive concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the observed rates in the context of X-ray tomography.
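In schematic notation (precise assumptions and scalings may differ from the paper), the regularized estimator and the error measure read

$$\hat{f}_{\alpha} \in \arg\min_{f}\ \frac{1}{N}\sum_{n=1}^{N} \big( (Af)(u_n) - y_n \big)^2 + \alpha\, J(f), \qquad D^{\mathrm{sym}}_{J}\big(\hat{f}_{\alpha}, f^\dagger\big) = \langle q - q^\dagger,\ \hat{f}_{\alpha} - f^\dagger \rangle,$$

with $J$ the convex, $p$-homogeneous penalty, $f^\dagger$ the ground truth, and $q \in \partial J(\hat{f}_{\alpha})$, $q^\dagger \in \partial J(f^\dagger)$ subgradients of the penalty.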