Convex regularization in statistical inverse learning problems
Bubba, Tatiana A., Burger, Martin, Helin, Tapio, Ratti, Luca
We consider a statistical inverse learning problem, where the task is to estimate a function $f$ from noisy point evaluations of $Af$, with $A$ a linear operator. The function $Af$ is evaluated at i.i.d. random design points $u_n$, $n=1,\dots,N$, generated by an unknown general probability distribution. We consider Tikhonov regularization with general convex and $p$-homogeneous penalty functionals and derive concentration rates of the regularized solution to the ground truth, measured in the symmetric Bregman distance induced by the penalty functional. We obtain concrete rates for Besov norm penalties and numerically demonstrate the correspondence with the rates observed in the context of X-ray tomography.
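As a rough illustration of the setting (not the estimator analyzed in the paper), the following Python sketch discretizes $f$, samples noisy point evaluations of $Af$ at random design points, and computes a Tikhonov-type estimator with an $\ell_1$ penalty, a simple convex 1-homogeneous stand-in for the Besov norm penalties, via proximal gradient (ISTA) iterations. The dimensions, the operator $A$, the noise level, and the regularization parameter are arbitrary choices made only for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's estimator): Tikhonov
# regularization with a convex 1-homogeneous penalty in a statistical inverse
# learning problem, solved by proximal gradient (ISTA).
import numpy as np

rng = np.random.default_rng(0)

d = 200                                        # grid size for the unknown f
N = 100                                        # number of random design points
A = rng.standard_normal((d, d)) / np.sqrt(d)   # generic linear forward operator

# sparse ground truth (stand-in for a function with few active coefficients)
f_true = np.zeros(d)
f_true[rng.choice(d, 10, replace=False)] = rng.standard_normal(10)

# noisy point evaluations of (A f) at i.i.d. random design points u_n
idx = rng.integers(0, d, size=N)
y = (A @ f_true)[idx] + 0.05 * rng.standard_normal(N)

S = A[idx, :]                                  # sampled rows of the forward operator
alpha = 0.01                                   # regularization parameter

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA iterations for  (1/2N) ||S f - y||^2 + alpha ||f||_1
L = np.linalg.norm(S, 2) ** 2 / N              # Lipschitz constant of the gradient
f = np.zeros(d)
for _ in range(500):
    grad = S.T @ (S @ f - y) / N
    f = soft_threshold(f - grad / L, alpha / L)

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```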
First order algorithms in variational image processing
Burger, Martin, Sawatzky, Alex, Steidl, Gabriele
Variational methods in imaging have developed into a quite universal and flexible tool, allowing for highly successful approaches to tasks such as denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form ${\cal D}(Ku) + \alpha {\cal R}(u) \rightarrow \min_u$, where the functional ${\cal D}$ is a data fidelity term depending on some input data $f$ and measuring the deviation of $Ku$ from it, and ${\cal R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While ${\cal D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth convex functionals such as the total variation and its generalizations, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as computational studies comparing the different methods and illustrating their success in applications.
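As a minimal illustration of the splitting idea (under assumed, simplified choices rather than any particular method from the survey), the sketch below solves $\min_u \tfrac12\|Ku-f\|^2 + \alpha\|u\|_1$ for a 1D deblurring toy problem by accelerated forward-backward splitting (FISTA-type iterations): a gradient step on the smooth fidelity term followed by the proximal map of the nonsmooth $\ell_1$ penalty, which is soft-thresholding. Other regularizers such as the total variation require their own proximal map or a further splitting.

```python
# Minimal sketch (assumed choices) of a first-order splitting method for
#   min_u  D(Ku) + alpha R(u)
# with D(v) = 1/2 ||v - f||^2, K a circular blur operator, R(u) = ||u||_1,
# solved by accelerated forward-backward splitting (FISTA-type iterations).
import numpy as np

rng = np.random.default_rng(1)
n = 128

# sparse 1D "image", blur operator K, and noisy data f
u_true = np.zeros(n)
u_true[[20, 50, 90]] = [1.0, -0.8, 1.2]
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
K = np.array([np.roll(np.pad(kernel, (0, n - kernel.size)), i - 5) for i in range(n)])
f = K @ u_true + 0.02 * rng.standard_normal(n)

alpha = 0.01
L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient of D(Ku)

def prox_l1(x, t):
    """Proximal map of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

u = np.zeros(n)
z = u.copy()
t = 1.0
for _ in range(300):                   # accelerated forward-backward iterations
    grad = K.T @ (K @ z - f)           # forward (gradient) step on the fidelity
    u_new = prox_l1(z - grad / L, alpha / L)   # backward (proximal) step on alpha*R
    t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
    z = u_new + ((t - 1) / t_new) * (u_new - u)
    u, t = u_new, t_new

print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```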