Petersen, Philipp
Extraction of digital wavefront sets using applied harmonic analysis and deep neural networks
Andrade-Loarca, Héctor, Kutyniok, Gitta, Öktem, Ozan, Petersen, Philipp
Microlocal analysis provides deep insight into singularity structures and is often crucial for solving inverse problems, predominantly in the imaging sciences. Of particular importance is the analysis of wavefront sets and their correct extraction. In this paper, we introduce the first algorithmic approach to extracting the wavefront set of images that combines data-based and model-based methods. Exploiting a celebrated property of the shearlet transform, namely its ability to unravel information on the wavefront set, we extract the wavefront set of an image by first applying a discrete shearlet transform and then feeding local patches of this transform to a deep convolutional neural network trained on labeled data. The resulting algorithm outperforms all competing algorithms in edge-orientation and ramp-orientation detection.
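As a rough illustration of this pipeline, the following minimal PyTorch sketch classifies local patches of precomputed shearlet coefficients into discretized edge-orientation bins plus a "no singularity" class. The architecture, patch size, channel count, and number of orientation bins are illustrative placeholders rather than the network trained in the paper, and the shearlet coefficients are mocked with random data.

import torch
import torch.nn as nn

class PatchOrientationNet(nn.Module):
    """Toy CNN: maps a patch of shearlet coefficients (one channel per shearlet)
    to logits over `n_orientations` angle bins plus a 'no singularity' class.
    Sizes are illustrative, not the architecture from the paper."""
    def __init__(self, n_shearlets=49, n_orientations=180):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_shearlets, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_orientations + 1)

    def forward(self, patches):            # patches: (batch, n_shearlets, p, p)
        h = self.features(patches).flatten(1)
        return self.classifier(h)          # logits: angle bins + background

# In practice `coeffs` would hold local patches of the discrete shearlet
# transform of an image; random data stands in for them here.
coeffs = torch.randn(8, 49, 21, 21)        # 8 patches of size 21x21
logits = PatchOrientationNet()(coeffs)
print(logits.shape)                        # torch.Size([8, 181])

Training such a classifier on labeled patches, and evaluating it at every pixel location, yields a map from image positions to estimated singularity orientations, which is the kind of digital wavefront-set information the paper targets.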
Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Petersen, Philipp, Voigtlaender, Felix
We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, required for approximating classifier functions in an $L^2$-sense. As a model, we consider the set $\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$ functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different 'smooth regions' of $f$ are separated by $C^\beta$ hypersurfaces. For given dimension $d \geq 2$, regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct ReLU neural networks that approximate functions from $\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The constructed networks have a fixed number of layers, depending only on $d$ and $\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights, which we prove to be optimal. In addition to the optimality in terms of the number of weights, we show that in order to achieve this optimal approximation rate, one needs ReLU networks of a certain minimal depth. Precisely, for piecewise $C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a multiplicative constant, by $\beta/d$. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function $f$ to be approximated can be factorized into a smooth, dimension-reducing feature map $\tau$ and a classifier function $g$, defined on a low-dimensional feature space, as $f = g \circ \tau$. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
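To make the scaling of the complexity bound concrete, the short Python sketch below evaluates the asymptotic weight budget $\varepsilon^{-2(d-1)/\beta}$ stated in the abstract for a few choices of $d$ and $\beta$; constants and the log factor are suppressed, so the numbers only indicate scaling behaviour, not actual network sizes.

def weight_budget(eps, d, beta):
    """Asymptotic number of nonzero weights, O(eps**(-2*(d-1)/beta)), needed by a
    fixed-depth ReLU network to approximate a piecewise C^beta function on
    [-1/2, 1/2]^d to L^2 error eps (multiplicative constants suppressed)."""
    return eps ** (-2 * (d - 1) / beta)

# The rate degrades with the ambient dimension d but improves with the
# smoothness beta; under the factorization f = g o tau with a k-dimensional
# feature space, d is effectively replaced by k.
for d, beta in [(2, 1.0), (8, 1.0), (8, 4.0)]:
    print(d, beta, f"{weight_budget(1e-2, d, beta):.1e}")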