A CNN architectures

A.1 DnCNN
–Neural Information Processing Systems
In this section we describe the denoising architectures used in our computational experiments. All architectures except BFCNN have additive (bias) terms after every convolutional layer.

DnCNN [66] consists of 20 convolutional layers, each with 3x3 filters and 64 channels, followed by batch normalization [23] and a ReLU nonlinearity. It has a skip connection from the initial layer to the final layer, which has no nonlinear units.

Our BFCNN [37] is based on the DnCNN architecture, i.e., we remove all sources of additive bias, including the mean parameter of the batch normalization in every layer (note, however, that the scaling parameter is preserved).

Our UNet model [50] has the following layers:

1. conv1 - Takes the input image and maps it to 32 channels with 5x5 convolutional kernels.

The input to this layer is the concatenation of the outputs of layers conv7 and conv2. The structure is the same as in [68]. This configuration of UNet assumes even width and height, so we remove one row or column from images with odd height or width.

We use a modified version of the blind-spot network architecture introduced in Ref. [29]. We rotate the input frames by multiples of 90 degrees and process them through four separate branches (with shared weights) containing asymmetric convolutional filters that are vertically causal. The architecture of a branch is described in Table 1.
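To make the DnCNN/BFCNN description concrete, here is a minimal PyTorch sketch (not the authors' code) of a DnCNN-style residual denoiser: 20 convolutional layers with 3x3 filters and 64 channels, batch normalization and ReLU in the intermediate layers, and a skip connection from input to output. The `bias_free` flag is a hypothetical parameter illustrating the BFCNN modification; note that it only zeros and freezes the learnable batch-norm shift as a simplification, whereas a faithful bias-free network would also remove the mean subtraction inside batch normalization itself.

```python
import torch
import torch.nn as nn


class DnCNNSketch(nn.Module):
    """Illustrative DnCNN-style denoiser; `bias_free=True` approximates BFCNN."""

    def __init__(self, depth=20, channels=64, bias_free=False):
        super().__init__()
        bias = not bias_free
        # First layer: conv + ReLU (no batch norm), as in DnCNN.
        layers = [nn.Conv2d(1, channels, 3, padding=1, bias=bias),
                  nn.ReLU(inplace=True)]
        # Intermediate layers: conv + batch norm + ReLU.
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=bias),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        # Final layer: conv only, no nonlinearity.
        layers.append(nn.Conv2d(channels, 1, 3, padding=1, bias=bias))
        self.net = nn.Sequential(*layers)
        if bias_free:
            # Zero and freeze the additive batch-norm parameter (beta);
            # the multiplicative scaling parameter (gamma) stays trainable.
            for m in self.net.modules():
                if isinstance(m, nn.BatchNorm2d):
                    nn.init.zeros_(m.bias)
                    m.bias.requires_grad_(False)

    def forward(self, x):
        # The stack predicts the residual (noise); the skip connection
        # subtracts it from the noisy input to produce the denoised image.
        return x - self.net(x)
```

For example, `DnCNNSketch(bias_free=True)` applied to a batch of shape `(N, 1, H, W)` returns a denoised batch of the same shape, with every convolutional bias removed.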