

Signal Processing for Implicit Neural Representations

Neural Information Processing Systems

We examine filters on 3D signals: we choose the Thai Statue, Armadillo, and Dragon models from the Stanford 3D Scanning Repository [84, 85, 86, 87] to demonstrate our results. [Figure: qualitative comparison of Input Image, Mean Filter, Median Filter, LaMa, INSP-Net, and Target Image.]


Signal Processing for Implicit Neural Representations

Neural Information Processing Systems

Implicit Neural Representations (INRs), which encode continuous multimedia data via multi-layer perceptrons, have shown indisputable promise in various computer vision tasks. Despite many successful applications, editing and processing an INR remains intractable because signals are represented by the latent parameters of a neural network. Existing works manipulate such continuous representations by processing their discretized instances, which breaks the compactness and continuous nature of the INR. In this work, we present a pilot study on the question: how can we directly modify an INR without explicit decoding? We answer this question by proposing an implicit neural signal processing network, dubbed INSP-Net, built on differential operators over INRs. Our key insight is that spatial gradients of neural networks can be computed analytically and are invariant to translation, and we show mathematically that any continuous convolution filter can be uniformly approximated by a linear combination of high-order differential operators. With these two knobs, INSP-Net instantiates the signal processing operator as a weighted composition of the computational graphs corresponding to the high-order derivatives of an INR, where the weighting parameters can be learned in a data-driven manner. Based on our proposed INSP-Net, we further build the first Convolutional Neural Network (CNN) that runs implicitly on INRs, named INSP-ConvNet.
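To make the abstract's mechanism concrete, below is a minimal sketch, not the authors' released code, of how an INR's spatial derivatives can be computed analytically with automatic differentiation and recombined with learnable weights, in the spirit of INSP-Net. The SirenINR stand-in network, the tanh activations, the restriction to second-order derivatives, and the INSPLayer module name are all illustrative assumptions.

import torch
import torch.nn as nn

class SirenINR(nn.Module):
    """Small MLP f: R^2 -> R standing in for a fitted INR.
    (A true SIREN uses sine activations; tanh keeps the sketch simple.)"""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def derivatives_up_to_order2(f, x):
    """Analytically compute f(x), grad f(x), and the pure second
    derivatives via autograd. x: (N, 2) with requires_grad enabled."""
    y = f(x)                                                      # (N, 1)
    g = torch.autograd.grad(y.sum(), x, create_graph=True)[0]     # (N, 2)
    hess_diag = []
    for i in range(x.shape[1]):
        # d^2 f / dx_i^2, with create_graph so the result stays differentiable
        h_i = torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0][:, i]
        hess_diag.append(h_i)
    h = torch.stack(hess_diag, dim=1)                             # (N, 2)
    return y, g, h

class INSPLayer(nn.Module):
    """Hypothetical INSP-Net-style layer: a learned linear combination of
    the INR's derivatives, approximating a continuous convolution filter."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(5) * 0.1)  # weights for [f, f_x, f_y, f_xx, f_yy]

    def forward(self, inr, x):
        y, g, h = derivatives_up_to_order2(inr, x)
        feats = torch.cat([y, g, h], dim=1)          # (N, 5) derivative features
        return feats @ self.w.unsqueeze(1)           # (N, 1) filtered signal, still continuous in x

# Usage: evaluate the "filtered" INR at arbitrary coordinates, with no decoding to a grid.
inr = SirenINR()
layer = INSPLayer()
coords = torch.rand(128, 2, requires_grad=True)
out = layer(inr, coords)                             # (128, 1)

Because the output is itself a differentiable function of the input coordinates, such layers can be stacked, which is the idea behind INSP-ConvNet.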


A Proof of Theorem

Neural Information Processing Systems

Eq. 7 implies that the gradient operator is translation-invariant. Below we supplement Lemma A.1, which is used to prove Theorem 1: in the chain of equalities, the (j−1)-th Jacobian matrix appears in the first step, the second equality is due to the induction hypothesis, and the third equality is an application of the chain rule. By induction, we conclude the proof.

For the sake of clarity, we first introduce a few notions from algebra and real analysis.

Definition B.2 (Differential Operator). Suppose a compact set ...

Definition B.4 (Fourier Transform). Given a real-valued function $f \in L^1(\mathbb{R}^d)$, its Fourier transform is $\hat{f}(\xi) = \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i \langle x, \xi \rangle}\, dx$.

Definition B.5 (Convolution). Given two real-valued functions $f$ and $g$ on $\mathbb{R}^d$, their convolution is $(f * g)(x) = \int_{\mathbb{R}^d} f(y)\, g(x - y)\, dy$.

Before we prove Theorem 2, we enumerate the following results as our key mathematical tools. First of all, we note the following well-known result without proof.

Lemma B.2 (Stone-Weierstrass Theorem). Suppose $\mathcal{A} \subseteq C(X, \mathbb{R})$ is a unital sub-algebra which separates points in $X$. Then $\mathcal{A}$ is dense in $C(X, \mathbb{R})$ with respect to the uniform norm.
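Read together with the abstract, these tools serve one statement: a continuous convolution filter can be uniformly approximated by a finite linear combination of high-order differential operators. The display below is our own sketch of that statement; the multi-index notation $D^\alpha$, the order bound $K$, and the coefficients $c_\alpha$ are notational assumptions, not taken verbatim from the paper.

% Sketch of the approximation behind Theorem 2: for a continuous filter h
% and every \varepsilon > 0, there exist an order K and coefficients c_\alpha with
\[
  \sup_{x \in X} \Bigl|\, (h * f)(x) - \sum_{|\alpha| \le K} c_\alpha \, D^\alpha f(x) \,\Bigr| \le \varepsilon,
  \qquad (h * f)(x) = \int_{\mathbb{R}^d} h(y)\, f(x - y)\, dy .
\]
% The learned weights of INSP-Net play the role of the coefficients c_\alpha.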


