Supplementary Material: Aligning Gradient and Hessian for Neural Signed Distance Function
Neural Information Processing Systems
Because our method requires the implicit function to be at least twice differentiable, we combine it with SIREN [25], whose sinusoidal activations are smooth, and with Neural-Pull [5], which leverages the Softplus activation function. For the SIREN-based setting, we use a 4-layer SIREN MLP with 256 nodes in each layer. For combining our term with Neural-Pull [5], we use an 8-layer fully connected network with 512 nodes in each layer. We train with the Adam optimizer using a learning rate of 0.0001, a batch size of 10K, and 10K iterations. We also evaluate our method on multi-view image input with NeuS [27].
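As a concrete illustration of the two network configurations above, the forward passes can be sketched in plain NumPy. This is a minimal sketch, not the authors' code: the Gaussian weight initialization, the input dimension of 3, and the reading of "8-layer" as eight hidden layers are assumptions (SIREN in particular prescribes its own uniform initialization and a frequency scaling omitted here); only the layer counts, widths, and activations follow the text.

```python
import numpy as np

def softplus(x):
    # Smooth (C-infinity) activation, so second derivatives are well defined.
    return np.log1p(np.exp(x))

def mlp_forward(x, weights, activation):
    # Apply each hidden layer with the given activation; the last layer is linear.
    for W, b in weights[:-1]:
        x = activation(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def make_weights(dims, rng):
    # Scaled Gaussian init (an assumption; SIREN prescribes a specific uniform init).
    return [(rng.standard_normal((d_in, d_out)) / np.sqrt(d_in), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

rng = np.random.default_rng(0)

# 4-layer SIREN-style MLP, 256 nodes per hidden layer, sine activation.
siren_weights = make_weights([3, 256, 256, 256, 1], rng)

# 8-layer fully connected MLP, 512 nodes per hidden layer, Softplus activation
# (interpreting "8-layer" as eight hidden layers, an assumption).
softplus_weights = make_weights([3] + [512] * 8 + [1], rng)

pts = rng.standard_normal((10, 3))  # a small batch of 3D query points
sdf_siren = mlp_forward(pts, siren_weights, np.sin)
sdf_softplus = mlp_forward(pts, softplus_weights, softplus)
print(sdf_siren.shape, sdf_softplus.shape)  # (10, 1) (10, 1)
```

Both activations are infinitely differentiable, which is what makes the Hessian-based term in the main paper well defined; a ReLU network, by contrast, has a zero second derivative almost everywhere.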