Figure 9: In experiments, we used a common feature-extractor (F…).

Neural Information Processing Systems

Here we include implementation details omitted from the main paper for brevity. Upon acceptance, a deanonymized repository will be released. The feature extractors and decoders varied by domain; the last layer's dimension depended upon the exact … In particular, we found that if we did not apply this linear transformation (i.e., passed the raw encodings …). For VQ-based methods, use a codebook large enough to have at least one element per class; other differences simply reflected differences in architecture (e.g., …). For iNat, we trained all models with batch size 256, using the hyperparameters specified in Table 3.
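To make the codebook-sizing guideline concrete, the following is a minimal nearest-neighbour vector-quantization sketch in NumPy. The `quantize` helper, the codebook size `K`, and the embedding dimension are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quantize(z, codebook):
    """Map each encoding to its nearest codebook vector (L2 distance)."""
    # z: (batch, dim); codebook: (K, dim)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, K)
    idx = d.argmin(axis=1)                                      # nearest entry per row
    return codebook[idx], idx

rng = np.random.default_rng(0)
num_classes = 10
K = max(2 * num_classes, 64)   # at least one codebook entry per class, with headroom
codebook = rng.normal(size=(K, 16))
z = rng.normal(size=(4, 16))   # a small batch of raw encodings
zq, idx = quantize(z, codebook)
```

With fewer codebook entries than classes, distinct classes would be forced to share a quantized code, which motivates the sizing rule above.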


FEEL-SNN: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor

Neural Information Processing Systems

In our study, we first present a unified theoretical framework for the SNN robustness constraint, which suggests that improving both the encoding method and the evolution of the membrane-potential leak factor in spiking neurons can improve SNN robustness.
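To make the role of the leak factor concrete, here is a minimal sketch of one discrete leaky integrate-and-fire (LIF) update in NumPy, where the leak factor scales the membrane potential carried over between timesteps. The function name, threshold, and reset value are illustrative assumptions; FEEL-SNN's frequency encoding and evolutionary leak factor are not reproduced here.

```python
import numpy as np

def lif_step(v, x, leak, v_th=1.0, v_reset=0.0):
    """One discrete LIF update: leak the carried-over membrane potential,
    integrate the input, spike on threshold crossing, then reset."""
    v = leak * v + x                      # leak factor scales the previous potential
    spike = (v >= v_th).astype(float)     # binary spike output
    v = np.where(spike > 0, v_reset, v)   # reset spiking neurons
    return v, spike

# simulate a single neuron over T timesteps with a fixed leak factor
T, leak = 10, 0.9
rng = np.random.default_rng(0)
v, spikes = 0.0, []
for t in range(T):
    v, s = lif_step(np.array(v), rng.uniform(0.0, 0.5), leak)
    spikes.append(float(s))
```

A smaller leak factor forgets perturbations faster, which is one intuition for why tuning (or evolving) it can affect robustness.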





Appendix for "Residual Alignment: Uncovering the Mechanisms of Residual Networks"

Neural Information Processing Systems

We start by providing motivation for the unconstrained Jacobians problem introduced in the main text, then proceed with a proof by contradiction. Figures 1, 10, and 24 show a fully-connected ResNet34 (Type 1 model) trained on MNIST; Figure 2 shows the same model trained on FashionMNIST.
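For reference, a fully-connected residual block of the kind analyzed here computes x + F(x), so its Jacobian is the identity plus the Jacobian of the residual branch, which is why unconstrained branch Jacobians are the interesting object. A minimal NumPy sketch of such a block (weight shapes, scaling, and the ReLU branch are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def residual_block(x, W1, W2):
    """Fully-connected residual block: identity skip plus a two-layer branch F(x)."""
    h = np.maximum(0.0, x @ W1)  # ReLU hidden layer of the residual branch
    return x + h @ W2            # x + F(x): Jacobian is I + dF/dx

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(2, d))                    # a small batch of inputs
W1 = rng.normal(size=(d, d)) * 0.1             # small init keeps F(x) a perturbation
W2 = rng.normal(size=(d, d)) * 0.1
y = residual_block(x, W1, W2)
```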