bc218a0c656e49d4b086975a9c785f47-Supplemental-Datasets_and_Benchmarks.pdf

Neural Information Processing Systems

Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and failed to take context into account. We offer an approach to filtering grounded in law, a domain that has directly addressed the tradeoffs in filtering material.





1d8dc55c1f6cf124af840ce1d92d1896-Paper-Conference.pdf

Neural Information Processing Systems

As in the classical problem, weights are fixed by an adversary and elements appear in random order. In contrast to previous prediction-based variants, our algorithm only has access to a much weaker piece of information: an additive gap c.
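
To make the setup concrete, here is a minimal one-pass sketch in Python of one natural strategy, assuming the gap c lower-bounds the difference between the largest and second-largest weight; the gap-based acceptance rule and the classical 1/e fallback are illustrative choices, not the paper's algorithm.

import math
import random

def secretary_with_gap(weights, c):
    # One-pass heuristic for the secretary problem with an additive gap c
    # (assumed here: the largest weight exceeds the second largest by >= c).
    # The true maximum beats every other weight by at least c, so an element
    # that beats everything seen so far by >= c is a strong candidate; this
    # is a heuristic, not a guarantee, since later elements are still unseen.
    n = len(weights)
    cutoff = max(1, int(n / math.e))    # observation phase of the 1/e rule
    best_seen = weights[0]
    for i in range(1, n):
        w = weights[i]
        if w >= best_seen + c:
            return i                    # gap-based acceptance
        if i >= cutoff and w > best_seen:
            return i                    # classical 1/e fallback acceptance
        best_seen = max(best_seen, w)
    return n - 1                        # forced to take the last element

# Weights fixed by an adversary; elements arrive in random order.
vals = [3.0, 9.0, 5.5, 1.2, 6.0]        # max exceeds the second max by 3.0
random.shuffle(vals)
idx = secretary_with_gap(vals, c=3.0)
print(vals, "-> picked index", idx, "with value", vals[idx])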


Appendices

Neural Information Processing Systems

The supplementary material is organized as follows. We first discuss additional related work and provide experiment details in Section 2 and Appendix B, respectively. Then, we introduce a noisy version of the 5-slab block, which we later use in Appendix D. Adversarial Defenses: Neural networks trained using standard procedures such as SGD are extremely vulnerable [23] to ℓp-bounded adversarial attacks such as FGSM [23], PGD [42], CW [11], and Momentum [17]; unrestricted attacks [7, 19] can significantly degrade model performance as well. Defense strategies based on heuristics such as feature squeezing [82], denoising [80], encoding [10], specialized nonlinearities [83], and distillation [56] have had limited success against stronger attacks [2].
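
For concreteness, the following PyTorch sketch implements FGSM [23], the single-step attack named above; the model, loss function, and budget eps are placeholders, and the clamp to [0, 1] assumes image-like inputs.

import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    # FGSM: a single-step attack that moves the input in the direction of
    # the sign of the loss gradient, scaled by the budget eps, so the
    # perturbation stays inside an eps-ball in the l-infinity norm.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep a valid pixel range

Iterating this signed-gradient step with a projection back onto the eps-ball gives PGD [42], the stronger multi-step attack cited above.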


6cfe0e6127fa25df2a0ef2ae1067d915-Paper.pdf

Neural Information Processing Systems

However, maximum-margin classifiers are inherently robust to perturbations of data at prediction time, and this implication is at odds with concrete evidence that neural networks, in practice, are brittle to adversarial examples [71] and distribution shifts [52, 58, 44, 65]. Hence, the linear setting, while convenient to analyze, is insufficient to capture the non-robustness of neural networks trained on real datasets. Going beyond the linear setting, several works [1, 49, 74] argue that neural networks generalize well because standard training procedures have a bias towards learning
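
The inherent robustness in the linear case follows from Cauchy-Schwarz: if x is classified with geometric margin gamma = |w.x + b| / ||w||, then any perturbation delta with ||delta|| < gamma satisfies |w.delta| <= ||w|| ||delta|| < |w.x + b|, so the sign of the prediction cannot flip. Below is a small NumPy check of this bound, with the classifier and input drawn at random purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3                  # a fixed linear classifier
x = rng.normal(size=5)

# Geometric margin: distance from x to the decision boundary.
margin = abs(w @ x + b) / np.linalg.norm(w)

# Scale a random perturbation to lie strictly inside the margin ball.
delta = rng.normal(size=5)
delta *= 0.99 * margin / np.linalg.norm(delta)

# |w @ delta| <= ||w|| * ||delta|| < |w @ x + b|, so the sign is preserved.
assert np.sign(w @ x + b) == np.sign(w @ (x + delta) + b)
print("prediction unchanged within the margin, as the bound guarantees")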