Instance-Specific Asymmetric Sensitivity in Differential Privacy

Neural Information Processing Systems

While the inverse sensitivity mechanism was shown to be instance optimal, it was only with respect to a class of unbiased mechanisms such that the most likely outcome matches the underlying data.
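As background, the baseline (symmetric) inverse sensitivity mechanism scores each candidate output by the fewest dataset entries that would have to change for it to become the true output, then samples via the exponential mechanism. A minimal sketch for the median on odd-length integer data follows; it illustrates only the baseline mechanism, not the paper's asymmetric, instance-specific variant.

```python
import math
import random

def inverse_sensitivity_median(data, epsilon, rng):
    """Toy inverse sensitivity mechanism for the median.

    Candidates are the data values themselves; making xs[j] the median
    requires changing |j - m| entries, where m is the median's index.
    Sample via the exponential mechanism with that path length as score.
    """
    xs = sorted(data)
    m = len(xs) // 2  # index of the true median (odd-length data assumed)
    weights = [math.exp(-epsilon * abs(j - m) / 2) for j in range(len(xs))]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return xs[j]
    return xs[-1]  # guard against floating-point rounding
```

With a large privacy budget the output concentrates on the true median, as expected of an instance-optimal baseline.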




a7c4163b33286261b24c72fd3d1707c9-Supplemental-Datasets_and_Benchmarks.pdf

Neural Information Processing Systems

These datasets enable large-scale study of abuse detection for these languages. Anonymized comments: To further address privacy concerns, we anonymize our dataset. We combine the hate and offensive categories in these datasets for training a binary classification model. We show the percentage (%) of emoticons present in our dataset MACD in Table 12. In future work, we will investigate in detail the impact of emoticons on abuse detection. However, due to the limited scale and diversity of abuse detection datasets in Indic languages, development of these models for Indic languages has been severely impeded.
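The label-merging step described above (hate + offensive → one abusive class) can be sketched as follows; the category names are illustrative, not the dataset's actual schema.

```python
# Merge fine-grained categories into a binary abusive / not-abusive target.
# "hate", "offensive", and "neutral" are placeholder category names.
ABUSIVE = {"hate", "offensive"}

def to_binary_label(category: str) -> int:
    """1 = abusive (hate or offensive), 0 = everything else."""
    return int(category.lower() in ABUSIVE)
```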



c39e1a03859f9ee215bc49131d0caf33-Supplemental.pdf

Neural Information Processing Systems

Additionally, we show the generalization performance of our proposed method across different visual domains. With the given problem category (task), a subset for learning can be sampled (via the domain episode module in Figure 4 in the main text). Here, by replacing class with task, a K-shot and N-task reasoning framework can be defined. Here, we show analogical learning with the existing meta-learning framework for fast adaptation from the source domain to the target domain.
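Replacing class with task in episodic sampling might look like the sketch below; the function and task names are assumptions for illustration, not the paper's code.

```python
import random

def sample_episode(tasks, n_task, k_shot, rng):
    """N-task, K-shot episode: pick n_task tasks, then k_shot support
    examples per task, mirroring the usual N-way K-shot class sampler."""
    chosen = rng.sample(sorted(tasks), n_task)
    return {t: rng.sample(tasks[t], k_shot) for t in chosen}
```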



80f2f15983422987ea30d77bb531be86-Paper.pdf

Neural Information Processing Systems

We then separate the optimization process into two steps, corresponding to the weight update and the structure-parameter update. For the former step, we use the conventional chain rule, whose computation can be made sparse by exploiting the sparse structure.
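A generic alternating-update loop of this shape (weight step with structure frozen, then structure step with weights frozen) can be sketched as follows; the objective, finite-difference gradients, and step size are toy placeholders, not the paper's method.

```python
def alternating_minimize(loss, w, s, lr=0.1, steps=200, h=1e-5):
    """Alternate (1) a gradient step on weight w with structure s frozen
    and (2) a gradient step on structure parameter s with w frozen.
    Gradients via central finite differences (stand-in for backprop)."""
    def grad(f, x):
        return (f(x + h) - f(x - h)) / (2 * h)
    for _ in range(steps):
        w -= lr * grad(lambda v: loss(v, s), w)  # weight update
        s -= lr * grad(lambda v: loss(w, v), s)  # structure update
    return w, s

# Toy objective with minimum at w = 2, s = -1.
w, s = alternating_minimize(lambda w, s: (w - 2) ** 2 + (s + 1) ** 2, 0.0, 0.0)
```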


Appendices

Neural Information Processing Systems

The supplementary material is organized as follows. We first discuss additional related work and provide experiment details in Section 2 and Appendix B, respectively. Adversarial Defenses: Neural networks trained using standard procedures such as SGD are extremely vulnerable [23] to norm-bounded adversarial attacks such as FGSM [23], PGD [42], CW [11], and Momentum [17]; unrestricted attacks [7, 19] can significantly degrade model performance as well. Defense strategies based on heuristics such as feature squeezing [82], denoising [80], encoding [10], specialized nonlinearities [83], and distillation [56] have had limited success against stronger attacks [2]. Then, we introduce a noisy version of the 5-slab block, which we later use in Appendix D.
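For reference, the FGSM attack mentioned above perturbs the input along the sign of the loss gradient. A minimal sketch on a logistic model follows; the model, weights, and loss are illustrative, not from the paper.

```python
import math

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic model p = sigmoid(w.x + b)
    with cross-entropy loss: x_adv = x + eps * sign(dL/dx).
    For this model dL/dx_i = (p - y) * w_i."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1 / (1 + math.exp(-z))
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]
```

Each coordinate moves by exactly eps in whichever direction increases the loss, which is why even tiny eps-bounded perturbations can flip predictions.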