

Automated Discovery of Adaptive Attacks on Adversarial Defenses

Neural Information Processing Systems

Common modifications include: (i) tuning attack parameters (e.g., number of steps), (ii) replacing network components to simplify the attack (e.g., removing randomization or non-differentiable components), and (iii) replacing the loss function optimized by the attack.
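Modifications (i) and (iii) can be made concrete with a minimal sketch: a PGD-style attack loop where the number of steps, step size, and loss function are all swappable parameters. The linear stand-in model and the margin loss below are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def margin_loss(logits, label):
    # Margin of the best wrong class over the true class; maximizing
    # this pushes the input toward misclassification.
    other = np.max(np.delete(logits, label))
    return other - logits[label]

def pgd_attack(x, label, W, b, loss_fn=margin_loss,
               steps=40, step_size=0.01, eps=0.1):
    """L-infinity PGD on a linear model logits = W x + b.

    `steps`/`step_size` are tunable (modification i) and `loss_fn` is
    swappable (modification iii).
    """
    x_adv = x.copy()
    h = 1e-5
    for _ in range(steps):
        # Finite-difference gradient of loss_fn w.r.t. x, so the loop
        # stays agnostic to which loss is plugged in.
        base = loss_fn(W @ x_adv + b, label)
        grad = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            x_pert = x_adv.copy()
            x_pert[i] += h
            grad[i] = (loss_fn(W @ x_pert + b, label) - base) / h
        x_adv = x_adv + step_size * np.sign(grad)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project to eps-ball
    return x_adv
```

Swapping `loss_fn` (e.g., cross-entropy for margin loss) changes the attack objective without touching the optimization loop, which is exactly the kind of modification an adaptive-attack search explores.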



A New Defense Against Adversarial Images: Turning a Weakness into a Strength

Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger

Neural Information Processing Systems

While many techniques for detecting these attacks have been proposed, they are easily bypassed when the adversary has full knowledge of the detection mechanism and adapts the attack strategy accordingly. In this paper, we adopt a novel perspective and regard the omnipresence of adversarial perturbations as a strength rather than a weakness.


Visualizing the PHATE of Neural Networks

Scott Gigante, Adam S. Charles, Smita Krishnaswamy, Gal Mishne

Neural Information Processing Systems

We demonstrate that our visualization provides intuitive, detailed summaries of the learning dynamics beyond simple global measures (i.e., validation loss and accuracy), without the need to access validation data. Furthermore, M-PHATE better captures both the dynamics and community structure of the hidden units as compared to visualization based on standard dimensionality reduction methods (e.g., ISOMAP, t-SNE).






Hyperbolic Graph Neural Networks

Qi Liu, Maximilian Nickel, Douwe Kiela

Neural Information Processing Systems

Motivated by recent advances in geometric representation learning, we propose a novel GNN architecture for learning representations on Riemannian manifolds with differentiable exponential and logarithmic maps.
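The exponential and logarithmic maps the abstract refers to can be sketched for one concrete case: the Poincaré ball of curvature -1, with the base point fixed at the origin. The function names below are our own illustration; the paper's architecture supports more general manifolds and base points.

```python
import numpy as np

def exp0(v, eps=1e-9):
    """Exponential map at the origin of the Poincare ball: carries a
    tangent vector v into the ball (norm of the result is < 1)."""
    norm = np.linalg.norm(v)
    if norm < eps:
        return v.copy()
    return np.tanh(norm) * v / norm

def log0(x, eps=1e-9):
    """Logarithmic map at the origin: the inverse of exp0, pulling a
    point on the ball back to the tangent space."""
    norm = np.linalg.norm(x)
    if norm < eps:
        return x.copy()
    return np.arctanh(norm) * x / norm
```

Both maps are smooth away from the origin, which is what lets a GNN apply ordinary (Euclidean) neural operations in the tangent space and map results back to the manifold with `exp0`.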