
Collaborating Authors




Automated Discovery of Adaptive Attacks on Adversarial Defenses

Neural Information Processing Systems

Common modifications include: (i) tuning attack parameters (e.g., number of steps), (ii) replacing network components to simplify the attack (e.g., removing randomization or non-differentiable components), and (iii) replacing the loss function optimized by the attack.
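The three modifications above can be illustrated with a minimal PGD-style sketch. This is a toy example on a hand-coded linear "model" with an analytic gradient, not the paper's search procedure: the function and parameter names (`pgd_attack`, `margin_grad`, `grad_fn`) are illustrative, and (ii) is represented only by the comment that gradients are taken through a simplified, differentiable model.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps, step, n_steps):
    """Projected gradient ascent inside an L-inf ball of radius eps.
    (i) step and n_steps are the tunable attack parameters."""
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_adv, y)                      # gradient of the attack loss w.r.t. the input
        x_adv = x_adv + step * np.sign(g)          # ascent step on the attack loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the perturbation ball
    return x_adv

# Toy differentiable "model": logits = W @ x. (ii) In practice one would first
# strip randomization / non-differentiable defense components so this gradient exists.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def margin_grad(x, y):
    """(iii) Loss replacement: gradient of the margin loss
    (wrong-class logit minus true-class logit) instead of cross-entropy."""
    other = 1 - y
    return W[other] - W[y]

x = np.array([0.5, -0.5])                          # initially classified as class 0
x_adv = pgd_attack(x, y=0, grad_fn=margin_grad, eps=0.6, step=0.1, n_steps=8)
```

After the attack, `W @ x_adv` favors class 1 while `x_adv` stays within the `eps` ball around `x`; swapping `margin_grad` for a different loss gradient, or changing `step`/`n_steps`, exercises modifications (iii) and (i) respectively.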


A New Defense Against Adversarial Images: Turning a Weakness into a Strength

Shengyuan Hu, Tao Yu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger

Neural Information Processing Systems

While many techniques for detecting these attacks have been proposed, they are easily bypassed when the adversary has full knowledge of the detection mechanism and adapts the attack strategy accordingly. In this paper, we adopt a novel perspective and regard the omnipresence of adversarial perturbations as a strength rather than a weakness.



Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models

Ruoxi Sun, Scott Linderman, Ian Kinsella, Liam Paninski

Neural Information Processing Systems

Recent progress in the development of voltage indicators [1-8] has brought us closer to a longstanding goal in cellular neuroscience: imaging the full spatiotemporal voltage on a dendritic tree. These recordings have the potential (pun not intended) to resolve fundamental questions about the computations performed by dendrites -- questions that have remained open for more than a century [9, 10].


Novel positional encodings to enable tree-based transformers

Vighnesh Shiv, Chris Quirk

Neural Information Processing Systems

Motivated by this property, we propose a method to extend transformers to tree-structured data, enabling sequence-to-tree, tree-to-sequence, and tree-to-tree mappings. Our approach abstracts the transformer's sinusoidal positional encodings, allowing us to instead use a novel positional encoding scheme to represent node positions within trees.
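The contrast between sequence and tree positional encodings can be sketched as follows. The standard sinusoidal encoding below is the one from the original transformer; the tree encoding is a deliberately simplified, hypothetical variant that encodes a node by its root-to-node path as one-hot branch choices (the paper's actual scheme additionally weights the path components so encodings compose under tree traversal).

```python
import numpy as np

def sinusoidal_encoding(pos, d_model):
    """Standard transformer positional encoding for a position in a sequence."""
    i = np.arange(d_model // 2)
    angles = pos / (10000 ** (2 * i / d_model))
    enc = np.zeros(d_model)
    enc[0::2] = np.sin(angles)   # even dimensions: sine
    enc[1::2] = np.cos(angles)   # odd dimensions: cosine
    return enc

def tree_path_encoding(path, max_depth, branching=2):
    """Encode a node by its path from the root: one one-hot block of size
    `branching` per traversal step (simplified illustration, not the
    paper's exact scheme)."""
    enc = np.zeros(max_depth * branching)
    for depth, child in enumerate(path):
        enc[depth * branching + child] = 1.0
    return enc

# The left child of the root's right child has path (1, 0).
e = tree_path_encoding((1, 0), max_depth=4)
```

Because each depth owns a disjoint block of dimensions, sibling and ancestor relations are linearly recoverable from the encoding, which is the property that makes path-based schemes attractive for attention over trees.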


Gradient-based Editing of Memory Examples for Online Task-free Continual Learning

Neural Information Processing Systems

GMED-edited examples remain similar to their unedited forms, but can yield increased loss in the upcoming model updates, thereby making the future replays more effective in overcoming catastrophic forgetting.
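The editing idea can be sketched in miniature: simulate the next model update on incoming data, then nudge the stored example toward higher loss under that lookahead model, with a small step so it stays close to its unedited form. This is a hypothetical scalar-regression reduction of the scheme (`gmed_style_edit` and its parameters are illustrative, not the paper's implementation).

```python
import numpy as np

def loss(w, x, y):
    """Squared error of a scalar linear model w * x against target y."""
    return 0.5 * (w * x - y) ** 2

def grad_w(w, x, y):
    """Gradient of the loss w.r.t. the model parameter w."""
    return (w * x - y) * x

def grad_x(w, x, y):
    """Gradient of the loss w.r.t. the input x."""
    return (w * x - y) * w

def gmed_style_edit(w, x_mem, y_mem, x_new, y_new, lr=0.1, alpha=0.05):
    """One editing step in the spirit of GMED (simplified):
    1. simulate the upcoming update on the incoming example (x_new, y_new);
    2. take a small gradient-ascent step on the memory example's loss
       under that lookahead model, so the edited example is harder to fit
       after the update while staying near its original value."""
    w_lookahead = w - lr * grad_w(w, x_new, y_new)          # simulated future update
    return x_mem + alpha * grad_x(w_lookahead, x_mem, y_mem)  # ascent on lookahead loss

w = 1.0
x_mem, y_mem = 2.0, 1.0
x_edited = gmed_style_edit(w, x_mem, y_mem, x_new=1.0, y_new=3.0)
```

With a small `alpha`, the edited example moves only slightly (here by less than 0.1) yet incurs a strictly larger loss under the lookahead parameters, which is the property the abstract attributes to GMED-edited examples.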