Self-Routing Capsule Networks

Taeyoung Hahn, Myeongjang Pyeon, Gunhee Kim

Neural Information Processing Systems

In this work, we propose a novel and surprisingly simple routing strategy called self-routing, where each capsule is routed independently by its subordinate routing network. As a result, agreement between capsules is no longer required; both the poses and the activations of upper-level capsules are obtained in a way similar to Mixture-of-Experts. Our experiments on CIFAR-10, SVHN, and SmallNORB show that self-routing is more robust to white-box adversarial attacks and affine transformations while requiring less computation.
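The routing idea described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; shapes, the single-linear-layer routing networks, and the normalization are illustrative assumptions. Each lower capsule's own routing network produces softmax coefficients over upper capsules, with no iterative agreement step:

```python
import numpy as np

rng = np.random.default_rng(0)
num_lower, num_upper, dim = 8, 4, 16

# Lower-level capsule poses and activations (random stand-ins).
poses = rng.standard_normal((num_lower, dim))
activations = rng.random(num_lower)

# Each lower capsule owns a small routing network (here: one linear
# layer) mapping its own pose to logits over upper capsules.
W_route = 0.1 * rng.standard_normal((num_lower, dim, num_upper))
# Per-(lower, upper) pose transformation producing vote vectors.
W_pose = 0.1 * rng.standard_normal((num_lower, num_upper, dim, dim))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Self-routing: coefficients come from each capsule's own network,
# so no agreement iterations between capsules are needed.
logits = np.einsum('id,idk->ik', poses, W_route)   # (lower, upper)
coeff = softmax(logits, axis=1)                    # each row sums to 1
weights = coeff * activations[:, None]             # gate by activation

votes = np.einsum('ikde,ie->ikd', W_pose, poses)   # vote poses

# Upper-level poses: weighted mean of votes, mixture-of-experts style.
upper_poses = (weights[..., None] * votes).sum(0) / (
    weights.sum(0)[:, None] + 1e-8)
# Upper-level activations: share of total routed weight.
upper_act = weights.sum(0) / activations.sum()
```

The contrast with dynamic routing is that `coeff` is computed in a single forward pass from the lower capsule's pose alone, which is where the reduced computation comes from.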


Learning Conditional Deformable Templates with Convolutional Networks

Adrian Dalca, Marianne Rakic, John Guttag, Mert Sabuncu

Neural Information Processing Systems

In these frameworks, templates are constructed using an iterative process of template estimation and alignment, which is often computationally very expensive. Due in part to this shortcoming, most methods compute a single template for the entire population of images, or a few templates for specific sub-groups of the data.
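The iterative estimate-then-align loop mentioned above can be sketched in one dimension with integer shifts; this is an illustrative toy (signals, shift search, and noise levels are assumptions), not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
L, N = 64, 20

# Toy "images": one Gaussian bump, randomly shifted per sample.
base = np.exp(-0.5 * ((np.arange(L) - L / 2) / 3.0) ** 2)
shifts = rng.integers(-8, 9, size=N)
images = np.stack([np.roll(base, s) for s in shifts])
images += 0.05 * rng.standard_normal(images.shape)

template = images.mean(0)          # crude initial template (blurry)
search = np.arange(-10, 11)        # candidate integer shifts

# Classic template construction: alternate alignment and estimation.
for _ in range(10):
    aligned = []
    for img in images:
        # Best shift by correlation with the current template.
        corr = [float(np.roll(img, -s) @ template) for s in search]
        best = search[int(np.argmax(corr))]
        aligned.append(np.roll(img, -best))
    template = np.mean(aligned, axis=0)  # re-estimate from aligned data
```

Each pass realigns every image to the current template and then re-averages, which is why the cost scales with the dataset size times the number of iterations, the expense the paper's learned conditional templates aim to avoid.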


MacNet: Transferring Knowledge from Machine Comprehension to Sequence-to-Sequence Models

Boyuan Pan, Yazheng Yang, Hao Li, Zhou Zhao, Yueting Zhuang, Deng Cai, Xiaofei He

Neural Information Processing Systems

Machine comprehension (MC) has gained significant popularity over the past few years, and it is a coveted goal in the field of natural language understanding. Its task is to teach the machine to understand the content of a given passage and then answer a related question, which requires deep comprehension and accurate information extraction from the text.




A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem

Sampath Kannan, Jamie H. Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu

Neural Information Processing Systems

We give a smoothed analysis, showing that even when contexts may be chosen by an adversary, small perturbations of the adversary's choices suffice for the algorithm to achieve "no regret", perhaps (depending on the specifics of the setting) with a constant amount of initial training data.
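The greedy algorithm being analyzed can be sketched as follows. This is a simplified simulation, not the paper's construction: i.i.d. Gaussian contexts stand in for "adversarial choices plus small perturbations", the parameters are invented, and a tiny ridge term is added only for numerical stability. The learner fits least-squares reward estimates per arm and always exploits, never explicitly explores:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, T = 5, 3, 2000

# Hypothetical true arm parameters (unknown to the learner).
theta = rng.standard_normal((K, d))

# Per-arm least-squares state (A = X^T X, b = X^T y).
A = np.stack([1e-3 * np.eye(d) for _ in range(K)])
b = np.zeros((K, d))

regret = 0.0
for t in range(T):
    x = rng.standard_normal(d)          # perturbed context
    # Predicted reward of each arm under its current estimate.
    est = np.array([np.linalg.solve(A[k], b[k]) @ x for k in range(K)])
    k = int(np.argmax(est))             # greedy: pure exploitation
    reward = theta[k] @ x + 0.1 * rng.standard_normal()
    A[k] += np.outer(x, x)              # update only the pulled arm
    b[k] += reward * x
    regret += (theta @ x).max() - theta[k] @ x
```

The intuition the abstract points to is visible here: the randomness in the contexts supplies enough diversity that exploitation alone keeps improving every arm's estimate, so the greedy learner's average regret stays small without any deliberate exploration.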




Limits to Depth Efficiencies of Self-Attention

Neural Information Processing Systems

Self-attention architectures, which are rapidly pushing the frontier in natural language processing, demonstrate a surprising depth-inefficient behavior: previous works indicate that increasing the internal representation (network width) is just as useful as increasing the number of self-attention layers (network depth).