Learning Conditioned Graph Structures for Interpretable Visual Question Answering

Will Norcliffe-Brown, Stathis Vafeias, Sarah Parisot

Neural Information Processing Systems

Understanding both the question and the image, as well as modelling their interactions, requires us to combine Computer Vision and NLP techniques. The problem is generally framed as classification, such that the network learns to produce answers from a finite set of classes, which facilitates training and evaluation.
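A minimal sketch of this classification framing (all names, dimensions, and the fusion operator below are illustrative, not taken from the paper): image and question features are fused and projected to logits over a fixed answer vocabulary, and the prediction is the highest-scoring class.

```python
import numpy as np

# Hypothetical answer vocabulary: VQA-as-classification predicts one of these.
ANSWERS = ["yes", "no", "red", "two"]

def softmax(z):
    """Numerically stable softmax over answer logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def answer_logits(img_feat, q_feat, W):
    """Fuses image and question features (element-wise product, a common
    simple baseline) and projects the result to answer-class logits."""
    fused = img_feat * q_feat
    return W @ fused

# Toy forward pass with random features and weights.
rng = np.random.default_rng(0)
img_feat = rng.standard_normal(8)
q_feat = rng.standard_normal(8)
W = rng.standard_normal((len(ANSWERS), 8))

probs = softmax(answer_logits(img_feat, q_feat, W))
predicted = ANSWERS[int(np.argmax(probs))]
```

Training then reduces to cross-entropy over the answer classes, which is what makes this framing convenient to optimise and evaluate.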


Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama

Neural Information Processing Systems

Adversarial training [10, 16, 18], which injects adversarially perturbed data into the training data, is a promising approach. Many other heuristics have been developed to make neural networks insensitive to small perturbations of their inputs.
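A toy sketch of the adversarial-training idea, using fast-gradient-sign (FGSM-style) perturbations on a numpy logistic-regression model (the model, data, and hyper-parameters are illustrative, not the paper's setup): each step, adversarial copies of the batch are generated against the current parameters and mixed into the gradient update.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM for logistic loss: dL/dx = (p - y) * w per example."""
    grad = (sigmoid(x @ w + b) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

def adversarial_train(x, y, eps=0.1, lr=0.1, steps=200):
    """Gradient descent on a 50/50 mix of clean and FGSM-perturbed data."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(x.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        x_adv = fgsm_perturb(x, y, w, b, eps)   # attack current model
        xb = np.vstack([x, x_adv])              # inject into training data
        yb = np.concatenate([y, y])
        p = sigmoid(xb @ w + b)
        w -= lr * (xb.T @ (p - yb)) / len(yb)
        b -= lr * float((p - yb).mean())
    return w, b
```

The mixing ratio and the inner attack (here a single FGSM step) are the main design choices; stronger multi-step attacks are common in practice.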





Curriculum By Smoothing

Neural Information Processing Systems

Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation. Moreover, recent work in Generative Adversarial Networks (GANs) has highlighted the importance of learning by progressively increasing the difficulty of a learning task (Karras et al.). When learning a network from scratch, the information propagated within the network during the earlier stages of training can contain distortion artifacts due to noise, which can be detrimental to training. In this paper, we propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters. We propose to augment the training of CNNs by controlling the amount of high-frequency information propagated within the CNNs as training progresses, by convolving the output feature map of each CNN layer with a Gaussian kernel. By decreasing the variance of the Gaussian kernel, we gradually increase the amount of high-frequency information available within the network for inference. As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data. Our proposed augmented training scheme significantly improves the performance of CNNs on various vision tasks without adding trainable parameters or an auxiliary regularization objective. The generality of our method is demonstrated through empirical performance gains in CNN architectures across four different tasks: image classification, transfer learning, cross-task transfer learning, and generative models.
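A minimal numpy sketch of the core operation (kernel size, padding mode, and the annealing schedule below are illustrative; in the paper the blur is applied to every layer's feature maps inside the network, which this standalone sketch does not reproduce): a separable Gaussian blur whose variance is decreased over training, so that less high-frequency content is suppressed as the curriculum progresses.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    """Separable Gaussian blur of a 2-D feature map (reflect padding),
    as would be applied to a layer's output during early training."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    padded = np.pad(fmap, r, mode="reflect")
    # Convolve rows, then columns; "valid" mode restores the input shape.
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def sigma_schedule(epoch, sigma0=2.0, decay=0.9):
    """Illustrative curriculum: variance shrinks geometrically with epochs,
    admitting progressively more high-frequency information."""
    return sigma0 * decay ** epoch
```

A larger sigma suppresses more high-frequency content, so the variance of a smoothed noise map shrinks; annealing sigma toward zero recovers the unfiltered features.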