Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity
Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size, and we propose insightful stepsize-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
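As a concrete illustration of the SGDA update analysed above, here is a minimal sketch on a toy strongly-convex-strongly-concave stochastic game; the objective, the Gaussian noise model, and the constant step size are illustrative assumptions, not the paper's setting.

```python
import numpy as np

# Minimal SGDA sketch on an illustrative stochastic saddle-point problem
#   min_x max_y  f(x, y) = 0.5*a*x^2 + b*x*y - 0.5*c*y^2,
# where stochastic gradients are the true gradients plus Gaussian noise.
# Problem, noise model, and step size are assumptions for illustration only.

rng = np.random.default_rng(0)
a, b, c = 1.0, 1.0, 1.0          # strongly-convex-strongly-concave example
x, y = 5.0, -3.0                 # initial point
eta = 0.1                        # constant step size

for t in range(2000):
    gx = a * x + b * y + 0.1 * rng.standard_normal()   # noisy gradient w.r.t. x
    gy = b * x - c * y + 0.1 * rng.standard_normal()   # noisy gradient w.r.t. y
    x -= eta * gx                                      # descent step on x
    y += eta * gy                                      # ascent step on y

print(x, y)  # settles in a noise-dominated neighborhood of the solution (0, 0)
```

With a constant step size the iterates only reach a neighborhood of the solution, which is the behaviour the linear-convergence result above describes; switching to a decreasing step size, as in the proposed step-size-switching rules, is what yields convergence to the exact solution.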
Explicit Regularisation in Gaussian Noise Injections
We study the regularisation induced in neural networks by Gaussian noise injections (GNIs). Though such injections have been extensively studied when applied to data, there have been few studies on understanding the regularising effect they induce when applied to network activations. Here we derive the explicit regulariser of GNIs, obtained by marginalising out the injected noise, and show that it penalises functions with high-frequency components in the Fourier domain; particularly in layers closer to a neural network's output. We show analytically and empirically that such regularisation produces calibrated classifiers with large classification margins.
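To fix ideas, here is a minimal PyTorch-style sketch of injecting Gaussian noise into hidden activations at training time; the architecture, noise scale, and layer placement are hypothetical and only meant to show where the injections act before the noise is marginalised out analytically.

```python
import torch
import torch.nn as nn

class GNIBlock(nn.Module):
    """Linear layer whose activations receive additive Gaussian noise at train time."""
    def __init__(self, d_in, d_out, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.sigma = sigma  # noise scale (assumed value, for illustration)

    def forward(self, x):
        h = torch.relu(self.linear(x))
        if self.training:
            h = h + self.sigma * torch.randn_like(h)  # Gaussian noise injection
        return h

# Usage sketch: noise is injected at every hidden layer; marginalising it out
# in expectation yields the explicit regulariser studied in the paper.
net = nn.Sequential(GNIBlock(784, 256), GNIBlock(256, 256), nn.Linear(256, 10))
```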
Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity
Andrew C. Cullen, Paul Montague, Sarah M. Erfani
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution. There, invariance of predictions to all norm-bounded attacks is achieved through randomised smoothing of network inputs. Today's state-of-the-art certifications make optimal use of the class output scores at the input instance under test: no better radius of certification (under the L
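For context, a hedged sketch of the standard L2 randomised-smoothing certificate in the style of Cohen et al. (2019), which this line of work builds on; the classifier interface, noise level, sample count, and the omission of a confidence correction for the Monte-Carlo estimate are all simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def certified_radius(f, x, sigma=0.25, n=1000, rng=None):
    """Monte-Carlo sketch of an L2 smoothing certificate in the style of
    Cohen et al. (2019): R = sigma * Phi^{-1}(p_A), where p_A is the
    probability of the top class under Gaussian input noise.
    `f` maps a batch of flat inputs to integer class predictions; sigma, n,
    and the lack of a confidence correction are illustrative choices."""
    rng = rng or np.random.default_rng(0)
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.shape[0]))
    preds = f(noisy)
    top = np.bincount(preds).argmax()
    p_a = (preds == top).mean()          # empirical top-class probability
    if p_a <= 0.5:
        return top, 0.0                  # abstain: no non-trivial certificate
    return top, sigma * norm.ppf(p_a)    # certified L2 radius around x
```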
Supplementary Material
A Dataset Detail
Since DSLR and Webcam do not have many examples, we conduct experiments on D to A, W to A, A to C (Caltech), D to C, and W to C shifts. The setting is the same as (11). The second benchmark dataset is OfficeHome (OH) (12), which contains four domains and 65 classes. The third dataset is VisDA (9), which contains 12 classes from the two domains, synthetic and real images. The synthetic domain consists of 152,397 synthetic 2D renderings of 3D objects and the real domain consists of 55,388 real images.
Few-shot Image Generation with Elastic Weight Consolidation Supplementary Material
In this supplementary material, we present more few-shot generation results, evaluated extensively on different artistic domains where only a few examples are available in practice. The goal is to illustrate the effectiveness of the proposed method in generating diverse high-quality results without overfitting to the few given examples. Figure 1 shows the generations of the source and target domains, obtained by feeding the same latent code into the source and adapted models. It clearly shows that while the adaptation renders the new appearance of the target domain, other attributes, such as pose, glasses, and hairstyle, are well inherited and preserved from the source domain. For each target domain, we only use 10 examples for the adaptation and present 100 new results.
Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness
Long Zhao, Ting Liu, Xi Peng
Adversarial data augmentation has shown promise for training robust deep neural networks against unforeseen data shifts or corruptions. However, it is difficult to define heuristics to generate effective fictitious target distributions containing "hard" adversarial perturbations that are largely different from the source distribution. In this paper, we propose a novel and effective regularization term for adversarial data augmentation. We theoretically derive it from the information bottleneck principle, which results in a maximum-entropy formulation. Intuitively, this regularization term encourages perturbing the underlying source distribution to enlarge predictive uncertainty of the current model, so that the generated "hard" adversarial perturbations can improve the model robustness during training. Experimental results on three standard benchmarks demonstrate that our method consistently outperforms the existing state of the art by a statistically significant margin.
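A hedged sketch of how an entropy term can be folded into the inner maximisation of adversarial data augmentation; the update rule, loss weighting, and step sizes below are placeholders for illustration and not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def maxent_adversarial_augment(model, x, y, steps=5, step_size=0.05, gamma=1.0):
    """Generate 'hard' fictitious examples by ascending the task loss plus a
    predictive-entropy bonus, so perturbations push samples toward regions
    where the current model is uncertain. All hyperparameters are illustrative."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_adv)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        loss = F.cross_entropy(logits, y) + gamma * entropy   # maximise both terms
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step_size * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```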
Calibration of Shared Equilibria in General Sum Partially Observable Markov Games - Supplementary
Nelson Vadori, Sumitra Ganesh, Prashant Reddy, Manuela Veloso
J.P. Morgan AI Research
A Proofs
B.4 Complete set of experimental results associated with Section 4
In this section we display the complete set of results associated with the figures shown in Section 4. We display in Figure 2 the rewards of all agents during training (the calibrator, the merchant on supertype 1, and the n − 1 merchants on supertype 2) for experiments 1-5 previously described.
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation
We investigate the robustness of vision transformers (ViTs) through the lens of their special patch-based architectural structure, i.e., they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable by humans. This indicates that ViTs heavily use features that survive such transformations but are generally not indicative of the semantic class to humans. Further investigations show that these features are useful but non-robust, as ViTs trained on them can achieve high in-distribution accuracy but break down under distribution shifts. From this understanding, we ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use the images transformed with our patch-based operations as negatively augmented views and offer losses to regularize the training away from using non-robust features. This is a complementary view to existing research that mostly focuses on augmenting inputs with semantic-preserving transformations to enforce models' invariance. We show that patch-based negative augmentation consistently improves robustness of ViTs on ImageNet-based robustness benchmarks across 20+ different experimental settings. Furthermore, we find our patch-based negative augmentation is complementary to traditional (positive) data augmentation techniques and batch-based negative examples in contrastive learning.
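To make the idea concrete, here is a sketch of one patch-based negative transformation (random patch shuffling) together with a simple penalty that discourages the model from predicting the original label on the shuffled view; the patch size, the particular loss, and its weight are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def shuffle_patches(images, patch_size=16, rng=None):
    """Destroy global semantics by randomly permuting non-overlapping patches.
    images: (B, C, H, W) with H and W divisible by patch_size."""
    b, c, h, w = images.shape
    g = rng or torch.Generator().manual_seed(0)
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.contiguous().view(b, c, -1, patch_size, patch_size)
    perm = torch.randperm(patches.shape[2], generator=g)   # one shared permutation
    patches = patches[:, :, perm]
    # Reassemble the permuted patches back into an image grid.
    rows, cols = h // patch_size, w // patch_size
    patches = patches.view(b, c, rows, cols, patch_size, patch_size)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

def negative_view_penalty(model, images, labels, weight=0.3):
    """Penalise label-consistent predictions on the negative (shuffled) view."""
    neg_logits = model(shuffle_patches(images))
    # Encourage the model NOT to recover the true label from shuffled patches.
    return -weight * F.cross_entropy(neg_logits, labels)
```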
Supplementary Material for Paper: Constant-Expansion Suffices for Compressed Sensing with Generative Priors
In this section we prove Theorem 3.2. The two arguments are essentially identical, and we will focus on the former. See [20] for a reference on the first bound. The second bound follows from concentration of a chi-squared random variable with k degrees of freedom. We check that f and g satisfy the three conditions of Theorem 4.4 with appropriate parameters. Finally, since $\Pr[W \in \Theta] \geq 1/2$, conditioning on $\Theta$ at most doubles the failure probability: for any failure event $A$, $\Pr[A \mid W \in \Theta] \le \Pr[A]/\Pr[W \in \Theta] \le 2\Pr[A]$.
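For concreteness, one standard statement of chi-squared concentration with k degrees of freedom (Laurent and Massart, 2000); the constants used in the paper's actual argument may differ.

```latex
% A standard tail bound for a chi-squared variable with k degrees of freedom
% (Laurent & Massart, 2000); the constants in the paper's proof may differ.
\[
  \Pr\!\left[\, \|g\|_2^2 - k \ge 2\sqrt{kt} + 2t \,\right] \le e^{-t},
  \qquad
  \Pr\!\left[\, k - \|g\|_2^2 \ge 2\sqrt{kt} \,\right] \le e^{-t},
  \qquad g \sim \mathcal{N}(0, I_k).
\]
```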