Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity

Neural Information Processing Systems

Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017]. SGDA is known to converge to a stationary point for specific classes of games, but current convergence analyses require a bounded variance assumption. SCO is used successfully for solving large-scale adversarial problems, but its convergence guarantees are limited to its deterministic variant. In this work, we introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO under this condition for solving a class of stochastic variational inequality problems that are potentially non-monotone. We prove linear convergence of both methods to a neighborhood of the solution when they use a constant step-size, and we propose insightful step-size-switching rules to guarantee convergence to the exact solution. In addition, our convergence guarantees hold under the arbitrary sampling paradigm, and as such, we give insights into the complexity of minibatching.
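To make the constant step-size behavior concrete, here is a minimal sketch of SGDA on a toy strongly-convex-strongly-concave game with a noisy gradient oracle (the game, noise model, and step size are illustrative assumptions, not the paper's setting): with a constant step the iterates reach a noise-dominated neighborhood of the solution, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy smooth game: min_x max_y f(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
# whose unique solution is (0, 0).
def grad_x(x, y):
    return x + y      # df/dx

def grad_y(x, y):
    return x - y      # df/dy

def sgda(x0, y0, step=0.1, noise=0.01, iters=2000):
    """Stochastic gradient descent-ascent with a constant step-size.
    Additive Gaussian noise models the stochastic gradient oracle."""
    x, y = x0, y0
    for _ in range(iters):
        gx = grad_x(x, y) + noise * rng.standard_normal()
        gy = grad_y(x, y) + noise * rng.standard_normal()
        x -= step * gx   # descent step on x
        y += step * gy   # ascent step on y
    return x, y

x, y = sgda(1.0, -1.0)
# Both |x| and |y| end up small: a neighborhood of (0, 0) whose size
# is governed by the gradient noise and the step-size.
```

Shrinking `step` over time (the paper's step-size-switching rules are a principled version of this) would shrink the neighborhood toward the exact solution.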


The Image Local Autoregressive Transformer

Neural Information Processing Systems

Recently, AutoRegressive (AR) models for whole-image generation, empowered by transformers, have achieved comparable or even better performance than Generative Adversarial Networks (GANs). Unfortunately, directly applying such AR models to edit or change local image regions may suffer from missing global information, slow inference speed, and information leakage of local guidance. To address these limitations, we propose a novel model, the image Local Autoregressive Transformer (iLAT), to better facilitate locally guided image synthesis. Our iLAT learns novel local discrete representations via the newly proposed local autoregressive (LA) attention mask and convolution mechanism. Thus iLAT can efficiently synthesize local image regions from key guidance information. Our iLAT is evaluated on various locally guided image synthesis tasks, such as pose-guided person image synthesis and face editing. Both quantitative and qualitative results show the efficacy of our model.
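The key idea of a local autoregressive attention mask can be sketched as follows (this is an illustrative reconstruction of the general mechanism, not the paper's exact mask): tokens outside the edited region serve as bidirectional global context, while tokens inside the region attend causally, so local content is generated autoregressively without leaking "future" local tokens.

```python
import numpy as np

def local_ar_mask(n, local_idx):
    """Build an n-by-n attention mask (True = query i may attend to key j).
    Global tokens are always visible; tokens inside the edited region
    (local_idx) are only visible causally, and only to local queries."""
    local = np.zeros(n, dtype=bool)
    local[local_idx] = True
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if not local[j]:
                mask[i, j] = True          # global context is always visible
            elif local[i] and j <= i:
                mask[i, j] = True          # causal attention inside the region
    return mask

# 6 tokens, of which positions 3..5 form the locally edited region.
m = local_ar_mask(6, [3, 4, 5])
```

A mask like this would then be applied inside the transformer's attention layers, which is how such designs avoid the information-leakage problem the abstract mentions.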


Explicit Regularisation in Gaussian Noise Injections

Neural Information Processing Systems

We study the regularisation induced in neural networks by Gaussian noise injections (GNIs). Though such injections have been extensively studied when applied to data, there have been few studies on understanding the regularising effect they induce when applied to network activations. Here we derive the explicit regulariser of GNIs, obtained by marginalising out the injected noise, and show that it penalises functions with high-frequency components in the Fourier domain; particularly in layers closer to a neural network's output. We show analytically and empirically that such regularisation produces calibrated classifiers with large classification margins.
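The mechanism being analysed can be sketched in a few lines (a minimal illustration of a GNI applied to activations, assuming a small ReLU MLP; the architecture and noise scale are placeholders, not the paper's setup): noise is added to each hidden activation during training, and the paper's explicit regulariser is what remains after marginalising this noise out.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_gni(x, weights, sigma=0.1, train=True):
    """Forward pass of a small MLP with Gaussian noise injected into
    every hidden activation (the GNI). At test time (train=False) the
    network runs noise-free."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)                        # ReLU hidden layer
        if train:
            h = h + sigma * rng.standard_normal(h.shape)  # noise injection
    return h @ weights[-1]                                # linear output layer

W1 = rng.standard_normal((4, 8)) / 2
W2 = rng.standard_normal((8, 2)) / 2
y = forward_with_gni(rng.standard_normal((3, 4)), [W1, W2])
```

Training with such injections is what, per the analysis above, implicitly penalises high-frequency components of the learned function, most strongly in layers near the output.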


Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity
Andrew C. Cullen, Paul Montague, Sarah M. Erfani

Neural Information Processing Systems

In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution. There, invariance of predictions to all norm-bounded attacks is achieved through randomised smoothing of network inputs. Today's state-of-the-art certifications make optimal use of the class output scores at the input instance under test: no better radius of certification (under the L
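For context, the randomised smoothing that certifications build on can be sketched generically (this is the standard Monte Carlo construction, not the paper's specific certification scheme; the base classifier and noise level are toy assumptions): many Gaussian-perturbed copies of the input are classified, and the margin between the top class counts is what certification radii are derived from.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_predict(f, x, sigma=0.25, n=1000):
    """Monte Carlo randomised smoothing: classify n Gaussian-perturbed
    copies of x with base classifier f and return the majority class
    together with the per-class vote counts."""
    xs = x + sigma * rng.standard_normal((n,) + x.shape)
    votes = np.bincount([f(xi) for xi in xs], minlength=2)
    return int(np.argmax(votes)), votes

# Toy 1-D base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0.0)
cls, votes = smoothed_predict(f, np.array([1.0]))
```

Because x sits well inside class 1's region relative to sigma, the vote is near-unanimous; the certified radius then grows with how lopsided the vote counts are.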


Supplementary Material: A. Dataset Detail

Neural Information Processing Systems

Since DSLR and Webcam do not have many examples, we conduct experiments on the D to A, W to A, A to C (Caltech), D to C, and W to C shifts. The setting is the same as in (11). The second benchmark dataset is OfficeHome (OH) (12), which contains four domains and 65 classes. The third dataset is VisDA (9), which contains 12 classes from two domains: synthetic and real images. The synthetic domain consists of 152,397 synthetic 2D renderings of 3D objects, and the real domain consists of 55,388 real images.


Supplementary material to De-randomizing MCMC dynamics with the generalized Stein operator
Samuel Kaski

Neural Information Processing Systems

If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?



Few-shot Image Generation with Elastic Weight Consolidation Supplementary Material

Neural Information Processing Systems

In this supplementary material, we present more few-shot generation results, evaluated extensively on different artistic domains where only a few examples are available in practice. The goal is to illustrate the effectiveness of the proposed method in generating diverse high-quality results without overfitting to the few given examples. Figure 1 shows generations for the source and target domains obtained by feeding the same latent code into the source and adapted models. It clearly shows that while the adaptation renders the new appearance of the target domain, other attributes, such as pose, glasses, and hairstyle, are well inherited and preserved from the source domain. For each target domain, we use only 10 examples for the adaptation and present 100 new results.
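The preservation of source-domain attributes comes from the Elastic Weight Consolidation regulariser named in the title. A minimal sketch of that penalty (the generic EWC form, not the paper's exact training code; `fisher` here stands for the estimated per-parameter importance):

```python
import numpy as np

def ewc_penalty(theta, theta_src, fisher, lam=1.0):
    """Elastic Weight Consolidation penalty: parameters that were
    important to the source generator (large Fisher values) are anchored
    to their source values theta_src, while unimportant parameters are
    free to adapt to the few target examples."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_src) ** 2)
```

Adding this term to the few-shot adaptation loss is what lets attributes such as pose and hairstyle carry over from the source model while the target appearance is learned.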


Everything Unveiled at Google I/O 2025

Mashable

See all the highlights from Google's annual 2025 Developers Conference in Mountain View, California. Check out the latest updates from Android XR to Gemini Live, and more.


Android XR Glasses Unveiled at Google I/O 2025

Mashable
