Anchor Data Augmentation

Neural Information Processing Systems

We propose a novel algorithm for data augmentation in nonlinear over-parametrized regression. Our data augmentation algorithm borrows from the literature on causality. Contrary to current state-of-the-art solutions that rely on modifications of the Mixup algorithm, we extend the recently proposed distributionally robust Anchor Regression (AR) method for data augmentation. Our Anchor Data Augmentation (ADA) uses several replicas of the samples modified as in AR to provide more training examples, leading to more robust regression predictions. We apply ADA to linear and nonlinear regression problems using neural networks. ADA is competitive with state-of-the-art C-Mixup solutions.
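The core idea can be illustrated with a small sketch. This is not the authors' implementation, only an anchor-style transformation in the spirit of Anchor Regression: given an anchor matrix A, the data are shifted along the anchor subspace by a factor sqrt(gamma), and replicas for several gamma values serve as augmented training examples. All names and the toy anchor construction are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's code): anchor-style data augmentation.
# P projects onto the column space of the anchor matrix A; scaling the
# projected component by sqrt(gamma) yields one augmented replica.

rng = np.random.default_rng(0)

def anchor_augment(X, y, A, gamma):
    """Return one augmented replica of (X, y) for a given gamma."""
    P = A @ np.linalg.pinv(A)          # projection onto span(A)
    scale = np.sqrt(gamma) - 1.0
    X_aug = X + scale * (P @ X)
    y_aug = y + scale * (P @ y)
    return X_aug, y_aug

n, d = 100, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
# Toy anchor: a one-column indicator of a discrete grouping variable.
A = (rng.integers(0, 3, size=n) == 0).astype(float).reshape(-1, 1)

replicas = [anchor_augment(X, y, A, g) for g in (0.5, 1.0, 2.0)]
X1, y1 = replicas[1]
print(np.allclose(X1, X), np.allclose(y1, y))  # gamma = 1 leaves data unchanged
```

Note the sanity check: gamma = 1 gives scale = 0, so that replica reproduces the original data, while gamma below or above 1 shrinks or amplifies the anchor-aligned component.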


A Continuous Mapping For Augmentation Design

Neural Information Processing Systems

Automated data augmentation (ADA) techniques have played an important role in boosting the performance of deep models. Such techniques mostly aim to optimize a parameterized distribution over a discrete augmentation space. They are thus restricted by the discretization of the search space, which is normally handcrafted. To overcome these limitations, we take the first step toward constructing a continuous mapping from $\mathbb{R}^d$ to image transformations (an augmentation space). Using this mapping, we take a novel approach where 1) we pose ADA as a continuous optimization problem over the parameters of the augmentation distribution; and 2) we use Stochastic Gradient Langevin Dynamics to learn and sample augmentations. This allows us to explore the space of infinitely many possible augmentations, which was previously impossible due to the discretization of the space. This view of ADA is radically different from the standard discretization-based view, and it opens avenues for utilizing the many efficient gradient-based algorithms available for continuous optimization problems. Results on multiple benchmarks demonstrate the efficiency improvements of this approach over previous methods.
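To make the second ingredient concrete, here is a minimal sketch of Stochastic Gradient Langevin Dynamics on a toy one-dimensional target $\exp(-U(\theta))$ with $U(\theta) = \theta^2/2$. In the paper's setting, $\theta$ would parameterize the continuous augmentation distribution; the Gaussian target here is purely illustrative and not from the paper.

```python
import numpy as np

# Hedged sketch: SGLD on a toy target exp(-U) with U(theta) = theta^2 / 2.
# Each step is a gradient step on U plus Gaussian noise scaled by
# sqrt(2 * step size), so the chain samples (approximately) from the target.

rng = np.random.default_rng(0)

def grad_U(theta):
    return theta                     # gradient of theta^2 / 2

theta, eps = 3.0, 0.05               # start far from the mode
samples = []
for t in range(20000):
    noise = rng.normal()
    theta = theta - eps * grad_U(theta) + np.sqrt(2 * eps) * noise
    if t > 2000:                     # discard burn-in
        samples.append(theta)

samples = np.array(samples)
print(samples.mean(), samples.var())  # should be near 0 and 1
```

The small discretization bias of the Euler step means the empirical variance is slightly above 1 for finite eps; shrinking eps (or adding a Metropolis correction) tightens the match to the target.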





Supplementary Material: Appendices

Neural Information Processing Systems

Geometric numerical integration is the study of numerical integrators for ODEs that preserve the geometric structure of the underlying system. The target equations of this study are differential equations with a certain geometric structure; due to this property, there exists a corresponding Hamiltonian function, i.e., an energy function, and the discrete gradient method, as defined in Definition 1, has achieved great success. With a matrix A(u), Eq. (15) is shown to be equivalent to the target equation in Eq. (1). This section provides the proofs of the theorems in the main text.
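A small worked example can show what the discrete gradient method preserves. This is a generic sketch, not the supplementary material's scheme: for a quadratic Hamiltonian $H(u) = \tfrac12\|u\|^2$ the averaged (mean-value) discrete gradient is $\bar\nabla H(u, v) = (u+v)/2$, and the resulting implicit scheme conserves $H$ exactly, up to floating-point roundoff.

```python
import numpy as np

# Hedged sketch: discrete-gradient integrator for a harmonic oscillator,
# H(u) = 0.5 * ||u||^2, dynamics du/dt = S @ grad H(u) with S skew-symmetric.
# The scheme (u_next - u) / h = S @ (u + u_next) / 2 uses the mean-value
# discrete gradient, which for quadratic H preserves the energy exactly.

S = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # skew-symmetric structure matrix

def step(u, h):
    """One discrete-gradient step: solve the implicit linear system."""
    I = np.eye(2)
    A = I - 0.5 * h * S
    b = (I + 0.5 * h * S) @ u
    return np.linalg.solve(A, b)

u = np.array([1.0, 0.0])
H0 = 0.5 * u @ u
for _ in range(1000):
    u = step(u, h=0.1)
print(abs(0.5 * u @ u - H0))         # energy drift: near machine precision
```

By contrast, an explicit Euler step on the same system grows the energy at every step, which is precisely the failure mode structure-preserving integrators are designed to avoid.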