Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples

Neural Information Processing Systems 

We present a transductive learning algorithm that takes as input training examples from a distribution and arbitrary (unlabeled) test examples, possibly chosen by an adversary. This is unlike prior work, which assumes that test examples are small perturbations of examples drawn from the same distribution.
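To make the transductive setting concrete, the sketch below shows the kind of interface such a learner has: it receives labeled training examples drawn from a distribution together with an arbitrary, possibly adversarially chosen, unlabeled test set, and must output a label for every test point. The function name and the nearest-neighbor placeholder rule are illustrative assumptions only, not the paper's algorithm.

    # Minimal sketch of the transductive setting described in the abstract.
    # The 1-nearest-neighbor rule is a placeholder decision rule, not the
    # learning algorithm proposed in the paper.
    import numpy as np

    def transductive_predict(X_train: np.ndarray,
                             y_train: np.ndarray,
                             X_test: np.ndarray) -> np.ndarray:
        """Label arbitrary test points given labeled in-distribution training data."""
        preds = []
        for x in X_test:
            # Placeholder rule: copy the label of the closest training point.
            nearest = np.argmin(np.linalg.norm(X_train - x, axis=1))
            preds.append(y_train[nearest])
        return np.array(preds)

    # Usage: training data drawn from a fixed distribution; test points may be
    # placed anywhere by an adversary (here, simply shifted far from the training data).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 2))
    y_train = (X_train[:, 0] > 0).astype(int)
    X_test = rng.normal(loc=3.0, size=(20, 2))
    print(transductive_predict(X_train, y_train, X_test))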