Exponential Separation



Exponential Separation between Two Learning Models and Adversarial Robustness

Neural Information Processing Systems

We prove an exponential separation for the sample/query complexity between the standard PAC-learning model and a version of the Equivalence-Query-learning model. In the PAC model all samples are provided at the beginning of the learning process. In the Equivalence-Query model the samples are acquired through an interaction between a teacher and a learner, where the teacher provides counterexamples to hypotheses given by the learner. It is intuitive that in an interactive setting fewer samples are needed. We make this formal and prove that in order to achieve an error $\epsilon$ {\em exponentially} (in $\epsilon$) fewer samples suffice than what the PAC bound requires. It was shown experimentally by Stutz, Hein, and Schiele that adversarial training with on-manifold adversarial examples aids generalization (compared to standard training).
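The teacher–learner interaction described above can be illustrated with a toy example (our own construction, not the paper's): learning a threshold concept c(x) = [x >= t] over {0, ..., n}. The learner proposes hypotheses; the teacher either confirms or returns a counterexample, and binary search then needs only about log2(n) queries, whereas a passive learner must draw enough samples to pin t down.

```python
def teacher(t, h):
    """Return a counterexample x where [x >= t] != [x >= h], or None if h = t."""
    if h == t:
        return None
    return min(t, h)  # any x in [min(t,h), max(t,h)) disagrees on the two thresholds

def eq_query_learn(n, t):
    """Learn the threshold t via equivalence queries (binary search)."""
    lo, hi, queries = 0, n, 0   # invariant: lo <= t <= hi
    while True:
        h = (lo + hi) // 2
        queries += 1
        x = teacher(t, h)
        if x is None:
            return h, queries
        if x < h:                # hypothesis says 0, truth says 1, so t <= x
            hi = x
        else:                    # hypothesis says 1, truth says 0, so t > x
            lo = x + 1

h, q = eq_query_learn(100, 37)   # recovers t = 37 in roughly log2(100) queries
```

Each counterexample at least halves the interval [lo, hi], which is the interactive advantage the abstract refers to, here in a much simpler setting than the paper's actual construction.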



Exponential Separations in Symmetric Neural Networks

Neural Information Processing Systems

In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network (Santoro et al., 2017) architecture as a natural generalization of the DeepSets (Zaheer et al., 2017) architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size N with elements in dimension D, which can be efficiently approximated by the former architecture, but provably requires width exponential in N and D for the latter.
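The two architectures compared here can be sketched as follows, with toy feature maps standing in for the learned networks phi, psi, and rho: DeepSets encodes each set element on its own, while a Relational Network encodes pairs of elements, which is the source of its extra representational power.

```python
import numpy as np

def deepsets(X, phi, rho):
    """DeepSets: rho( sum_i phi(x_i) ) -- singleton-wise encoding."""
    return rho(sum(phi(x) for x in X))

def relational(X, psi, rho):
    """Relational Network: rho( sum_{i,j} psi(x_i, x_j) ) -- pairwise encoding."""
    return rho(sum(psi(x, y) for x in X for y in X))

# Toy stand-ins for the learned networks (assumptions for illustration only).
phi = lambda x: np.concatenate([x, x ** 2])        # toy element encoder
psi = lambda x, y: np.concatenate([x * y, x + y])  # toy pair encoder
rho = lambda s: float(np.sum(s))                   # toy readout

X = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
```

Both forms are permutation-invariant by construction, since the inner sums ignore the order of X; the paper's result is about how wide the DeepSets encoder phi must be to match the pairwise form, not about invariance itself.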



Exponential Separations in Local Differential Privacy Through Communication Complexity

Joseph, Matthew, Mao, Jieming, Roth, Aaron

arXiv.org Machine Learning

We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues. We use this connection to prove sample complexity lower bounds for locally differentially private protocols as straightforward corollaries of results from communication complexity. In particular, we 1) use a communication lower bound for the hidden layers problem to prove an exponential sample complexity separation between sequentially and fully interactive locally private protocols, and 2) use a communication lower bound for the pointer chasing problem to prove an exponential sample complexity separation between $k$ round and $k+1$ round sequentially interactive locally private protocols, for every $k$.


Extension Variables in QBF Resolution

Beyersdorff, Olaf (University of Leeds) | Chew, Leroy (University of Leeds) | Janota, Mikolas (Microsoft Research, Cambridge)

AAAI Conferences

We investigate two QBF resolution systems that use extension variables: weak extended Q-resolution, where the extension variables are quantified at the innermost level, and extended Q-resolution, where the extension variables can be placed inside the quantifier prefix. These systems have been considered previously by Wintersteiger et al., who give experimental evidence that extended Q-resolution is stronger than weak extended Q-resolution. Here we prove an exponential separation between the two systems, thereby confirming the conjecture of Wintersteiger et al. Conceptually, this separation relies on showing strategy extraction for weak extended Q-resolution by bounded-depth circuits. In contrast, we show that this strong strategy extraction result fails in extended Q-resolution.