
Supplementary Material

Neural Information Processing Systems

This is the appendix for "A general approximation lower bound in Lp norm, with applications to feed-forward neural networks". Layer L consists of a single node: the output neuron. Note that skip connections are allowed, i.e., there can be connections between non-consecutive layers. We now explain how to derive Proposition 1 (with an arbitrary range [a,b]) as a straightforward consequence of Proposition 7. Proof (of Proposition 1). In order to apply Proposition 7, we reduce the problem from [a,b] to [0,1] by translating and rescaling every function in G.
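As a sketch of this standard rescaling step (the tilde notation below is ours, not the paper's): for g in G taking values in [a,b], set

    \tilde{g} = \frac{g - a}{b - a},

so that \tilde{g} takes values in [0,1]. Since f - g = (b - a)(\tilde{f} - \tilde{g}) with \tilde{f} = (f - a)/(b - a), the Lp errors satisfy

    \| f - g \|_{L^p} = (b - a) \, \| \tilde{f} - \tilde{g} \|_{L^p},

so any approximation lower bound established on [0,1] transfers to [a,b] after multiplying by the constant factor (b - a).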



A Theory of PAC Learnability under Transformation Invariances

Neural Information Processing Systems

Third, we introduce a complexity measure (see Definition 5) that characterizes the optimal sample complexity of learning in settings (ii) and (iii) above, and we give optimal algorithms for these settings. Finally, we also provide adaptive learning algorithms that interpolate between settings (i) and (ii), i.e., when h is partially invariant.
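For concreteness (the abstract does not restate the paper's definitions, so the following is only the standard formalization we assume): a hypothesis h is invariant under a set of transformations \mathcal{G} acting on the input space \mathcal{X} if

    h(g \cdot x) = h(x) \quad \text{for all } g \in \mathcal{G}, \ x \in \mathcal{X},

and partially invariant when this identity holds only for a subset of the transformations or on part of the input space.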




Recursively Enumerably Representable Classes and Computable Versions of the Fundamental Theorem of Statistical Learning

Kattermann, David, Krapp, Lothar Sebastian

arXiv.org Artificial Intelligence

We study computable probably approximately correct (CPAC) learning, where learners are required to be computable functions. It had been previously observed that the Fundamental Theorem of Statistical Learning, which characterizes PAC learnability by finiteness of the Vapnik-Chervonenkis (VC-)dimension, no longer holds in this framework. Recent works recovered analogs of the Fundamental Theorem in the computable setting, for instance by introducing an effective VC-dimension. Guided by this, we investigate the connection between CPAC learning and recursively enumerably representable (RER) classes, whose members can be algorithmically listed. Our results show that the effective VC-dimension can take arbitrary values above the traditional one, even for RER classes, which creates a whole family of (non-)examples for various notions of CPAC learning. Yet the two dimensions coincide for classes satisfying sufficiently strong notions of CPAC learning. We then observe that CPAC learnability can also be characterized via containment of RER classes that realize the same samples. Furthermore, it is shown that CPAC learnable classes satisfying a unique identification property are necessarily RER. Finally, we establish that agnostic learnability can be guaranteed for RER classes, by considering the relaxed notion of nonuniform CPAC learning.
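As a toy illustration of the "algorithmically listed" requirement (this sketch is ours, not a construction from the paper): a single program can enumerate every member of a countable hypothesis class, here threshold functions on the naturals.

# Illustrative sketch, not from the paper: an RER-style class in the
# informal sense above -- one program lists (a finite representation of)
# every hypothesis in the class.
from itertools import count

def threshold_hypotheses():
    """Enumerate the class {h_t : t in N} with h_t(x) = 1 iff x >= t."""
    for t in count(0):
        # the integer t serves as the finite representation of h_t
        yield t, (lambda x, t=t: int(x >= t))

# Usage: materialize the first few hypotheses and evaluate them on 0..4.
for t, h in threshold_hypotheses():
    if t > 3:
        break
    print(t, [h(x) for x in range(5)])

This particular class has VC dimension 1 (no pair x < y can be labeled (1, 0) by any threshold), so listability is cheap here; the paper's results concern how such algorithmic listability interacts with the computable analogs of PAC learnability.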