The Rules-and-Facts Model for Simultaneous Generalization and Memorization in Neural Networks

Farné, Gabriele, Boncoraglio, Fabrizio, Zdeborová, Lenka

arXiv.org Machine Learning

A key capability of modern neural networks is their capacity to simultaneously learn underlying rules and memorize specific facts or exceptions. Yet, theoretical understanding of this dual capability remains limited. We introduce the Rules-and-Facts (RAF) model, a minimal solvable setting that enables precise characterization of this phenomenon by bridging two classical lines of work in the statistical physics of learning: the teacher-student framework for generalization and Gardner-style capacity analysis for memorization. In the RAF model, a fraction $1 - \varepsilon$ of training labels is generated by a structured teacher rule, while a fraction $\varepsilon$ consists of unstructured facts with random labels. We characterize when the learner can simultaneously recover the underlying rule - allowing generalization to new data - and memorize the unstructured examples. Our results quantify how overparameterization enables the simultaneous realization of these two objectives: sufficient excess capacity supports memorization, while regularization and the choice of kernel or nonlinearity control the allocation of capacity between rule learning and memorization. The RAF model provides a theoretical foundation for understanding how modern neural networks can infer structure while storing rare or non-compressible information.
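To make the RAF setup concrete, here is a minimal sketch, assuming a linear teacher and a ridge-regression learner in the overparameterized regime (all illustrative choices; the paper's model and estimator may differ). It generates a fraction $1-\varepsilon$ of rule-based labels and a fraction $\varepsilon$ of random-label facts, then measures rule recovery on fresh data and fit on the facts.

```python
# Minimal sketch of a Rules-and-Facts-style experiment (assumptions:
# linear teacher, ridge regression, Gaussian inputs).
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, lam = 200, 500, 0.1, 1e-3  # samples, dimension (d > n), fact fraction, ridge

# Teacher rule: labels generated by a fixed weight vector.
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star

# Facts: a fraction eps of examples get random labels instead.
is_fact = rng.random(n) < eps
y[is_fact] = rng.standard_normal(is_fact.sum())

# Ridge-regression learner.
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Rule recovery: error on fresh data drawn from the teacher.
X_test = rng.standard_normal((1000, d))
gen_err = np.mean((X_test @ w_hat - X_test @ w_star) ** 2)
# Memorization: training error on the random-label "facts".
mem_err = np.mean((X[is_fact] @ w_hat - y[is_fact]) ** 2)
print(f"generalization error {gen_err:.3f}, fact fit error {mem_err:.3f}")
```

With $d > n$ and small regularization, the learner has excess capacity: it can fit the random-label facts while still tracking the teacher, echoing the trade-off the abstract describes.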






Neural Information Processing Systems

Most notably, we obtain the first dimension-independent generalization bounds for multi-pass SGD in the nonsmooth case. In addition, our bounds allow us to derive a new algorithm for differentially private nonsmooth stochastic convex optimization with optimal excess population risk.
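For concreteness, the sketch below runs multi-pass SGD on a nonsmooth convex objective, using the hinge loss as an illustrative choice; the step size, pass count, and loss are assumptions for the demo, not the paper's settings.

```python
# Multi-pass SGD on a nonsmooth convex loss (hinge); because the loss is
# nondifferentiable at margin 1, each step uses a subgradient.
import numpy as np

rng = np.random.default_rng(1)
n, d, passes, lr = 500, 50, 5, 0.05

X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))

w = np.zeros(d)
for _ in range(passes):               # multiple passes over the same data
    for i in rng.permutation(n):      # one stochastic step per example
        margin = y[i] * (w @ X[i])
        if margin < 1:                # hinge loss is nonsmooth here;
            w += lr * y[i] * X[i]     # take a subgradient step

train_acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy after {passes} passes: {train_acc:.2f}")
```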




Agnostic Learning of a Single Neuron with Gradient Descent

Neural Information Processing Systems

We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples.
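A minimal sketch of this setup follows, assuming a ReLU activation and a planted teacher neuron generating the labels (both illustrative choices not fixed by the abstract): gradient descent on the empirical square loss of $\sigma(w^\top x)$.

```python
# Gradient descent on the empirical risk (1/n) sum (sigma(w^T x) - y)^2
# for a single ReLU neuron (illustrative activation and step size).
import numpy as np

rng = np.random.default_rng(2)
n, d, steps, lr = 1000, 20, 500, 0.1

relu = lambda z: np.maximum(z, 0.0)

# i.i.d. samples; labels come from a planted neuron plus noise.
w_true = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = relu(X @ w_true) + 0.1 * rng.standard_normal(n)

w = rng.standard_normal(d) / np.sqrt(d)
for _ in range(steps):
    z = X @ w
    resid = relu(z) - y
    # sigma'(z) = 1{z > 0} for ReLU, so the gradient masks inactive examples.
    grad = (2.0 / n) * (X.T @ (resid * (z > 0)))
    w -= lr * grad

print("final empirical risk:", np.mean((relu(X @ w) - y) ** 2))
```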


Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out

WIRED

Anthropic is starting to train its models on new Claude chats. If you're using the bot and don't want your chats used as training data, here's how to opt out. Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models, unless those users opt out. Previously, the company did not train its generative AI models on user chats. When Anthropic's privacy policy is updated on October 8 to allow this, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models. "All large language models, like Claude, are trained using large amounts of data," reads part of Anthropic's blog post explaining why the company made this policy change.


Tutorial: $\phi$-Transductions in OpenFst via the Gallic Semiring

Cognetta, Marco, Allauzen, Cyril

arXiv.org Artificial Intelligence

OpenFst, a popular finite-state transducer library, supports $\phi$-transitions but, due to an implementation constraint, they cannot be used with transducers in a straightforward way. In this short tutorial, we describe how one can use other functionality provided by OpenFst (namely, the Gallic semiring) to correctly implement $\phi$-transductions and demonstrate it by implementing the MaxMatch (WordPiece) tokenization algorithm (Devlin et al., 2019; Song et al., 2021). Accompanying self-contained code examples are provided. https://www.openfst.org/twiki/pub/Contrib/FstContrib/phi_transduction_tutorial_code.tgz
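As a plain-Python reference for the algorithm the tutorial implements with OpenFst, here is a minimal sketch of MaxMatch (WordPiece) tokenization: greedy longest-prefix matching per word, with "##" marking word-internal continuation pieces. The toy vocabulary and the single-token "[UNK]" fallback are illustrative assumptions; the tutorial's own FST-based code is linked above.

```python
# Greedy longest-match-first (MaxMatch) WordPiece tokenization over a
# toy vocabulary (illustrative; real vocabularies are learned).
VOCAB = {"un", "##aff", "##able", "aff", "able", "[UNK]"}

def max_match(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:                  # try the longest span first
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece        # continuation marker
            if piece in VOCAB:
                pieces.append(piece)
                break
            end -= 1
        if end == start:                    # no vocabulary piece matched
            return ["[UNK]"]
        start = end
    return pieces

print(max_match("unaffable"))  # ['un', '##aff', '##able']
```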