 Rote Learning


Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

arXiv.org Artificial Intelligence

Despite their wide adoption, the underlying training and memorization dynamics of very large language models are not well understood. We empirically study exact memorization in causal and masked language modeling, across model sizes and throughout the training process. We measure the effects of dataset size, learning rate, and model size on memorization, finding that larger language models memorize training data faster across all settings. Surprisingly, we show that larger models can memorize a larger portion of the data before overfitting and tend to forget less throughout the training process. We also analyze the memorization dynamics of different parts of speech and find that models memorize nouns and numbers first; we hypothesize and provide empirical evidence that nouns and numbers act as unique identifiers for memorizing individual training examples. Together, these findings present another piece of the broader puzzle of trying to understand what actually improves as models get bigger.
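
The central measurement here is exact memorization of training text. As a rough illustration of the kind of probe involved, the sketch below checks how many next-token argmax predictions of a causal LM reproduce a given string; the checkpoint name, the example sentence, and the 0.9 threshold are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of an "exact memorization" probe for a causal LM, in the spirit of
# the setup above (the paper's own definition, models, and data differ). The model
# name and the 0.9 threshold are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def exact_memorization_fraction(text: str) -> float:
    """Fraction of next-token argmax predictions that match the given text."""
    ids = tok(text, return_tensors="pt").input_ids
    logits = model(ids).logits            # (1, T, vocab)
    preds = logits[:, :-1].argmax(-1)     # predictions for positions 1..T-1
    targets = ids[:, 1:]
    return (preds == targets).float().mean().item()

example = "The quick brown fox jumps over the lazy dog."
frac = exact_memorization_fraction(example)
print(f"argmax-match fraction: {frac:.2f}")
# One possible criterion: call the example "memorized" if frac exceeds a threshold, e.g. 0.9.
```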


Continual Variational Autoencoder Learning via Online Cooperative Memorization

arXiv.org Artificial Intelligence

Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAE) have been successfully used in continual learning classification tasks. However, their ability to generate images with specifications corresponding to the classes and databases learned during Continual Learning (CL) is not well understood, and catastrophic forgetting remains a significant challenge. In this paper, we first analyze the forgetting behaviour of VAEs by developing a new theoretical framework that formulates CL as a dynamic optimal transport problem. This framework proves approximate bounds on the data likelihood without requiring the task information and explains how prior knowledge is lost during the training process. We then propose a novel memory buffering approach, namely the Online Cooperative Memorization (OCM) framework, which consists of a Short-Term Memory (STM) that continually stores recent samples to provide future information for the model, and a Long-Term Memory (LTM) aiming to preserve a wide diversity of samples. The proposed OCM transfers certain samples from STM to LTM according to an information-diversity selection criterion without requiring any supervised signals. The OCM framework is then combined with a dynamic VAE expansion mixture network to further enhance its performance.
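
The STM/LTM structure above can be pictured with a very small buffer class. The sketch below is not the paper's criterion: promotion from STM to LTM is replaced by a simple distance-based novelty test, and the feature space, distance metric, and tau threshold are illustrative assumptions.

```python
# Minimal sketch of a short-term / long-term memory buffer in the spirit of OCM.
# The paper's information-diversity criterion is more involved; here a sample is
# promoted from STM to LTM only if it lies far from everything already in LTM.
import numpy as np

class CooperativeMemory:
    def __init__(self, stm_size=64, tau=1.0):
        self.stm, self.ltm = [], []
        self.stm_size, self.tau = stm_size, tau

    def observe(self, x: np.ndarray):
        self.stm.append(x)
        if len(self.stm) > self.stm_size:
            oldest = self.stm.pop(0)          # STM keeps only recent samples
            self._maybe_promote(oldest)

    def _maybe_promote(self, x: np.ndarray):
        # Promote to LTM if it adds diversity (min distance to LTM exceeds tau).
        if not self.ltm or min(np.linalg.norm(x - m) for m in self.ltm) > self.tau:
            self.ltm.append(x)

memory = CooperativeMemory(stm_size=4, tau=0.5)
for x in np.random.RandomState(0).randn(32, 8):
    memory.observe(x)
print(len(memory.stm), len(memory.ltm))
```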


Generalization-Memorization Machines

#artificialintelligence

First, we test the memorization ability of our HGMM and its influence on several small-size datasets. The memory influence functions (i.e., formulations (12), (13), (14) and (15)) were preloaded into our HGMM and evaluated by m-fold cross-validation with m equal to the number of training samples (i.e., leave-one-out validation, LOO for short). We set the baseline by taking the memory influence function to be an identity matrix, which is actually the L2-loss SVM with decision (7) according to Theorem 4.3 (ii). Table II reports their highest LOO training and testing accuracies. From Table II, it is observed that our HGMM attains 100% training accuracy on all of these datasets with each of the memory influence functions.
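
The evaluation protocol above is standard leave-one-out cross-validation. As a minimal sketch of that loop, the block below uses an ordinary RBF-kernel SVM as a stand-in for the baseline classifier (the HGMM and its memory influence functions are not reproduced here); the dataset and hyperparameters are assumptions.

```python
# Minimal sketch of a leave-one-out (LOO) evaluation, with a standard SVM standing in
# for the baseline (identity memory influence function). Dataset and C are illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # a small dataset, as in the experiments
clf = SVC(kernel="rbf", C=1.0)             # illustrative hyperparameters
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```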


Generalization-Memorization Machines

arXiv.org Artificial Intelligence

Classifying the training data correctly without over-fitting is one of the goals in machine learning. In this paper, we propose a generalization-memorization mechanism, including a generalization-memorization decision and a memory modeling principle. Under this mechanism, error-based learning machines improve their memorization abilities on training data without over-fitting. Specifically, the generalization-memorization machines (GMM) are proposed by applying this mechanism. The optimization problems in GMM are quadratic programming problems and can be solved efficiently. It should be noted that the recently proposed generalization-memorization kernel and the corresponding support vector machines are special cases of our GMM. Experimental results show the effectiveness of the proposed GMM both on memorization and generalization.
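
The abstract notes that GMM training reduces to quadratic programs. The paper's exact formulation is not reproduced here; as a loosely related sketch of solving such a QP, the block below solves the dual of a standard soft-margin kernel SVM (a close relative of the special cases the abstract mentions) with cvxpy. The toy data, RBF kernel, and C value are assumptions.

```python
# Minimal sketch: a soft-margin kernel SVM dual as an example of the kind of QP
# involved; not the GMM formulation itself.
import numpy as np
import cvxpy as cp

rng = np.random.RandomState(0)
X = rng.randn(40, 2)
y = np.sign(X[:, 0] + 0.3 * rng.randn(40))                        # toy labels in {-1, +1}

K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))   # RBF Gram matrix
Q = (y[:, None] * y[None, :]) * K + 1e-8 * np.eye(len(y))         # PSD-regularized

C = 1.0                                    # illustrative box constraint
alpha = cp.Variable(len(y))
objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.quad_form(alpha, Q))
constraints = [alpha >= 0, alpha <= C, cp.sum(cp.multiply(y, alpha)) == 0]
cp.Problem(objective, constraints).solve()
print("nonzero support coefficients:", int(np.sum(alpha.value > 1e-6)))
```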


Counterfactual Memorization in Neural Language Models

arXiv.org Artificial Intelligence

Modern neural language models widely used in tasks across NLP risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, understanding memorization in language models is important both from a learning-theoretic point of view and practically, in real-world applications. An open question in previous studies of memorization in language models is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing "common" memorization such as familiar phrases, public knowledge or templated texts. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in Psychology. From this perspective, we formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated texts, and show that this can provide direct evidence of the source of memorization at test time.
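
Counterfactual memorization is defined above as the change in a model's predictions when one document is left out of training; one common way to estimate a quantity like this is by training many models on random subsets and comparing "saw it" against "did not see it" averages. The sketch below applies that subsampling idea to a small tabular classifier purely for illustration; the dataset, model, and number of runs are assumptions, not the paper's setup.

```python
# Minimal sketch of a subsampling estimator for counterfactual memorization: for each
# example, compare the average correctness of models trained with it ("in") versus
# without it ("out"). The paper works with neural language models, not this toy model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.RandomState(0)
n, n_runs = len(X), 40

in_scores, in_counts = np.zeros(n), np.zeros(n)
out_scores, out_counts = np.zeros(n), np.zeros(n)

for _ in range(n_runs):
    mask = rng.rand(n) < 0.5                          # random half of the training set
    model = LogisticRegression(max_iter=500).fit(X[mask], y[mask])
    correct = (model.predict(X) == y).astype(float)
    in_scores[mask] += correct[mask];    in_counts[mask] += 1
    out_scores[~mask] += correct[~mask]; out_counts[~mask] += 1

counterfactual_mem = (in_scores / np.maximum(in_counts, 1)
                      - out_scores / np.maximum(out_counts, 1))
print("most counterfactually memorized example:", int(np.argmax(counterfactual_mem)))
```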


Personalized Federated Learning through Local Memorization

arXiv.org Machine Learning

Federated learning allows clients to collaboratively learn statistical models while keeping their data local. Federated learning was originally used to train a unique global model to be served to all clients, but this approach might be sub-optimal when clients' local data distributions are heterogeneous. In order to tackle this limitation, recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients. In this work, we exploit the ability of deep neural networks to extract high-quality vectorial representations (embeddings) from non-tabular data, e.g., images and text, to propose a personalization mechanism based on local memorization. Personalization is obtained by interpolating a pre-trained global model with a $k$-nearest neighbors (kNN) model based on the shared representation provided by the global model. We provide generalization bounds for the proposed approach and we show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
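
The interpolation described above can be written in a few lines: the global model's softmax output is mixed with a kNN distribution computed over the client's locally memorized embeddings. In the sketch below the embeddings and the global model's output are random stand-ins, and the mixing weight, k, and distance weighting are illustrative assumptions.

```python
# Minimal sketch of personalization by interpolating a global model with a local kNN
# model built on the global model's embeddings of the client's data.
import numpy as np

def knn_probs(query_emb, local_embs, local_labels, n_classes, k=5):
    """Distance-weighted kNN distribution over classes from the client's local memory."""
    d = np.linalg.norm(local_embs - query_emb, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx])                       # softer than hard voting
    p = np.zeros(n_classes)
    for j, i in enumerate(idx):
        p[local_labels[i]] += w[j]
    return p / p.sum()

def personalized_probs(global_probs, knn_p, lam=0.5):
    """Interpolate local and global predictions; lam would be tuned per client."""
    return lam * knn_p + (1.0 - lam) * global_probs

# toy usage with random embeddings standing in for the global model's representation
rng = np.random.RandomState(0)
local_embs, local_labels = rng.randn(100, 16), rng.randint(0, 3, 100)
query = rng.randn(16)
g = np.array([0.2, 0.5, 0.3])                 # pretend softmax output of the global model
print(personalized_probs(g, knn_probs(query, local_embs, local_labels, n_classes=3)))
```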


On the Optimal Memorization Power of ReLU Neural Networks

arXiv.org Machine Learning

We study the memorization power of feedforward ReLU neural networks. We show that such networks can memorize any $N$ points that satisfy a mild separability assumption using $\tilde{O}\left(\sqrt{N}\right)$ parameters. Known VC-dimension upper bounds imply that memorizing $N$ samples requires $\Omega(\sqrt{N})$ parameters, and hence our construction is optimal up to logarithmic factors. We also give a generalized construction for networks with depth bounded by $1 \leq L \leq \sqrt{N}$, for memorizing $N$ samples using $\tilde{O}(N/L)$ parameters. This bound is also optimal up to logarithmic factors. Our construction uses weights with large bit complexity. We prove that having such a large bit complexity is both necessary and sufficient for memorization with a sub-linear number of parameters.
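
Restating the abstract's two parameter counts side by side (no new result, just the arithmetic relating them): the depth-$L$ construction specializes to the first bound at the largest allowed depth.

```latex
\[
\underbrace{\tilde{O}\!\left(\frac{N}{L}\right)}_{\text{depth-}L\ \text{construction}}
\;\xrightarrow{\;L=\sqrt{N}\;}\;
\tilde{O}\!\left(\sqrt{N}\right)
\quad\text{vs.}\quad
\underbrace{\Omega\!\left(\sqrt{N}\right)}_{\text{VC-dimension lower bound}}
\]
```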


Memorization in Deep Neural Networks: Does the Loss Function matter?

arXiv.org Machine Learning

Deep Neural Networks, often owing to overparameterization, are shown to be capable of exactly memorizing even randomly labelled data. Empirical studies have also shown that none of the standard regularization techniques mitigate such overfitting. We investigate whether the choice of the loss function can affect this memorization. We empirically show, with the benchmark datasets MNIST and CIFAR-10, that a symmetric loss function, as opposed to either cross-entropy or squared error loss, results in a significant improvement in the ability of the network to resist such overfitting. We then provide a formal definition of robustness to memorization and a theoretical explanation as to why symmetric losses provide this robustness. Our results clearly bring out the role loss functions alone can play in this phenomenon of memorization.
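
The experiment described above can be reproduced in miniature: fit randomly labelled data under cross-entropy and under a symmetric loss and compare how much of the noise each one memorizes. The sketch below uses MAE against one-hot targets as the symmetric loss and a tiny MLP on synthetic data; the architecture, optimizer, and step count are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: random-label fitting under cross-entropy vs a symmetric loss (MAE).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 32)
y = torch.randint(0, 10, (512,))                     # labels are pure noise

def fit(loss_name):
    net = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(300):
        logits = net(X)
        if loss_name == "ce":
            loss = nn.functional.cross_entropy(logits, y)
        else:   # symmetric loss: MAE between softmax output and one-hot target
            loss = (logits.softmax(-1) - nn.functional.one_hot(y, 10).float()).abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return (net(X).argmax(-1) == y).float().mean().item()

print("random-label fit, cross-entropy :", fit("ce"))
print("random-label fit, symmetric MAE :", fit("mae"))
```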


On the Memorization Properties of Contrastive Learning

arXiv.org Machine Learning

However, data labeling is often time-consuming and costly, as it involves human expertise. Thus, it is common for computer vision to pretrain DNNs on some large labeled dataset, e.g. ImageNet (Russakovsky et al., 2015), and then to fine-tune the model to a specific downstream task. The self-supervised learning paradigm provides a human labeling-free alternative to the supervised pretraining: recently developed contrastive self-supervised methods show results comparable to ImageNet pretraining. [...] motivate improvements to DNN training approaches. A pioneer work of Zhang et al. (2017) showed that the capacity of modern DNNs is sufficient to fit perfectly even randomly labeled data. According to classic learning theory, such a huge capacity should lead to catastrophic overfitting; however, recent works (Nakkiran et al., 2020) show that in practice increasing DNN capacity further improves generalization.
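
The contrastive self-supervised methods referred to above are typically trained with an NT-Xent / InfoNCE objective over two augmented views of each image. The sketch below is a generic version of that loss, not this paper's code; the batch size, embedding dimension, and temperature are illustrative assumptions.

```python
# Minimal sketch of the NT-Xent / InfoNCE contrastive loss used by SimCLR-style methods.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss for two batches of embeddings of the same images under different augmentations."""
    B = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D), unit norm
    sim = z @ z.t() / temperature                             # scaled cosine similarities
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])         # index of positive pair
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)             # stand-ins for encoder outputs
print(nt_xent(z1, z2).item())
```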


Quran Memorization Course. A Proven System To Do It Easy NOW

#artificialintelligence

In this course you will learn and gain 6 new habits. Each habit will make a big change in your memorization ability. Many people who have taken this course before were able to memorize the whole Holy Quran in a short time. This course helped me, and when I noticed the amazing results, I decided to offer it publicly to help millions of Muslims around the world.