Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models

Tayaranian, Mohammadreza, Mozafari, Seyyed Hasan, Meyer, Brett H., Clark, James J., Gross, Warren J.

arXiv.org Artificial Intelligence 

Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of a downstream task on the model's performance on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model's success rate in correctly classifying each training data point. Unlike previous work, which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted to each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average 3× smaller than the original training set of the fine-tuning task. Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a 0.1% increase in the evaluation performance of the model.

Transformer-based language models have shown state-of-the-art performance on various natural language understanding tasks (Liu et al., 2019; Raffel et al., 2020). These models are commonly used in a transfer learning setup in which they are first pre-trained on general textual data and then transferred by fine-tuning their parameters on the training set of each downstream task. The goal of fine-tuning is to maximise the model's performance on the evaluation set. However, different data points in the fine-tuning dataset contribute differently to achieving this goal (Katharopoulos & Fleuret, 2018).
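As a rough illustration of the success-rate criterion described above, the Python sketch below tracks, for each training example, how often the model classifies it correctly across fine-tuning epochs, and then selects the examples whose success rate falls at or below a threshold. The threshold values, the keep-low-success-rate rule, and the function names are illustrative assumptions for the sketch, not the authors' exact procedure.

import numpy as np

def success_rates(per_epoch_correct: np.ndarray) -> np.ndarray:
    # per_epoch_correct: (num_epochs, num_examples) array of 0/1 correctness
    # flags collected for every training example at each fine-tuning epoch.
    return per_epoch_correct.mean(axis=0)

def prune_by_success_rate(per_epoch_correct: np.ndarray, threshold: float) -> np.ndarray:
    # Return indices of examples whose success rate is at most `threshold`.
    # Sweeping `threshold` yields a family of subsets that trade off
    # subset size against evaluation accuracy.
    rates = success_rates(per_epoch_correct)
    return np.nonzero(rates <= threshold)[0]

# Toy usage: 4 epochs, 6 training examples.
correct = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
])
for t in (0.25, 0.5, 0.75):
    kept = prune_by_success_rate(correct, t)
    print(f"threshold={t}: keep {len(kept)} of {correct.shape[1]} examples -> {kept.tolist()}")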
