RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models
Pre-trained Language Models (PLMs) can be accurately fine-tuned for downstream text processing tasks. Recently, researchers have introduced several parameter-efficient fine-tuning methods that optimize input prompts or adjust a small number of model parameters (e.g., LoRA). In this study, we explore the impact of altering the input text of the original task in conjunction with parameter-efficient fine-tuning methods. To rewrite the input text most effectively, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood objective. Using six few-shot text classification datasets, we show that enriching the data with paraphrases at training and test time improves performance beyond what parameter-efficient fine-tuning alone achieves. The code used for our experiments can be found at https://github.com/SaeedNajafi/RIFF.
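The abstract mentions training the paraphraser with a Maximum-Marginal Likelihood (MML) objective, which marginalizes the downstream label likelihood over sampled paraphrases. Below is a minimal sketch of such an objective in PyTorch, assuming K candidate paraphrases per input have already been sampled and scored; the function name `mml_loss`, the tensor shapes, and the toy usage are illustrative assumptions, not the paper's implementation.

```python
import torch

def mml_loss(paraphrase_logps, class_logits, labels):
    """Negative log marginal likelihood over sampled paraphrases.

    paraphrase_logps: (B, K) log p(z_k | x) under the paraphrase model.
    class_logits:     (B, K, C) downstream classifier logits for each paraphrase z_k.
    labels:           (B,) gold class indices.

    Approximates p(y | x) ~= sum_k p(z_k | x) * p(y | z_k) with a log-sum-exp.
    """
    # log p(y | z_k) for the gold label, per paraphrase
    logp_y_given_z = torch.log_softmax(class_logits, dim=-1)
    idx = labels.view(-1, 1, 1).expand(-1, class_logits.size(1), 1)
    logp_y_given_z = logp_y_given_z.gather(-1, idx).squeeze(-1)      # (B, K)
    # log marginal likelihood: combine paraphraser and classifier log-probs
    log_marginal = torch.logsumexp(paraphrase_logps + logp_y_given_z, dim=-1)
    return -log_marginal.mean()

# Toy usage: batch of 4 inputs, 8 sampled paraphrases each, 2 classes
B, K, C = 4, 8, 2
paraphrase_logps = torch.randn(B, K).log_softmax(dim=-1)
class_logits = torch.randn(B, K, C)
labels = torch.randint(0, C, (B,))
print(float(mml_loss(paraphrase_logps, class_logits, labels)))
```

In this formulation, gradients flow to whichever component produced the log-probabilities, so the same loss can be used to update the paraphraser, the parameter-efficient classifier weights, or both, depending on which tensors carry gradients.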
arXiv.org Artificial Intelligence
Jun-6-2024