Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Min, Bonan, Ross, Hayley, Sulem, Elior, Veyseh, Amir Pouran Ben, Nguyen, Thien Huu, Sainz, Oscar, Agirre, Eneko, Heintz, Ilana, Roth, Dan
arXiv.org Artificial Intelligence
Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.
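To make the prompting paradigm mentioned in the abstract concrete, here is a minimal, self-contained sketch of how prompting recasts a classification task as cloze-style slot filling. The scoring function below is a hypothetical word-frequency mock standing in for a real masked language model such as BERT; the template, verbalizer words, and cue lists are all illustrative assumptions, not the survey's own method.

```python
# Toy illustration of prompt-based classification: the input is wrapped in
# a cloze template, and scores over verbalizer words ("great"/"terrible")
# that could fill the [MASK] slot decide the label. mock_mask_scores is a
# stand-in for querying a real pre-trained masked LM.

def mock_mask_scores(text_with_mask):
    """Return scores for candidate fillers of the [MASK] slot.

    A real system would ask a masked language model for token
    probabilities; this mock just counts simple sentiment cues.
    """
    positive_cues = {"loved", "wonderful", "excellent"}
    negative_cues = {"hated", "boring", "awful"}
    words = set(text_with_mask.lower().replace(".", "").split())
    return {
        "great": len(words & positive_cues),
        "terrible": len(words & negative_cues),
    }

# Verbalizer: maps filler words back to task labels.
VERBALIZER = {"great": "positive", "terrible": "negative"}

def classify_by_prompting(review):
    # Wrap the input in a cloze template, as prompt-based methods do.
    prompt = f"{review} Overall, the movie was [MASK]."
    scores = mock_mask_scores(prompt)
    best_filler = max(scores, key=scores.get)
    return VERBALIZER[best_filler]

print(classify_by_prompting("I loved it, simply wonderful acting."))  # positive
print(classify_by_prompting("I hated it, such a boring plot."))       # negative
```

The key design point, which this sketch shares with real prompting methods, is that no task-specific classification head is trained: the task is expressed in the model's native fill-in-the-blank interface.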
Nov-1-2021
- Country:
- Asia (1.00)
- Europe
- Italy (0.67)
- Switzerland > Zürich
- Zürich (0.14)
- North America > United States
- California > San Francisco County
- San Francisco (0.14)
- Minnesota > Hennepin County
- Minneapolis (0.14)
- New York > New York County
- New York City (0.14)
- Genre:
- Overview (1.00)
- Research Report (1.00)
- Industry:
- Education > Educational Setting (0.67)
- Government
- Health & Medicine (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Inductive Learning (1.00)
- Neural Networks > Deep Learning (1.00)
- Natural Language
- Grammars & Parsing (1.00)
- Information Extraction (1.00)
- Large Language Model (1.00)
- Question Answering (0.93)
- Text Processing (1.00)