Using BERT for state-of-the-art pre-training for natural language processing
Javed Qadrud-Din was an Insight Fellow in Fall 2017. He is currently a machine learning engineer at Casetext, where he works on natural language processing for the legal industry. Prior to Insight, he was at IBM Watson.

BERT can be pre-trained on a massive corpus of unlabeled data, and then fine-tuned on a task for which you have only a limited amount of labeled data. This allows BERT to achieve significantly higher performance than models that can only leverage a small task-specific dataset.
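To make the pre-train/fine-tune workflow concrete, here is a minimal sketch using the Hugging Face `transformers` library (not referenced in the original post; the model name, dataset, and hyperparameters below are illustrative assumptions, not the author's setup):

```python
# A minimal sketch of fine-tuning a pre-trained BERT model on a tiny
# labeled dataset. Library, model name, and hyperparameters are
# illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load weights pre-trained on a massive unlabeled corpus.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# A tiny labeled dataset stands in for the limited task-specific data.
texts = ["The contract is enforceable.", "This clause is void."]
labels = torch.tensor([1, 0])
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Fine-tuning: all pre-trained weights are updated on the small labeled set.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few epochs is typically enough when fine-tuning
    optimizer.zero_grad()
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()
```

Because the heavy lifting happened during pre-training, only a few epochs over the small dataset are needed; the model starts from general language representations rather than random weights.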