Connecting Pre-trained Language Models and Downstream Tasks via Properties of Representations

Neural Information Processing Systems 

Recent work has shown that representations learned by large-scale pre-trained language models are useful across a variety of downstream tasks.
