Understanding BERT
While BERT is a significant improvement in how computers 'understand' human language, it is still far from understanding language and context the way humans do. We should, however, expect BERT to have a significant impact on many understanding-focused NLP initiatives. The General Language Understanding Evaluation benchmark (GLUE) is a collection of datasets used for training, evaluating, and analyzing NLP models relative to one another. The datasets are designed to test a model's language understanding and are useful for evaluating models like BERT. As the GLUE results show, BERT makes it possible to outperform humans even on comprehension tasks previously thought to be beyond a computer's reach.
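To make the benchmarking idea concrete, here is a minimal sketch of the kind of metrics GLUE reports: plain accuracy (used by most of its tasks) and the Matthews correlation coefficient (used for the CoLA task). The predictions and labels below are hypothetical stand-ins for a model's outputs, not real GLUE data.

```python
import math

def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def matthews_corrcoef(preds, labels):
    """Matthews correlation coefficient for binary labels (GLUE's CoLA metric)."""
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical binary predictions from a model vs. gold labels
preds = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 1]
print(accuracy(preds, labels))           # 5 of 6 correct
print(matthews_corrcoef(preds, labels))
```

Comparing models on the same held-out labels with the same metric is what lets GLUE rank BERT against other systems, and against the human baselines the leaderboard publishes.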
Jul-1-2020, 11:15:56 GMT