Huawei trained the Chinese-language equivalent of GPT-3


For the better part of a year, OpenAI's GPT-3 has remained among the largest AI language models ever created, if not the largest of its kind. Via an API, people have used it to automatically write emails and articles, summarize text, compose poetry and recipes, create website layouts, and generate Python code for deep learning. But GPT-3 has key limitations, chief among them that it's only available in English: the 45-terabyte dataset the model was trained on drew exclusively from English-language sources. This week, a research team at the Chinese company Huawei quietly detailed what might be the Chinese-language equivalent of GPT-3.
