The Claire French Dialogue Dataset
Hunter, Julie, Louradour, Jérôme, Rennard, Virgile, Harrando, Ismaïl, Shang, Guokan, Lorré, Jean-Pierre
The overwhelming success of OpenAI's ChatGPT, whose first version was released one year ago, has led to an undeniable surge of excitement about large language models (LLMs) among researchers and the general public alike. OpenAI's anything-but-open approach to sharing its models and information about their training, however, has provoked an equally passionate reaction among those who believe that AI development should be widely accessible, that data usage should be transparent in order to protect the rights of those who contributed the data, and that data, a resource crucial to the development and understanding of AI models, should be shared with the broader research community. The call for transparency has begun to bear fruit: high-profile language models like Falcon [Almazrouei et al., 2023], LLaMa2 [Touvron et al., 2023] and MPT [MosaicML NLP Team, 2023], to name just a few, come very close to a classic definition of open source. A central part of OpenLLM France's mission is to contribute to this momentum by building language models and remaining fully transparent about every step of model training, including the data used for training. Another objective, which we find equally important, is to increase the availability of language models and training data geared toward languages other than English and toward non-anglophone cultures. Indeed, the majority of the high-profile LLMs available today are trained primarily on English documents from anglophone cultures: only 0.16% of the data used to train LLaMa2 is French, for example.