Nogueira, Rodrigo
TiEBe: A Benchmark for Assessing the Current Knowledge of Large Language Models
Almeida, Thales Sales, Bonás, Giovana Kerche, Santos, João Guilherme Alves, Abonizio, Hugo, Nogueira, Rodrigo
Amid a rapidly evolving knowledge landscape and the increasing adoption of large language models, a need has emerged to keep these models continuously updated with current events. While existing benchmarks evaluate general factual recall, they often overlook two critical aspects: the ability of models to integrate evolving knowledge through continual learning and the significant regional disparities in their performance. To address these gaps, we introduce the Timely Events Benchmark (TiEBe), a dataset containing over 11,000 question-answer pairs focused on globally and regionally significant events. TiEBe leverages structured retrospective data from Wikipedia, enabling continuous updates to assess LLMs' knowledge of evolving global affairs and their understanding of events across different regions. Our benchmark demonstrates that LLMs exhibit substantial geographic disparities in factual recall, emphasizing the need for more balanced global knowledge representation. Furthermore, TiEBe serves as a tool for evaluating continual learning strategies, providing insights into models' ability to acquire new information without forgetting past knowledge.
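The abstract does not spell out the extraction pipeline, but a minimal sketch of the retrospective-page idea might look like the following: pull the bulleted event entries from a "Year in Country" page (here "2023 in Brazil") via the MediaWiki API, after which each event would be turned into a question-answer pair, for example with an LLM. The page title, the bullet-parsing heuristic, and the QA step are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: harvest event bullets from a Wikipedia retrospective page.
import re
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_retrospective_events(page_title: str) -> list[str]:
    """Return the bulleted event lines from a 'Year in Country' page."""
    params = {
        "action": "parse",
        "page": page_title,
        "prop": "wikitext",
        "format": "json",
        "formatversion": 2,
    }
    wikitext = requests.get(API, params=params, timeout=30).json()["parse"]["wikitext"]
    # Event entries on these pages are usually top-level bullets ("* 1 January – ...").
    bullets = [line[1:].strip() for line in wikitext.splitlines()
               if line.startswith("*") and not line.startswith("**")]
    # Strip wiki-link markup like [[target|label]] -> label.
    return [re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", b) for b in bullets]

if __name__ == "__main__":
    events = fetch_retrospective_events("2023 in Brazil")
    print(f"{len(events)} candidate events")
    # Each event line would then be handed to an LLM to draft a question-answer pair.
    print(events[:3])
```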
The interplay between domain specialization and model size: a case study in the legal domain
Junior, Roseval Malaquias, Pires, Ramon, Almeida, Thales Sales, Sakiyama, Kenzo, Romero, Roseli, Nogueira, Rodrigo
Scaling laws for language models have so far focused on finding the compute-optimal model size and token count for training from scratch. However, achieving this optimal balance requires significant compute resources due to the extensive data demands of training models from randomly initialized weights. Continual pre-training offers a cost-effective alternative, leveraging the compute investment from pre-trained models to incorporate new knowledge without requiring extensive new data. Recent findings suggest that data quality influences the constants in scaling laws, thereby altering the optimal parameter-token allocation ratio. Building on this insight, we investigate the interplay between domain specialization and model size during continual pre-training under compute-constrained scenarios. Our goal is to identify a compute-efficient training regime for this scenario and, potentially, detect patterns in this interplay that generalize across different model sizes and domains. To compare general and specialized training, we filtered a web-based dataset to extract legal domain data. We pre-trained models with 1.5B, 3B, 7B, and 14B parameters on both the unfiltered and filtered datasets, then evaluated their performance on legal exams. Results show that as model size increases, the compute-effectiveness gap between specialized and general models widens.
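The abstract mentions filtering a web corpus for legal-domain text but does not describe the filter. The sketch below is a deliberately crude stand-in (a Portuguese legal-keyword counter with an arbitrary threshold), shown only to make the filtering step concrete; the term list and threshold are assumptions, not the paper's method.

```python
# Illustrative stand-in for a legal-domain filter over a web corpus (not the paper's filter).
from typing import Iterable, Iterator

LEGAL_TERMS = ("tribunal", "acórdão", "jurisprudência", "sentença",
               "código civil", "código penal", "habeas corpus")

def is_legal(text: str, min_hits: int = 3) -> bool:
    """Flag a document as legal-domain if it contains enough legal terms."""
    lowered = text.lower()
    return sum(lowered.count(term) for term in LEGAL_TERMS) >= min_hits

def filter_legal(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents that look like legal-domain text."""
    for doc in docs:
        if is_legal(doc):
            yield doc
```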
Sabiá-3 Technical Report
Abonizio, Hugo, Almeida, Thales Sales, Laitz, Thiago, Junior, Roseval Malaquias, Bonás, Giovana Kerche, Nogueira, Rodrigo, Pires, Ramon
This technical report presents the details of the development and evaluation of the Sabiá-3 and Sabiazinho-3 models. We trained them on a large corpus of documents written in Portuguese, with a special focus on Brazil-related resources. Through training, the models were exposed to information relevant to Brazilian culture, history, and context. The main objective was to build a specialized model that is aware of the linguistic nuances, societal norms, and regional variations unique to the country. Throughout this report, we show that this specialization allows the models to perform better on knowledge-intensive tasks. We applied a continual learning approach by leveraging a "generalist" model that had already acquired some level of language understanding and reasoning ability, and then further trained it on our corpus of high-quality data relevant to the Brazilian context. The development consisted of two main phases: (1) the pre-training phase, in which we further train a pre-trained model on specialized data following a self-supervised learning strategy that optimizes the next-token prediction objective, and (2) the post-training phase, in which the model is tuned to follow instructions and align with human preferences. Compared to our previous release, Sabiá-2 [5], we collected a significantly larger volume of data for pre-training.
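As a concrete reference for the pre-training phase, the snippet below sketches the generic next-token prediction (causal language modeling) loss mentioned above. It is a textbook PyTorch formulation, not the Sabiá-3 training code.

```python
# Minimal sketch of the self-supervised next-token prediction objective (causal LM loss).
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); input_ids: (batch, seq_len)."""
    # Predict token t+1 from positions <= t: drop the last logit, drop the first label.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```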
MLissard: Multilingual Long and Simple Sequential Reasoning Benchmarks
Bueno, Mirelle, Lotufo, Roberto, Nogueira, Rodrigo
Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require the repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when the lists have 80 items. In this paper, we introduce MLissard, a multilingual benchmark designed to evaluate models' abilities to process and generate texts of varied lengths, offering a mechanism for controlling sequence complexity. Our evaluation of open-source and proprietary models shows a consistent decline in performance across all models and languages as the complexity of the sequence increases. Surprisingly, the use of in-context examples in languages other than English significantly improves extrapolation performance. The datasets and code are available at https://github.com/unicamp-dl/Lissard
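To make the failure mode concrete, here is a sketch of the kind of length-controlled probe described above, built around the "common items in two lists" task. The prompt wording and item names are illustrative; MLissard's actual templates and languages are in the linked repository.

```python
# Sketch of a length-controlled "common items in two lists" probe.
import random

def make_common_items_example(n_items: int, seed: int = 0):
    rng = random.Random(seed)
    pool = [f"item{i}" for i in range(10 * n_items)]
    a = rng.sample(pool, n_items)
    # Force a small overlap so the gold answer is never empty.
    shared = rng.sample(a, max(1, n_items // 10))
    rest = rng.sample([p for p in pool if p not in a], n_items - len(shared))
    b = shared + rest
    rng.shuffle(b)
    gold = sorted(set(a) & set(b))
    prompt = (f"List A: {', '.join(a)}\n"
              f"List B: {', '.join(b)}\n"
              "Which items appear in both lists?")
    return prompt, gold

# Difficulty is controlled purely by n_items (e.g. sweep 20 -> 80 and track accuracy).
prompt, gold = make_common_items_example(n_items=20)
```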
ptt5-v2: A Closer Look at Continued Pretraining of T5 Models for the Portuguese Language
Piau, Marcos, Lotufo, Roberto, Nogueira, Rodrigo
Despite advancements in Natural Language Processing (NLP) and the growing availability of pretrained models, the English language remains the primary focus of model development. Continued pretraining on language-specific corpora provides a practical solution for adapting models to other languages. However, the impact of different pretraining settings on downstream tasks remains underexplored. This work introduces ptt5-v2, investigating the continued pretraining of T5 models for Portuguese. We first develop a baseline set of settings and pretrain models with sizes up to 3B parameters. Finetuning on three Portuguese downstream tasks (assin2 STS, assin2 RTE, and TweetSentBR) yields SOTA results on the latter two. We then explore the effects of different pretraining configurations, including quality filters, optimization strategies, and multi-epoch pretraining. Perhaps surprisingly, their impact remains subtle compared to our baseline. We release ptt5-v2 pretrained checkpoints and the finetuned MonoT5 rerankers on HuggingFace at https://huggingface.co/collections/unicamp-dl/ptt5-v2-666538a650188ba00aa8d2d0 and https://huggingface.co/collections/unicamp-dl/monoptt5-66653981877df3ea727f720d.
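For readers unfamiliar with the objective being continued here, the following is a minimal sketch of T5-style span corruption, illustrated over whitespace tokens. The real setup operates on SentencePiece tokens with roughly 15% noise density and a mean span length of 3; the toy hyperparameters and example sentence below are illustrative only.

```python
# Toy sketch of T5 span corruption: masked spans become sentinel tokens in the input,
# and the target lists each sentinel followed by the dropped span.
import random

def span_corrupt(words, noise_density=0.15, mean_span=3, seed=0):
    rng = random.Random(seed)
    budget = max(1, int(len(words) * noise_density))  # tokens left to corrupt
    inputs, targets, i, sid = [], [], 0, 0
    while i < len(words):
        if budget > 0 and rng.random() < noise_density:
            span = min(rng.randint(1, 2 * mean_span - 1), budget, len(words) - i)
            sentinel = f"<extra_id_{sid}>"
            inputs.append(sentinel)
            targets.append(sentinel)
            targets.extend(words[i:i + span])
            i += span
            budget -= span
            sid += 1
        else:
            inputs.append(words[i])
            i += 1
    targets.append(f"<extra_id_{sid}>")  # closing sentinel
    return " ".join(inputs), " ".join(targets)

src, tgt = span_corrupt("o modelo foi treinado em um grande corpus em português".split())
```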
Measuring Cross-lingual Transfer in Bytes
de Souza, Leandro Rodrigues, Almeida, Thales Sales, Lotufo, Roberto, Nogueira, Rodrigo
Multilingual pretraining has been a successful solution to the challenges posed by the lack of resources for many languages. These models can transfer knowledge to target languages with minimal or no examples. Recent research suggests that monolingual models also have a similar capability, but the mechanisms behind this transfer remain unclear. Some studies have explored factors like language contamination and syntactic similarity. An emerging line of research suggests that the representations learned by language models contain two components: a language-specific and a language-agnostic component, the latter being responsible for transferring more universal knowledge. However, there is a lack of comprehensive exploration of these properties across diverse target languages. To investigate this hypothesis, we conducted an experiment inspired by the work on Scaling Laws for Transfer. We measured the amount of data transferred from a source language to a target language and found that models initialized from diverse source languages perform similarly on a given target language in a cross-lingual setting. This was surprising because the amount of data transferred to 10 diverse target languages, such as Spanish, Korean, and Finnish, was quite similar. We also found evidence that this transfer is not related to language contamination or language proximity, which strengthens the hypothesis that the models also rely on language-agnostic knowledge. Our experiments have opened up new possibilities for measuring how much data the language-agnostic representations learned during pretraining represent.
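A hedged sketch of the measurement idea, following the Scaling Laws for Transfer framing: the effective data transferred is the extra amount of from-scratch training data a model would need to match the loss reached when initializing from another language. The loss-curve values below are placeholders, not results from the paper.

```python
# Sketch: estimate effective data transferred by interpolating a from-scratch loss curve.
import numpy as np

def effective_data_transferred(scratch_tokens, scratch_loss, finetune_tokens, finetune_loss):
    """Interpolate the from-scratch loss curve (in log-token space) at the finetuned loss."""
    # np.interp needs increasing x; loss decreases with tokens, so flip both arrays.
    log_tokens_at_loss = np.interp(finetune_loss,
                                   scratch_loss[::-1],
                                   np.log(scratch_tokens)[::-1])
    # Effective total data minus the data actually used for finetuning.
    return float(np.exp(log_tokens_at_loss)) - finetune_tokens

scratch_tokens = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
scratch_loss = np.array([3.9, 3.5, 3.1, 2.8, 2.6])  # placeholder loss values
d_t = effective_data_transferred(scratch_tokens, scratch_loss,
                                 finetune_tokens=1e9, finetune_loss=2.7)
```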
Juru: Legal Brazilian Large Language Model from Reputable Sources
Junior, Roseval Malaquias, Pires, Ramon, Romero, Roseli, Nogueira, Rodrigo
The high computational cost associated with pretraining large language models limits research on them. Two strategies have emerged to address this issue: domain specialization and pretraining with high-quality data. To explore these strategies, we specialized the Sabiá-2 Small model with 1.9 billion unique tokens from reputable Brazilian legal sources and conducted few-shot evaluations on legal and general knowledge exams. Our model, Juru, demonstrates the benefits of domain specialization with a reduced amount of pretraining data. However, this specialization comes at the expense of degraded performance in other knowledge areas within the same language. This study contributes to the growing body of scientific evidence showing that pretraining data selection may enhance the performance of large language models, enabling the exploration of these models at a lower cost.
Sabiá-2: A New Generation of Portuguese Large Language Models
Almeida, Thales Sales, Abonizio, Hugo, Nogueira, Rodrigo, Pires, Ramon
We introduce Sabiá-2, a family of large language models trained on Portuguese texts. The models are evaluated on a diverse range of exams, including entry-level tests for Brazilian universities, professional certification exams, and graduate-level exams for various disciplines such as accounting, economics, engineering, law and medicine. Our results reveal that our best model so far, Sabiá-2 Medium, matches or surpasses GPT-4's performance in 23 out of 64 exams and outperforms GPT-3.5 in 58 out of 64 exams. Notably, specialization has a significant impact on a model's performance without the need to increase its size, allowing us to offer Sabiá-2 Medium at a price per token that is 10 times cheaper than GPT-4. Finally, we identified that math and coding are key abilities that need improvement.
Lissard: Long and Simple Sequential Reasoning Datasets
Bueno, Mirelle, Lotufo, Roberto, Nogueira, Rodrigo
The efficacy of language models, particularly in reasoning tasks, is significantly degraded by text lengths longer than those seen during training [19, 2, 15]. This phenomenon, referred to as "Length Generalization" or "Length Extrapolation" in the literature [25, 30], is also common in models based on the Transformer architecture [20, 16, 8, 32]. Notably, even Large Language Models (LLMs), known for their strong performance across a wide range of tasks and domains, are not immune to this problem [2, 5]. Recent research has tried to address this challenge through modifications to the positional embeddings [25, 6, 7, 19, 13] or through prompting strategies such as scratchpad [23] and chain-of-thought reasoning [28]. Nevertheless, there remains a lack of datasets specifically designed for the systematic evaluation of this problem.
ExaRanker-Open: Synthetic Explanation for IR using Open-Source LLMs
Ferraretto, Fernando, Laitz, Thiago, Lotufo, Roberto, Nogueira, Rodrigo
ExaRanker recently introduced an approach to training information retrieval (IR) models that incorporates natural language explanations as additional labels. The method addresses the challenge of limited labeled examples, leading to improvements in the effectiveness of IR models. However, the initial results were based on proprietary language models such as GPT-3.5, which constrained dataset size due to cost and data-privacy concerns. In this paper, we introduce ExaRanker-Open, in which we adapt and explore the use of open-source language models to generate explanations. The method was tested with different LLMs and dataset sizes to better understand the contribution of data augmentation. Our findings reveal that incorporating explanations consistently enhances neural rankers, with benefits growing as the LLM size increases. Notably, the data augmentation method proves advantageous even with large datasets, as evidenced by ExaRanker surpassing the target baseline by 0.6 nDCG@10 points in our study.
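As a rough illustration of the explanation-as-additional-label recipe (not the exact ExaRanker-Open prompts or target format), an open-source LLM can be asked to justify a known relevance judgment, and the reranker is then trained to generate the label followed by that rationale. The prompt wording and target string below are assumptions made for the example.

```python
# Sketch: build an explanation-generation prompt and an explanation-augmented target.
def explanation_prompt(query: str, passage: str, label: str) -> str:
    """Prompt an LLM to justify a known relevance label for a query-passage pair."""
    relevance = "relevant" if label == "true" else "not relevant"
    return (
        f"Question: {query}\n"
        f"Passage: {passage}\n"
        f"The passage is {relevance} to the question. Explain why in one sentence."
    )

def reranker_target(label: str, explanation: str) -> str:
    """Seq2seq target: relevance label first, then the generated rationale."""
    return f"{label}. Explanation: {explanation}"
```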