Collaborating Authors

 Alfassy, Amit


Augmenting In-Context-Learning in LLMs via Automatic Data Labeling and Refinement

arXiv.org Artificial Intelligence

The past decade has seen a renaissance in the Machine Learning (ML) domain with the rise of neural networks, which continue to push performance limits at a rapid pace. Until recently, the common training paradigm was based on task-specific models, each trained on a separate dataset for a given task, e.g. classification [Krizhevsky et al., 2012], detection [Redmon et al., 2016], summarization [Nallapati et al., 2016], and translation [Vaswani et al., 2017]. Today, we see the rise of Foundation Models [Bommasani et al., 2021], largely based on Large Language Models (LLMs), which exhibit several interesting emergent properties, including In-Context-Learning (ICL) and Chain-of-Thought (CoT) inference. ICL is an approach in which the model's behavior is modulated through the model's input, i.e. the context. This context can include information that is required to answer a desired query. This concept is extremely useful in several pipelines, for example in Retrieval-Augmented Generation (RAG) [Lewis et al., 2020] systems. In other cases, the context can include several examples of input-output pairs that outline the model's expected behavior.

[Figure 1: From an input-output dataset with no intermediate steps (CoT/Executable programs), ADLR generates examples with such steps.]
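A minimal sketch of the ICL idea described above: the task is specified entirely through the prompt by concatenating a few input-output demonstrations ahead of the query. The helper function, the demonstration pairs, and the sentiment task are illustrative assumptions, not part of the ADLR paper; any causal LLM could consume the resulting prompt.

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate few-shot input-output pairs and a new query into one ICL prompt."""
    lines = []
    for inp, out in demonstrations:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # The final line leaves "Output:" open for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical demonstrations for a sentiment task; the model infers the task
# purely from these examples in its context, with no weight updates.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this product.", "negative"),
]
prompt = build_icl_prompt(demos, "The service was slow but the food was great.")
print(prompt)  # Feed this prompt to any LLM of choice.
```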


BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

arXiv.org Artificial Intelligence

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
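Since the abstract notes that the models and code are publicly released, a minimal usage sketch is shown below. It assumes the Hugging Face `transformers` library and the published checkpoints; the smaller `bigscience/bloom-560m` variant is used here so the example runs on modest hardware, with the full 176B model available as `bigscience/bloom`.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed checkpoint name; swap for "bigscience/bloom" to load the full 176B model.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Simple greedy continuation of a prompt with the decoder-only Transformer.
inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```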