Korat, Daniel
Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies
Timor, Nadav, Mamou, Jonathan, Korat, Daniel, Berchansky, Moshe, Pereg, Oren, Jain, Gaurav, Schwartz, Roy, Wasserblat, Moshe, Harel, David
Accelerating the inference of large language models (LLMs) is a critical challenge in generative AI. Speculative decoding (SD) methods offer substantial efficiency gains by generating multiple tokens using a single target forward pass. However, existing SD approaches require the drafter and target models to share the same vocabulary, thus limiting the pool of possible drafters, often necessitating the training of a drafter from scratch. We present three new SD methods that remove this shared-vocabulary constraint. All three methods preserve the target distribution (i.e., they are lossless) and work with off-the-shelf models without requiring additional training or modifications. Empirically, on summarization, programming, and long-context tasks, our algorithms achieve significant speedups over standard autoregressive decoding. By enabling any off-the-shelf model to serve as drafter and requiring no retraining, this work substantially broadens the applicability of the SD framework in practice.
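The methods above generalize the standard lossless acceptance rule of speculative decoding to heterogeneous vocabularies. As background, here is a minimal sketch of that baseline rule for a shared vocabulary (the Leviathan-style accept/resample step, not the paper's new algorithms): a drafted token is accepted with probability min(1, p/q), and on rejection a token is resampled from the normalized residual max(0, p − q), which preserves the target distribution exactly.

```python
import random

def accept_or_resample(p, q, drafted, rng):
    """One lossless acceptance step of shared-vocabulary speculative decoding.

    p:       target-model probabilities over the vocabulary
    q:       drafter probabilities over the same vocabulary
    drafted: token index proposed by the drafter
    rng:     a random.Random instance

    Accept the drafted token with probability min(1, p/q); otherwise
    resample from the normalized residual max(0, p - q). The returned
    token is distributed exactly according to p (losslessness).
    """
    accept_prob = min(1.0, p[drafted] / q[drafted])
    if rng.random() < accept_prob:
        return drafted
    # Rejection: sample from the residual distribution max(0, p - q), renormalized.
    residual = [max(pi - qi, 0.0) for pi, qi in zip(p, q)]
    total = sum(residual)
    weights = [r / total for r in residual]
    return rng.choices(range(len(p)), weights=weights)[0]
```

In a full decoding loop this step runs once per drafted token; the first rejection truncates the draft and the resampled token replaces it.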
Distributed Speculative Inference of Large Language Models
Timor, Nadav, Mamou, Jonathan, Korat, Daniel, Berchansky, Moshe, Pereg, Oren, Wasserblat, Moshe, Galanti, Tomer, Gordon, Michal, Harel, David
Accelerating the inference of large language models (LLMs) is an important challenge in artificial intelligence. This paper introduces distributed speculative inference (DSI), a novel distributed inference algorithm that is provably faster than speculative inference (SI) [Leviathan et al., 2023, Chen et al., 2023, Miao et al., 2023] and traditional autoregressive inference (non-SI). Like other SI algorithms, DSI works on frozen LLMs, requiring no training or architectural modifications, and it preserves the target distribution. Prior studies on SI have demonstrated empirical speedups (compared to non-SI) but require a fast and accurate drafter LLM. In practice, off-the-shelf LLMs often do not have matching drafters that are sufficiently fast and accurate. We show a gap: SI gets slower than non-SI when using slower or less accurate drafters. We close this gap by proving that DSI is faster than both SI and non-SI, given any drafters. By orchestrating multiple instances of the target and drafters, DSI is not only faster than SI but also supports LLMs that cannot be accelerated with SI. Our simulations show speedups of off-the-shelf LLMs in realistic settings: DSI is 1.29-1.92x faster than SI.
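The gap the abstract describes can be illustrated with a back-of-envelope latency model (an illustrative assumption, not the paper's analysis): under an i.i.d. acceptance model, SI's expected speedup over non-SI drops below 1 when the drafter is slow or inaccurate.

```python
def si_speedup(t_target, t_draft, alpha, k):
    """Expected speedup of speculative inference over plain autoregressive
    decoding, under a simple i.i.d. acceptance model (illustrative only).

    t_target: latency of one target forward pass
    t_draft:  latency of one drafter forward pass
    alpha:    probability each drafted token is accepted (must be < 1)
    k:        speculation lookahead (drafted tokens per iteration)

    Expected tokens produced per SI iteration (accepted prefix plus one
    corrected token): (1 - alpha**(k+1)) / (1 - alpha).
    """
    tokens_per_iter = (1 - alpha ** (k + 1)) / (1 - alpha)
    time_per_iter = k * t_draft + t_target
    si_time_per_token = time_per_iter / tokens_per_iter
    # Non-SI produces one token per t_target.
    return t_target / si_time_per_token
```

With a fast, accurate drafter (e.g. t_draft = 0.1, alpha = 0.8, k = 4) the model predicts a speedup above 1; with a slow, inaccurate one (t_draft = 0.5, alpha = 0.3, k = 4) SI falls below plain autoregressive decoding, which is exactly the regime DSI is designed to close.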
Dynamic Speculation Lookahead Accelerates Speculative Decoding of Large Language Models
Mamou, Jonathan, Pereg, Oren, Korat, Daniel, Berchansky, Moshe, Timor, Nadav, Wasserblat, Moshe, Schwartz, Roy
Speculative decoding is commonly used for reducing the inference latency of large language models. Its effectiveness depends highly on the speculation lookahead (SL)--the number of tokens generated by the draft model at each iteration. In this work we show that the common practice of using the same SL for all iterations (static SL) is suboptimal. We introduce DISCO (DynamIc SpeCulation lookahead Optimization), a novel method for dynamically selecting the SL. Our experiments with four datasets show that DISCO reaches an average speedup of 10% compared to the best static SL baseline.
[Figure 1: An illustration of a single speculative decoding iteration with Speculation Lookahead (SL) = 5.]
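The idea of choosing the SL per iteration can be sketched with a toy stopping rule (a simplification for illustration, not DISCO's actual learned decision mechanism): keep drafting while the draft model remains confident, so easy stretches of text get long lookaheads and hard ones get short ones.

```python
def dynamic_lookahead(confidences, threshold=0.6, max_sl=10):
    """Toy dynamic speculation-lookahead rule (NOT DISCO's method):
    keep drafting while the draft model's top-token probability stays
    above a threshold, up to max_sl tokens.

    confidences: per-step top-token probabilities from the drafter
    threshold:   minimum confidence to continue drafting (hypothetical value)
    Returns the number of tokens to draft this iteration; 0 means fall
    back to a plain target-model step.
    """
    sl = 0
    for conf in confidences[:max_sl]:
        if conf < threshold:
            break
        sl += 1
    return sl
```

A static-SL policy would always draft, say, 5 tokens; the dynamic rule spends drafting budget only where the drafter is likely to be accepted.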
ABSApp: A Portable Weakly-Supervised Aspect-Based Sentiment Extraction System
Pereg, Oren, Korat, Daniel, Wasserblat, Moshe, Mamou, Jonathan, Dagan, Ido
We present ABSApp, a portable system for weakly-supervised aspect-based sentiment extraction. The system is interpretable and user-friendly, and does not require labeled training data; hence it can be rapidly and cost-effectively used across different domains in applied setups. The system flow includes three stages: First, it generates domain-specific aspect and opinion lexicons based on an unlabeled dataset; second, it enables the user to view and edit those lexicons (weak supervision); and finally, it enables the user to select an unlabeled target dataset from the same domain, classify it, and generate an aspect-based sentiment report. ABSApp has been successfully used in a number of real-life use cases, among them movie review analysis and convention impact analysis.
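The final classification stage can be sketched as a minimal lexicon-matching step (a hypothetical simplification, not ABSApp's actual classifier): pair each aspect term found in a sentence with the polarity of any co-occurring opinion term.

```python
def extract_aspect_sentiment(sentence, aspect_lexicon, opinion_lexicon):
    """Minimal sketch of lexicon-based aspect-sentiment pairing
    (an illustrative simplification of the classification stage).

    aspect_lexicon:  set of aspect terms, e.g. {"plot", "acting"}
    opinion_lexicon: dict mapping opinion terms to "POS" / "NEG"
    Returns a list of (aspect, polarity) pairs found in the sentence.
    """
    tokens = sentence.lower().split()
    aspects = [t for t in tokens if t in aspect_lexicon]
    polarities = [opinion_lexicon[t] for t in tokens if t in opinion_lexicon]
    # Pair every aspect mention with every opinion polarity in the sentence.
    return [(a, pol) for a in aspects for pol in polarities]
```

In the full system the lexicons themselves come from stage one (automatic acquisition) and stage two (user editing), which is where the weak supervision enters.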
Term Set Expansion based NLP Architect by Intel AI Lab
Mamou, Jonathan, Pereg, Oren, Wasserblat, Moshe, Eirew, Alon, Green, Yael, Guskin, Shira, Izsak, Peter, Korat, Daniel
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class. SetExpander implements an iterative end-to-end workflow. It enables users to easily select a seed set of terms, expand it, view the expanded set, validate it, re-expand the validated set and store it, thus simplifying the extraction of domain-specific fine-grained semantic classes. SetExpander has been used successfully in real-life use cases including integration into an automated recruitment system and an issues and defects resolution system. A video demo of SetExpander is available at https://drive.google.com/open?id=1e545bB87Autsch36DjnJHmq3HWfSd1Rv (some images were blurred for privacy reasons)
Term Set Expansion based on Multi-Context Term Embeddings: an End-to-end Workflow
Mamou, Jonathan, Pereg, Oren, Wasserblat, Moshe, Dagan, Ido, Goldberg, Yoav, Eirew, Alon, Green, Yael, Guskin, Shira, Izsak, Peter, Korat, Daniel
We present SetExpander, a corpus-based system for expanding a seed set of terms into a more complete set of terms that belong to the same semantic class. SetExpander implements an iterative end-to-end workflow for term set expansion. It enables users to easily select a seed set of terms, expand it, view the expanded set, validate it, re-expand the validated set and store it, thus simplifying the extraction of domain-specific fine-grained semantic classes. SetExpander has been used for solving real-life use cases including integration in an automated recruitment system and an issues and defects resolution system. A video demo of SetExpander is available at https://drive.google.com/open?id=1e545bB87Autsch36DjnJHmq3HWfSd1Rv (some images were blurred for privacy reasons).
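One expansion step of such a workflow can be sketched as centroid-based ranking (an illustrative baseline, not SetExpander's multi-context term embeddings): score each candidate term by cosine similarity to the centroid of the seed-term embeddings and return the top candidates.

```python
import math

def expand_term_set(seed, embeddings, top_k=3):
    """Sketch of one term-set-expansion step (illustrative only):
    rank non-seed terms by cosine similarity to the seed centroid.

    seed:       set of seed terms, all present in `embeddings`
    embeddings: dict mapping term -> vector (list of floats)
    Returns the top_k highest-scoring non-seed terms.
    """
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    dim = len(next(iter(embeddings.values())))
    # Centroid of the seed-term vectors.
    centroid = [sum(embeddings[t][i] for t in seed) / len(seed) for i in range(dim)]
    candidates = [t for t in embeddings if t not in seed]
    candidates.sort(key=lambda t: cosine(embeddings[t], centroid), reverse=True)
    return candidates[:top_k]
```

In the iterative workflow described above, the user validates the returned terms, folds them back into the seed set, and re-expands.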