 Fang, Wenyi


Numerical Error Analysis of Large Language Models

arXiv.org Machine Learning

Large language models based on transformer architectures have become integral to state-of-the-art natural language processing applications. However, their training remains computationally expensive and exhibits instabilities, some of which are expected to be caused by finite-precision computations. We provide a theoretical analysis of the impact of round-off errors within the forward pass of a transformer architecture which yields fundamental bounds for these effects. In addition, we conduct a series of numerical experiments which demonstrate the practical relevance of our bounds. Our results yield concrete guidelines for choosing hyperparameters that mitigate round-off errors, leading to more robust and stable inference.
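The abstract does not reproduce the paper's bounds; as a minimal illustration of the effect it studies, the following NumPy sketch compares a softmax (the core nonlinearity in transformer attention) evaluated in half and single precision against a double-precision reference. The function and variable names are ours, not the paper's.

```python
import numpy as np

def softmax(x, dtype):
    # Cast inputs to the working precision before any arithmetic,
    # so round-off accumulates as it would in a low-precision model.
    x = x.astype(dtype)
    # Subtracting the max is the standard stabilization; in low
    # precision it also limits the magnitudes being exponentiated.
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
logits = rng.normal(scale=10.0, size=256)

# Double precision serves as the "exact" reference.
ref = softmax(logits, np.float64)
err_fp32 = np.abs(softmax(logits, np.float32).astype(np.float64) - ref).max()
err_fp16 = np.abs(softmax(logits, np.float16).astype(np.float64) - ref).max()

print(f"max abs error fp32: {err_fp32:.2e}")
print(f"max abs error fp16: {err_fp16:.2e}")
```

The gap between the two errors (several orders of magnitude for logits of this scale) is the kind of forward-pass round-off effect the paper bounds theoretically.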


PLPP: Prompt Learning with Perplexity Is Self-Distillation for Vision-Language Models

arXiv.org Artificial Intelligence

Pre-trained Vision-Language (VL) models such as CLIP have demonstrated excellent performance across numerous downstream tasks. A recent method, Context Optimization (CoOp), further improves the performance of VL models on downstream tasks by introducing prompt learning. CoOp optimizes a set of learnable vectors, a.k.a. the prompt, and freezes the whole CLIP model. However, relying solely on the CLIP loss to fine-tune prompts can lead to models that are prone to overfitting on downstream tasks. To address this issue, we propose a plug-in prompt-regularization method called PLPP (Prompt Learning with PerPlexity), which uses a perplexity loss to regularize prompt learning. PLPP designs a two-step operation to compute the perplexity for prompts: (a) calculating the cosine similarity between the weights of the embedding layer and the prompts to obtain labels; (b) introducing a language model (LM) head, which requires no training, behind the text encoder to output a word probability distribution. Meanwhile, we unveil that PLPP is inherently a form of self-distillation. To further prevent overfitting and to reduce the additional computation introduced by PLPP, we turn the hard labels into soft labels and choose the top-$k$ values for calculating the perplexity loss. To accelerate model convergence, we introduce mutual self-distillation learning, that is, perplexity and inverted perplexity losses. Experiments conducted on four classification tasks indicate that PLPP exhibits superior performance compared to existing methods.
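Using only the description above, the two-step perplexity computation can be sketched as follows; the shapes, names, and the exact top-$k$ soft-label handling are our assumptions, with random NumPy arrays standing in for the frozen CLIP embedding matrix and LM head outputs.

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def plpp_loss(prompts, emb_weight, lm_logits, k=5):
    """Sketch of the PLPP perplexity loss (hypothetical shapes/names).

    prompts:    (L, d)  learnable prompt vectors
    emb_weight: (V, d)  frozen token-embedding matrix
    lm_logits:  (L, V)  output of the frozen LM head on the prompts
    """
    # Step (a): cosine similarity between prompts and embedding rows
    # yields a (soft) label distribution over the vocabulary.
    p = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
    w = emb_weight / np.linalg.norm(emb_weight, axis=1, keepdims=True)
    labels = softmax(p @ w.T)              # (L, V)

    # Soft labels restricted to the top-k entries (renormalized),
    # mirroring the soft-label/top-k variant meant to cut computation.
    topk = np.argsort(labels, axis=1)[:, -k:]
    mask = np.zeros_like(labels)
    np.put_along_axis(mask, topk, 1.0, axis=1)
    labels = labels * mask
    labels /= labels.sum(axis=1, keepdims=True)

    # Step (b): the LM head's word distribution; the perplexity loss is
    # the cross-entropy between soft labels and that distribution.
    log_q = np.log(softmax(lm_logits) + 1e-12)
    return float(-(labels * log_q).sum(axis=1).mean())

rng = np.random.default_rng(0)
L, V, d = 4, 50, 16
loss = plpp_loss(rng.normal(size=(L, d)),
                 rng.normal(size=(V, d)),
                 rng.normal(size=(L, V)))
print(loss)
```

In the actual method this scalar would be added, suitably weighted, to the CLIP loss while only the prompt vectors receive gradients.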


Internally Stable Matchings and Exchanges

AAAI Conferences

Stability is a central concept in exchange-based mechanism design. It imposes a fundamental requirement that no subset of agents could beneficially deviate from the outcome prescribed by the mechanism. However, deploying stability in an exchange mechanism presents at least two challenges. First, it reduces social welfare and sometimes prevents the mechanism from producing a solution. Second, it might incur computational cost to clear the mechanism. In this paper, we propose an alternative notion of stability, coined internal stability, under which we analyze the social welfare bounds and computational complexity. Our contributions are as follows: for both pairwise matchings and limited-length exchanges, and for both unweighted and weighted graphs, (1) we prove desirable tight social welfare bounds; (2) we analyze the computational complexity of clearing the matchings and exchanges. Extensive experiments on the kidney exchange domain demonstrate that the optimal welfare under internal stability is very close to the unconstrained optimal.
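The abstract does not define internal stability precisely; as background for the classical notion it relaxes, here is a minimal sketch of a blocking-pair check for a weighted pairwise matching (the setting of the paper's first contribution). The representation and the convention that unmatched agents get utility 0 are our assumptions.

```python
import itertools

def blocking_pairs(weights, matching):
    """Classical blocking-pair check for a weighted pairwise matching.

    weights:  dict mapping frozenset({u, v}) -> edge weight
    matching: set of frozensets, the matched pairs
    Returns the pairs (u, v) whose edge both agents strictly prefer
    to their current matches (unmatched agents get utility 0).
    """
    # Utility each agent derives from its current match.
    utility = {}
    for pair in matching:
        for agent in pair:
            utility[agent] = weights[pair]

    blocking = []
    agents = {a for pair in weights for a in pair}
    for u, v in itertools.combinations(sorted(agents), 2):
        e = frozenset({u, v})
        if e in matching or e not in weights:
            continue
        if weights[e] > utility.get(u, 0) and weights[e] > utility.get(v, 0):
            blocking.append((u, v))
    return blocking

# A triangle where classical stability costs welfare: the only stable
# matching is {a, b} (weight 3), which leaves c unmatched.
weights = {frozenset("ab"): 3, frozenset("bc"): 2, frozenset("ac"): 1}
print(blocking_pairs(weights, {frozenset("ab")}))  # -> []
print(blocking_pairs(weights, {frozenset("ac")}))  # -> [('a', 'b'), ('b', 'c')]
```

The triangle example shows the welfare tension the abstract describes: insisting that no pair blocks can force the mechanism away from higher-welfare outcomes, motivating relaxed notions such as internal stability.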