





Appendices: A. Bernoulli-CRS Properties

Neural Information Processing Systems

Let us define K ∈ ℝ^{n×n}, a random diagonal sampling matrix with K_{j,j} ~ Bernoulli(p_j) for 1 ≤ j ≤ n. Therefore, Bernoulli-CRS performs on average the same amount of computation as fixed-rank CRS. This formulation immediately hints at the possibility of sampling over the input-channel dimension, similarly to sampling column-row pairs in matrices. Let ℓ be a β-Lipschitz loss function, and let the network be trained with SGD using a properly decreasing learning rate. Let us denote the weight, bias, and activation gradients of a loss function ℓ by ∇W_l, ∇b_l, and ∇a_l, respectively.
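The sampling scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: each column-row pair j is kept independently with probability p_j (the diagonal of K) and rescaled by 1/p_j so the estimate of the product stays unbiased. The function name `bernoulli_crs_matmul` is my own label for the sketch.

```python
import numpy as np

def bernoulli_crs_matmul(A, B, p, rng=None):
    """Unbiased approximation of A @ B via Bernoulli column-row sampling.

    Pair j is kept with probability p[j] and rescaled by 1/p[j],
    so the estimator's expectation equals A @ B.
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(len(p)) < p   # diagonal of the random sampling matrix K
    scale = keep / p                # K_jj / p_j
    # Scaling A's columns by `scale` is equivalent to A @ diag(scale) @ B.
    return (A * scale) @ B

# Small usage sketch: averaging many draws approaches the exact product.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 3))
p = np.full(6, 0.5)
est = np.mean([bernoulli_crs_matmul(A, B, p, rng=s) for s in range(500)], axis=0)
```

With p_j = 1 for all j, every pair is kept and the exact product is recovered, which is a convenient sanity check.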


No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling (Supplementary Appendix, May 10, 2023), A. Omitted Proofs

Neural Information Processing Systems

We now prove the first bullet. This is a contradiction, so in fact c ≤ κ. The first claim of the second bullet is analogous. To do so, we note the following technical lemma (proof below). To prove (#1), we proceed by induction on t.


Appendix Outline

Neural Information Processing Systems

Hence, we rely on subgradients as defined in Equation 7. Since many subgradient directions exist at the margin points, for consistency we stick with ∂ℓγ(w; (x, y)) = {0} when y⟨w, x⟩ = γ. Note that the set of points in X satisfying this equality is a zero-measure set. For simplicity, we shall treat the projection operation as simply renormalizing w(t+1) to have unit norm, i.e., ‖w(t+1)‖₂ = 1 for all t ≥ 0. This is not necessarily restrictive.

A.1 Technical Lemmas
In this section we state some technical lemmas without proof, with references to works that contain the full proofs. We shall use these in the following sections when proving our lemmas in Section 5.
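The convention above (zero subgradient exactly at the margin, projection by renormalization) can be sketched as follows. This is a generic projected-subgradient step for a margin loss of the form max(0, γ − y⟨w, x⟩), with the function names and the specific loss chosen for illustration; the paper's Equation 7 may differ in detail.

```python
import numpy as np

def margin_subgrad(w, x, y, gamma):
    """Subgradient of the margin loss max(0, gamma - y*<w, x>).

    At the boundary y*<w, x> == gamma many subgradients exist;
    following the text's convention we pick the zero subgradient.
    """
    if y * np.dot(w, x) < gamma:
        return -y * x
    return np.zeros_like(x)

def projected_step(w, x, y, gamma, lr):
    """One subgradient step, then projection by renormalizing to unit norm."""
    w = w - lr * margin_subgrad(w, x, y, gamma)
    return w / np.linalg.norm(w)
```

Renormalizing instead of computing an exact Euclidean projection keeps each iterate on the unit sphere, matching the constraint ‖w(t+1)‖₂ = 1 stated in the text.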



On Theoretical Interpretations of Concept-Based In-Context Learning

Tang, Huaze, Peng, Tianren, Huang, Shao-lun

arXiv.org Artificial Intelligence

In-Context Learning (ICL) has emerged as an important new paradigm in natural language processing and large language model (LLM) applications. However, the theoretical understanding of the ICL mechanism remains limited. This paper aims to investigate this issue by studying a particular ICL approach, called concept-based ICL (CB-ICL). In particular, we propose theoretical analyses of applying CB-ICL to ICL tasks, which explain why and when CB-ICL performs well for predicting query labels in prompts with only a few demonstrations. In addition, the proposed theory quantifies the knowledge that can be leveraged by the LLMs for the prompt tasks, and leads to a similarity measure between the prompt demonstrations and the query input, which provides important insights and guidance for model pre-training and prompt engineering in ICL. Moreover, the impacts of the prompt demonstration size and the dimension of the LLM embeddings on ICL are also explored based on the proposed theory. Finally, several real-data experiments are conducted to validate the practical usefulness of CB-ICL and the corresponding theory. With the great successes of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP) (Brown et al., 2020; Chowdhery et al., 2023; Achiam et al., 2023), where LLMs address the requested queries in context prompts with a few demonstrations.
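The abstract mentions a similarity measure between prompt demonstrations and the query input. As a purely illustrative stand-in (the paper's actual measure is derived from its theory and may be quite different), a cosine-similarity ranking over embeddings shows the general shape of such a demonstration-selection step; all function names here are hypothetical.

```python
import numpy as np

def demo_query_similarity(demo_embs, query_emb):
    """Cosine similarity between each demonstration embedding and the query.

    A generic proxy for the theory-derived similarity measure; rows of
    demo_embs are demonstration embeddings, query_emb is the query embedding.
    """
    demo = demo_embs / np.linalg.norm(demo_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    return demo @ q

def select_demonstrations(demo_embs, query_emb, k):
    """Return indices of the k demonstrations most similar to the query."""
    sims = demo_query_similarity(demo_embs, query_emb)
    return np.argsort(sims)[::-1][:k]
```

In practice the embeddings would come from the LLM itself; here they are just rows of a NumPy array.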