JaccDiv: A Metric and Benchmark for Quantifying Diversity of Generated Marketing Text in the Music Industry

Afzal, Anum, Mercier, Alexandre, Matthes, Florian

arXiv.org Artificial Intelligence

Online platforms are increasingly interested in using Data-to-Text technologies to generate content and help their users. Unfortunately, traditional generative methods often fall into repetitive patterns, resulting in monotonous galleries of texts after only a few iterations. In this paper, we investigate LLM-based data-to-text approaches to automatically generate marketing texts that are of sufficient quality and diverse enough for broad adoption. We leverage language models such as T5, GPT-3.5, GPT-4, and LLaMA-2 in conjunction with fine-tuning, few-shot, and zero-shot approaches to set a baseline for diverse marketing texts. We also introduce JaccDiv, a metric to evaluate the diversity of a set of texts. This research extends its relevance beyond the music industry, proving beneficial in various fields where repetitive automated content generation is prevalent.
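The abstract does not reproduce the metric's formula, but a Jaccard-based diversity score over a set of generated texts can be sketched as follows. This is a minimal sketch in Python, assuming word-level n-grams and that diversity is one minus the mean pairwise Jaccard similarity; the n-gram order and the aggregation are illustrative assumptions, not the paper's definition.

```python
from itertools import combinations

def ngrams(text: str, n: int = 2) -> set:
    """Set of word-level n-grams in a text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A & B| / |A | B| of two n-gram sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def jaccard_diversity(texts: list[str], n: int = 2) -> float:
    """Diversity = 1 - mean pairwise Jaccard similarity over all text pairs.

    Higher values mean the texts share fewer n-grams, i.e., are more diverse.
    Illustrative reconstruction only, not necessarily the paper's exact JaccDiv.
    """
    grams = [ngrams(t, n) for t in texts]
    sims = [jaccard(a, b) for a, b in combinations(grams, 2)]
    return 1.0 - sum(sims) / len(sims)

marketing_texts = [
    "Discover the new album by the artist, out now on all platforms.",
    "Stream the latest single today and follow the artist on tour.",
    "Discover the new album by the artist, streaming everywhere today.",
]
print(f"diversity: {jaccard_diversity(marketing_texts):.3f}")
```

Repetitive galleries of texts drive the mean pairwise similarity up and the score toward zero, which is exactly the failure mode the paper measures.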


Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models

Jain, Neel, Shrivastava, Aditya, Zhu, Chenyang, Liu, Daben, Samuel, Alfy, Panda, Ashwinee, Kumar, Anoop, Goldblum, Micah, Goldstein, Tom

arXiv.org Artificial Intelligence

A key component of building safe and reliable language models is enabling them to appropriately refuse to follow certain instructions or answer certain questions. We may want models to output refusal messages for various categories of user queries, for example, ill-posed questions, instructions for committing illegal acts, or queries which require information past the model's knowledge horizon. Engineering models that refuse to answer such questions is complicated by the fact that an individual may want their model to exhibit varying levels of sensitivity for refusing queries of various categories, and different users may want different refusal rates. The current default approach involves training multiple models with varying proportions of refusal messages from each category to achieve the desired refusal rates, which is computationally expensive and may require training a new model to accommodate each user's desired preference over refusal rates. To address these challenges, we propose refusal tokens, either one such token for each refusal category or a single refusal token, which are prepended to the model's responses during training. We then show how to increase or decrease the probability of generating the refusal token for each category during inference to steer the model's refusal behavior. Refusal tokens enable controlling a single model's refusal rates without any further fine-tuning, requiring only a selective intervention during generation.
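A minimal sketch of the inference-time intervention, assuming the trained model emits its refusal token as the first token of a response: adding a bias to that token's logit at the first decoding step raises or lowers the refusal rate. The token id, vocabulary size, and random logits below are placeholders, not values from the paper.

```python
import torch

REFUSAL_TOKEN_ID = 32000  # hypothetical id of the trained refusal token
VOCAB_SIZE = 32064        # hypothetical vocabulary size

def bias_refusal_logits(logits: torch.Tensor, step: int, bias: float) -> torch.Tensor:
    """Add `bias` to the refusal token's logit at the first response step.

    bias > 0 makes refusals more likely; bias < 0 makes them less likely.
    Later steps are left untouched so normal decoding proceeds.
    """
    if step == 0:
        logits = logits.clone()
        logits[..., REFUSAL_TOKEN_ID] += bias
    return logits

# Stand-in for a model's next-token logits at the start of a response.
logits = torch.randn(1, VOCAB_SIZE)
steered = bias_refusal_logits(logits, step=0, bias=4.0)
p_refuse = torch.softmax(steered, dim=-1)[0, REFUSAL_TOKEN_ID]
print(f"P(refusal token) after biasing: {p_refuse:.4f}")
```

With one token per category, applying the same intervention to each category's token id yields category-specific refusal rates from a single model.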


Backtracking Improves Generation Safety

Zhang, Yiming, Chi, Jianfeng, Nguyen, Hailey, Upasani, Kartikeya, Bikel, Daniel M., Weston, Jason, Smith, Eric Michael

arXiv.org Artificial Intelligence

Text generation has a fundamental limitation almost by definition: there is no taking back tokens that have been generated, even when they are clearly problematic. In the context of language model safety, when a partial unsafe generation is produced, language models by their nature tend to happily keep generating similarly unsafe text. This is in fact how safety alignment of frontier models gets circumvented in the wild (Andriushchenko et al., 2024), despite great efforts to improve their safety. Deviating from the paradigm of approaching safety alignment as prevention (decreasing the probability of harmful responses), we propose backtracking, a technique that allows language models to "undo" and recover from their own unsafe generation through the introduction of a special reset token. Our method can be incorporated into either SFT or DPO training to optimize helpfulness and harmlessness. We show that models trained to backtrack are consistently safer than baseline models: backtracking Llama-3-8B is four times safer than the baseline model (6.1% → 1.5%) in our evaluations, without regression in helpfulness. Our method additionally provides protection against four adversarial attacks, including an adaptive attack, despite not being trained to do so.

Remarkable progress has recently been made in building capable and helpful large language models (Touvron et al., 2023). As capabilities become more powerful, these models also have more potential to cause real societal harm (Kumar et al., 2023). The de facto standard in safety alignment focuses on prevention: training language models to generate safe responses while minimizing the likelihood of unsafe ones, through techniques including SFT (Ouyang et al., 2022) and RLHF (Christiano et al., 2017). Prevention-based safety tuning goes a long way towards building safe language models (Bai et al., 2022a), and yet the safety of production models which have undergone substantial safety tuning (e.g., Claude 3.5 and GPT-4) can still be compromised in the wild by simple attacks (Andriushchenko et al., 2024). The core challenge of safety alignment seems to be that the attack surface induced by a text interface is practically infinite, and model developers have to rely on the generalization of safe behavior from a relatively small (often predominantly English) safety tuning dataset to prevent every failure case.
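A sketch of the decoding loop this implies, assuming a dedicated reset token: whenever the model emits it, the partial response is discarded and generation restarts from the prompt. The toy sampler and token ids below stand in for a real model and are not from the paper.

```python
import random

RESET_TOKEN = -1  # placeholder id for the special reset token
EOS_TOKEN = 0

def next_token(context: list[int]) -> int:
    """Toy stand-in for a model's sampler; a real model would score `context`."""
    return random.choice([RESET_TOKEN, EOS_TOKEN, 1, 2, 3])

def generate_with_backtracking(prompt: list[int], max_len: int = 50,
                               max_resets: int = 5) -> list[int]:
    """Decode tokens; on a reset token, discard the partial response and retry."""
    response: list[int] = []
    resets = 0
    while len(response) < max_len:
        tok = next_token(prompt + response)
        if tok == RESET_TOKEN:
            if resets >= max_resets:
                break  # give up rather than loop forever
            response = []  # undo the (presumed unsafe) partial generation
            resets += 1
            continue
        response.append(tok)
        if tok == EOS_TOKEN:
            break
    return response

print(generate_with_backtracking([10, 11, 12]))
```

In a real deployment the discarded prefix would not be shown to the user; the loop returns only the final, post-reset response.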


AMU-Tuning: Effective Logit Bias for CLIP-based Few-shot Learning

Tang, Yuwei, Lin, Zhenyi, Wang, Qilong, Zhu, Pengfei, Hu, Qinghua

arXiv.org Artificial Intelligence

Recently, pre-trained vision-language models (e.g., CLIP) have shown great potential in few-shot learning and attracted a lot of research interest. Although efforts have been made to improve the few-shot ability of CLIP, the key factors behind the effectiveness of existing methods have not been well studied, limiting further exploration of CLIP's potential in few-shot learning. In this paper, we first introduce a unified formulation that analyzes CLIP-based few-shot learning methods from the perspective of logit bias, which encourages us to learn an effective logit bias for further improving the performance of CLIP-based few-shot learning methods. To this end, we disassemble the three key components involved in the computation of logit bias (i.e., logit features, logit predictor, and logit fusion) and empirically analyze their effect on few-shot classification performance. Based on this analysis, the paper proposes a novel AMU-Tuning method to learn an effective logit bias for CLIP-based few-shot classification. Specifically, AMU-Tuning predicts the logit bias by exploiting appropriate Auxiliary features, which are fed into an efficient feature-initialized linear classifier with Multi-branch training. Finally, an Uncertainty-based fusion is developed to incorporate the logit bias into CLIP for few-shot classification. Experiments are conducted on several widely used benchmarks, and the results show that AMU-Tuning clearly outperforms its counterparts while achieving state-of-the-art performance for CLIP-based few-shot learning without bells and whistles.
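A rough sketch of the logit-bias decomposition the abstract describes, with an uncertainty-weighted fusion. Using the entropy of CLIP's zero-shot prediction as the uncertainty proxy is an illustrative assumption, not necessarily AMU-Tuning's exact rule, and the tensors are random stand-ins.

```python
import math

import torch
import torch.nn.functional as F

def fuse_logits(clip_logits: torch.Tensor, bias_logits: torch.Tensor,
                alpha: float = 1.0) -> torch.Tensor:
    """Fuse CLIP zero-shot logits with a learned logit bias.

    The bias (here, the output of an auxiliary-feature linear classifier) is
    weighted by the normalized entropy of CLIP's prediction: the less certain
    CLIP is, the more the auxiliary branch contributes.
    """
    probs = F.softmax(clip_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1, keepdim=True)
    weight = alpha * entropy / math.log(clip_logits.shape[-1])  # in [0, alpha]
    return clip_logits + weight * bias_logits

# Stand-ins: 4 samples, 10 classes.
clip_logits = torch.randn(4, 10)  # zero-shot CLIP logits
bias_logits = torch.randn(4, 10)  # auxiliary classifier output
fused = fuse_logits(clip_logits, bias_logits)
print(fused.shape)  # torch.Size([4, 10])
```

Final predictions come from `fused.argmax(-1)`, so the auxiliary branch shifts decisions most where CLIP itself is uncertain.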