Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions
Sachin Kumar, Chan Young Park, Yulia Tsvetkov
arXiv.org Artificial Intelligence
Language model (LM) prompting, a popular paradigm for solving NLP tasks, has been shown to be susceptible to miscalibration and brittleness under slight prompt variations, a consequence of its discriminative formulation, i.e., predicting the label given the input. To address these issues, we propose Gen-Z, a generative zero-shot prompting framework that measures the LM likelihood of the input text conditioned on natural language descriptions of the labels. The framework is multivariate, as label descriptions allow us to seamlessly integrate additional contextual information about the labels to improve task performance. On various standard classification benchmarks, with six open-source LM families, we show that zero-shot classification with simple contextualization of the data source of the evaluation set consistently outperforms both zero-shot and few-shot baselines while improving robustness to prompt variations. Further, our approach enables personalizing classification in a zero-shot manner by incorporating author, subject, or reader information in the label descriptions.

Language models, trained only on raw text, have been shown to perform new tasks simply by conditioning on a handful of demonstrations, a paradigm known as in-context learning (ICL; Brown et al., 2020). However, ICL is very sensitive to the choice of training examples and to their order and format in the prompt (Lu et al., 2022; Sorensen et al., 2022), requiring substantial human effort to achieve optimal performance. In this work, we ask: "If the right demonstrations are challenging to find and only serve to implicitly prime the model, can we achieve the same performance zero-shot by priming the language model explicitly in a robust way?"

Our approach consists of two key ideas. First, most text classification methods follow a discriminative setup, estimating the probability of the labels given the input, which can be sensitive to prompt or verbalizer variations. Instead, we use a generative setup, estimating the probability of generating the input given each label, which has been shown to have better worst-case performance (Min et al., 2022a). Second, we express each label as a natural language description, into which contextual information about the data source, author, subject, or reader can be incorporated directly.
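The generative setup described above can be sketched in a few lines: score each candidate label by the log-likelihood of the input text conditioned on that label's description, then pick the highest-scoring label. The sketch below is illustrative, not the paper's implementation; it stands in a toy per-token log-probability table for a real LM (in practice one would sum token log-probabilities from a causal LM such as a Hugging Face model), and the function names `generative_score` and `classify` are hypothetical.

```python
import math

# Toy stand-in for LM token log-probabilities conditioned on a label
# description, P(token | description). A real system would query a causal LM.
TOY_LOGPROBS = {
    ("great", "positive"): math.log(0.60),
    ("great", "negative"): math.log(0.05),
    ("awful", "positive"): math.log(0.05),
    ("awful", "negative"): math.log(0.60),
}

def generative_score(text_tokens, label_description):
    """Sum log P(token | label description) over the input tokens.

    This is the generative direction: the likelihood of generating the
    *input* given the label, rather than the label given the input.
    Unseen (token, description) pairs fall back to a small probability.
    """
    return sum(
        TOY_LOGPROBS.get((tok, label_description), math.log(1e-6))
        for tok in text_tokens
    )

def classify(text_tokens, label_descriptions):
    # Choose the label whose description makes the input most likely.
    return max(label_descriptions,
               key=lambda desc: generative_score(text_tokens, desc))
```

Contextualization then amounts to enriching the description strings themselves, e.g. scoring against "a positive tweet" versus "a positive product review", with no change to the scoring code.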
Nov-13-2023