Fairness-guided Few-shot Prompting for Large Language Models

Neural Information Processing Systems 

However, prior research has shown that in-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats. Constructing an appropriate prompt is therefore essential for improving the performance of in-context learning. In this paper, we revisit this problem from the perspective of predictive bias. Specifically, we introduce a metric to evaluate the predictive bias of a fixed prompt against labels or given attributes.
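The idea of scoring a prompt by its predictive bias can be sketched as follows: probe the model with a content-free input under the fixed prompt, then measure how far the predicted label distribution deviates from uniform. This is an illustrative sketch, not necessarily the paper's exact metric; the function name and the choice of KL divergence as the distance are assumptions.

```python
import math

def predictive_bias(probs):
    """Illustrative predictive-bias score for a fixed prompt (assumed
    form, not the paper's exact definition): the KL divergence between
    the label distribution `probs` predicted on a content-free input
    and the uniform distribution over labels.

    0.0 means the prompt is perfectly fair (no label preference);
    larger values indicate stronger predictive bias."""
    k = len(probs)
    uniform = 1.0 / k
    # KL(probs || uniform); terms with p == 0 contribute nothing.
    return sum(p * math.log(p / uniform) for p in probs if p > 0)

# A prompt whose content-free prediction is uniform has zero bias:
print(predictive_bias([0.5, 0.5]))  # 0.0
# A prompt skewed toward one label receives a positive score:
print(predictive_bias([0.9, 0.1]))  # > 0
```

Under such a metric, prompt construction can be guided by searching for example sets and orderings that minimize the bias score.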
