Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss
Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, Jason Weston
–arXiv.org Artificial Intelligence
Practitioners commonly align large language models using pairwise preferences, i.e., given labels of the type response A is preferred to response B for a given input. Perhaps less commonly, methods have also been developed for binary feedback, i.e., training models given labels of the type response A is good or bad. We show how an existing performant binary feedback method, the Cringe Loss (Adolphs et al., 2022), can be generalized to the pairwise preference setting using a simple soft margin extension. Pairwise Cringe Loss is straightforward to implement and efficient to train, and we find it outperforms state-of-the-art preference optimization algorithms such as PPO.

In particular, the Cringe Loss is a method for binary feedback, which we show can be generalized to the pairwise preference case. The Cringe Loss works as follows: positive examples use the standard likelihood training loss, while for a given negative example it contrasts each token in the negative sequence against other likely tokens, to encourage the negative sequence to no longer be the top-ranked sequence. After training on the initial feedback data, the method is then iterated by labeling data using the improved model, which was shown to improve results further. The Cringe Loss was shown to perform well with binary feedback data compared to competing methods such as SFT, unlikelihood loss and best-of-N reranking (Adolphs et al., 2022), and to improve large-scale dialogue systems (Xu et al., 2023b).
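The token-level mechanism described here (contrasting each token of a negative sequence against another likely token so the negative sequence stops being top-ranked) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function name, the choice to sample the contrast token from the model's top-k, and the NumPy setting are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cringe_token_loss(logits, neg_token, k=5, seed=0):
    """Contrast one token of a negative sequence against a likely
    alternative, discouraging the negative token from being top-ranked.
    `logits` is the model's next-token logit vector at this position.
    (Hypothetical sketch; names and sampling details are assumptions.)"""
    rng = np.random.default_rng(seed)
    # Candidate "positive" tokens: top-k of the model's distribution,
    # excluding the negative token itself.
    order = np.argsort(logits)[::-1]
    topk = [t for t in order if t != neg_token][:k]
    # Sample the contrast token in proportion to the model's own probabilities.
    pos_token = rng.choice(topk, p=softmax(logits[topk]))
    s_pos, s_neg = logits[pos_token], logits[neg_token]
    # Pairwise contrastive (sigmoid) loss: push s_pos above s_neg.
    return -np.log(np.exp(s_pos) / (np.exp(s_pos) + np.exp(s_neg)))
```

Summed over the tokens of a negative sequence, and combined with the standard likelihood loss on positive sequences, this gives the shape of the binary-feedback objective; when the negative token currently outranks every likely alternative, the per-token loss is large, and it shrinks as other tokens overtake it.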
Dec-27-2023