Weak-to-Strong Preference Optimization: Stealing Reward from Weak Aligned Model