Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits - Appendix

Neural Information Processing Systems 

A.1 Detailed Re-alignment Task Formulation and Training Setup

In Figure A1, we show the procedure for converting the data samples in the alignment datasets into training data for AEM (negative samples used in AIL are generated similarly). Special tokens wrap each span so that the LM can know the boundary between Context + Source and Chain-of-Edits (CoE) + Target; our decipher module then translates these special tokens into natural language. For AEM, we fine-tune the LM on the Source-CoE-Target data described above (shown as "Input for AEM" in Figure A1) with the standard language modeling objective, which maximizes the probability of generating the ground-truth token at each decoding step. We train for three epochs per task by default, but stop early when the evaluation loss does not decrease (i.e., plateaus) for five consecutive intermediate evaluation steps.
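To make the data-conversion step concrete, the following is a minimal Python sketch of how a (Source, Target) pair could be turned into a Source-CoE-Target training string. The special-token names (<sep>, <edit>, </edit>, <insert>, <delete>, <replace>, <with>), the word-level diff via Python's difflib, and the helper names are illustrative assumptions, not the paper's exact implementation.

    import difflib

    # Assumed special tokens; the paper's actual token vocabulary may differ.
    SEP = "<sep>"        # separates Context + Source from the Chain of Edits
    BOS_COE = "<edit>"   # opens the Chain-of-Edits (CoE) span
    EOS_COE = "</edit>"  # closes the CoE span, right before the Target

    def chain_of_edits(source, target):
        """Word-level edit operations that rewrite `source` into `target`,
        serialized with special operation tokens."""
        src, tgt = source.split(), target.split()
        ops = []
        for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=src, b=tgt).get_opcodes():
            if tag == "replace":
                ops += ["<replace>", *src[i1:i2], "<with>", *tgt[j1:j2]]
            elif tag == "delete":
                ops += ["<delete>", *src[i1:i2]]
            elif tag == "insert":
                ops += ["<insert>", *tgt[j1:j2]]
            # "equal" spans need no edit operation
        return ops

    def build_aem_example(context, source, target):
        """Assemble one Source-CoE-Target training string ("Input for AEM").
        The special tokens mark the boundary between Context + Source and
        CoE + Target."""
        coe = " ".join(chain_of_edits(source, target))
        return f"{context} {source} {SEP} {BOS_COE} {coe} {EOS_COE} {target}"

The fine-tuning objective itself is the standard causal language modeling loss. Below is a sketch using Hugging Face transformers; "gpt2" is a placeholder base model, and the example texts are invented for illustration.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder base LM
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Register the (assumed) special tokens so they receive dedicated embeddings.
    tok.add_special_tokens({"additional_special_tokens": [
        SEP, BOS_COE, EOS_COE, "<insert>", "<delete>", "<replace>", "<with>"]})
    model.resize_token_embeddings(len(tok))

    text = build_aem_example(
        "User: my neighbor is so noisy.",                  # Context (invented)
        "Your neighbor sounds like a terrible person.",    # Source (invented)
        "That is frustrating; maybe talk to them first.",  # Target (invented)
    )
    batch = tok(text, return_tensors="pt")
    # With labels == input_ids, the loss maximizes the probability of the
    # ground-truth token at each decoding step (labels are shifted internally).
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()

The plateau-based early stopping described above can be expressed as a small helper (again a sketch, assuming one loss value per intermediate evaluation step):

    def should_stop(eval_losses, patience=5):
        """True once the evaluation loss has not decreased for `patience`
        consecutive intermediate evaluations."""
        best, bad = float("inf"), 0
        for loss in eval_losses:
            if loss < best:
                best, bad = loss, 0
            else:
                bad += 1
                if bad >= patience:
                    return True
        return False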
