Learning to Reason and Memorize with Self-Notes

Neural Information Processing Systems 

Large language models have been shown to struggle with multi-step reasoning, and they do not retain previous reasoning steps for future use. We propose a simple method that addresses both of these problems by allowing the model to take Self-Notes. Unlike recent chain-of-thought or scratchpad approaches, the model can deviate from the input context at any time to explicitly think and write down its thoughts.
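The core idea — letting the model interrupt its reading of the input to write a note, then conditioning later steps on that note — can be sketched as a decoding loop. This is a hypothetical illustration, not the paper's implementation: the `next_token` stub, the `<note>`/`</note>` markers, and the `interleave_self_notes` helper are all assumptions for the sake of the example.

```python
START_NOTE = "<note>"   # assumed special token opening a Self-Note
END_NOTE = "</note>"    # assumed special token closing a Self-Note

def next_token(context):
    """Stub standing in for a language model's next-token sampler.
    For illustration, it starts a note after seeing a question mark,
    writes one inferred token, then closes the note."""
    if context and context[-1] == "?" and START_NOTE not in context:
        return START_NOTE
    if context and context[-1] == START_NOTE:
        return "intermediate-inference"
    if context and context[-1] == "intermediate-inference":
        return END_NOTE
    return None  # no note: keep consuming the input

def interleave_self_notes(input_tokens):
    """Feed input tokens one at a time; whenever the model starts a note,
    let it generate until END_NOTE. Notes are appended to the context, so
    subsequent steps can condition on them (unlike a post-hoc scratchpad)."""
    context = []
    for tok in input_tokens:
        context.append(tok)
        # the model may deviate from the input here to write a Self-Note
        while True:
            gen = next_token(context)
            if gen is None:
                break
            context.append(gen)
            if gen == END_NOTE:
                break
    return context

result = interleave_self_notes(
    ["Alice", "has", "the", "key", ".", "Who", "has", "it", "?"]
)
```

The key design point the sketch captures is that notes are interleaved with the input as it is read, rather than generated only after the full context, so intermediate conclusions remain in the context for later reasoning steps.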
