Appendix of "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation"

Neural Information Processing Systems 

We calculate it for each sequence x and average over the whole corpus. When decoding auto-regressively, the probabilities of repetitive sentence loops also exhibit a self-reinforcement effect: as shown in Figure 2, the probability of the token 'located' increases almost monotonically as the number of historical repetitions grows.

Footnote: The work was conducted at Apple.

Footnote: Here we use the end token to split sentences for ease of experiments.

Figure 2 caption: The probability of the token 'located' (y-axis) as a function of the number of historical repetitions (x-axis). Best viewed in color and zoomed in on a desktop monitor.
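The measurement behind Figure 2 can be sketched in a few lines. This is a toy stand-in, not the paper's setup: instead of a Transformer language model, we use a hypothetical smoothed context-frequency estimate of P(token | context), which also grows as the token's sentence is repeated, so it lets us illustrate how the per-repetition probability curve is collected.

```python
from collections import Counter

VOCAB_SIZE = 1000   # hypothetical vocabulary size (assumption)
ALPHA = 1.0         # additive-smoothing constant (assumption)

def token_prob(context_tokens, token):
    """P(token | context) under additive smoothing over context counts."""
    counts = Counter(context_tokens)
    total = len(context_tokens)
    return (counts[token] + ALPHA) / (total + ALPHA * VOCAB_SIZE)

# Repeat one sentence and record the probability of 'located' after
# each historical repetition, mirroring the x-axis of Figure 2.
sentence = "the tower is located in paris".split()
context, probs = [], []
for _ in range(5):
    context += sentence
    probs.append(token_prob(context, "located"))

# Under this toy model the probability rises with every repetition,
# the same qualitative self-reinforcement effect the paper measures.
assert all(b > a for a, b in zip(probs, probs[1:]))
```

With a real language model, `token_prob` would instead be the model's softmax probability of 'located' given the repeated prefix, averaged over the whole corpus as described above.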
