Appendix of 'Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation'
–Neural Information Processing Systems
We calculate it for each sequence x and average over the whole corpus.[1] When decoding auto-regressively, the probabilities of repetitive sentence loops also exhibit a self-reinforcement effect. As shown in Figure 2, the probability of the token 'located' increases almost monotonically with the number of historical repetitions.

[Figure 2: the probability of the token 'located' (y-axis) as the number of historical repetitions (x-axis) grows. Best viewed in color and zoomed in on a desktop monitor.]

[1] Here we use the end token to split sentences for ease of experiments.

This work was conducted at Apple.
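To make the self-reinforcement measurement concrete, the sketch below tracks P(token | context) after each repetition of a sentence. A real experiment would query a neural language model; here `toy_prob` is a hypothetical stand-in whose probability grows with the token's count in the context, purely to illustrate the bookkeeping (the function names and the growth schedule are assumptions, not the paper's implementation).

```python
def repetition_probabilities(next_token_prob, sentence, token, n_reps):
    """Record P(token | context) as `sentence` is appended n_reps times.

    `next_token_prob(context, token)` stands in for a language model's
    next-token probability; any callable with that signature works.
    """
    context = []
    probs = []
    for _ in range(n_reps):
        context.extend(sentence)  # one more historical repetition
        probs.append(next_token_prob(tuple(context), token))
    return probs


def toy_prob(context, token):
    # Toy stand-in for an LM: probability rises with how often the
    # token already appeared, mimicking self-reinforcement.
    count = sum(1 for t in context if t == token)
    return min(0.99, 0.3 + 0.1 * count)


probs = repetition_probabilities(
    toy_prob, ["the", "city", "is", "located"], "located", 5
)
```

Averaging such per-repetition curves over every sequence in the corpus yields the aggregate statistic described above; with the toy model, `probs` increases monotonically, mirroring the trend in Figure 2.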