Controllable Neural Story Plot Generation via Reward Shaping

arXiv.org Artificial Intelligence

Language-modeling-based approaches to story plot generation attempt to construct a plot by sampling from a language model (LM) to predict the next character, word, or sentence to add to the story. LM techniques lack the ability to receive guidance from the user to achieve a specific goal, resulting in stories that don't have a clear sense of progression and lack coherence. We present a reward-shaping technique that analyzes a story corpus and produces intermediate rewards that are backpropagated into a pre-trained LM in order to guide the model towards a given goal.

By themselves, large neural language models have been shown to work well with a variety of short-term tasks, such as understanding short children's stories [Radford et al., 2019]. However, while recurrent neural networks (RNNs) using LSTM or GRU cells can theoretically maintain long-term context in their hidden layers, in practice RNNs only use a relatively small part of the history of tokens [Khandelwal et al., 2018]. Consequently, stories or plots generated by RNNs tend to lose coherence as the generation continues.
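As a rough sketch of the language-modeling approach described above (not the paper's own system), the snippet below samples a story continuation from a pretrained GPT-2 model via the Hugging Face transformers library; the checkpoint name, prompt, and sampling parameters are illustrative assumptions.

```python
# Minimal sketch of LM-based story continuation; illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The knight entered the dark forest in search of the dragon."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token. Nothing here steers the model
# toward a goal, which is the limitation reward shaping addresses.
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,                       # stochastic sampling, not greedy decoding
    top_k=50,                             # sample only from the 50 most likely tokens
    temperature=0.9,                      # mildly flatten the next-token distribution
    pad_token_id=tokenizer.eos_token_id,  # silence GPT-2's missing-pad warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because each token is chosen only from the local next-token distribution, the sampled plot can drift arbitrarily far from any intended ending.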
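For contrast, here is a toy illustration of how an intermediate (shaped) reward can be folded into a policy-gradient update on the same model. The reward function below is a hypothetical placeholder keyed to a goal word, not the corpus-derived rewards the paper constructs.

```python
# Toy REINFORCE step with a shaped reward; the reward function is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def shaped_reward(text: str, goal_word: str = "dragon") -> float:
    # Hypothetical shaping signal: partial credit always, full credit
    # when the generation reaches the goal word.
    return 1.0 if goal_word in text else 0.1

prompt_ids = tokenizer("The knight entered the forest.", return_tensors="pt").input_ids
n_prompt = prompt_ids.shape[1]
sample = model.generate(prompt_ids, max_new_tokens=20, do_sample=True,
                        pad_token_id=tokenizer.eos_token_id)

# Recompute log-probabilities of the sampled tokens under the current model.
logits = model(sample).logits[:, :-1, :]
targets = sample[:, 1:]
logprobs = torch.log_softmax(logits, dim=-1).gather(2, targets.unsqueeze(-1)).squeeze(-1)
gen_logprobs = logprobs[:, n_prompt - 1:]  # credit only the generated tokens

# REINFORCE: scale the sample's log-likelihood by its reward and take a step.
reward = shaped_reward(tokenizer.decode(sample[0], skip_special_tokens=True))
loss = -(reward * gen_logprobs.sum())
loss.backward()
optimizer.step()
```

Denser intermediate rewards of this kind give the gradient a progression signal at every step rather than only at the end of the story.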