Toward Automated Story Generation with Markov Chain Monte Carlo Methods and Deep Neural Networks

AAAI Conferences

In this paper, we introduce an approach to automated story generation using Markov Chain Monte Carlo (MCMC) sampling. This approach uses a Metropolis-Hastings-based sampling algorithm to construct a probability distribution from which stories can be sampled such that they adhere to criteria learned by recurrent neural networks. We show the applicability of our technique through a case study in which we generate novel stories using acceptance criteria learned from a set of movie plots taken from Wikipedia. This study shows that stories generated using this approach adhere to these criteria 85%-86% of the time.
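
The abstract describes Metropolis-Hastings sampling guided by a learned acceptance criterion. The sketch below illustrates that loop in general terms, not the paper's implementation: the `story_score` function stands in for the RNN-learned criterion, and the word-replacement proposal and toy vocabulary are illustrative assumptions.

```python
# Minimal Metropolis-Hastings sketch: propose a local edit to the story and
# accept it with probability min(1, exp((score_new - score_old) / T)).
import math
import random

VOCAB = ["the", "knight", "dragon", "castle", "rescued", "burned", "fled", "princess"]

def story_score(story):
    """Placeholder for an RNN-learned criterion: higher means more story-like.
    Here we simply reward mentioning a protagonist."""
    return 1.0 + story.count("knight")

def propose(story):
    """Symmetric proposal: replace one word with a random vocabulary word."""
    candidate = list(story)
    candidate[random.randrange(len(candidate))] = random.choice(VOCAB)
    return candidate

def metropolis_hastings(initial_story, steps=1000, temperature=1.0):
    current = list(initial_story)
    current_score = story_score(current)
    for _ in range(steps):
        candidate = propose(current)
        candidate_score = story_score(candidate)
        accept_prob = min(1.0, math.exp((candidate_score - current_score) / temperature))
        if random.random() < accept_prob:
            current, current_score = candidate, candidate_score
    return current

if __name__ == "__main__":
    seed = ["the", "dragon", "burned", "the", "castle"]
    print(" ".join(metropolis_hastings(seed)))
```

With a symmetric proposal, the acceptance ratio reduces to the score ratio, so higher-scoring stories are always accepted and lower-scoring ones occasionally, which lets the sampler explore the story space while favoring stories the learned criterion approves of.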


Event Representations for Automated Story Generation with Deep Neural Nets

AAAI Conferences

Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a mid-level of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.
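
The abstract speaks of preprocessing textual stories into event sequences as a mid-level representation. The sketch below assumes, for illustration, an event represented as a (subject, verb, object, modifier) tuple extracted from a dependency parse; it relies on spaCy with the en_core_web_sm model installed and omits any semantic generalization of the extracted words, so it is a simplification rather than the paper's pipeline.

```python
# Sketch: turn each sentence of a plot into a (subject, verb, object, modifier)
# event tuple using a dependency parse; missing slots are filled with <EMPTY>.
import spacy

nlp = spacy.load("en_core_web_sm")
EMPTY = "<EMPTY>"

def sentence_to_event(sentence):
    """Extract one (subject, verb, object, modifier) tuple from a sentence."""
    doc = nlp(sentence)
    subj, verb, obj, mod = EMPTY, EMPTY, EMPTY, EMPTY
    for token in doc:
        if token.dep_ == "ROOT" and token.pos_ == "VERB":
            verb = token.lemma_
        elif token.dep_ in ("nsubj", "nsubjpass"):
            subj = token.lemma_
        elif token.dep_ in ("dobj", "obj"):
            obj = token.lemma_
        elif token.dep_ in ("prep", "advmod"):
            mod = token.lemma_
    return (subj, verb, obj, mod)

if __name__ == "__main__":
    plot = [
        "The knight rescued the princess from the tower.",
        "The dragon fled into the mountains.",
    ]
    print([sentence_to_event(s) for s in plot])
    # e.g. [('knight', 'rescue', 'princess', 'from'), ('dragon', 'flee', EMPTY, 'into')]
```

Event sequences produced this way could then feed the two stages the abstract describes: an event2event model that predicts successor events and an event2sentence model that translates events back into natural language.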


Stories for Images-in-Sequence by using Visual and Narrative Components

arXiv.org Artificial Intelligence

Recent research in AI focuses on generating narrative stories about visual scenes, which has the potential to achieve a more human-like understanding than basic description generation for images-in-sequence. In this work, we propose a solution for generating stories for images-in-sequence based on the Sequence to Sequence model. As a novelty, our encoder model is composed of two separate encoders: one that models the behaviour of the image sequence and another that models the sentence-story generated for the previous image in the sequence. By using the image sequence encoder we capture the temporal dependencies between the image sequence and the sentence-story, and by using the previous sentence-story encoder we achieve a better story flow. Our solution generates long, human-like stories that not only describe the visual context of the image sequence but also contain narrative and evaluative language. The obtained results were confirmed by manual human evaluation.
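
The dual-encoder idea can be sketched as follows: one recurrent encoder reads the image-feature sequence, another reads the previously generated sentence-story, and their final states jointly condition a decoder. The GRU layers, hidden sizes, and precomputed CNN image features in this sketch are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a two-encoder sequence-to-sequence storyteller in PyTorch.
import torch
import torch.nn as nn

class DualEncoderStoryteller(nn.Module):
    def __init__(self, vocab_size, img_feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.image_encoder = nn.GRU(img_feat_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.prev_story_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.bridge = nn.Linear(2 * hidden_dim, hidden_dim)  # fuse the two encoder states
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, prev_sentence, target_sentence):
        # image_feats: (batch, num_images, img_feat_dim) precomputed CNN features
        # prev_sentence, target_sentence: (batch, seq_len) token ids
        _, img_state = self.image_encoder(image_feats)
        _, prev_state = self.prev_story_encoder(self.embed(prev_sentence))
        fused = torch.tanh(self.bridge(torch.cat([img_state, prev_state], dim=-1)))
        dec_out, _ = self.decoder(self.embed(target_sentence), fused)
        return self.output(dec_out)  # (batch, seq_len, vocab_size)

if __name__ == "__main__":
    model = DualEncoderStoryteller(vocab_size=1000)
    imgs = torch.randn(2, 5, 2048)           # 5 images per story
    prev = torch.randint(0, 1000, (2, 12))    # previously generated sentence-story
    tgt = torch.randint(0, 1000, (2, 15))     # teacher-forced target sentence
    print(model(imgs, prev, tgt).shape)       # torch.Size([2, 15, 1000])
```

Feeding the previous sentence-story back in as a second input is what lets consecutive sentences connect into a flowing narrative rather than independent captions.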


Evaluating Analogy-Based Story Generation: An Empirical Study

AAAI Conferences

Evaluation is one of the major open problems in computational narrative. In this paper, we present an empirical study of SAM, an analogy-based story generation (ASG) algorithm created as part of our Riu interactive narrative system. Specifically, our study focuses on SAM's capability to retrieve and generate short non-interactive stories. By combining qualitative and quantitative methods from different disciplines, the methodology in this study can be extended to evaluate other computational narrative systems.