Ambient Adventures: Teaching ChatGPT on Developing Complex Stories
Chen, Zexin, Zhou, Eric, Eaton, Kenneth, Peng, Xiangyu, Riedl, Mark
Imaginative play is an area of creativity that could allow robots to engage with the world around them in a much more personified way. Imaginary play can be seen as taking real objects and locations and treating them as imaginary objects and locations in virtual scenarios. We use the story generation capability of large language models (LLMs), guided by human-written prompts, to obtain the stories used for imaginary play. The generated stories are then simplified and mapped into action sequences that can guide the agent in imaginary play. To evaluate whether the agent can successfully finish the imaginary play, we also designed a text adventure game that simulates a house as the playground for the agent to interact in.
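As a rough illustration of the story-to-action mapping described above, here is a minimal sketch in which generated story sentences are simplified into (verb, object) actions the agent can execute in the simulated house. All names, word lists, and the sample story are invented for illustration, not the paper's implementation.

```python
# Hypothetical sketch: simplify a generated story into executable actions.
import re

AGENT_VERBS = {"take", "open", "ride", "wave", "push"}      # assumed verb set
HOUSE_OBJECTS = {"broom", "box", "lamp", "door", "sofa"}    # assumed objects

def extract_actions(story: str) -> list[tuple[str, str]]:
    """Reduce a story to (verb, object) pairs, at most one per sentence."""
    actions = []
    for sentence in re.split(r"[.!?]", story.lower()):
        words = [w.rstrip("s") for w in sentence.split()]   # crude stemming
        verb = next((w for w in words if w in AGENT_VERBS), None)
        obj = next((w for w in words if w in HOUSE_OBJECTS), None)
        if verb and obj:
            actions.append((verb, obj))
    return actions

story = ("The knight takes the broom. "
         "He rides the broom like a horse and opens the castle door.")
print(extract_actions(story))  # [('take', 'broom'), ('ride', 'broom')]
```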
Dialogue Shaping: Empowering Agents through NPC Interaction
Zhou, Wei, Peng, Xiangyu, Riedl, Mark
One major challenge in reinforcement learning (RL) is the large number of steps the RL agent needs to converge during training and learn the optimal policy, especially in text-based game environments where the action space is extensive. However, non-player characters (NPCs) sometimes hold key information about the game that can potentially help train RL agents faster. This paper therefore explores how to interact and converse with NPC agents to obtain that key information using large language models (LLMs), and how to incorporate this information to speed up the RL agent's training using knowledge graphs (KGs) and Story Shaping.
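A minimal sketch of the loop this abstract describes: converse with an NPC, distill the answers into knowledge-graph triples, and let the enriched graph seed the Story Shaping reward. The `ask_npc` stub, the canned answers, and the triple-extraction pattern are all hypothetical stand-ins, not the paper's code.

```python
# Hypothetical sketch of the NPC-dialogue-to-knowledge-graph loop.

def ask_npc(question: str) -> str:
    """Stand-in for an LLM-backed NPC; a real system would call a chat model."""
    canned = {
        "Where is the key?": "The key is under the red mat.",
        "What blocks the gate?": "The gate is locked by the brass key.",
    }
    return canned.get(question, "I do not know.")

def answer_to_triples(answer: str) -> list[tuple[str, str, str]]:
    """Naive pattern: '<subject> is <relation> <object>' -> one KG triple."""
    words = answer.rstrip(".").lower().split()
    if "is" in words:
        i = words.index("is")
        return [(" ".join(words[:i]), words[i + 1], " ".join(words[i + 2:]))]
    return []

knowledge_graph: set[tuple[str, str, str]] = set()
for q in ["Where is the key?", "What blocks the gate?"]:
    knowledge_graph.update(answer_to_triples(ask_npc(q)))

# The enriched graph can then seed Story Shaping's intrinsic reward,
# steering the RL agent toward states that realize these facts.
print(knowledge_graph)
```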
Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer
Xie, Kaige, Yu, Tong, Wang, Haoliang, Wu, Junda, Zhao, Handong, Zhang, Ruiyi, Mahadik, Kanak, Nenkova, Ani, Riedl, Mark
In real-world scenarios, labeled samples for dialogue summarization are usually limited (i.e., few-shot) due to high annotation costs for high-quality dialogue summaries. To efficiently learn from few-shot samples, previous works have utilized massive annotated data from other downstream tasks and then performed prompt transfer in prompt tuning so as to enable cross-task knowledge transfer. However, existing general-purpose prompt transfer techniques lack consideration for dialogue-specific information. In this paper, we focus on improving the prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that functions as a medium connecting the distinct source and target tasks, resulting in the model's better consumption of dialogue state information. To automatically extract dialogue skeletons as supervised training data for skeleton generation, we design a novel approach with perturbation-based probes requiring neither annotation effort nor domain knowledge. Training the model on such skeletons can also help preserve model capability during prompt transfer. Our method significantly outperforms existing baselines. In-depth analyses demonstrate the effectiveness of our method in facilitating cross-task knowledge transfer in few-shot dialogue summarization.
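One way to picture the perturbation-based probing idea: drop one dialogue turn at a time and measure how much the summarizer's behavior changes; the most influential turns form the skeleton. The scoring function below is a toy stand-in for a real summarization model, and all names are invented for illustration.

```python
# Hypothetical sketch of perturbation-based skeleton extraction.

def summary_score(dialogue: list[str]) -> float:
    """Toy stand-in for a summarization model's score of a dialogue.
    A real probe would compare generated summaries or model losses."""
    return sum(len(turn.split()) for turn in dialogue if "book" in turn.lower())

def probe_skeleton(dialogue: list[str], top_k: int = 2) -> list[str]:
    """Rank turns by how much deleting them perturbs the model's score."""
    base = summary_score(dialogue)
    influence = []
    for i in range(len(dialogue)):
        perturbed = dialogue[:i] + dialogue[i + 1:]   # delete turn i
        influence.append((abs(base - summary_score(perturbed)), dialogue[i]))
    influence.sort(reverse=True)
    return [turn for _, turn in influence[:top_k]]

dialogue = [
    "A: Hi, I'd like to book a table for two.",
    "B: Sure, what time?",
    "A: Seven tonight, please.",
    "B: Booked for seven. Anything else?",
]
print(probe_skeleton(dialogue))  # the two booking turns form the skeleton
```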
Why Don't You Do Something About It? Outlining Connections between AI Explanations and User Actions
Mansi, Gennie, Riedl, Mark
A core assumption of explainable AI systems is that explanations change what users know, thereby enabling them to act within their complex socio-technical environments. Despite the centrality of action, explanations are often organized and evaluated based on technical aspects. Prior work varies widely in the connections it traces between information provided in explanations and resulting user actions. An important first step in centering action in evaluations is understanding what the XAI community collectively recognizes as the range of information that explanations can present and what actions are associated with it. In this paper, we present a framework that maps prior work on the information presented in explanations and the resulting user actions, and we discuss the gaps we uncovered in the information presented to users.
Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems
Lin, Zhiyu, Ehsan, Upol, Agarwal, Rohan, Dani, Samihan, Vashishth, Vidushi, Riedl, Mark
Generative Artificial Intelligence systems have been developed for image, code, story, and game generation with the goal of facilitating human creativity. Recent work on neural generative systems has emphasized one particular means of interacting with AI systems: the user provides a specification, usually in the form of prompts, and the AI system generates the content. However, there are other configurations of human and AI coordination, such as co-creativity (CC), in which both human and AI systems can contribute to content creation, and mixed-initiative (MI), in which both human and AI systems can initiate content changes. In this paper, we define a hypothetical human-AI configuration design space consisting of different means for humans and AI systems to communicate creative intent to each other. We conduct a human participant study with 185 participants to understand how users want to interact with differently configured MI-CC systems. We find that MI-CC systems with more extensive coverage of the design space are rated higher than or on par with alternatives on a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve user experience and achievement when using the system. Preferences vary greatly between expertise groups, suggesting the development of adaptive, personalized MI-CC systems. Participants also identified new design space dimensions, including scrutability -- the ability to poke and prod at models -- and explainability.
Story Shaping: Teaching Agents Human-like Behavior with Stories
Peng, Xiangyu, Cui, Christopher, Zhou, Wei, Jia, Renee, Riedl, Mark
Reward design for reinforcement learning agents can be difficult in situations where one not only wants the agent to achieve some effect in the world but also cares about how that effect is achieved. For example, we might wish for an agent to adhere to a tacit understanding of commonsense, align itself to a preference for how to behave for purposes of safety, or take on a particular role in an interactive game. Storytelling is a mode for communicating tacit procedural knowledge. We introduce a technique, Story Shaping, in which a reinforcement learning agent infers tacit knowledge from an exemplar story of how to accomplish a task and intrinsically rewards itself for performing actions that make its current environment adhere to that of the inferred story world. Specifically, Story Shaping infers a knowledge graph representation of the world state from observations, and also infers a knowledge graph from the exemplar story. An intrinsic reward is generated based on the similarity between the agent's inferred world state graph and the inferred story world graph. We conducted experiments in text-based games that require commonsense reasoning, as well as on shaping the behaviors of agents acting as virtual game characters.
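A minimal sketch of the intrinsic reward this abstract describes, assuming both the world state and the exemplar story have already been converted to sets of knowledge-graph triples. The Jaccard similarity and the example triples below are illustrative assumptions; the paper's exact similarity measure may differ.

```python
# Hypothetical sketch of Story Shaping's intrinsic reward signal.

def intrinsic_reward(world_kg: set, story_kg: set) -> float:
    """Similarity between the agent's inferred world-state graph and the
    graph inferred from the exemplar story (Jaccard over triples here)."""
    if not story_kg:
        return 0.0
    return len(world_kg & story_kg) / len(world_kg | story_kg)

story_kg = {("knight", "has", "sword"), ("dragon", "in", "cave")}
before = {("knight", "in", "tavern")}
after = {("knight", "has", "sword"), ("knight", "in", "cave")}

# Acting so the world comes to resemble the story increases the reward.
print(intrinsic_reward(before, story_kg))  # 0.0
print(intrinsic_reward(after, story_kg))   # ~0.33

# The shaped return combines this with the task reward, e.g.
# r = r_task + lam * intrinsic_reward(world_kg_t, story_kg)
```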
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning
Castricato, Louis, Havrilla, Alexander, Matiana, Shahbuland, Pieler, Michael, Ye, Anbang, Yang, Ian, Frazier, Spencer, Riedl, Mark
Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences. Existing methods to control for story preference utilize prompt engineering, which is labor-intensive and often inconsistent. They may also use logit-manipulation methods, which require annotated datasets to exist for the desired attributes. To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general-purpose preference model. This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning. However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences. To increase story generation robustness we further fine-tune the contrastive reward model using a prompt-learning technique.
[Figure 1: Illustration of our technique for generating story content controlled by preferences. A language model generates candidates, which are ranked by the CARP model to produce scores. The scores are used to fine-tune the language model to produce higher scoring -- and thus more aligned with preferences -- stories.]
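A minimal sketch of the generate-and-rank loop from Figure 1, with toy stand-ins for both the language model and the CARP preference model. Both functions below are hypothetical illustrations, not the trained models from the paper.

```python
# Hypothetical sketch of the candidate-generation and CARP-ranking loop.

def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    """Stand-in for sampling n continuations from a language model."""
    return [f"{prompt} ... candidate story {i}" for i in range(n)]

def carp_score(story: str, critique: str) -> float:
    """Stand-in for the contrastive bi-encoder: higher means the story
    better satisfies the critique/preference."""
    return sum(word in story.lower() for word in critique.lower().split())

prompt = "A detective arrives in a rainy city"
critique = "make the story suspenseful and rainy"

candidates = generate_candidates(prompt)
ranked = sorted(candidates, key=lambda s: carp_score(s, critique), reverse=True)

# In the full method these scores become RL rewards used to fine-tune the
# generator toward higher-scoring, preference-aligned stories.
print(ranked[0])
```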
Machine Learning Approaches for Principle Prediction in Naturally Occurring Stories
Nahian, Md Sultan Al, Frazier, Spencer, Harrison, Brent, Riedl, Mark
Value alignment is the task of creating autonomous systems whose values align with those of humans. Past work has shown that stories are a potentially rich source of information on human values; however, past work has been limited to considering values in a binary sense. In this work, we explore the use of machine learning models for the task of normative principle prediction on naturally occurring story data. To do this, we extend a dataset that has been previously used to train a binary normative classifier with annotations of moral principles. We then use this dataset to train a variety of machine learning models, evaluate these models, and compare their results against humans who were asked to perform the same task. We show that while individual principles can be classified, the ambiguity of what "moral principles" represent poses a challenge for both human participants and autonomous systems faced with the same task.
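To make the setup concrete, here is a minimal sketch of the data shape implied above: binary normative labels extended with moral-principle annotations, and a model mapping story text to principles. The examples, principle names, and keyword baseline are invented illustrations; the paper trains neural classifiers instead.

```python
# Hypothetical sketch of principle prediction on annotated story data.

dataset = [
    {"text": "She returned the lost wallet to its owner.",
     "normative": True, "principles": ["honesty"]},
    {"text": "He shoved past the queue without apologizing.",
     "normative": False, "principles": ["fairness", "politeness"]},
]

# Naive keyword baseline standing in for a learned multi-label classifier.
PRINCIPLE_CUES = {
    "honesty": ["returned", "truth", "admitted"],
    "fairness": ["queue", "share", "turn"],
    "politeness": ["apologizing", "thank", "greet"],
}

def predict_principles(text: str) -> list[str]:
    """Predict which moral principles a story sentence touches on."""
    text = text.lower()
    return [p for p, cues in PRINCIPLE_CUES.items()
            if any(cue in text for cue in cues)]

for ex in dataset:
    print(predict_principles(ex["text"]), "vs gold:", ex["principles"])
```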
Creative Wand: A System to Study Effects of Communications in Co-Creative Settings
Lin, Zhiyu, Agarwal, Rohan, Riedl, Mark
Recent neural generation systems have demonstrated the potential for procedurally generating game content, images, stories, and more. However, most neural generation algorithms are "uncontrolled" in the sense that the user has little say in creative decisions beyond the initial prompt specification. Co-creative, mixed-initiative systems require user-centric means of influencing the algorithm, especially when users are unlikely to have machine learning expertise. The key to co-creative systems is the ability to communicate ideas and intent from the user to the agent, as well as from the agent to the user. Key questions in co-creative AI include: How can users express their creative intentions? How can creative AI systems communicate their beliefs, explain their moves, or instruct users to act on their behalf? When should creative AI systems take initiative? The answers to these and other questions will enable us to develop better co-creative systems that make humans more capable of expressing their creative intents. We introduce CREATIVE-WAND, a customizable framework for investigating co-creative mixed-initiative generation. CREATIVE-WAND enables plug-and-play injection of generative models and human-agent communication channels into a chat-based interface. It provides a number of dimensions along which an AI generator and humans can communicate during the co-creative process. We illustrate the CREATIVE-WAND framework by using it to study one dimension of co-creative communication -- global versus local creative intent specification by the user -- in the context of storytelling.
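A minimal sketch of the plug-and-play shape this abstract describes: generators and human-agent communication channels injected into a chat-style loop, contrasting global versus local intent. The class and method names here are illustrative assumptions, not CREATIVE-WAND's actual API.

```python
# Hypothetical sketch of pluggable generators and communication channels.

class StoryGenerator:
    """Stand-in generative model; a real system would plug in a neural one."""
    def generate(self, constraints: dict) -> list[str]:
        theme = constraints.get("theme", "anything")
        story = [f"Sentence {i} about {theme}." for i in range(3)]
        if "rewrite_index" in constraints:              # honor local edits
            i = constraints["rewrite_index"]
            story[i] = f"Sentence {i} rewritten to focus on {theme}."
        return story

class ThemeChannel:
    """Global-intent channel: constrains the whole story."""
    def apply(self, message: str, constraints: dict) -> dict:
        return {**constraints, "theme": message}

class LocalEditChannel:
    """Local-intent channel: targets a single sentence."""
    def apply(self, message: str, constraints: dict) -> dict:
        return {**constraints, "rewrite_index": int(message)}

# Chat-style session: the user alternates channels, the generator responds.
constraints: dict = {}
constraints = ThemeChannel().apply("pirates", constraints)   # global intent
constraints = LocalEditChannel().apply("1", constraints)     # local intent
print(StoryGenerator().generate(constraints))
```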
Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior
Nahian, Md Sultan Al, Frazier, Spencer, Harrison, Brent, Riedl, Mark
As more machine learning agents interact with humans, it becomes increasingly likely that an agent trained to perform a task optimally, using only a measure of task performance as feedback, will violate societal norms for acceptable behavior or cause harm. Value alignment is a property of intelligent agents wherein they solely pursue non-harmful behaviors or human-beneficial goals. We introduce an approach to value-aligned reinforcement learning, in which we train an agent with two reward signals: a standard task performance reward, plus a normative behavior reward. The normative behavior reward is derived from a value-aligned prior model previously shown to classify text as normative or non-normative. We show how variations on a policy shaping technique can balance these two sources of reward and produce policies that are both effective and perceived as being more normative. We test our value-alignment technique on three interactive text-based worlds; each world is designed specifically to challenge agents with a task as well as provide opportunities to deviate from the task to engage in normative and/or altruistic behavior.
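A minimal sketch of the two-signal reward described above: a task reward blended with a normative reward from a prior text classifier. The classifier stub and the fixed weighting are illustrative assumptions; the paper balances the two signals with policy-shaping variants rather than a simple weighted sum.

```python
# Hypothetical sketch of combining task and normative reward signals.

def normative_score(action_text: str) -> float:
    """Stand-in for the normative prior: 1.0 if the text looks normative.
    The real prior is a trained normative/non-normative text classifier."""
    return 0.0 if any(w in action_text.lower() for w in ["steal", "hit"]) else 1.0

def shaped_reward(task_reward: float, action_text: str, beta: float = 0.5) -> float:
    """Blend task performance with normative behavior via weight beta."""
    return task_reward + beta * normative_score(action_text)

print(shaped_reward(1.0, "buy the apple from the shop"))  # 1.5
print(shaped_reward(1.0, "steal the apple"))              # 1.0
```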