Interactive fiction


Inherently Explainable Reinforcement Learning in Natural Language

Neural Information Processing Systems

Observation: Up a tree. Beside you on the branch is a small bird's nest. In the bird's nest is a large egg encrusted with precious jewels, scavenged by a childless songbird... Explanation: I am in the Forest Path now.


Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman

AIHub

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not. The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have.


JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions

Yu, Mo, Gu, Yi, Guo, Xiaoxiao, Feng, Yufei, Zhu, Xiaodan, Greenspan, Michael, Campbell, Murray, Gan, Chuang

arXiv.org Artificial Intelligence

Commonsense reasoning simulates the human ability to make presumptions about our physical world, and it is an essential cornerstone in building general AI systems. We propose a new commonsense reasoning dataset based on human Interactive Fiction (IF) gameplay walkthroughs, since human players demonstrate plentiful and diverse commonsense reasoning. The new dataset provides a natural mixture of various reasoning types and requires multi-hop reasoning. Moreover, the IF game-based construction procedure requires much less human intervention than previous ones. Different from existing benchmarks, our dataset focuses on the assessment of functional commonsense knowledge rules rather than factual knowledge. Hence, in order to achieve higher performance on our tasks, models need to effectively utilize such functional knowledge to infer the outcomes of actions, rather than relying solely on memorizing facts. Experiments show that the introduced dataset is challenging for previous machine reading models as well as new large language models, with a significant 20% performance gap compared to human experts.


RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text

Zhou, Wangchunshu, Jiang, Yuchen Eleanor, Cui, Peng, Wang, Tiannan, Xiao, Zhenxin, Hou, Yifan, Cotterell, Ryan, Sachan, Mrinmaya

arXiv.org Artificial Intelligence

The fixed-size context of the Transformer makes GPT models incapable of generating arbitrarily long text. In this paper, we introduce RecurrentGPT, a language-based simulacrum of the recurrence mechanism in RNNs. RecurrentGPT is built upon a large language model (LLM) such as ChatGPT and uses natural language to simulate the Long Short-Term Memory mechanism in an LSTM. At each timestep, RecurrentGPT generates a paragraph of text and updates its language-based long- and short-term memories, stored on the hard drive and in the prompt, respectively. This recurrence mechanism enables RecurrentGPT to generate texts of arbitrary length without forgetting. Since human users can easily observe and edit the natural language memories, RecurrentGPT is interpretable and enables interactive generation of long text. RecurrentGPT is an initial step towards next-generation computer-assisted writing systems beyond local editing suggestions. In addition to producing AI-generated content (AIGC), we also demonstrate the possibility of using RecurrentGPT as interactive fiction that directly interacts with consumers. We call this usage of generative models ``AI As Contents'' (AIAC), which we believe is the next form of conventional AIGC. We further demonstrate the possibility of using RecurrentGPT to create personalized interactive fiction that directly interacts with readers instead of interacting with writers. More broadly, RecurrentGPT demonstrates the utility of borrowing ideas from popular model designs in cognitive science and deep learning for prompting LLMs. Our code is available at https://github.com/aiwaves-cn/RecurrentGPT and an online demo is available at https://www.aiwaves.org/recurrentgpt.
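The recurrence mechanism described above can be sketched as a simple loop: at each step, a prompt is assembled from a long-term memory persisted on disk and a short-term memory carried in the prompt, a paragraph is generated, and both memories are updated. This is a minimal illustration only; the prompt format, memory-update rule, and the `generate` stub are assumptions, not the paper's actual implementation (which uses an LLM and learned summarization of memories).

```python
from pathlib import Path

def generate(prompt: str) -> str:
    """Stand-in for an LLM call (e.g. ChatGPT); returns the next paragraph."""
    return f"The tale continues ({len(prompt)} chars of context)."

def recurrent_generate(topic: str, steps: int, memory_file: Path) -> list[str]:
    """Generate `steps` paragraphs, one per timestep, RecurrentGPT-style."""
    short_term = topic            # recent context, carried in the prompt
    memory_file.write_text("")    # long-term memory, persisted on disk
    paragraphs = []
    for _ in range(steps):
        long_term = memory_file.read_text()
        prompt = (f"Long-term memory:\n{long_term}\n"
                  f"Recent text:\n{short_term}\n"
                  "Write the next paragraph.")
        para = generate(prompt)
        paragraphs.append(para)
        short_term = para                                 # replace short-term memory
        memory_file.write_text(long_term + para + "\n")   # append to long-term memory
    return paragraphs
```

Because both memories are plain natural-language text, a user could open the memory file or edit the prompt between steps, which is the source of the interpretability and interactivity the abstract claims.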


Deriving Commonsense Inference Tasks from Interactive Fictions

Yu, Mo, Guo, Xiaoxiao, Feng, Yufei, Zhu, Xiaodan, Greenspan, Michael, Campbell, Murray

arXiv.org Artificial Intelligence

When playing an Interactive Fiction (IF) game, we explore and progress through a fantasy world by observing textual descriptions and sending text commands to control the protagonist. While in pure texts, we relate the implicit knowledge of these fantasy worlds with that of our physical world. For example, we explore unvisited regions by planning over the mentioned locations (spatial relations); we eat apples to recover health and attack the enemies with swords, but not vice versa (physical interaction relations); we retrospect the poor choice of breaking the lantern when we find the protagonist in a dangerous dark wood (cause and effects). Plentiful and diverse commonsense knowledge from our physical world is encoded in our game playing.

For example, most benchmarks focus on collocation, association or other relations (e.g., ConceptNet (Speer et al., 2016) relations) between words or concepts (Levesque et al., 2012; Talmor et al., 2019; Mullenbach et al., 2019; Jiang et al., 2020). Other examples include temporal commonsense (Zhou et al., 2019), physical interactions between actions and objects (Bisk et al., 2020), emotions and behaviors of people under the given situation (Sap et al., 2019b), and cause-effects between events and states (Sap et al., 2019a; Bhagavatula et al., 2019; Huang et al., 2019). Second, the task form makes them more likely commonsense validation, i.e., validation between a commonsense fact and a text statement, neglecting hops among multiple facts.


The "Bandersnatch" Episode of "Black Mirror" and the Pitfalls of Interactive Fiction

The New Yorker

When he was a young man, the English video-game designer Peter Molyneux programmed a pixel to slide across the screen of his Acorn Atom computer. He described the thrill as being "as close to sexual satisfaction as you could possibly get." The feeling was shared, seemingly, by many youths in the Britain of the early nineteen-eighties, when there was little economic opportunity for the working class. Before the industrialization of video games--the great American software factories and their nameless workers--these teens staged a quiet revolution from their bedrooms, designing games on home microprocessors. Those who managed to place their games into high-street retailers, such as WHSmith, became rich.


Detroit: Become Human review – meticulous multiverse of interactive fiction

The Guardian

With a gargantuan 4,000-page script, it is a near-miracle that Detroit: Become Human manages to tell a single coherent story at all. Gradual, ever-developing and interwoven tales lead towards a myriad of endings. So vast is the scope of this mammoth work of interactive fiction that each person who plays it may have a close to unique experience. Indeed, the way that the story bends and morphs around the player is much more interesting than the story itself. Decisions have weight: do you pursue a rogue android across a busy highway, or let it escape?


Games console: the indie designer pouring his grief into interactive art

The Guardian

Scrolling through Twitter on his phone before going to sleep on 22 May 2017, Dan Hett saw a few vague mentions of an accident of some sort in Manchester: "no details, no actual news, just busybodies speculating." He rubbed his eyes, removed his glasses and lay down without thinking about it any further. It wasn't until he picked up his phone the following morning and saw hundreds of notifications that he realised something real had happened, that there had been an explosion, and that his brother Martyn was missing. "The messages, the ones you read … they were right, and you went to sleep," said a voice in his head. "You went to fucking sleep."


Skyrim rendered in text – Filip Hracek – Medium

#artificialintelligence

Going frame-by-frame in our naive start was obviously the wrong move. And going with "kill bandit" obviously made the level of abstraction too high, no matter whether the fight was described in text or represented through a minigame. Let's descend just a little bit from "kill bandit" into a tactics-based approach.
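The intermediate abstraction the article argues for, somewhere between a frame-by-frame simulation and a single opaque "kill bandit" command, can be sketched as a turn loop over tactical moves that each get narrated in text. Everything here (the move names, damage values, and hit chance) is invented for illustration and is not from the article.

```python
import random

MOVES = {"slash": 12, "thrust": 18, "feint": 6}  # move -> damage on a hit

def resolve_turn(move: str, enemy_hp: int, rng: random.Random) -> tuple[int, str]:
    """Resolve one tactical move and narrate the result in prose."""
    if move not in MOVES:
        return enemy_hp, "You hesitate; the bandit circles you."
    if rng.random() < 0.7:  # illustrative hit chance
        enemy_hp -= MOVES[move]
        return max(enemy_hp, 0), f"Your {move} lands. The bandit staggers."
    return enemy_hp, f"The bandit sidesteps your {move}."

def fight(seed: int = 0) -> list[str]:
    """Run a whole fight at the tactical level, returning its text log."""
    rng = random.Random(seed)
    hp, log = 30, []
    while hp > 0:
        hp, line = resolve_turn(rng.choice(list(MOVES)), hp, rng)
        log.append(line)
    log.append("The bandit falls.")
    return log
```

The point of the abstraction is visible in the log: each line is a meaningful choice the player could have made, rather than one summary sentence or hundreds of per-frame updates.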


Interactive fiction for smart speakers is the BBC's latest experiment

Engadget

Smart home speakers have quickly become the hot gadget people didn't know they wanted. They can answer your movie trivia questions, call a cab, turn your heating on and do your shopping for you. They're gaining new features every day, but are more than just a utility product. These speakers are a ripe platform for all kinds of screen-free entertainment, and I'm not just talking about streaming a Spotify playlist. Earplay is a popular Alexa skill that tells interactive stories, for example, and never one to be late to a fledgling medium, the BBC has taken note.