Collaborating Authors

Jhala, Arnav


Mid-Scale Shot Classification for Detecting Narrative Transitions in Movie Clips

AAAI Conferences

This paper examines the classification of shots in video streams for indexing and semantic analysis. We describe an approach that obtains shot motion by using motion estimation algorithms to estimate camera movement. We improve on prior work by using the four edge regions of a frame to classify No Motion shots. We then analyze a neighborhood of shots and introduce a new concept, middle-scale classification. This approach relies on automated labeling of frame transitions in terms of motion across adjacent frames. These annotations form sequential scene-groups that correlate with narrative events in the videos. We introduce six middle-scale classes and their likely sequence content, drawn from three clips of the movie The Lord of the Rings: The Return of the King, and demonstrate that the middle-scale classification approach successfully extracts a summary of the salient aspects of the movie. We also show a direct comparison with prior work on the full movie The Matrix.
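The edge-region idea above can be sketched as a small classifier over per-block motion magnitudes. This is a hedged illustration only: the block grid, the threshold, and the label names are assumptions for the sketch, not the paper's actual parameters, and the motion magnitudes are presumed to come from a separate motion estimation step.

```python
def classify_shot_motion(block_motion, rows, cols, threshold=0.5):
    """Label a frame transition from per-block motion magnitudes.

    block_motion maps (row, col) block coordinates to a motion magnitude
    for a rows x cols grid. The threshold and labels are illustrative.
    """
    edge, center = [], []
    for (r, c), mag in block_motion.items():
        # The four edge regions: top row, bottom row, left and right columns.
        if r == 0 or r == rows - 1 or c == 0 or c == cols - 1:
            edge.append(mag)
        else:
            center.append(mag)
    edge_avg = sum(edge) / len(edge)
    center_avg = sum(center) / len(center) if center else 0.0
    if edge_avg < threshold and center_avg < threshold:
        return "No Motion"      # quiet edges and interior: static shot
    if edge_avg >= threshold:
        return "Camera Motion"  # moving edges suggest the camera is moving
    return "Object Motion"      # only the interior moves
```

For example, a 4x4 grid with motion only in the interior blocks would be labeled Object Motion, while uniformly high motion would be labeled Camera Motion.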


Reinforcement Learning for Spatial Reasoning in Strategy Games

AAAI Conferences

One of the major weaknesses of current real-time strategy (RTS) game agents is handling spatial reasoning at a high level. One challenge in developing spatial reasoning modules for RTS agents is evaluating a given agent's ability at this competency, due to the inevitable confounding factors created by the complexity of these agents. We propose a simplified game that mimics the spatial reasoning aspects of more complex games while removing other complexities. Within this framework, we analyze the effectiveness of classical reinforcement learning for spatial management in order to build a detailed evaluative standard across a broad set of opponent strategies. We show that against a suite of opponents with fixed strategies, basic Q-learning is able to learn a strategy that beats each one. In addition, we demonstrate that performance against unseen strategies improves with prior training on other, distinct strategies. We also test a modification of the basic algorithm that includes multiple actors, to speed learning and increase scalability. Finally, we discuss the potential for knowledge transfer to more complex games with similar components.
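Basic tabular Q-learning of the kind evaluated here can be sketched as follows. The toy "push along a line" environment is an assumption standing in for the simplified spatial game; the paper's actual environment, opponents, and hyperparameters differ.

```python
import random
from collections import defaultdict

def q_learning(env_reset, env_step, actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, r, done = env_step(s, a)
            # One-step temporal-difference update.
            target = r if done else r + gamma * max(Q[(s2, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

# Toy stand-in environment: push from position 0 to the enemy side at
# position 4 on a line; reward 1.0 on reaching the goal.
def env_reset():
    return 0

def env_step(s, a):
    s2 = max(0, min(4, s + a))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4
```

Running `q_learning(env_reset, env_step, actions=[1, -1])` yields a table whose greedy policy advances toward the goal from every position.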


The Eighth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment

AI Magazine

The Eighth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) was held October 8–12, 2012, at Stanford University in Palo Alto, California. The conference included research and industry tracks as well as a demonstration program, and featured 16 technical papers, 16 posters, and one demonstration, along with invited speakers, the StarCraft AI competition, a newly introduced Doctoral Consortium, and five workshops. This report summarizes the activities of the conference.


RoleModelVis: A Visualization of Logical Story Models

AAAI Conferences

In this demo we present a visualization of formalized representations of story. Introducing interactivity to storytelling requires managing the experiences a user creates through their decisions. These variations affect not only the user's experience but also which content is appropriate to present to the user. The overall contribution of this work is to identify the player impact of story variation by modeling supplementary variations and systematically responding to player interaction through those variations, while respecting the author's intentions by maintaining the integrity of the core story skeleton.


Learning from Demonstration for Goal-Driven Autonomy

AAAI Conferences

Goal-driven autonomy (GDA) is a conceptual model for creating an autonomous agent that monitors a set of expectations during plan execution, detects when discrepancies occur, builds explanations for the cause of failures, and formulates new goals to pursue when planning failures arise. While this framework enables the development of agents that can operate in complex and dynamic environments, implementing the logic for each of the subtasks in the model requires substantial domain engineering. We present a method using case-based reasoning and intent recognition in order to build GDA agents that learn from demonstrations. Our approach reduces the amount of domain engineering necessary to implement GDA agents and learns expectations, explanations, and goals from expert demonstrations. We have applied this approach to build an agent for the real-time strategy game StarCraft. Our results show that integrating the GDA conceptual model into the agent greatly improves its win rate.
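One iteration of the GDA loop described above can be sketched as a simple control step. This is a minimal illustration: `explain` and `formulate_goal` are placeholder callables standing in for the components the paper learns from expert demonstrations via case-based reasoning and intent recognition.

```python
def gda_step(observed, expected, explain, formulate_goal, current_goal):
    """One pass of the goal-driven autonomy loop (hedged sketch).

    Compare the observed state against the expectation; on a discrepancy,
    build an explanation and formulate a new goal to pursue.
    """
    if observed == expected:
        # No discrepancy: keep pursuing the current goal.
        return current_goal, None
    discrepancy = (expected, observed)
    explanation = explain(discrepancy)      # why did the expectation fail?
    new_goal = formulate_goal(explanation)  # choose a goal addressing the cause
    return new_goal, explanation
```

For instance, with a lookup that explains a damaged base as an enemy attack and formulates a defensive goal, the agent switches from expanding to defending only when the expectation is violated.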


Recap of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE)

AI Magazine

The Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment was held October 11–14, 2011, on the campus of Stanford University near Palo Alto, California. For the first time in AIIDE's history, the main program of the conference was preceded by three workshops: the Intelligent Narrative Technologies workshop, the workshop on Nonplayer Character AI, and the Artificial Intelligence in the Game Design Process workshop. All three attracted a substantial audience and led to exciting debates and fruitful discussions (figure 1). In total, 24 papers were presented in the three workshops. The Intelligent Narrative Technologies workshop included papers on story representation, dialogue generation, narrative visualization, and authoring interfaces for interactive narrative, and a panel on corpus-based approaches to modeling narrative. This report summarizes the conference and related activities.


Building Human-Level AI for Real-Time Strategy Games

AAAI Conferences

Video games are complex simulation environments with many real-world properties that need to be addressed in order to build robust intelligence. In particular, real-time strategy games provide a multi-scale challenge which requires both deliberative and reactive reasoning processes. Experts approach this task by studying a corpus of games, building models for anticipating opponent actions, and practicing within the game environment. We motivate the need for integrating heterogeneous approaches by enumerating a range of competencies involved in gameplay and discuss how they are being implemented in EISBot, a reactive planning agent that we have applied to the task of playing real-time strategy games at the same granularity as humans.


A Particle Model for State Estimation in Real-Time Strategy Games

AAAI Conferences

A big challenge for creating human-level game AI is building agents capable of operating in imperfect information environments. In real-time strategy games the technological progress of an opponent and locations of enemy units are partially observable. To overcome this limitation, we explore a particle-based approach for estimating the location of enemy units that have been encountered. We represent state estimation as an optimization problem, and automatically learn parameters for the particle model by mining a corpus of expert StarCraft replays. The particle model tracks opponent units and provides conditions for activating tactical behaviors in our StarCraft bot. Our results show that incorporating a learned particle model improves the performance of EISBot by 10% over baseline approaches.
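The particle model can be sketched as follows. This is a deliberately simplified 1-D illustration: StarCraft maps are 2-D and the paper learns its motion-model parameters from expert replays, whereas this sketch assumes a bounded random walk with a fixed speed; the class and method names are invented for the example.

```python
import random

class ParticleTracker:
    """Particle model for a previously sighted enemy unit's position."""

    def __init__(self, n=200, bounds=(0.0, 100.0), seed=0):
        self.n = n
        self.bounds = bounds
        self.rng = random.Random(seed)
        self.particles = []

    def observe(self, pos):
        # A direct sighting collapses the belief onto the observed position.
        self.particles = [pos] * self.n

    def predict(self, speed=1.0):
        # While the unit is unobserved, diffuse each particle within the map.
        lo, hi = self.bounds
        self.particles = [min(hi, max(lo, p + self.rng.uniform(-speed, speed)))
                          for p in self.particles]

    def estimate(self):
        # Point estimate of the unit's position: the particle mean.
        return sum(self.particles) / len(self.particles)
```

After a sighting, repeated `predict` calls spread the particles, modeling growing uncertainty about where the unit went, while `estimate` stays near the last known position; such estimates can gate tactical behaviors as in the paper.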