
Collaborating Authors: Georgia Institute of Technology


Evaluating Visual Conversational Agents via Cooperative Human-AI Games

AAAI Conferences

As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game — GuessWhich — to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend – that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
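The team metric described above — the number of guesses it takes the human to find the secret image after the dialog ends — can be sketched as a ranking problem. The following is a minimal illustration, not the paper's implementation; the function name, the toy image ids, and the `score` stand-in for the human's post-dialog judgment are all hypothetical:

```python
def guesses_to_identify(pool, secret_id, score):
    """GuessWhich-style team metric: after the fixed number of dialog rounds,
    the human ranks the candidate pool by how well each image matches what
    they learned from the dialog. The team's cost is the rank at which the
    secret image is found (1 = identified on the first guess, lower is better).
    `score(image_id)` is a hypothetical stand-in for the human's judgment.
    """
    ranked = sorted(pool, key=score, reverse=True)
    return ranked.index(secret_id) + 1

# Toy pool of image ids; the secret is "img3", and the (hypothetical) dialog
# left it as the second-best match in the human's ranking.
pool = ["img1", "img2", "img3", "img4"]
scores = {"img1": 0.2, "img2": 0.9, "img3": 0.7, "img4": 0.1}
print(guesses_to_identify(pool, "img3", scores.get))  # → 2
```

Comparing the mean of this count across many human-ALICE games for each version of ALICE is what surfaces the AI-AI versus human-AI mismatch the abstract reports.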


Pre-Learning Experiences with Co-Creative Agents in Museums

AAAI Conferences

Co-creative agents, or artificially intelligent computer agents that can collaborate creatively in real-time with human partners, have proven successful in being both creatively engaging and fun to interact with. Prior research in museum experience design also indicates that due to their incorporation of embodied interaction, creative narrative construction, and personal identity, co-creative agents have potential to drive pre-learning experiences that motivate participants to learn more about technology in museum settings. However, many co-creative agents fall short in effectively communicating technology-related educational outcomes. My work aims to explore how museum experiences involving co-creative agents can be designed and evaluated such that they both foster creative engagement and facilitate pre-learning experiences, using two interactive installation projects (LuminAI and TuneTable) as technical probes.


Turn-Taking with Improvisational Co-Creative Agents

AAAI Conferences

Turn-taking is the ability for agents to lead or follow in social interactions. Turn-taking between humans and intelligent agents has been studied in human-robot interaction but has not been applied to improvisational, dance-based interactions. User understanding and experience of turn-taking in an improvisational, dance-based system known as LuminAI was investigated in a preliminary study of 11 participants. The results showed a trend toward users understanding the difference between the turn-taking and non-turn-taking versions of LuminAI, but a reduced user experience in the turn-taking version.


A General Level Design Editor for Co-Creative Level Design

AAAI Conferences

In this paper we describe a level design editor designed as an interface to allow different AI agents to creatively collaborate on level design problems with human designers. We intend to investigate the comparative impacts of different AI techniques on user experience in this context.


Toward Automated Story Generation with Markov Chain Monte Carlo Methods and Deep Neural Networks

AAAI Conferences

In this paper, we introduce an approach to automated story generation using Markov Chain Monte Carlo (MCMC) sampling. Our approach uses a sampling algorithm based on Metropolis-Hastings to construct a probability distribution from which stories can be sampled that adhere to criteria learned by recurrent neural networks. We demonstrate the applicability of our technique through a case study in which we generate novel stories using an acceptance criterion learned from a set of movie plots taken from Wikipedia. The study shows that stories generated with this approach adhere to the criterion 85%-86% of the time.
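The Metropolis-Hastings loop the abstract describes can be sketched generically: repeatedly propose a small edit to the current story and accept or reject it based on a learned score. This is a minimal sketch, not the paper's system — the proposal function, the scoring function (an RNN in the paper, an arbitrary callable here), and the toy word-list "stories" are all assumptions for illustration:

```python
import math
import random

def metropolis_hastings_story(initial_story, propose_edit, score,
                              n_iters=1000, temperature=1.0):
    """Sample a story via Metropolis-Hastings.

    propose_edit(story) -> candidate story with one local change
                           (assumed to be a symmetric proposal).
    score(story)        -> higher means the story better satisfies the
                           learned criteria (an RNN in the paper; any
                           user-supplied function here).
    """
    current = initial_story
    current_score = score(current)
    for _ in range(n_iters):
        candidate = propose_edit(current)
        candidate_score = score(candidate)
        # Accept with probability min(1, exp((s' - s) / T)):
        # improvements are always kept, worsening edits sometimes kept.
        if math.log(random.random() + 1e-12) < (candidate_score - current_score) / temperature:
            current, current_score = candidate, candidate_score
    return current

# Toy demo: "stories" are lists of words; the score rewards the word "hero".
# Scaling the score by 3 acts like a lower sampling temperature.
random.seed(0)
words = ["hero", "villain", "castle", "dragon"]

def propose(story):
    s = story[:]
    s[random.randrange(len(s))] = random.choice(words)  # resample one word
    return s

story = metropolis_hastings_story(["castle"] * 5, propose,
                                  lambda s: 3 * s.count("hero"))
```

The per-step accept/reject test against the score is also what makes the reported 85%-86% adherence rate measurable: each generated story either satisfies the learned criterion or it does not.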


Editorial: AI Education for the World

AI Magazine

The focus of AI education in general has been on training small numbers of students for research and teaching responsibilities in academe and research and development positions in industry and government. Emphasis typically has been on cultivating depth of understanding of AI concepts and methods and rigor in AI methodologies of analysis, modeling, design, experiment, and so on. The need for this kind of deep and rigorous education in AI will not only continue but also grow. Nevertheless, several factors are converging to change fundamentally some aspects of AI education in the 21st century. First, there is a growing demand for expertise in AI in industry, business, and commerce.


Using AI to Teach AI: Lessons from an Online AI Class

AI Magazine

In fall 2014, we launched a foundational course in artificial intelligence (CS7637: Knowledge-Based AI) as part of the Georgia Institute of Technology's Online Master of Science in Computer Science program. We incorporated principles and practices from the cognitive and learning sciences into the development of the online AI course. We also integrated AI techniques into the instruction of the course, including embedding 100 highly focused intelligent tutoring agents in the video lessons. By now, more than 2000 students have taken the course. Evaluations have indicated that OMSCS students enjoy the course compared to traditional courses, and more importantly, that online students have matched residential students' performance on the same assessments. In this article, we present the design, delivery, and evaluation of the course, focusing on the use of AI for teaching AI. We also discuss lessons we learned for scaling the teaching and learning of AI.


Ask Me Anything about MOOCs

AI Magazine

In this article, ten questions about MOOCs (crowdsourced from the recipients of the AAAI and SIGCSE mailing lists) were posed by editors Michael Wollowski, Todd Neller, and James Boerkoel to Douglas H. Fisher, Charles Isbell Jr., and Michael Littman — educators with unique, relevant experiences to lend their perspectives on those issues.