
Collaborating Authors: Goel, Ashok


Machine Teaching for Building Modular AI Agents based on Zero-shot Learners

arXiv.org Artificial Intelligence

The recent advances in large language models (LLMs) have led to the creation of many modular AI agents. These agents employ LLMs as zero-shot learners to perform sub-tasks in order to solve complex tasks set forth by human users. We propose an approach to enhance the robustness and performance of modular AI agents that utilize LLMs as zero-shot learners. Our iterative machine teaching method offers an efficient way to teach AI agents over time with limited human feedback, addressing the limits posed by the quality of zero-shot learning. We advocate leveraging the data traces from initial deployments and the outputs or annotations from the zero-shot learners to train smaller, task-specific substitute models, which can reduce both the monetary costs and environmental impact. Our machine teaching process avails human expertise to correct examples with a high likelihood of misannotation. Results on three tasks common to conversational AI agents show that close-to-oracle performance can be achieved with supervision on 20-70% of the dataset, depending upon the complexity of the task and the performance of the zero-shot learners.
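A minimal Python sketch of the iterative loop described above, assuming the zero-shot learner exposes a confidence score and that the least-confident annotations are routed to a human teacher; the function names, the selection rule, and the choice of substitute model are illustrative assumptions, not the paper's exact method.

# Illustrative sketch only: route the zero-shot learner's least-confident
# annotations to a human teacher, then train a small substitute model on
# the corrected labels. All names and thresholds are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def teach_substitute_model(texts, zero_shot_labels, zero_shot_confidence,
                           human_label_fn, review_fraction=0.3):
    """Train a task-specific substitute model from zero-shot annotations,
    asking a human to correct the least-confident (likely misannotated) ones."""
    n_review = int(review_fraction * len(texts))
    review_idx = np.argsort(zero_shot_confidence)[:n_review]
    labels = list(zero_shot_labels)
    for i in review_idx:
        labels[i] = human_label_fn(texts[i])  # human-corrected label
    substitute = make_pipeline(TfidfVectorizer(),
                               LogisticRegression(max_iter=1000))
    substitute.fit(texts, labels)
    return substitute

In practice the review fraction would be tuned per task, in the spirit of the reported finding that supervision on 20-70% of the dataset suffices depending on task complexity and zero-shot performance.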


Using Analytics on Student Created Data to Content Validate Pedagogical Tools

arXiv.org Artificial Intelligence

Conceptual and simulation models can function as useful pedagogical tools; however, it is important to categorize different outcomes when evaluating them in order to interpret results more meaningfully. VERA is an ecology-based conceptual modeling software that enables users to simulate interactions between biotic and abiotic components of an ecosystem, allowing users to form and then verify hypotheses by observing time series of species populations. In this paper, we classify these time series into common patterns found in the domain of ecological modeling through two methods, hierarchical clustering and curve fitting, illustrating a general methodology for showing content validity when combining different pedagogical tools. When applied to a diverse sample of 263 models containing 971 time series collected from three different VERA user categories: Georgia Tech (GATECH), North Georgia Technical College (NGTC), and "Self Directed Learners", the results showed agreement between the two classification methods on 89.38% of the sample curves in the test set. This serves as a good indication that our methodology for determining content validity was successful.
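As an illustration of the curve-fitting half of this methodology, the sketch below fits a logistic growth model to a single population time series and labels the series accordingly; the candidate pattern, the normalization, and the error threshold are assumptions rather than the paper's exact procedure.

# Illustrative sketch: fit a logistic growth curve to one species' population
# series and label it "logistic growth" when the fit error is small.
# The threshold and the set of candidate patterns are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def classify_series(population, rmse_threshold=0.05):
    t = np.arange(len(population), dtype=float)
    y = np.asarray(population, dtype=float)
    y_norm = y / (y.max() or 1.0)  # normalize so the threshold is scale-free
    try:
        params, _ = curve_fit(logistic, t, y_norm,
                              p0=[1.0, 0.5, t.mean()], maxfev=5000)
        rmse = np.sqrt(np.mean((logistic(t, *params) - y_norm) ** 2))
        return "logistic growth" if rmse < rmse_threshold else "other"
    except RuntimeError:  # curve_fit failed to converge
        return "other"

A full pipeline along these lines would fit each pattern family in the ecological-modeling taxonomy and compare the resulting labels against those from hierarchical clustering.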


Designing a Communication Bridge between Communities: Participatory Design for a Question-Answering AI Agent

arXiv.org Artificial Intelligence

How do we design an AI system that is intended to act as a communication bridge between two user communities with different mental models and vocabularies? Skillsync is an interactive environment that engages employers (companies) and training providers (colleges) in a sustained dialogue to help them achieve the goal of building a training proposal that successfully meets the needs of the employers and employees. We used a variation of participatory design to elicit requirements for developing AskJill, a question-answering agent that explains how Skillsync works and thus acts as a communication bridge between company and college users. Our study finds that participatory design was useful in guiding the requirements gathering and eliciting user questions for the development of AskJill. Our results also suggest that the two Skillsync user communities perceived glossary assistance as a key feature that AskJill needs to offer, and they would benefit from such a shared vocabulary.


Explanation as Question Answering based on Design Knowledge

arXiv.org Artificial Intelligence

Explanation of an AI agent requires knowledge of its design and operation. An open question is how to identify, access and use this design knowledge for generating explanations. Many AI agents used in practice, such as intelligent tutoring systems fielded in educational contexts, typically come with a User Guide that explains what the agent does, how it works and how to use the agent. However, few humans actually read the User Guide in detail. Instead, most users seek answers to their questions on demand. In this paper, we describe a question answering agent (AskJill) that uses the User Guide for an interactive learning environment (VERA) to automatically answer questions and thereby explains the domain, functioning, and operation of VERA. We present a preliminary assessment of AskJill in VERA.
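A minimal retrieval-style sketch of the underlying idea, i.e., answering a user question by matching it against sections of the User Guide and returning the best-matching passage; this is an illustration only, not the actual AskJill pipeline.

# Illustrative sketch: answer a question by retrieving the most similar
# User Guide section. Function and variable names are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def answer_from_guide(question, guide_sections):
    """guide_sections: list of strings, one per User Guide section."""
    vec = TfidfVectorizer().fit(guide_sections + [question])
    section_vectors = vec.transform(guide_sections)
    question_vector = vec.transform([question])
    scores = cosine_similarity(question_vector, section_vectors)[0]
    best = scores.argmax()
    return guide_sections[best], float(scores[best])

The returned similarity score could serve as a rough confidence measure for deciding when the agent should decline to answer.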


AI-Powered Learning: Making Education Accessible, Affordable, and Achievable

arXiv.org Artificial Intelligence

We have developed an AI-powered socio-technical system for making online learning in higher education more accessible, affordable, and achievable. In particular, we have developed four novel and intertwined AI technologies: (1) VERA, a virtual experimentation research assistant for supporting inquiry-based learning of scientific knowledge; (2) Jill Watson Q&A, a virtual teaching assistant for answering questions based on educational documents, including the VERA user reference guide; (3) Jill Watson SA, a virtual social agent that promotes online interactions; and (4) Agent Smith, which helps generate a Jill Watson Q&A agent for new documents such as class syllabi. The results are positive: (i) VERA enhances ecological knowledge and is freely available online; (ii) Jill Watson Q&A has been used by >4,000 students in >12 online classes and has saved teachers >500 hours of work; (iii) Jill Watson Q&A and Jill Watson SA promote learner engagement, interaction, and community; and (iv) Agent Smith helps generate a Jill Watson Q&A agent for a new syllabus within ~25 hours. Put together, these innovative technologies help make online learning simultaneously more accessible (by making materials available online), affordable (by saving teacher time), and achievable (by providing learning assistance and fostering student engagement).


Accentuating the Magazine in AI Magazine

AI Magazine

A magazine, Moshe informed me, is a collection of miscellaneous pieces, with emphasis on "collection" and "miscellaneous." Thus, starting with this spring 2018 issue, we are accentuating the "magazine" in AI Magazine. Most issues of AI Magazine in the past have been special issues containing a series of technical articles on specific topics. While we will continue to have special issues from time to time, most issues going forward will contain expository articles on a variety of topics. This issue, for example, contains a letter from AAAI Fellow Edwina Rissland, two articles based on award-winning papers at AAAI 2017, two articles on deployed AI applications selected from IAAI 2017, one article based on an award-winning classic AAAI paper, two competition reports, an AI in Industry column, and a conference report, among several other items.


The Structural Affinity Method for Solving the Raven's Progressive Matrices Test for Intelligence

AAAI Conferences

Graphical models offer techniques for capturing the structure of many problems in real-world domains and provide means for representation, interpretation, and inference. The modeling framework provides tools for discovering rules for solving problems by exploring structural relationships. We present the Structural Affinity method, which uses graphical models for first learning and subsequently recognizing the pattern for solving problems on the Raven's Progressive Matrices Test of general human intelligence. Recently there has been considerable work on computational models addressing the Raven's test using representations ranging from fractals to symbolic structures. In contrast, our method uses Markov Random Fields parameterized by affinity factors to discover the structure in the geometric analogy problems and induce the rules of Carpenter et al.'s cognitive model of problem solving on the Raven's Progressive Matrices Test. We provide a computational account that first learns the structure of a Raven's problem and then predicts the solution by computing the probability of the correct answer through recognizing patterns corresponding to Carpenter et al.'s rules. We demonstrate that the performance of our model on the Standard Raven's Progressive Matrices is comparable with existing state-of-the-art models.
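The sketch below illustrates the affinity-scoring idea in a highly simplified form, assuming each cell of a 3x3 problem is given as a numeric feature vector and scoring each candidate answer by how consistently it completes the row-wise change pattern; the paper's Markov Random Field formulation with learned affinity factors is considerably richer.

# Highly simplified sketch of affinity scoring for a 3x3 Raven's problem.
# Cells are assumed to be numeric feature vectors; this is not the paper's
# actual Markov Random Field model.
import numpy as np

def score_candidate(matrix_cells, candidate):
    """matrix_cells: list of 8 feature vectors (row-major, last cell missing).
    candidate: feature vector for one answer option."""
    cells = [np.asarray(c, dtype=float) for c in matrix_cells]
    cells.append(np.asarray(candidate, dtype=float))
    rows = [cells[0:3], cells[3:6], cells[6:9]]
    # Pairwise change pattern within each row (a simple stand-in for affinity).
    deltas = [np.concatenate([r[1] - r[0], r[2] - r[1]]) for r in rows]
    # Affinity of the completed third row with the first two rows.
    return -np.linalg.norm(deltas[2] - deltas[0]) - np.linalg.norm(deltas[2] - deltas[1])

def solve(matrix_cells, candidates):
    scores = [score_candidate(matrix_cells, c) for c in candidates]
    return int(np.argmax(scores))

Choosing the answer that maximizes such a consistency score is the intuition behind recognizing a problem's governing rule; the published method additionally learns which affinity factors correspond to Carpenter et al.'s rules.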



Editorial: AI Education for the World

AI Magazine

The focus of AI education in general has been on training small numbers of students for research and teaching responsibilities in academe and research and development positions in industry and government. Emphasis typically has been on cultivating depth of understanding of AI concepts and methods and rigor in AI methodologies of analysis, modeling, design, experiment, and so on. The need for this kind of deep and rigorous education in AI will not only continue but also grow. Nevertheless, several factors are converging to change fundamentally some aspects of AI education in the 21st century. First, there is a growing demand for expertise in AI in industry, business, and commerce.


Editorial: Expository AI Applications

AI Magazine

At AI Magazine, we are incrementally moving towards expository articles that are accessible to the broader AI community. It is important that the AI community at large has access to serious AI research, presented in a language it can understand.