

How Taylor Swift is helping botany gain celebrity status

New Scientist

Feedback is delighted to learn that researchers have discovered what Taylor Swift is accidentally doing to rescue the science of plants from mid-ness. We never miss a beat, so Feedback, prompted by assistant news editor and Swiftie Alexandra Thompson, has been taking a close look at a major paper in the Annals of Botany, published in August, called "Dance with plants: Taylor Swift's music videos as advance organizers for meaningful learning in botany". The thesis is that high-school students exhibit "a general low interest in plants", leading to "plant blindness", and that teachers struggling to convey the magic of botany are stuck repeating the same material and are getting sick of it.


Robot Planning

AI Magazine

Drew McDermott

Research on planning for robots is in such a state of flux that there is disagreement about what planning is and whether it is necessary. We can take planning to be the optimization and debugging of a robot's program by reasoning about possible courses of execution. It is necessary to the extent that fragments of robot programs are combined at run time. There are several strands of research in the field; I survey six: (1) attempts to avoid planning; (2) the design of flexible plan notations; (3) theories of time-constrained planning; (4) planning by projecting and repairing faulty plans; (5) motion planning; and (6) the learning of optimal behaviors from reinforcements. More research is needed on formal semantics for robot plans.
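Strand (4), planning by projecting and repairing faulty plans, can be illustrated with a toy sketch: project a plan step by step in a simulated world, and when a step's precondition fails, splice in an action that achieves it. The world model, action names, and plan here are all invented for illustration; real projection-and-repair planners handle nested repairs and interacting goals.

```python
# Hypothetical action schemas: each action has preconditions and add-effects.
ACTIONS = {
    "open_door":  {"pre": set(),         "add": {"door_open"}},
    "go_through": {"pre": {"door_open"}, "add": {"in_room"}},
    "grab_cup":   {"pre": {"in_room"},   "add": {"has_cup"}},
}

# Which action achieves each condition (assumed unique in this toy domain).
ACHIEVERS = {"door_open": "open_door", "in_room": "go_through"}

def project_and_repair(plan, state):
    """Simulate a plan; insert an achieving action before any failing step."""
    repaired = []
    state = set(state)
    for step in plan:
        for cond in ACTIONS[step]["pre"]:
            if cond not in state:              # projection detects a failure
                fix = ACHIEVERS[cond]          # repair: splice in the achiever
                repaired.append(fix)
                state |= ACTIONS[fix]["add"]
        repaired.append(step)
        state |= ACTIONS[step]["add"]
    return repaired, state
```

Projecting the faulty plan ["go_through", "grab_cup"] from an empty state inserts "open_door" before "go_through". The sketch does not recursively repair the inserted action's own preconditions, which a real planner would.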


The 2006 AAAI/SIGART Doctoral Consortium

AI Magazine

We report on the eleventh annual SIGART/AAAI Doctoral Consortium (DC), held in conjunction with the National Conference on Artificial Intelligence (AAAI-06). At the DC, Ph.D. students in artificial intelligence presented their proposed research and received feedback from a panel of researchers and other students. The primary goal of the DC is to give students feedback on their proposed dissertation research at a critical time, from independent, knowledgeable reviewers external to their institutions. Another popular event at the DC was the student-mentor dinner, held this year at Elephant Walk, which provided an opportunity for students and researchers to interact in an informal setting. We discuss highlights and innovations of this year's consortium and include pointers to the consortium website.


Mechanix: A Sketch-Based Tutoring and Grading System for Free-Body Diagrams

AI Magazine

In this article, we introduce Mechanix, a deployed sketch-based tutoring system for engineering students enrolled in statics courses. Our system allows students to enter planar truss and free-body diagrams just as they would with pencil and paper, checks the student's work against a hand-drawn answer entered by the instructor, and returns immediate and detailed feedback to the student. Students can correct any errors in their work and resubmit until the entire content is correct and all of the objectives are learned. Because Mechanix facilitates the grading and feedback processes, instructors are able to assign more free-response questions, increasing their knowledge of student comprehension. Furthermore, the iterative correction process allows students to learn during a test, rather than simply display memorized information.
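The check-against-answer-key step can be sketched in miniature. This is not Mechanix's algorithm, which recognizes hand-drawn sketches; it is a hypothetical comparison of a student's free-body diagram, reduced to labeled force vectors, against an instructor's answer, with tolerances and per-error feedback.

```python
def compare_fbd(student_forces, answer_forces, tol_mag=0.05, tol_ang=5.0):
    """Compare a student's free-body diagram against the answer key.

    Each diagram maps a force label to (magnitude, angle_in_degrees).
    Returns a list of feedback strings; an empty list means correct.
    """
    feedback = []
    for label, (mag, ang) in answer_forces.items():
        if label not in student_forces:
            feedback.append(f"Missing force: {label}")
            continue
        s_mag, s_ang = student_forces[label]
        if abs(s_mag - mag) > tol_mag * mag:
            feedback.append(f"{label}: magnitude off ({s_mag} vs {mag})")
        # Wrap angle difference into [-180, 180] before comparing.
        if abs((s_ang - ang + 180) % 360 - 180) > tol_ang:
            feedback.append(f"{label}: direction off ({s_ang} vs {ang} deg)")
    for label in student_forces:
        if label not in answer_forces:
            feedback.append(f"Extra force: {label}")
    return feedback
```

A student who omits the normal force N would get back ["Missing force: N"] and could correct and resubmit, mirroring the iterative loop described above.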


Power to the People: The Role of Humans in Interactive Machine Learning

AI Magazine

The intricacies of applying machine-learning techniques to everyday problems have largely restricted their use to skilled practitioners. However, potential users of such applications, who are often domain experts for the application, have limited involvement in the process of developing them. In the traditional applied machine-learning workflow, these practitioners collect data, select features to represent the data, preprocess and transform the data, choose a representation and learning algorithm to construct the model, tune parameters of the algorithm, and finally assess the quality of the resulting model. This assessment often leads to further iterations on many of the previous steps. Typically, any end-user involvement in this process is mediated by the practitioners and is limited to providing data, answering domain-related questions, or giving feedback about the learned model.
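The traditional workflow the article describes can be sketched as a loop. The data, the single feature, and the one-parameter threshold "model" below are invented for illustration; the point is only the shape of the cycle: collect, train, assess, tune, iterate.

```python
def collect_data():
    # Invented (feature, label) pairs: hours studied -> pass (1) / fail (0).
    return [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1), (5.0, 1)]

def train(threshold):
    # The "model" is just a decision threshold on the single feature.
    return lambda x: 1 if x >= threshold else 0

def assess(predict, data):
    # Accuracy on the data; in practice this would use held-out data.
    return sum(predict(x) == y for x, y in data) / len(data)

def workflow():
    data = collect_data()                 # step 1: collect data
    best_thresh, best_acc = None, 0.0
    for threshold in (1.5, 2.5, 3.5):     # steps 4-5: choose/tune parameters
        model = train(threshold)          # step 3: construct the model
        acc = assess(model, data)         # step 6: assess quality
        if acc > best_acc:                # assessment drives iteration
            best_thresh, best_acc = threshold, acc
    return best_thresh, best_acc
```

In this miniature, the end user's only possible contribution is the labeled data itself; every other step belongs to the practitioner, which is exactly the limited involvement the article critiques.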


Guest Editors' Introduction

AI Magazine

IAAI seeks out applications of artificial intelligence that either demonstrate new technology or use previously known technology in innovative ways. IAAI particularly seeks out examples of deployments of AI technology that tackle the problems of demonstrating value and planning for long-term deployment. The five articles we have selected for this special issue are extended versions of papers that appeared in the conference. Two of the articles are deployed applications that have already demonstrated practical value. The remaining three articles are particularly innovative emerging applications.


Embodied Conversational Agents

AI Magazine

How do we decide how to represent an intelligent system in its interface, and how do we decide how the interface represents information about the world and about its own workings to a user? The rubric representation covers at least three topics in this context: (1) how a computational system is represented in its user interface, (2) how the interface conveys its representations of information and the world to human users, and (3) how the system's internal representation affects the human user's interaction with the system. I argue that each of these kinds of representation (of the system, of information and the world, and of the interaction) is key to how users make the kind of attributions of intelligence that facilitate their interactions with intelligent systems. In this vein, it makes sense to represent a system as a human in those cases where social collaborative behavior is key, and for the system to represent its knowledge to humans in multiple ways in multiple modalities. I demonstrate these claims by discussing issues of representation and intelligence in an embodied conversational agent: an interface in which the system is represented as a person, information is conveyed to human users by multiple modalities such as voice and hand gestures, and the internal representation is modality independent and both propositional and nonpropositional.


DynaLearn -- An Intelligent Learning Environment for Learning Conceptual Knowledge

AI Magazine

Articulating thought in computer-based media is a powerful means for humans to develop their understanding of phenomena. We have created DynaLearn, an intelligent learning environment that allows learners to acquire conceptual knowledge by constructing and simulating qualitative models of how systems behave. DynaLearn uses diagrammatic representations for learners to express their ideas. The environment is equipped with semantic technology components that are capable of generating knowledge-based feedback and virtual characters that enhance the interaction with learners. Teachers have created course material, and successful evaluation studies have been performed.
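The flavor of simulating a qualitative model, where quantities take symbolic landmark values rather than numbers, can be shown with a toy example. The bathtub system and its landmark names are invented for illustration and do not reflect DynaLearn's actual modeling language.

```python
def simulate_bathtub(steps=5):
    """Qualitatively simulate water level under a constant positive inflow.

    The level quantity moves through an ordered list of landmark values;
    a positive influence moves it one landmark up per step until it
    saturates at the top landmark.
    """
    landmarks = ["empty", "low", "high", "full"]
    level = "empty"
    inflow_sign = "+"          # qualitative derivative of the inflow's influence
    history = [level]
    for _ in range(steps):
        i = landmarks.index(level)
        if inflow_sign == "+" and i < len(landmarks) - 1:
            level = landmarks[i + 1]   # positive influence: level rises
        history.append(level)          # at the top landmark, behavior is stable
    return history
```

The simulation produces the qualitative behavior "the tub fills and then stays full", the kind of predicted behavior a learner's model could be checked against to generate knowledge-based feedback.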


Designing for Human-Agent Interaction

AI Magazine

Interacting with a computer requires adopting some metaphor to guide our actions and expectations. Most human-computer interfaces can be classified according to two dominant metaphors: (1) agent and (2) environment. Interactions based on an agent metaphor treat the computer as an intermediary that responds to user requests. In the environment metaphor, a model of the task domain is presented for the user to interact with directly. The term agent has come to refer to the automation of aspects of human-computer interaction (HCI), such as anticipating commands or autonomously performing actions.


Automated Essay Evaluation: The Criterion Online Writing Service

AI Magazine

In this article, we describe a deployed educational technology application: the Criterion Online Essay Evaluation Service, a web-based system that provides automated scoring and evaluation of student essays. Criterion has two complementary applications: (1) Critique Writing Analysis Tools, a suite of programs that detect errors in grammar, usage, and mechanics, identify discourse elements in the essay, and recognize potentially undesirable elements of style; and (2) e-rater version 2.0, an automated essay scoring system. Critique and e-rater provide students with feedback that is specific to their writing in order to help them improve their writing skills, and they are intended to be used under the instruction of a classroom teacher. Both applications employ natural language processing and machine-learning techniques. All of these capabilities outperform baseline algorithms, and some of the tools agree with human judges in their evaluations as often as two judges agree with each other.

Reading and providing feedback on perhaps 30 essays or more every time a topic is assigned puts an enormous load on the classroom teacher. As a result, teachers are not able to give writing assignments as often as they would wish. With this in mind, researchers have sought to develop applications that automate essay scoring and evaluation. Work in automated essay scoring began in the early 1960s and has been extremely productive (Page 1966; Burstein et al. 1998; Foltz, Kintsch, and Landauer 1998; Larkey 1998; Rudner 2002; Elliott 2003). Detailed descriptions of most of these systems appear in Shermis and Burstein (2003). Pioneering work in the related area of automated feedback was initiated in the 1980s with the Writer's Workbench (MacDonald et al. 1982).

The Criterion Online Essay Evaluation Service combines automated essay scoring and diagnostic feedback. The feedback is specific to the student's essay and is based on the kinds of evaluations that teachers typically provide when grading a student's writing. Criterion is intended to be an aid, not a replacement, for classroom instruction. Its purpose is to ease the instructor's load, thereby enabling the instructor to give students more practice writing essays. Criterion contains two complementary applications that are based on natural language processing (NLP) methods. Critique is a suite of programs that evaluate and provide feedback on errors in grammar, usage, and mechanics, identify the essay's discourse structure, and recognize potentially undesirable stylistic features. The companion scoring application, e-rater version 2.0, extracts linguistically based features from an essay and uses a statistical model of how these features are related to overall writing quality to assign a holistic score to the essay. Figure 1 shows Criterion's interface for submitting an essay.
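The extract-features-then-score pattern described for e-rater can be sketched in miniature. The features and weights below are invented for illustration; e-rater's actual features and its statistical model, fit to human-scored essays, are far richer.

```python
def extract_features(essay):
    """Compute a few toy linguistically motivated features of an essay."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "length": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "vocab_diversity": len({w.lower() for w in words}) / max(len(words), 1),
    }

def holistic_score(essay, weights, bias=1.0, lo=1, hi=6):
    """Combine features linearly and clamp to a 1-6 holistic score scale.

    In a real system, the weights and bias would be fit to human scores;
    here they are simply supplied by the caller.
    """
    feats = extract_features(essay)
    raw = bias + sum(weights[name] * value for name, value in feats.items())
    return max(lo, min(hi, round(raw)))
```

Scoring a very short essay with small hand-picked weights yields a low holistic score, as expected from a length-sensitive linear model; longer, more varied essays score higher until the scale's ceiling clamps the result.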