Supporting Feedback and Assessment of Digital Ink Answers to In-Class Exercises

AAAI Conferences

Effective teaching involves treating the presentation of new material and the assessment of students' mastery of that material as parts of a seamless, continuous feedback cycle. We have developed a computer system, called Classroom Learning Partner (CLP), that supports this methodology, and we have used it in teaching an introductory computer science course at MIT over the past year. Through controlled classroom experiments, we have demonstrated that this approach reaches students who would otherwise have been left behind, and that it leads to greater attentiveness in class, greater student satisfaction, and better interactions between the instructor and students. The current CLP system consists of a network of Tablet PCs and software for posing questions to students, interpreting their handwritten answers, and aggregating those answers into equivalence classes, each of which represents a particular level of understanding or misconception of the material. The current system supports a useful set of recognizers for specific types of answers, and employs AI techniques in the knowledge representation and reasoning necessary to support interpretation and aggregation of digital ink answers.
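The aggregation step described above can be sketched as grouping recognized answers under a canonical form. This is a minimal illustration, not CLP's actual algorithm: the `normalize` rules (lowercasing, whitespace stripping, dropping trailing punctuation) are hypothetical stand-ins for CLP's knowledge-based interpretation.

```python
from collections import defaultdict

def normalize(answer: str) -> str:
    """Canonicalize a recognized ink answer (hypothetical rule set:
    lowercase, strip whitespace, drop a trailing period)."""
    return answer.strip().lower().rstrip(".")

def aggregate(answers: list[str]) -> dict[str, list[str]]:
    """Group answers into equivalence classes keyed by canonical form."""
    classes = defaultdict(list)
    for a in answers:
        classes[normalize(a)].append(a)
    return dict(classes)

submitted = ["O(n log n)", "o(n log n) ", "O(N LOG N).", "O(n^2)"]
classes = aggregate(submitted)
# Each key stands for one level of understanding (or a shared
# misconception); its values are the raw answers in that class.
```

An instructor could then project one representative answer per class rather than scanning every submission.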

Mechanix: A Sketch-Based Tutoring and Grading System for Free-Body Diagrams

AI Magazine

Introductory engineering courses at large universities often have annual enrollments of up to a thousand students, and differentiated instruction is very difficult at that scale and with that degree of student diversity. Professors can assess whether students have mastered a concept only through multiple choice questions, while detailed homework assignments, such as planar truss diagrams, are rarely assigned because professors and teaching assistants would be too overburdened with grading to return assignments with valuable feedback in a timely manner. In this paper, we introduce Mechanix, a deployed sketch-based tutoring system for engineering students enrolled in statics courses. Our system not only allows students to enter planar truss and free-body diagrams just as they would with pencil and paper, but also checks the student's work against a hand-drawn answer entered by the instructor and returns immediate, detailed feedback to the student. Students may correct any errors in their work and resubmit until the entire answer is correct and all of the objectives are met. Because Mechanix facilitates the grading and feedback processes, instructors can now assign free-response questions, giving them a clearer picture of student comprehension. Furthermore, the iterative correction process allows students to learn during a test, rather than simply reproducing memorized information.
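The check against the instructor's hand-drawn answer can be pictured as a graph comparison: a planar truss is a set of members (unordered joint pairs), and the student's set is diffed against the reference. This is a deliberately simplified sketch, not Mechanix's recognizer; a real system must first match sketched joints to reference joints by position before any such comparison.

```python
def truss_edges(members):
    """Normalize a truss to a set of unordered joint pairs."""
    return {frozenset(m) for m in members}

def check_truss(student, answer):
    """Diff a student's truss against the instructor's reference and
    return itemized feedback (hypothetical feedback format)."""
    s, a = truss_edges(student), truss_edges(answer)
    return {
        "missing": sorted("-".join(sorted(e)) for e in a - s),
        "extra":   sorted("-".join(sorted(e)) for e in s - a),
        "correct": s == a,
    }

answer  = [("A", "B"), ("B", "C"), ("A", "C")]
student = [("A", "B"), ("B", "C")]
feedback = check_truss(student, answer)
# feedback["missing"] == ["A-C"], so the student is told which
# member to add and may resubmit until "correct" is True.
```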

Sketch Recognition Algorithms for Comparing Complex and Unpredictable Shapes

AAAI Conferences

In an introductory engineering course with an annual enrollment of over 1000 students, a professor has little option but to rely on multiple choice exams for midterms and finals. Furthermore, the teaching assistants are too overloaded to give detailed feedback on submitted homework assignments. We introduce Mechanix, a computer-assisted tutoring system for engineering students. Mechanix uses recognition of freehand sketches to provide instant, detailed, and formative feedback as the student progresses through each homework assignment, quiz, or exam. Free sketch recognition techniques allow students to solve free-body diagram and static truss problems as if they were using a pen and paper. The same recognition algorithms enable professors to add new unique problems simply by sketching out the correct answer. Mechanix is able to ease the burden of grading so that instructors can assign more free response questions, which provide a better measure of student progress than multiple choice questions do.

Teaching computers to see -- by learning to see like computers

AITopics Original Links

They comb through databases of previously labeled images and look for combinations of visual features that seem to correlate with particular objects. Then, when presented with a new image, they try to determine whether it contains one of the previously identified combinations of features. Even the best object-recognition systems, however, succeed only around 30 or 40 percent of the time -- and their failures can be totally mystifying. Researchers are divided in their explanations: Are the learning algorithms themselves to blame? Or are they being applied to the wrong types of features?
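The pipeline the article describes, mining feature combinations that correlate with an object, then testing whether a new image contains one, can be caricatured with plain sets. This is a toy stand-in for illustration only; the feature names and the subset test are assumptions, and real detectors learn statistical weights over visual features rather than exact combinations.

```python
def detect(image_features, learned_combos):
    """Return True if the image's feature set contains any feature
    combination previously found to correlate with the object
    (hypothetical set-based stand-in for a learned detector)."""
    feats = set(image_features)
    return any(combo <= feats for combo in learned_combos)

# Combinations mined from labeled "bicycle" images (made-up features):
bicycle_combos = [{"wheel", "frame"}, {"handlebar", "wheel"}]

hit  = detect(["wheel", "handlebar", "sky"], bicycle_combos)
miss = detect(["sky", "grass"], bicycle_combos)
```

The failure mode the article highlights maps onto this sketch directly: if the mined combinations capture the wrong features, the detector misses objects for reasons that are hard to diagnose from the outside.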

Beauty is in the AI of the beholder: Young blokes teach computer to judge women by their looks


Chinese researchers claim to have taken facial recognition to the next level – by predicting the personality traits of women from their photos alone. Or rather, given the labels on the training data, predicting the personality traits young guys expect women to have from their looks alone. Undeterred by all the flak they received for their earlier machine-learning system that tried to predict a person's propensity for criminal behavior from their appearance, the eggheads have come up with a sequel. Their latest study, titled Automated Inference on Sociopsychological Impressions of Attractive Female Faces, was posted to arXiv, the online pre-print repository – the paper has not yet been accepted by a peer-reviewed journal. The basis for their research rests on shaky ground.