AI tool streamlines feedback on coding homework

Stanford HAI 

This past spring, Stanford University computer scientists unveiled their pandemic brainchild, Code in Place, a project in which 1,000 volunteer teachers taught 10,000 students across the globe the content of an introductory Stanford computer science course.

[Image: Students in Code in Place evaluated the feedback they received using a carefully designed user interface.]

While the instructors could share their knowledge with hundreds, even thousands, of students at a time during lectures, providing large-scale, high-quality feedback on homework assignments seemed like an insurmountable task.

"It was a free class anyone in the world could take, and we got a whole bunch of humans to help us teach it," said Chris Piech, assistant professor of computer science and co-creator of Code in Place. "But the one thing we couldn't really do is scale the feedback."

To solve this problem, Piech worked with Chelsea Finn, assistant professor of computer science and of electrical engineering, and PhD students Mike Wu and Alan Cheng to develop and test a first-of-its-kind artificial intelligence teaching tool capable of assisting educators in grading and providing meaningful, constructive feedback on a high volume of student assignments. Their tool, which is detailed in a Stanford AI Lab blog post, exceeded their expectations.

In education, it can be difficult to gather large amounts of data for a single problem, such as hundreds of instructor comments on one homework question. Companies that market online coding courses are often similarly limited, and therefore rely on multiple-choice questions or generic error messages when reviewing students' work. "This task is really hard for machine learning because you don't have a ton of data."
