Education has mostly followed the same structure for centuries -- e.g., the "sage on a stage" and "assembly line" models. As AI continues to disrupt industries like consumer electronics, ecommerce, media, transportation, and healthcare, is education the next big opportunity? Because education is the foundation that prepares people to pursue advances in every other field, it has the potential to be the most impactful application of AI. All three segments of the education market -- K-12, higher education, and corporate training -- are in transition. In K-12, newer, more rigorous academic standards (the Common Core and the Next Generation Science Standards) are shifting the focus toward measuring students' critical thinking and problem-solving skills and preparing them for college and career success in the 21st century.
What's the best way to prove you "know" something?

A. Multiple choice tests
B. Essays
C. Interviews
D. None of the above

Go ahead: argue with the premise of the question. Oh yeah, you can't do that on multiple-choice tests. Essays can often better gauge what you know, and writing is integral to many jobs. But even though nearly everyone acknowledges that essays are a more useful measure, we don't ask students to write much on standardized tests, because grading millions of essays is daunting even to imagine.
There is a new optical illusion sweeping the web -- and it could well be the toughest yet. Hidden in a vintage illustration of a dog is the head of his master -- but can you spot him? The image, posted by Playbuzz, dates back to the turn of the century and was the face of a trade card used as an early advertising gimmick. How fast can you find the dog's master? If you look closely, you can spot the owner in the middle of the picture, with Spot's ear acting as his hat.
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating such questions is difficult without sacrificing originality, relevance, or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions: it produces model suggestions for document selection and for answer distractor choice, which aid the human question-generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions through an analysis of the new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When we use SciQ as additional training data alongside existing questions, we observe accuracy improvements on real science exams.
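To make the distractor-choice step concrete, here is a minimal sketch of assembling one multiple-choice item from a correct answer plus candidate distractors. The function and field names are illustrative assumptions, not the SciQ schema or the authors' actual pipeline:

```python
import random

def assemble_item(question, correct, distractors, rng=None):
    """Combine a correct answer with distractors into a lettered
    multiple-choice item. Field names ("question", "options",
    "answer_key") are illustrative, not an official schema."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    options = [correct] + list(distractors)
    rng.shuffle(options)  # hide the correct answer's position
    letters = "ABCD"[: len(options)]
    return {
        "question": question,
        "options": dict(zip(letters, options)),
        "answer_key": letters[options.index(correct)],
    }
```

Shuffling matters because crowd workers (and models) otherwise tend to place the correct answer in a predictable slot, which a classifier can exploit.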
Modern educational and psychological measurements are governed by models that do not allow for identifying patterns of student thought. In many situations, however, including diagnostic assessment, understanding student thought matters more than scoring it. We propose using entropy-based clustering to group responses to both a standard achievement test and a test specifically designed to reveal different facets of student thinking. We show that this approach can identify patterns of thought in these domains, although there are limits to what can be learned from multiple choice responses alone.
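The abstract does not specify the clustering algorithm, so the following is only a rough sketch of what entropy-based clustering of multiple-choice answer vectors could look like: a greedy, k-modes-style loop that reassigns response patterns to minimize the summed within-cluster Shannon entropy. The function names, seeding strategy, and objective are assumptions, not the authors' method:

```python
from collections import Counter
from math import log2

def cluster_entropy(responses):
    """Shannon entropy (bits) of each item (column) across a group of
    answer vectors, summed over items. Lower = more coherent group."""
    if not responses:
        return 0.0
    n = len(responses)
    total = 0.0
    for j in range(len(responses[0])):
        counts = Counter(r[j] for r in responses)
        total += -sum((c / n) * log2(c / n) for c in counts.values())
    return total

def entropy_clustering(responses, k, n_iter=20):
    """Greedy partition of answer vectors into k groups that minimizes
    total within-cluster entropy (a simple k-modes-style loop)."""
    # Seed by spreading responses round-robin across the k clusters.
    clusters = [list(responses[i::k]) for i in range(k)]
    for _ in range(n_iter):
        moved = False
        for ci in range(k):
            for r in list(clusters[ci]):
                # Try placing r in each cluster; keep the cheapest option.
                best, best_cost = ci, None
                for cj in range(k):
                    trial = [c[:] for c in clusters]
                    trial[ci].remove(r)
                    trial[cj].append(r)
                    cost = sum(cluster_entropy(c) for c in trial)
                    if best_cost is None or cost < best_cost:
                        best, best_cost = cj, cost
                if best != ci:
                    clusters[ci].remove(r)
                    clusters[best].append(r)
                    moved = True
        if not moved:  # converged
            break
    return clusters
```

On toy data with two internally consistent answer patterns, this loop separates them into pure clusters, because mixing distinct patterns raises the per-item entropy. The abstract's caveat shows up here too: identical option choices made for different reasons are indistinguishable to any such objective.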