ByteDance's subsidiary Beijing Diandiankankan Technology announced yesterday the release of a new AI English learning app named KaiYanJianDanXue (开言简单学, literally "Open Language Easy Learning"), positioned as a beginner-friendly version of Open Language. According to the product's App Store description, its main functions include scenario-based learning videos and online courses from North American teachers, AI-assisted pronunciation improvement, and individualized learning and review plans. Zhang Yiming, the founder and CEO of ByteDance, believes that combining education with technology is an inevitable trend in the sector. Since 2017, ByteDance has launched a succession of educational products, including the learning app Haohao Xuexi (meaning "study well"), the online English learning platforms GoGoKid and aiKID, the English learning app Tangyuan English, and GuaGuaLong, an AI English learning product for children aged 2 to 8.
In countries like the US, artificial intelligence is already being used at large scale to evaluate student essays, saving educational institutions money and time. According to reports, at least 21 US states have deployed some form of automated scoring, from middle school to college level. Students' essays are graded by AI systems designed by different vendors, including on high-stakes tests like the Graduate Record Examinations (GRE). While educators in the US say they are not going back to human essay grading, the practice has drawn a major backlash from parents, particularly those in state school systems. And automated grading is not without problems.
On today's episode of the podcast, I got to chat with software engineer Jackson Bates, who lives and works in Melbourne, Australia. Jackson used to be a high school English teacher, but gradually taught himself to code and landed a pretty sweet gig as a React dev, partly by chance. Today he works part-time as a developer, part-time as a stay-at-home dad, and volunteers his time with various open source projects. Jackson grew up in England and studied English in school. Although going into education seemed a logical choice, he dabbled in other fields for a while, like working at a prison cafeteria, before landing a teaching job.
Algorithms are grading student essays across the country. So can artificial intelligence really teach us to write better? Todd Feathers, who wrote about AI essay grading for Motherboard, called up every state in the country and found that at least 21 states use some form of automated scoring. "The algorithms are prone to a couple of flaws. One is that they can be fooled by nonsense: gibberish strung together from sophisticated words that looks good from afar but doesn't actually mean anything. The other problem is that some of the algorithms have been proven, by the testing vendors themselves, to be biased against people from certain language backgrounds."
Every year, millions of students sit down for standardized tests that carry weighty consequences. National tests like the Graduate Record Examinations (GRE) serve as gatekeepers to higher education, while state assessments can determine everything from whether a student will graduate to federal funding for schools and teacher pay. Traditional paper-and-pencil tests have given way to computerized versions. And increasingly, the grading process--even for written essays--has also been turned over to algorithms. Natural language processing (NLP) artificial intelligence systems--often called automated essay scoring engines--are now either the primary or secondary grader on standardized tests in at least 21 states, according to a survey conducted by Motherboard.
Writing a good essay typically involves students revising an initial draft after receiving feedback. We present eRevise, a web-based writing and revising environment that uses natural language processing features generated for rubric-based essay scoring to trigger formative feedback messages regarding students' use of evidence in response-to-text writing. By helping students understand the criteria for using text evidence during writing, eRevise empowers students to better revise their drafts. In a pilot deployment of eRevise in 7 classrooms spanning grades 5 and 6, the quality of text evidence usage in writing improved after students received formative feedback and then revised their papers.
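To make the feedback-triggering idea concrete, here is a minimal sketch of how a system in the style of eRevise might map NLP-derived evidence features onto formative feedback messages. The feature names, thresholds, and message wording are illustrative assumptions, not eRevise's actual rubric or implementation.

```python
# Sketch: mapping evidence-use features extracted from a draft onto a
# formative feedback message. Thresholds and messages are hypothetical.
def feedback_for_evidence(num_pieces_of_evidence: int, num_explained: int) -> str:
    """Pick a revision hint based on how much text evidence a draft uses
    and how much of that evidence the student actually explains."""
    if num_pieces_of_evidence < 2:
        return "Add more evidence: find at least two details from the text."
    if num_explained < num_pieces_of_evidence:
        return "Explain your evidence: say how each detail supports your point."
    return "Good use of evidence. Check that each piece connects to your argument."

# A draft citing one detail and explaining none gets the first hint.
print(feedback_for_evidence(1, 0))
```

In a real deployment the two inputs would come from the scoring engine's feature extractor rather than being passed in by hand; the point is that interpretable features make rule-based feedback like this straightforward.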
Manually grading the Response to Text Assessment (RTA) is labor intensive. We are therefore developing an automatic method for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is also to use this scoring method to provide formative feedback to students and teachers about students' writing quality. As a first step toward this goal, we have developed interpretable features for automatically scoring the evidence rubric of the RTA. In this paper, we present a simple but promising method for improving evidence scoring by employing word embedding models. We evaluate our method on corpora of responses written by upper elementary students.
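The abstract above describes improving evidence scoring with word embeddings. A plausible intuition is that embeddings let a scorer credit paraphrases of the source text, not just exact word matches. The sketch below illustrates that idea with a toy embedding table and a cosine-similarity match; the vectors, topic words, and threshold are all invented for illustration and are not the paper's actual features or data.

```python
# Sketch: counting how many source-text topics a student response covers,
# using embedding similarity so paraphrases still match. The tiny embedding
# table stands in for real word2vec/GloVe vectors.
import numpy as np

EMB = {
    "hunger":    np.array([0.90, 0.10, 0.00]),
    "famine":    np.array([0.85, 0.15, 0.05]),
    "school":    np.array([0.10, 0.90, 0.00]),
    "education": np.array([0.15, 0.85, 0.10]),
    "malaria":   np.array([0.00, 0.10, 0.90]),
    "disease":   np.array([0.05, 0.05, 0.95]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evidence_score(response_words, topic_words, threshold=0.9):
    """Count the source-text topics that some response word matches,
    where a match is high embedding similarity rather than equality."""
    covered = set()
    for topic in topic_words:
        for word in response_words:
            if word in EMB and topic in EMB and cosine(EMB[word], EMB[topic]) >= threshold:
                covered.add(topic)
                break
    return len(covered)

# "famine" matches the "hunger" topic and "disease" matches "malaria",
# even though neither word appears verbatim in the topic list.
response = ["the", "village", "suffered", "famine", "and", "disease"]
print(evidence_score(response, ["hunger", "school", "malaria"]))  # → 2
```

A surface word-match scorer would give this response 0; the embedding-based variant credits the two paraphrased topics, which is the kind of improvement the abstract is pointing at.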
Today's classroom isn't just a place for education – it's also a laboratory, and teachers are expected to collect huge amounts of data, with the goal of improving learning outcomes. Despite the best intentions, however, this emphasis on educational data is especially onerous for already overworked teachers, meaning they need better tools to assist with collecting that data. That's where new recording strategies can help. Colleges were among the first to place a heavy emphasis on analytics because of their greater resources and research-driven agendas; and as such, they were the first to realize the value of educational data. For example, facing low graduation rates, colleges examined student records and discovered that students were struggling with English classes, even as they were thriving in other subject areas.
Tokyo: The government of Japan is planning to introduce English-speaking Artificial Intelligence (AI) robots in classrooms to help children improve their English speaking skills, which are considered among the worst in the world. The Japanese education ministry will launch a pilot programme to test the effectiveness of the initiative in April 2019, reports Efe news. The initiative will initially be rolled out in 500 schools throughout the country, with the aim of full implementation in two years, public broadcaster NHK reported Saturday. The programme also includes study apps and online conversation sessions with native English speakers. Japan has proposed improving English skills ahead of the surge in tourists expected during the 2020 Summer Olympics in Tokyo.
Japan's education ministry is planning to place English-speaking artificial intelligence robots in schools to help children improve their English oral communication skills. Japanese students are generally not good at writing in English or speaking the language. Curriculum guidelines that are due to be fully implemented in 2 years will focus on nurturing those skills. In April, the ministry will launch the robot initiative on a trial basis at about 500 schools nationwide. Some schools have already adopted similar robots to enable students to have fun while honing their English pronunciation and conversation skills.