Transfer Learning for Automated Feedback Generation on Small Datasets
Morris, Oscar
Feedback is a very important part of the learning process. However, it is challenging to make this feedback both timely and accurate when relying on human markers. This is the challenge that Automated Feedback Generation attempts to address. In this paper, a technique to train such a system on a very small dataset with very long sequences is presented. Both of these attributes make this a very challenging task; however, by using a three-stage transfer learning pipeline, state-of-the-art results can be achieved, with outputs that are qualitatively accurate but do not sound human. The use of both Automated Essay Scoring and Automated Feedback Generation systems in the real world is also discussed.
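As an illustration only (the abstract does not spell out the three stages), a staged transfer-learning setup of the kind alluded to above might look roughly like the sketch below, which assumes a Hugging Face encoder-decoder checkpoint that accepts long inputs; the model, stage datasets and hyperparameters are hypothetical, not those used in the paper.

```python
# Hypothetical sketch of a staged (three-stage) transfer-learning pipeline for
# feedback generation on long inputs. Everything below is an illustrative
# assumption, not the paper's actual configuration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Trainer, TrainingArguments

checkpoint = "allenai/led-base-16384"  # an encoder-decoder that handles long sequences
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def run_stage(dataset, output_dir, epochs):
    """Run one fine-tuning stage; each stage reuses the weights left by the
    previous one. `dataset` is assumed to be tokenised with 'input_ids',
    'attention_mask' and 'labels' columns."""
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=1)
    Trainer(model=model, args=args, train_dataset=dataset).train()

# Stage 1 is the generic pre-training already baked into the checkpoint.
# Stage 2: fine-tune on a larger, related dataset (hypothetical variable name).
# run_stage(related_feedback_dataset, "stage2", epochs=3)
# Stage 3: fine-tune on the small target dataset of long student submissions.
# run_stage(target_feedback_dataset, "stage3", epochs=10)
```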
Automated Feedback Generation for a Chemistry Database and Abstracting Exercise
Morris, Oscar, Morris, Russell
Timely feedback is an important part of teaching and learning. Here we describe how a readily available neural network transformer (machine-learning) model (BERT) can be used to give feedback on the structure of the response to an abstracting exercise, in which students are asked to summarise the contents of a published article after finding it in a publication database. The dataset contained 207 submissions from two consecutive years of the course, summarising a total of 21 different papers from the primary literature. The model was pre-trained using an available dataset (approx. 15,000 samples) and then fine-tuned on 80% of the submitted dataset. This fine-tuning was seen to be important. The sentences in the student submissions are classified into three classes - background, technique and observation - which allows a comparison of how each submission is structured. Comparing the structure of the students' abstracts with a large collection of abstracts from the PubMed database shows that students in this exercise concentrate more on the background to the paper, and less on the techniques and results, than the abstracts of the papers themselves. The results allowed feedback for each submitted assignment to be generated automatically.
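A minimal sketch of the kind of pipeline described above: a BERT sentence classifier with the three labels (background, technique, observation), fine-tuned with the Hugging Face Trainer. The file names, column names and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch: fine-tune BERT to classify each sentence of a student
# submission as background, technique or observation. Data files are hypothetical.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["background", "technique", "observation"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

def tokenize(batch):
    # One sentence per row; truncate to a short, sentence-sized window.
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

# Hypothetical CSV files with a "sentence" column and an integer "label" column.
data = load_dataset("csv", data_files={"train": "train_sentences.csv",
                                       "test": "test_sentences.csv"})
data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="sentence-classifier",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=data["train"],
        eval_dataset=data["test"]).train()
```

The per-sentence predictions can then be aggregated per submission to compare its background/technique/observation balance against reference abstracts.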
The Effectiveness of a Dynamic Loss Function in Neural Network Based Automated Essay Scoring
Morris, Oscar
Automated Essay Scoring (AES) is the task of assigning a score to free-form text (throughout this paper, essay is defined loosely to include short answers) using a computational system. The goal of AES is to mimic human scoring as closely as possible. The development of the Transformer in [1] has significantly improved the performance of Natural Language Processing (NLP) models, to the point where a purely neural approach to AES is achievable [2], [3]. This has created the possibility of many task-agnostic architectures and pre-training approaches, which in turn allow greater flexibility in the implementation of these models. It also makes the cutting-edge performance of these NLP models available for straightforward implementation in real-world situations.
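As a hedged illustration of the purely neural approach referred to above (not the exact models of the cited papers), the sketch below loads a pre-trained Transformer encoder with a single regression output so that it predicts one real-valued score per essay; the checkpoint name and score scale are assumptions, and the head would still need to be fine-tuned on scored essays before its predictions are meaningful.

```python
# Illustrative sketch: a Transformer encoder with a one-unit regression head
# used as an essay scorer. The checkpoint and example essay are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 with problem_type="regression" gives a single real-valued output.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1, problem_type="regression")

essay = "The industrial revolution changed how people lived and worked because ..."
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    predicted_score = model(**inputs).logits.item()
print(f"Predicted score (untrained head, so not yet meaningful): {predicted_score:.2f}")
```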