Towards Pedagogical LLMs with Supervised Fine Tuning for Computing Education
Vassar, Alexandra, Renzella, Jake, Ross, Emily, Taylor, Andrew
–arXiv.org Artificial Intelligence
This paper investigates supervised fine-tuning of large language models (LLMs) to improve their pedagogical alignment in computing education, addressing concerns that LLMs may hinder learning outcomes. The project utilises a proprietary dataset of 2,500 high-quality question/answer pairs from programming course forums, and explores two research questions: the suitability of university course forums as sources for fine-tuning datasets, and how supervised fine-tuning can improve LLMs' alignment with educational principles such as constructivism. Initial findings suggest improved pedagogical alignment of the fine-tuned LLMs, though deeper evaluation is required.
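As a rough illustration of the approach described in the abstract (not the authors' actual pipeline), the sketch below formats hypothetical forum question/answer pairs as tutor-style prompts and runs standard supervised fine-tuning of a small causal language model with Hugging Face Transformers. The base model, prompt template, training hyperparameters, and example data are all assumptions; the paper's 2,500-pair dataset is proprietary and its tooling is not specified here.

```python
# Minimal supervised fine-tuning sketch (assumed setup, not the authors' pipeline):
# format hypothetical forum Q/A pairs as tutor-style prompts and fine-tune a small
# causal LM with Hugging Face Transformers.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; the paper does not name the base model here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical forum Q/A pairs; the real 2,500-pair dataset is proprietary.
pairs = [
    {"question": "Why does my C program crash when I free this pointer twice?",
     "answer": "Think about what the second free() operates on. What does the pointer "
               "refer to after the first call? Try tracing its value in a debugger."},
]

def to_text(example):
    # Frame the answer as guidance rather than a direct solution, mirroring the
    # constructivist framing described in the abstract.
    return {"text": f"Student question: {example['question']}\n"
                    f"Tutor response: {example['answer']}{tokenizer.eos_token}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = (Dataset.from_list(pairs)
           .map(to_text)
           .map(tokenize, remove_columns=["question", "answer", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-tutor", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5,
                           report_to="none"),
    train_dataset=dataset,
    # mlm=False gives standard causal-LM (next-token) supervised fine-tuning
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In this framing, the target responses are tutor-style forum answers, so the model learns to guide students toward a solution rather than hand one over, consistent with the pedagogical alignment goal stated in the abstract.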
Nov-3-2024
- Genre:
- Research Report
- Experimental Study (0.35)
- New Finding (0.55)
- Industry:
- Education > Curriculum (0.39)