Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments
Workshop courses designed to foster creativity are gaining popularity. However, achieving a holistic evaluation that accommodates diverse perspectives is challenging, even for experienced faculty teams. Adequate discussion is essential to integrate varied assessments, but faculty often lack the time for such deliberations. Deriving an average score without discussion undermines the purpose of a holistic evaluation. This paper explores the use of a Large Language Model (LLM) as a facilitator to integrate diverse faculty assessments. Scenario-based experiments were conducted to determine if the LLM could synthesize diverse evaluations and explain the underlying theories to faculty. The results were noteworthy, showing that the LLM effectively facilitated faculty discussions. Additionally, the LLM demonstrated the capability to generalize and create evaluation criteria from a single scenario based on its learned domain knowledge.
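To make the facilitation setup concrete, below is a minimal sketch of how an LLM might be prompted to synthesize divergent faculty evaluations and explain the theories behind them, as the abstract describes. The OpenAI client, the model name, and the prompt wording are illustrative assumptions, not the authors' actual experimental protocol.

```python
# Minimal sketch of LLM-facilitated synthesis of faculty assessments.
# Assumes the OpenAI Python SDK (v1) and OPENAI_API_KEY in the environment;
# prompt structure and model are assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()

# Hypothetical per-faculty evaluations of one student project.
assessments = [
    {"faculty": "A", "score": 4, "comment": "Strong ideation, weak prototyping."},
    {"faculty": "B", "score": 2, "comment": "Concept lacks novelty; execution solid."},
    {"faculty": "C", "score": 5, "comment": "Exceptional teamwork and iteration."},
]

prompt = (
    "You are facilitating a faculty discussion for a holistic evaluation of a "
    "creativity-workshop project. The evaluations below diverge.\n\n"
    + "\n".join(
        f"- Faculty {a['faculty']}: score {a['score']}/5: {a['comment']}"
        for a in assessments
    )
    + "\n\nSynthesize these perspectives into a single holistic assessment, "
    "explain which evaluation theories may underlie each faculty member's view, "
    "and propose discussion questions to reconcile the disagreements."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the paper does not name one here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In the paper's scenario-based experiments, the synthesis step stands in for the time-consuming live deliberation; the sketch above only illustrates that single step, not the full study design.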
arXiv.org Artificial Intelligence
May 27, 2024