faculty member
- North America > United States > Michigan (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
Teaching at Scale: Leveraging AI to Evaluate and Elevate Engineering Education
Chamberland, Jean-Francois, Carlisle, Martin C., Jayaraman, Arul, Narayanan, Krishna R., Palsole, Sunay, Watson, Karan
Evaluating teaching effectiveness at scale remains a persistent challenge for large universities, particularly within engineering programs that enroll tens of thousands of students. Traditional methods, such as manual review of student evaluations, are often impractical, leading to overlooked insights and inconsistent data use. This article presents a scalable, AI-supported framework for synthesizing qualitative student feedback using large language models. The system employs hierarchical summarization, anonymization, and exception handling to extract actionable themes from open-ended comments while upholding ethical safeguards. Visual analytics contextualize numeric scores through percentile-based comparisons, historical trends, and instructional load. The approach supports meaningful evaluation and aligns with best practices in qualitative analysis and educational assessment, incorporating student, peer, and self-reflective inputs without automating personnel decisions. We report on its successful deployment across a large college of engineering. Preliminary validation through comparisons with human reviewers, faculty feedback, and longitudinal analysis suggests that LLM-generated summaries can reliably support formative evaluation and professional development. This work demonstrates how AI systems, when designed with transparency and shared governance, can promote teaching excellence and continuous improvement at scale within academic institutions.
- North America > United States > Texas (0.05)
- Asia > Middle East > Jordan (0.04)
- Asia > Nepal (0.04)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Education > Curriculum > Subject-Specific Education (0.89)
- Education > Educational Setting > Higher Education (0.89)
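The hierarchical summarization step described in the abstract above can be sketched as a recursive map-reduce over student comments. In this illustrative sketch, `summarize` is a stand-in for an LLM call (here it just concatenates and truncates, so the batch-then-summarize-the-summaries structure is visible); the batch size and function names are assumptions, not the paper's implementation.

```python
# Sketch of hierarchical summarization over open-ended student comments.
# `summarize` is a placeholder for an LLM call; it joins and truncates so
# the tree structure (batches -> summaries -> summary-of-summaries) is testable.

from typing import Callable, List

def summarize(texts: List[str]) -> str:
    # Placeholder for an LLM summarization call.
    return " | ".join(texts)[:200]

def hierarchical_summary(comments: List[str],
                         batch_size: int = 4,
                         summarize_fn: Callable[[List[str]], str] = summarize) -> str:
    """Recursively reduce comments: summarize batches, then the summaries."""
    if len(comments) <= batch_size:
        return summarize_fn(comments)
    batches = [comments[i:i + batch_size]
               for i in range(0, len(comments), batch_size)]
    partial = [summarize_fn(b) for b in batches]
    return hierarchical_summary(partial, batch_size, summarize_fn)
```

The same reduction shape accommodates the anonymization pass the abstract mentions: scrub each batch before it reaches `summarize_fn`.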
AI Education in a Mirror: Challenges Faced by Academic and Industry Experts
As Artificial Intelligence (AI) technologies continue to evolve, the gap between academic AI education and real-world industry challenges remains an important area of investigation. This study provides preliminary insights into challenges AI professionals encounter in both academia and industry, based on semi-structured interviews with 14 AI experts - eight from industry and six from academia. We identify key challenges related to data quality and availability, model scalability, practical constraints, user behavior, and explainability. While both groups experience data and model adaptation difficulties, industry professionals more frequently highlight deployment constraints, resource limitations, and external dependencies, whereas academics emphasize theoretical adaptation and standardization issues. These exploratory findings suggest that AI curricula could better integrate real-world complexities, software engineering principles, and interdisciplinary learning, while recognizing the broader educational goals of building foundational and ethical reasoning skills.
- North America > United States > Pennsylvania (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Information Technology > Security & Privacy (0.68)
- Education > Curriculum > Subject-Specific Education (0.46)
Forging the digital future
To that end, the college now encompasses multiple existing labs and centers, including the Computer Science and Artificial Intelligence Laboratory (CSAIL), and multiple academic units, including the Department of Electrical Engineering and Computer Science. At the same time, the college has embarked on a plan to hire 50 new faculty members, half of whom will have shared appointments in other departments across all five schools to create a true Institute-wide entity. Those faculty members--two-thirds of whom have already been hired--will conduct research at the boundaries of advanced computing and AI. "We want to do two things: ensure that MIT stays at the forefront of computer science, AI research, and education and infuse the forefront of computing into disciplines across MIT." The new faculty members have already begun helping the college respond to an undeniable reality facing many students: They've been overwhelmingly drawn to advanced computing tools, yet computer science classes are often too technical for nonmajors who want to apply those tools in other disciplines.
- North America > United States > Michigan (0.06)
- North America > United States > Connecticut (0.06)
Faculty Perspectives on the Potential of RAG in Computer Science Higher Education
The emergence of Large Language Models (LLMs) has significantly impacted the field of Natural Language Processing and, through their widespread integration into applications and their public availability, has transformed conversational tasks across various domains. The discussion surrounding the application of LLMs in education has raised ethical concerns, particularly concerning plagiarism and policy compliance. Despite the prowess of LLMs in conversational tasks, their limited reliability and tendency to hallucinate heighten the need to guardrail conversations, motivating our investigation of RAG in computer science higher education. We developed Retrieval Augmented Generation (RAG) applications for two tasks: virtual teaching assistants and teaching aids. In our study, we collected the ratings and opinions of faculty members in undergraduate and graduate computer science university courses at various levels, using our personalized RAG systems for each course. This study is the first to gather faculty feedback on the application of LLM-based RAG in education. The investigation revealed that while faculty members acknowledge the potential of RAG systems as virtual teaching assistants and teaching aids, certain barriers must be addressed, and certain features added, before full-scale deployment. These findings contribute to the ongoing discussion on the integration of advanced language models in educational settings, highlighting the need for careful consideration of ethical implications and the development of appropriate safeguards to ensure responsible and effective implementation.
- Asia > India (0.04)
- North America > United States > Texas > Smith County > Tyler (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Questionnaire & Opinion Survey (1.00)
- Instructional Material (1.00)
- Research Report > New Finding (0.89)
- Education > Educational Setting > Higher Education (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
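The retrieval step of a RAG teaching assistant like the one the abstract above describes can be sketched minimally: score course documents against the question, then assemble a prompt that confines the model to the retrieved context. The word-overlap retriever and the prompt wording here are illustrative assumptions; a real system would use embedding search and an LLM call.

```python
# Minimal sketch of the retrieval step in a RAG teaching assistant:
# rank course documents by keyword overlap with the question, then build
# a prompt that guardrails the model to the retrieved material.

from typing import List

def retrieve(question: str, documents: List[str], k: int = 2) -> List[str]:
    """Rank documents by shared-word count with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, documents: List[str]) -> str:
    """Assemble a prompt grounded in the retrieved context."""
    context = "\n".join(retrieve(question, documents))
    return (f"Answer only from the course material below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```

Grounding the prompt in course material this way is what distinguishes a guardrailed virtual TA from a bare chatbot.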
Large Language Models as Partners in Student Essay Evaluation
Ishida, Toru, Liu, Tongxi, Wang, Hailong, Cheung, William K.
As the importance of comprehensive evaluation in workshop courses increases, there is a growing demand for efficient and fair assessment methods that reduce the workload for faculty members. This paper presents an evaluation conducted with Large Language Models (LLMs) using actual student essays in three scenarios: 1) without providing guidance such as rubrics, 2) with pre-specified rubrics, and 3) through pairwise comparison of essays. Quantitative analysis of the results revealed a strong correlation between LLM and faculty member assessments in the pairwise comparison scenario with pre-specified rubrics, although concerns about the quality and stability of evaluations remained. Therefore, we conducted a qualitative analysis of LLM assessment comments, showing that: 1) LLMs can match the assessment capabilities of faculty members, 2) variations in LLM assessments should be interpreted as diversity rather than confusion, and 3) assessments by humans and LLMs can differ and complement each other. In conclusion, this paper suggests that LLMs should not be seen merely as assistants to faculty members but as partners in evaluation committees and outlines directions for further research.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.15)
- Asia > China > Hong Kong > Kowloon (0.05)
- North America > United States > New York (0.04)
- (2 more...)
- Instructional Material > Course Syllabus & Notes (0.68)
- Research Report > New Finding (0.46)
- Education > Assessment & Standards > Student Performance (1.00)
- Education > Curriculum > Subject-Specific Education (0.72)
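The pairwise-comparison scenario in the essay-evaluation abstract above implies an aggregation step: individual LLM judgments of "essay X beats essay Y" must be turned into an ordering. The win-count ranking below is the simplest such aggregation, shown as an illustrative sketch (a Bradley-Terry fit would be the statistical refinement); it is not the paper's own procedure.

```python
# Sketch of turning pairwise LLM judgments into an essay ranking.
# Each judgment is a (winner, loser) pair; essays are ordered by win count.

from collections import Counter
from typing import List, Tuple

def rank_essays(essays: List[str],
                judgments: List[Tuple[str, str]]) -> List[str]:
    """Order essays by number of pairwise wins, most wins first."""
    wins = Counter()
    for winner, _loser in judgments:
        wins[winner] += 1
    return sorted(essays, key=lambda e: wins[e], reverse=True)
```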
Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments
Workshop courses designed to foster creativity are gaining popularity. However, achieving a holistic evaluation that accommodates diverse perspectives is challenging, even for experienced faculty teams. Adequate discussion is essential to integrate varied assessments, but faculty often lack the time for such deliberations. Deriving an average score without discussion undermines the purpose of a holistic evaluation. This paper explores the use of a Large Language Model (LLM) as a facilitator to integrate diverse faculty assessments. Scenario-based experiments were conducted to determine if the LLM could synthesize diverse evaluations and explain the underlying theories to faculty. The results were noteworthy, showing that the LLM effectively facilitated faculty discussions. Additionally, the LLM demonstrated the capability to generalize and create evaluation criteria from a single scenario based on its learned domain knowledge.
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.89)
Generative AI in Education: A Study of Educators' Awareness, Sentiments, and Influencing Factors
Ghimire, Aashish, Prather, James, Edwards, John
The rapid advancement of artificial intelligence (AI) and the expanding integration of large language models (LLMs) have ignited a debate about their application in education. This study delves into university instructors' experiences and attitudes toward AI language models, filling a gap in the literature by analyzing educators' perspectives on AI's role in the classroom and its potential impacts on teaching and learning. The objective of this research is to investigate the level of awareness, overall sentiment toward adoption, and the factors influencing these attitudes for LLMs and generative AI-based tools in higher education. Data were collected through a Likert-scale survey, complemented by follow-up interviews to gain a more nuanced understanding of the instructors' viewpoints. The collected data were processed using statistical and thematic analysis techniques. Our findings reveal that educators are increasingly aware of and generally positive toward these tools. We find no correlation between teaching style and attitude toward generative AI. Finally, while CS educators show far more confidence in their technical understanding of generative AI tools, and more positivity toward them, than educators in other fields, they show no more confidence in their ability to detect AI-generated work.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Finland > Southwest Finland > Turku (0.04)
- Oceania > Australia (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study > Negative Result (0.88)
- Education > Curriculum > Subject-Specific Education (0.94)
- Education > Educational Setting > Higher Education (0.75)
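The "no correlation between teaching style and attitude" finding in the survey abstract above rests on a standard statistical check. As an illustrative sketch (the data and variable pairing are hypothetical, not the study's), Pearson's r between two paired lists of Likert responses looks like this:

```python
# Sketch of the correlation check behind a "no correlation" survey finding:
# Pearson's r between two paired Likert-scale response lists.

from math import sqrt
from typing import List

def pearson_r(xs: List[float], ys: List[float]) -> float:
    """Pearson correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

For ordinal Likert data a rank-based coefficient such as Spearman's rho is often preferred; the study itself does not specify which it used.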
chatGPT for generating questions and assessments based on accreditations
This research aims to take advantage of artificial intelligence techniques to produce student assessments that are compatible with the different academic accreditations of the same program. The possibility of using generative artificial intelligence was studied to produce tests compliant with both the National Center for Academic Accreditation of the Kingdom of Saudi Arabia and the Accreditation Board for Engineering and Technology. A novel method was introduced to map the verbs used to create the questions in the tests. The method makes it possible to use generative artificial intelligence to produce, and to check the validity of, questions that measure educational outcomes. A questionnaire was distributed to determine whether the use of generative artificial intelligence to create exam questions is acceptable to faculty members, and whether they would accept assistance in validating questions they submit and amending them in accordance with academic accreditations. The questionnaire was distributed to faculty members of different majors in the Kingdom of Saudi Arabia's universities. One hundred twenty responses were obtained, with an 85% approval rate for generating complete exam questions with generative artificial intelligence and a 98% approval rate for editing and improving existing questions.
- North America > United States > Ohio (0.04)
- North America > United States > New York (0.04)
- Asia > Middle East > Saudi Arabia > Mecca Province > Jeddah (0.04)
- Questionnaire & Opinion Survey (0.95)
- Instructional Material (0.68)
- Research Report > Promising Solution (0.34)
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.93)
- Education > Educational Setting > Higher Education (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
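The verb-mapping method in the accreditation abstract above checks the action verb that opens each exam question against a taxonomy of learning outcomes. The sketch below illustrates the idea using a small subset of Bloom's taxonomy verbs; the table and function are illustrative assumptions, not the paper's actual mapping.

```python
# Illustrative check of question verbs against Bloom's taxonomy levels,
# in the spirit of the abstract's verb-mapping method. The table is a
# small illustrative subset, not the paper's mapping.

BLOOM_LEVELS = {
    "define": "Remember", "list": "Remember",
    "explain": "Understand", "summarize": "Understand",
    "apply": "Apply", "solve": "Apply",
    "compare": "Analyze", "differentiate": "Analyze",
    "justify": "Evaluate", "critique": "Evaluate",
    "design": "Create", "construct": "Create",
}

def classify_question(question: str) -> str:
    """Return the Bloom level of the question's leading action verb."""
    first_word = question.lower().split()[0].strip(",.?")
    return BLOOM_LEVELS.get(first_word, "Unknown verb")
```

A validity check of generated questions then reduces to verifying that each question's verb maps to the outcome level the accreditation body requires.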