
Collaborating Authors

Drushchak, Nazarii


Towards Responsible AI in Education: Hybrid Recommendation System for K-12 Students Case Study

arXiv.org Artificial Intelligence

The growth of Educational Technology (EdTech) has enabled highly personalized learning experiences through Artificial Intelligence (AI)-based recommendation systems tailored to each student's needs. However, these systems can unintentionally introduce biases, potentially limiting fair access to learning resources. This study presents a recommendation system for K-12 students, combining graph-based modeling and matrix factorization to provide personalized suggestions for extracurricular activities, learning resources, and volunteering opportunities. To address fairness concerns, the system includes a framework to detect and reduce biases by analyzing feedback across protected student groups. This work highlights the need for continuous monitoring in educational recommendation systems to support equitable, transparent, and effective learning opportunities for all students.

INTRODUCTION

The rapid advancement of Educational Technology (EdTech) has significantly reshaped traditional learning environments, enabling the delivery of personalized educational experiences tailored to individual students' needs. According to the U.S. Department of Education Office of Educational Technology, leveraging AI-based modern educational technologies has been pivotal in providing personalized pathways for learning, supporting adaptive and individualized instruction, and enhancing student engagement through innovative digital solutions [1]. This trend toward personalization in education underscores the importance of leveraging advanced recommendation systems to support student exploration and growth.
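The paper's implementation is not reproduced in this listing. Purely to illustrate the matrix-factorization component and the kind of group-level feedback audit the abstract describes, here is a minimal self-contained NumPy sketch. Everything in it (the toy interaction matrix R, the group labels, the function names, the hyperparameters) is an assumption made for the example, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: rows are students, columns are items
# (extracurriculars, resources, volunteering); 1 = positive interaction.
R = rng.integers(0, 2, size=(8, 6)).astype(float)

def matrix_factorization(R, k=3, steps=500, lr=0.01, reg=0.02):
    """Factor R ~ P @ Q.T by SGD over the observed (nonzero) entries."""
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(R)  # train only on observed interactions
    for _ in range(steps):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            p_u = P[u].copy()  # keep the pre-update user vector
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

P, Q = matrix_factorization(R)
scores = P @ Q.T  # predicted affinity of every student for every item

# Hypothetical protected-group labels for the eight students.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Crude fairness probe: compare the mean score of each student's top
# recommendation across groups; a large gap would warrant investigation.
top = scores.max(axis=1)
for g in (0, 1):
    print(f"group {g}: mean top-recommendation score {top[group == g].mean():.3f}")
```

A production system would replace the random toy data with logged student feedback and add the graph-based signal the paper combines with this factorization; the sketch only shows the shape of the computation.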


The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice

arXiv.org Artificial Intelligence

The "impossibility theorem", considered foundational in the algorithmic fairness literature, asserts that there must be trade-offs between common notions of fairness and performance when fitting statistical models, except in two special cases: when the prevalence of the outcome being predicted is equal across groups, or when a perfectly accurate predictor is used. However, theory does not always translate to practice. In this work, we challenge the implications of the impossibility theorem in practical settings. First, we show analytically that, by slightly relaxing the impossibility theorem (to accommodate a practitioner's perspective of fairness), it becomes possible to identify a large set of models that satisfy seemingly incompatible fairness constraints. Second, we demonstrate the existence of these models through extensive experiments on five real-world datasets. We conclude by offering tools and guidance for practitioners to understand when, and to what degree, fairness along multiple criteria can be achieved. For example, if one allows only a small margin of error between metrics, there exists a large set of models simultaneously satisfying False Negative Rate Parity, False Positive Rate Parity, and Positive Predictive Value Parity, even when there is a moderate prevalence difference between groups. This work has an important implication for the community: achieving fairness along multiple metrics for multiple groups (and their intersections) is far more achievable than previously believed.
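The relaxed, practitioner-oriented criterion described above (parity within a small margin of error rather than exact equality) is straightforward to operationalize. The sketch below is an illustration of that check, not the authors' tooling: it tests whether a fixed binary classifier satisfies False Negative Rate, False Positive Rate, and Positive Predictive Value parity across two groups up to a tolerance eps. The synthetic labels, predictions, group attribute, and the eps value are all assumptions.

```python
import numpy as np

def rates(y_true, y_pred):
    """Return (FNR, FPR, PPV) from binary labels and predictions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return np.array([fnr, fpr, ppv])

def within_margin(y_true, y_pred, group, eps=0.05):
    """True if FNR, FPR, and PPV each differ by at most eps across groups."""
    r0 = rates(y_true[group == 0], y_pred[group == 0])
    r1 = rates(y_true[group == 1], y_pred[group == 1])
    return bool(np.all(np.abs(r0 - r1) <= eps))

# Toy check on synthetic data with a binary group attribute.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
print(within_margin(y_true, y_pred, group, eps=0.05))
```

In the paper's framing, one would sweep this check over many candidate models to map out the set that satisfies all three parities simultaneously for a given eps and prevalence gap.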