Exploring Fairness in Educational Data Mining in the Context of the Right to be Forgotten

Qian, Wei, Chen, Aobo, Zhao, Chenxu, Li, Yangyi, Huai, Mengdi

arXiv.org Artificial Intelligence 

Student data, a critical component of educational data mining (EDM) research, can contain personal information such as age and gender, as well as academic performance and activity data from online learning systems [24]. By offering valuable insights into student learning, EDM supports the development of more effective educational practices and policies, ultimately improving student outcomes. One of the most popular approaches in prior work is to incorporate machine learning techniques, which have achieved remarkable success in discovering intricate structures within educational datasets. However, in recent years, concerns about the fairness of deploying algorithmic decision-making in the educational context have emerged [2, 22, 27, 49]. In particular, machine learning models can produce biased and unfair outcomes for certain student groups, significantly affecting their educational opportunities and achievements. Because the data empowering EDM research can often contain personally identifiable and other sensitive information, privacy protection has also received increased attention in recent years [37, 43]. Additionally, privacy legislation such as the California Consumer Privacy Act [39] and the Right to be Forgotten [17] has granted users the right to erase the impact of their sensitive information from trained models to protect their privacy. One approach to protecting users' privacy involves enabling the trained machine learning model to entirely forget the data that a user has requested to be removed, a process commonly referred to as machine unlearning.
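To make the forgetting idea concrete, the sketch below shows the simplest exact-unlearning baseline: after a deletion request, the model is retrained from scratch on the remaining records, so the forgotten records have no influence on the resulting model. This is only an illustrative sketch under assumed, hypothetical data (the feature meanings, indices, and use of scikit-learn are not from the paper), not the method studied by the authors.

```python
# Naive exact unlearning: retrain without the requested student records.
# All data and feature names here are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical student records: [hours_online, quiz_avg, forum_posts]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Original model trained on the full dataset.
model = LogisticRegression().fit(X, y)

# A student exercises the right to erase their data (illustrative indices).
forget_idx = np.array([10, 42, 311])
keep_idx = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning baseline: retrain from scratch on the remaining data,
# so the new model carries no influence from the forgotten records.
unlearned_model = LogisticRegression().fit(X[keep_idx], y[keep_idx])
```

Retraining from scratch is exact but expensive at scale, which is why approximate unlearning methods that remove a record's influence without full retraining are an active research direction.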
