ABROCA Distributions For Algorithmic Bias Assessment: Considerations Around Interpretation
Borchers, Conrad; Baker, Ryan S.
Algorithmic bias is of critical concern within education because it can undermine the effectiveness of learning analytics. While different definitions and conceptualizations of algorithmic bias and fairness exist [2], their common denominator is typically systematic unfairness or unequal treatment of groups caused by algorithms. This bias occurs when an algorithm produces results that disproportionately disadvantage or favor particular groups of people based on non-malleable characteristics such as race, gender, or socioeconomic status [7]. Recent learning analytics research has argued that although the vast majority of published papers investigating algorithmic bias in education find evidence of bias [2], some predictive models appear to achieve fairness, with minimal differences in model quality across demographic groups. For example, Zambrano et al. [18] evaluated careless detectors and Bayesian knowledge tracing models, finding near-equal performance across groups defined by race, gender, socioeconomic status, special needs, and English language learner status. Similarly, Jiang and Pardos [10] compared the accuracy of grade prediction models across ethnic groups, concluding that an adversarial learning approach led to the fairest models, but they did not engage with the question of whether even their fairest model was sufficiently fair.
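The metric named in the title, ABROCA (Absolute Between-ROC Area), quantifies the kind of group-level difference in model quality discussed above as the area between the ROC curves of two demographic subgroups. The sketch below illustrates one common way to approximate it; the function name `abroca`, the `n_grid` parameter, and the uniform-grid approximation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_curve


def abroca(y_true, y_score, group, n_grid=10_000):
    """Approximate the Absolute Between-ROC Area (ABROCA) for two subgroups.

    ABROCA integrates |ROC_a(fpr) - ROC_b(fpr)| over fpr in [0, 1],
    where each ROC curve is computed separately on one subgroup's
    labels and predicted scores.
    """
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    labels = np.unique(group)
    if len(labels) != 2:
        raise ValueError("ABROCA is defined for exactly two groups")

    grid = np.linspace(0.0, 1.0, n_grid)  # shared false-positive-rate grid
    tprs = []
    for g in labels:
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        tprs.append(np.interp(grid, fpr, tpr))  # resample TPR onto the grid

    # On a uniform grid over [0, 1], the mean absolute difference between
    # the two TPR curves approximates the integral of |ROC_a - ROC_b|.
    return float(np.mean(np.abs(tprs[0] - tprs[1])))
```

For instance, `abroca(y_true, y_score, group=gender)` returns a value in [0, 1], where 0 indicates identical ROC curves for the two groups and larger values indicate greater divergence in model quality between them.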