MLOps: A Primer for Policymakers on a New Frontier in Machine Learning

Henry, Jazmia

arXiv.org Artificial Intelligence 

Jazmia Henry
July 18, 2022

Summary

Discussions about reducing the bias present in algorithms have been on the rise since the mid-2010s. AI ethicists, DEI practitioners, sociologists, data scientists, and social justice advocates have decried the lack of understanding of the harms that algorithms pose to people who belong to historically marginalized groups. These cries have become increasingly accepted in industry since 2020, but little is understood about how algorithm and Machine Learning (ML) model builders should go about mitigating bias in models that are intended for deployment.

This chapter is written with the Data Scientist or MLOps professional in mind but can be used as a resource for policymakers, reformists, AI ethicists, sociologists, and others interested in finding methods that help reduce bias in algorithms. I take a deployment-centered approach with the assumption that the professionals reading this work have already read the amazing work on the implications of algorithms for historically marginalized groups by Gebru, Buolamwini, Benjamin, and Shane, to name a few. If you have not read those works, I refer you to the "Important Reading for Ethical Model Building" list at the end of this paper, as it will help give you a framework for thinking about Machine Learning models more holistically, taking into account their effect on marginalized people.

In the Introduction to this chapter, I root the significance of their work in real-world examples of what happens when models are deployed without transparent data collected for the training process, and without practitioners paying special attention to models that adapt to exploit gaps between their training environment and the real world.
The rest of this chapter builds on the work of the aforementioned researchers, discusses the reality of how models perform post-production, and details ways ML practitioners can use tools during the MLOps lifecycle to identify and mitigate bias that may be introduced to models in the real world.

Introduction

"Whether AI will help us reach our aspirations or reinforce the unjust inequalities is ultimately up to us." - Joy Buolamwini, "Facing the Coded Gaze," AI: More than Human

Whether you are driving your car using a GPS system, calling on Alexa or Siri to turn on your favorite tune, going on social media for a well-earned scroll down memory lane, or searching Google for a gift to buy for a friend, you have encountered a Machine Learning model.
