A Human Rights-Based Approach to Responsible AI

Prabhakaran, Vinodkumar, Mitchell, Margaret, Gebru, Timnit, Gabriel, Iason

arXiv.org Artificial Intelligence 

On the other hand, these research insights are meant to intervene on platforms that operate globally, serving populations from diverse societies, each with its own cultures, values, and forms of injustice. A core concern in this arrangement is value imposition: local values, i.e., values particular to the regions where the interventions are built, implicitly shape and inform global systems with little or no room for discussion or contestation by those affected. More specifically, interventions designed to address FATE failures necessarily impart a normative value system, yet the values that guide the proposed solutions are rarely recognized as sites of contestation. This is problematic because, while there may be ethical principles for ML that garner a degree of consensus across different value systems, in a pluralistic world such consensus should not be assumed. Instead, we need to be explicit about the values that underpin the quest for ethical and just AI, and to cultivate an active debate about those values, critically examining and evaluating claims about them [28]. A further shortcoming of leaving these normative value systems implicit is the vagueness it entails, which makes it harder to arrive at a common vocabulary and shared understanding between computer scientists and civil society. Such a shared understanding is crucial to bridge the gap between research and practice, especially in a way that effectively supports the priorities of the latter constituency.
