Google's ML-fairness-gym lets researchers study the long-term effects of AI's decisions


Determining whether an AI system makes fair predictions requires an understanding of a model's short- and long-term effects, which can be informed by disparities in error metrics across a number of static data sets. In some cases, though, it is also necessary to consider the context in which the system operates, which is why Google researchers developed ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments.

ML-fairness-gym, released as open source on GitHub this week, is designed for researching the long-term effects of automated systems by simulating decision-making with OpenAI's Gym framework. AI-controlled agents interact with digital environments in a loop: at each step, an agent chooses an action that affects the environment's state, and the environment then reveals an observation that the agent uses to inform its next action. In this way, the environment models the system and dynamics of a problem, and the observations serve as data.
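The agent-environment loop described above can be sketched with the Gym-style `reset()`/`step()` interface. The `LendingEnv` and `ThresholdAgent` classes below are hypothetical stand-ins for illustration, not part of ML-fairness-gym's actual API; they show how an agent's repeated decisions can shift the environment's state over time, which is exactly the long-term dynamic the toolkit is meant to study.

```python
import random


class LendingEnv:
    """Toy environment: each step presents an applicant score, and the
    underlying score distribution drifts depending on past decisions."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.mean_score = 0.5

    def reset(self):
        self.mean_score = 0.5
        return self._observe()

    def _observe(self):
        # Observation: a noisy applicant score around the current mean,
        # clipped to [0, 1].
        return max(0.0, min(1.0, self.rng.gauss(self.mean_score, 0.1)))

    def step(self, action):
        # Approvals (action=1) slowly raise the population's mean score;
        # denials lower it -- a simple stand-in for long-term dynamics.
        self.mean_score += 0.01 if action == 1 else -0.01
        reward = 1.0 if action == 1 else 0.0
        done = False
        return self._observe(), reward, done, {}


class ThresholdAgent:
    """Agent that approves whenever the observed score clears a threshold."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def act(self, observation):
        return 1 if observation >= self.threshold else 0


env = LendingEnv()
agent = ThresholdAgent()
obs = env.reset()
for _ in range(10):
    action = agent.act(obs)                      # agent chooses an action...
    obs, reward, done, info = env.step(action)   # ...which changes the state
```

Because the environment's state depends on the full history of the agent's actions, metrics computed on a single static snapshot can miss feedback effects that only appear over many steps of this loop.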